Automate to illuminate

Automation should buy humans time and clarity. If it doesn’t, it’s decoration.


Dr. Lea Sophie Trampitsch-Vink, Human Performance Management Lead at Austro Control

Automation is already a part of key ATM areas, including predictive vigilance and fatigue risk. There is, for example, a move from static fatigue risk management system (FRMS) dashboards to dynamic, forward-looking indicators that combine task/airspace complexity, time-in-position, time-since-break, circadian factors and recent workload.

Austro Control uses live data to compute performance and fatigue risk indices for optimising staffing and sector management. Automation helps manage complex flow variables, triggering proactive measures before demands become unsafe. Context-aware safety nets prioritise issues for supervisors, easing the burden on air traffic controllers (ATCOs) and aiding safety investigations and continuous improvement.
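The kind of dynamic index described above can be sketched in a few lines. Everything below – field names, weights and thresholds – is illustrative, not Austro Control's actual model; it only shows how live factors might be combined into a single forward-looking score.

```python
from dataclasses import dataclass

@dataclass
class ControllerState:
    # Illustrative inputs only, not Austro Control's real data model.
    time_in_position_min: float   # minutes since taking over the sector
    time_since_break_min: float   # minutes since the last rest break
    circadian_factor: float       # 0.0 (daytime peak) .. 1.0 (circadian low)
    recent_workload: float        # normalised workload over the last hour, 0..1
    sector_complexity: float      # normalised traffic/airspace complexity, 0..1

def fatigue_risk_index(s: ControllerState) -> float:
    """Combine the factors into a single 0..1 risk index (weights are made up)."""
    time_pressure = min(s.time_in_position_min / 120.0, 1.0)
    break_debt = min(s.time_since_break_min / 180.0, 1.0)
    raw = (0.25 * time_pressure
           + 0.25 * break_debt
           + 0.20 * s.circadian_factor
           + 0.15 * s.recent_workload
           + 0.15 * s.sector_complexity)
    return round(min(raw, 1.0), 3)

# A supervisor dashboard might flag anything above a threshold for rotation.
state = ControllerState(90, 150, 0.8, 0.7, 0.6)
print(fatigue_risk_index(state))  # → 0.751
```

A real system would calibrate the weights against incident and performance data rather than fixing them by hand, but the shape – several live factors folded into one comparable index – is the point.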

“Automation should also show its assumptions and data confidence – perhaps there is a degraded radar or unusual speed profiles – so supervisors know when to trust or override the system,” says Vink. “Automation should buy humans time and clarity. If it doesn’t, it’s decoration.”

“In air traffic management (ATM), automation should surface risk sooner, help humans decide better and never take accountability away from operators or supervisors,” says Dr. Lea Sophie Trampitsch-Vink, Human Performance Management Lead at Austro Control and Chair of the CANSO Human Performance Workgroup.

Automation evolution

There is still work to be done. Although deterministic safety nets are mature technology, predictive and adaptive tools such as complexity forecasting, human performance prediction and decision support are only in shadow trials and early operations at a few air navigation service providers (ANSPs), according to Vink.

“The biggest leap isn’t algorithmic,” she says. “It’s operational integration – reliable data pipelines, latency guarantees, human-centred human-machine interfaces, and governance. Automation is still evolving – especially around drift monitoring, which involves detecting when a model’s world no longer matches reality, and transparent, operator-usable explanations.”
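Drift monitoring of the kind Vink describes is often implemented with simple distribution comparisons between the data a model was trained on and what it sees live. The sketch below uses the Population Stability Index (PSI), one common choice; the thresholds quoted are industry rules of thumb, not anything Vink or Austro Control prescribes.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index: a simple drift score comparing the training
    reference distribution ('expected') with live inputs ('actual')."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def histogram(values):
        counts = [0] * bins
        for v in values:
            # Bin against the *training* range; out-of-range values clip.
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        total = len(values)
        # Small floor avoids log(0) for empty bins.
        return [max(c / total, 1e-4) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Rule of thumb (illustrative): PSI < 0.1 stable, 0.1–0.25 monitor, > 0.25 drift.
```

Run periodically over each model input, a score like this gives operations a cheap, explainable first alarm that "the model's world no longer matches reality" before output quality visibly degrades.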

For greater automation to be accepted and integrated, Vink insists it must be human-led and machine-supported. But that doesn’t mean the human isn’t also subject to scrutiny.

How the human is faring is rarely measured with the same intensity as traffic. Austro Control’s studies with ATCOs show task complexity initially sharpens focus but then drives fatigue and error risk when sustained without adequate breaks. Automation must account for these curves and moderators rather than relying on averages.

There are also workforce concerns when integrating automation, and success requires transparent policies, controller involvement in design and clear limits on use.

“Interaction between human and machine will work well when designed around the operator’s strategy, rather than the algorithm’s elegance,” says Vink.

“To support higher human performance in the future we must also look for opportunities where automation does not play a role yet, or a limited role,” she adds. “This would help to lift roles in our network that contribute to performance but are not yet given the priority or support that ATCOs or pilots receive. Our ATM system is a very complex network of people, and all of their performance matters.”

Training


“The training load is surprisingly moderate if the user interface is right,” says Vink. “The heavy lift is the mental model alignment – helping operators know when to lean on automation and when to challenge it. At Austro Control, we train for failure modes, not just normal operation.”

For Vink, automation works when:

  • Controllers and supervisors are involved in design from day one.

  • The interface is “one-glance” and integrates with current workflows so there is no tool-hopping.

  • There are operational explanations concerning system recommendations. If ATCOs can’t see why a tool recommends an action, they’ll either under-use or over-trust it.

  • Micro-training is embedded in operations, such as 10-minute drills complementing longer classroom blocks.

  • Trust is built initially through bounded recommendations, expanding as confidence grows.


Reliability


Automation also depends on inputs that are not missing, late or biased. Systems must be reliable, and Vink highlights two key dimensions.

The first is availability: whether an automated system is constantly available and fast enough in operation. Advisory planning tools should be high-availability and low-latency, with minimal delay between input and response. The second is functional reliability: a system must behave within known bounds even in extreme cases.

“Practically, that means there should be shadow-mode back-testing on large historical datasets and live parallel runs with operators before activation,” says Vink. “We also need continuous drift and performance monitoring, and versioned, auditable models with quick rollback. Numbers are derived from functional hazard assessments, so the key is that reliability is engineered and evidenced, not assumed.”

This touches on the notion of graceful degradation – essentially, defaulting to conservative, safe rules.
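A minimal sketch of that idea, with entirely hypothetical names and numbers: an advisory tool returns its model's suggestion only when the output exists and the inputs are fresh, and otherwise falls back to a fixed, pre-validated conservative rule.

```python
from typing import Optional

# Hypothetical example: a predictive tool advises a sector capacity.
CONSERVATIVE_CAPACITY = 30  # fixed, pre-validated safe declared capacity

def advised_capacity(prediction: Optional[int],
                     input_age_s: float,
                     max_age_s: float = 10.0) -> int:
    """Graceful degradation: if the model output is missing or its inputs
    are stale, fall back to the conservative static rule."""
    if prediction is None or input_age_s > max_age_s:
        return CONSERVATIVE_CAPACITY
    # Never advise beyond the conservative bound plus a validated margin.
    return min(prediction, CONSERVATIVE_CAPACITY + 10)

print(advised_capacity(38, input_age_s=2.0))    # → 38 (fresh data: model advisory, capped)
print(advised_capacity(None, input_age_s=2.0))  # → 30 (missing output: safe default)
print(advised_capacity(42, input_age_s=30.0))   # → 30 (stale inputs: safe default)
```

The design choice is that the fallback is boring and pre-validated: degradation never invents new behaviour, it only narrows the system back to rules the operation already trusts.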

As for automation during disruption, making sense of what is happening is automation’s role, says Vink. Automation should triage and declutter, highlight conflicts and any other essential information, then get out of the way.

“In real emergencies, expertise, improvisation, and cross-coordination are human strengths,” she concludes. “Automation should stabilise the basics so the team can lead the recovery.

“Keep humans in charge!”
