Cockpit of the AgustaWestland AW139 helicopter, by user Jet Request [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons.
Piloting a helicopter is not an easy task. To support the pilot, automated systems such as stability augmentation systems, autopilots, or flight management systems have been developed and are used throughout the aviation domain. While these systems can be a great asset, many issues and potential pitfalls must be considered during their design and development. This blog post briefly describes some of the more widely known issues with such systems, known as the “Ironies of Automation”.
Ironies of Automation
The so-called “Ironies of Automation”, as introduced by Lisanne Bainbridge in 1983 [1], describe pitfalls of introducing automation into human control tasks. Originating in the industrial process control domain, the discussed ironies also apply to the generalized case of a (complex) technical system controlled by a human operator. Bainbridge’s paper details a number of problems that arise from reallocating control responsibilities from the human operator to automated systems.
I want to highlight some of the ironies of automation that must be considered when developing and evaluating automated helicopter systems. This discussion is based on Bainbridge’s original paper [1]; reviews of the current state of these problems are available, for example, in [2] or [3]. Beyond these ironies, there are further issues that must be considered when designing or evaluating automated systems but are outside the scope of this short blog post – interface design principles, the choice of metrics for system evaluation, and additional automation issues such as opaqueness and brittleness.
1. Tasks after automation
Often, automated systems are designed for a certain task and specific conditions. While the automation controls the system directly, the human’s task is to monitor the actions of the automation. In case of unanticipated automation or system behaviour, or unanticipated external conditions, the human operator is supposed to take over from the automation and control the system directly, or in a state of reduced automation capabilities. The human operator’s task is then no longer predominantly a manual control task, but a supervisory control task. This kind of responsibility allocation leads to several problems.
1.1 Manual control skills
“Physical skills deteriorate if they are not used” [1]. If the operator must take over after unusual system behaviour, he might no longer have much experience in handling the system manually. This, in turn, can increase the time delay of the operator’s response and make his control gain less appropriate (in manual control tasks), leading to longer decision times or “worse” responses. Complicating things further, the situations that require human intervention will most likely be abnormal ones the automation could no longer handle, requiring not only nominal system control capabilities, but the ability to control a system that is behaving abnormally.
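The effect of a slower response can be made concrete with a toy feedback loop (a sketch of my own, not taken from Bainbridge’s paper): the same proportional control strategy that damps a disturbance when applied immediately destabilizes the system once a response delay is added.

```python
# Toy illustration (my own sketch, not from the cited sources): a first-order
# system x' = u under delayed proportional control u(t) = -gain * x(t - delay).
# The delay stands in for a rusty operator's slower response.
def peak_deviation(delay_steps, gain=1.5, dt=0.1, steps=200):
    """Simulate the loop and return the largest deviation from the setpoint."""
    history = [1.0] * (delay_steps + 1)  # start displaced from the setpoint
    for _ in range(steps):
        observed = history[-(delay_steps + 1)]  # operator acts on stale state
        history.append(history[-1] + dt * (-gain * observed))
    return max(abs(x) for x in history)

print(peak_deviation(delay_steps=0))   # immediate response: deviation only decays
print(peak_deviation(delay_steps=30))  # 3 s response delay: growing oscillations
```

With no delay the deviation shrinks monotonically from its initial value, while a three-second delay makes the operator repeatedly over-correct based on stale information – the peak deviation grows well beyond the initial disturbance.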
1.2 Cognitive skills
“(…) An operator will only be able to generate successful new strategies for unusual situations if he has an adequate knowledge of the process” [1]. Long-term knowledge retrieval depends on frequent use. As the operator is only expected to take over control in rarely occurring unusual situations, he might lack the opportunities to employ his knowledge, hindering his ability to retrieve it efficiently. In addition, an operator with mainly supervisory responsibilities will seldom have opportunities to build up a knowledge base about the system behaviour. Therefore, both the operator’s ability to build up system knowledge and his ability to employ it might be impaired by introducing automation.
In addition to long-term knowledge storage, the operator of a complex system keeps short-term system characteristics and system states in working memory. This knowledge is used to make predictions about future control actions and their effects. “This information takes time to build up. The implication of this (…) is that the operator who has to do something quickly can only do so on the basis of minimum information” [1]. After taking over from the automation, the operator will therefore need some time to acquire knowledge of the current system state.
1.3 Monitoring
Humans are not built for the task of monitoring a system. “Vigilance” studies show “(…) that it is impossible for even a highly motivated human being to maintain effective visual attention towards a source of information on which very little happens, for more than about half an hour” [1]. Trying to solve this problem by introducing automated alarms only shifts the problem one level higher (“who monitors the alarm system?”). When an automated system runs long enough without incident, it can also induce “automation complacency”: the tendency to trust the automated system even in situations where human intervention is necessary.
In addition, in cases where the automated system outperforms the human operator in nominal conditions, it can be hard for the operator to distinguish appropriate system behaviour from abnormal states. It might also be hard for him to understand the system’s actions: in nominal conditions, the automation performs the task at hand “better”, possibly taking more input data or system dynamics into consideration than the human operator can.
2. Operator attitudes
Introducing automation will affect the skills and tasks of the human operator, but it might also impact his satisfaction with the task at hand and his health. It can be hard for operators to build up and maintain manual control skills that are no longer used. This can lead to situations as described in [4], where fast process dynamics, a high frequency of actions, and inadequate skills or options to control the system correspond to high stress levels, high workload, and poor operator health.
Conclusion
Several “ironies of automation” must be considered while designing automated systems. Inducing automation complacency, vigilance problems, or a degradation of piloting skills should be avoided. The existence of these issues warrants analysing human-automation interaction in the helicopter domain, to enable the development of resilient and effective automation systems that reliably support the pilot, ideally also in unanticipated situations and emergencies.
[1] L. Bainbridge, “Ironies of automation,” Automatica, vol. 19, no. 6, pp. 775–779, Nov. 1983.
[2] G. Baxter, J. Rooksby, Y. Wang, and A. Khajeh-Hosseini, “The ironies of automation: still going strong at 30?,” in Proceedings of the 30th European Conference on Cognitive Ergonomics – ECCE ’12, 2012, p. 65.
[3] B. Strauch, “Ironies of Automation: Still Unresolved After All These Years,” IEEE Trans. Human-Machine Syst., pp. 1–15, 2017.
[4] C. L. Ekkers, C. K. Pasmooij, A. A. F. Brouwers, and A. J. Janusch, “Human control tasks: A comparative study in different man–machine systems,” in Case Studies in Automation Related to Humanization of Work, Elsevier, 1979, pp. 23–29.