Abstract
The main goal of any feedback control system is, in essence, to remove humans from the loop; this has always been the aim in the engineering of control systems.
The MAPE-K loop embodies the feedback control loop in self-adaptive software systems, yet the complete removal of humans from that loop has not been thoroughly debated.
One of the reasons is that software systems are socio-technical systems, and as such, humans need to be considered from the inception of these systems; otherwise, their deployment is bound to fail.
However, as software self-adaptation matures, enabling higher assurances to be placed on the deployment of these systems, to the point where humans become dispensable, some ethical questions need to be raised.
Similar questions were raised in the past, when the first automatic systems became intrinsic to the industrial fabric. The difference between then and now is that the impact was then confined to portions of society, whereas now the implications are much wider, particularly if we consider software systems that are able to change themselves. If humans are not aware of those changes and their implications, they cease to be in tune with the system they are operating, and accidents will inevitably ensue. The point of no return in self-adaptive software systems refers to the moment in their technical maturity when any human involvement in the operation of a system is perceived to create more harm than benefit. Confronted with this situation, software engineers need to start asking themselves some basic ethical questions. Do we really need to consider humans an integral part of self-adaptive software systems? If humans are removed from the control loop, what kind of assurances will society need in order to accept such systems?