While decision authority within IT environments currently resides with human operators, this paradigm may not remain feasible as the size and complexity of the IT landscape continue to grow. To keep pace with this accelerating growth, augmenting the work of human operators with autonomous AI agents will become unavoidable. Agent-based architectures, which handle non-deterministic scenarios by integrating specialized AI agents capable of perceiving their environment, making decisions, and taking actions to achieve specific goals, are poised to become more prevalent across industry.
The Endsley Situational Awareness model, which decomposes awareness into three levels (perception of elements in the environment, comprehension of their meaning, and projection of their near-term status), provides a useful framework for deciding how best to enable agent autonomy. As organizations come to appreciate the capabilities of these AI systems, the role of humans in the decision-making process will undergo a significant transformation.
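To make the mapping concrete, the sketch below expresses the three Endsley levels as a simple data structure that agent outputs can populate. This is a minimal, illustrative sketch; the class name, field names, and example values are assumptions, not an established schema.

```python
from dataclasses import dataclass

@dataclass
class SituationalAwareness:
    """The three Endsley levels, expressed as fields an agent must populate."""
    perception: list[str]   # Level 1: raw observations from the environment
    comprehension: str      # Level 2: what those observations mean right now
    projection: str         # Level 3: what is likely to happen next

# Hypothetical example: an IT-operations agent summarizing a degrading service.
sa = SituationalAwareness(
    perception=["p99 latency up 40%", "error rate 2.3%", "pod restarts rising"],
    comprehension="Checkout service is degrading under rising load",
    projection="Error budget likely exhausted within two hours",
)
print(sa.projection)
```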
This transition isn’t just a matter of technological capability but also of necessity. According to a study by Oracle, 70% of business leaders would trust a robot more than a human to make financial decisions. This startling statistic reflects a growing recognition that AI systems may be better equipped to handle certain complex decision-making tasks, particularly in data-rich environments such as IT operations.
Initially, humans will remain in the loop, actively overseeing and guiding the actions of AI agents to ensure accuracy and alignment with predefined goals and ethical standards. AI agents will be responsible for perceiving the environment, analyzing the data to determine its significance, and projecting what may happen next, but humans will retain decision authority. Having teams of agents produce recommendations for human consideration is a crucial first step in building trust and confidence in the AI’s capabilities, and it allows human operators to intervene whenever necessary.
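A human-in-the-loop arrangement might look like the following sketch: agent teams fill in the three situational awareness levels plus a proposed action, and a human operator grants or withholds approval for each one. All names here (Recommendation, human_in_the_loop) are hypothetical illustrations, not a prescribed interface.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    perception: list[str]   # Level 1: observations the agent team gathered
    comprehension: str      # Level 2: what the team believes is happening
    projection: str         # Level 3: what the team expects to happen next
    proposed_action: str    # remediation offered for human consideration

def human_in_the_loop(recommendations: list[Recommendation]) -> list[str]:
    """Humans retain decision authority: nothing runs without explicit approval."""
    approved = []
    for rec in recommendations:
        print(f"Observed:     {rec.perception}")
        print(f"Assessment:   {rec.comprehension}")
        print(f"Projection:   {rec.projection}")
        print(f"Proposed fix: {rec.proposed_action}")
        # The human operator, not the agent, makes the final call.
        if input("Approve this action? [y/N] ").strip().lower() == "y":
            approved.append(rec.proposed_action)
    return approved
```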
As the AI agents work together and demonstrate increasing reliability and effectiveness, humans will transition to on-the-loop roles, providing oversight and intervening only when required. The same teams of agents will still produce recommendations structured by perception, comprehension, and projection; however, a dedicated decision agent will weigh these insights and choose which course of action to take. Maintaining this human-centric situational awareness structure will be critical for building trust in the AI agent’s decision making, because it allows humans to follow the “thinking” of the AI agent.
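An on-the-loop arrangement shifts the default: the decision agent acts on high-confidence recommendations autonomously and escalates to the human only when its confidence falls short. The sketch below is one plausible shape for this, assuming agents report a confidence score; the names, the escalation hook, and the 0.9 threshold are all illustrative choices, not a fixed design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    comprehension: str    # what the agent team believes is happening
    projection: str       # what it expects to happen next
    proposed_action: str
    confidence: float     # self-reported confidence in [0.0, 1.0]

def on_the_loop(
    recommendations: list[Recommendation],
    escalate: Callable[[Recommendation], bool],
    threshold: float = 0.9,
) -> list[str]:
    """A decision agent acts autonomously; the human is consulted only when required."""
    executed = []
    for rec in recommendations:
        if rec.confidence >= threshold:
            # High confidence: the decision agent acts without waiting for a human.
            executed.append(rec.proposed_action)
        elif escalate(rec):
            # Low confidence: hand the decision to the human on the loop.
            executed.append(rec.proposed_action)
    return executed

# The escalation hook is where the on-the-loop human plugs in.
def ask_operator(rec: Recommendation) -> bool:
    print(f"Escalated: {rec.comprehension} -> {rec.proposed_action}")
    return input("Authorize? [y/N] ").strip().lower() == "y"
```

Keeping the same recommendation structure across both modes preserves the situational awareness framing: when an escalation does reach the operator, it presents the same perception-to-projection chain they reviewed while they were still in the loop.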
Trust is not a binary state but a continuum that develops over time through consistent, reliable performance. Research by Zhang, Liao, & Bellamy (2020) on AI-assisted decision making has shown that providing explanations for AI recommendations significantly improves accuracy and trust calibration. Structuring the AI’s decision-making process in human terms can further accelerate the development of trust, making human oversight feel natural rather than investigative.