The rapid evolution of agent-based systems will likely disrupt traditional software architectures as AI agents become increasingly capable of performing a broad spectrum of tasks. As applications begin to incorporate more agents, common multi-agent architecture patterns will emerge, crystallizing a need to:
- Document and test each architecture pattern's applicability and the use cases it enables
- Establish enterprise functions for agent orchestration, monitoring, management, security, and more
- Methodically establish best practices that balance use case, value, compute cost, and architectural complexity
The adoption of agent-oriented architectures may signify a shift away from service-oriented and API-based architectures. While API integrations will remain necessary for certain types of transactions and functions (e.g., deterministic and recurring ones), agent-oriented architectures could lend themselves well to nondeterministic functions and tasks, especially tasks that haven't been tackled before. One day a team of agents may even collectively decide which GenAI algorithm to use for their next project, almost like picking the best "brain" for the task.
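As a concrete illustration of that split, the routing decision might look like the sketch below. The `Task` type, `call_api`, and `delegate_to_agent` are hypothetical placeholders invented for this example, not an established API:

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    """Hypothetical unit of work; fields are illustrative assumptions."""
    name: str
    deterministic: bool  # True for well-defined, recurring operations
    payload: dict = field(default_factory=dict)


def call_api(task: Task) -> str:
    # Placeholder for a conventional REST/API integration.
    return f"api:{task.name}"


def delegate_to_agent(task: Task) -> str:
    # Placeholder for handing open-ended work to an AI agent.
    return f"agent:{task.name}"


def route(task: Task) -> str:
    """Send deterministic, recurring work to APIs; nondeterministic work to agents."""
    return call_api(task) if task.deterministic else delegate_to_agent(task)
```

In practice the `deterministic` flag would be replaced by richer task metadata, but the principle holds: the architecture keeps cheap, predictable API paths for known transactions and reserves agents for novel work.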
Given the utility and impact of agent-oriented systems, agentic components are likely to become pervasive across applications and software systems in the foreseeable future. Commercial software products will likely adopt agent orientation and provide access not just to APIs but also to agents for complex interactions and functions. Leveraging these new capabilities to solve customer problems and achieve product-market fit will be critical for technology businesses.
Numerous challenges remain unaddressed regarding agent-to-agent communication protocols, mechanisms for agent discovery and registration, skill refinement based on environmental feedback, and more. For example, current agent solutions are predominantly designed for human interaction, such as conversational AI in natural language, rather than the machine-oriented communication typified by web API calls.
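To make the discovery-and-registration gap concrete, here is a minimal sketch of how it might work, assuming a simple in-memory registry and an invented `AgentCard` descriptor (both names are illustrative assumptions, not part of any standard):

```python
from dataclasses import dataclass, field


@dataclass
class AgentCard:
    """Descriptor an agent publishes when it registers (hypothetical schema)."""
    name: str
    skills: set = field(default_factory=set)
    endpoint: str = ""


class AgentRegistry:
    """Toy in-memory registry: agents register themselves; peers discover them by skill."""

    def __init__(self):
        self._agents: dict[str, AgentCard] = {}

    def register(self, card: AgentCard) -> None:
        # Registration is idempotent: re-registering replaces the prior card.
        self._agents[card.name] = card

    def discover(self, skill: str) -> list[AgentCard]:
        # Return every registered agent advertising the requested skill.
        return [c for c in self._agents.values() if skill in c.skills]
```

A production version would need the open questions the text raises: a shared wire protocol, authentication of registrants, and a way for skill claims to be validated against actual performance.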
To safely and productively operationalize agentic AI systems, organizations will likely focus on basic risk mitigation strategies: thorough testing and validation of agentic AI systems before deployment, and protocols for human oversight and intervention when necessary. New research into data governance and ethical standards related to autonomy will help prevent misuse of agents or ethical breaches that create financial and reputational damage. Effective control systems will be urgently needed to monitor and regulate agents' behavior, preventing them from deviating from their intended functions or engaging in harmful activities. Research on auditing platforms and traceability techniques will be required to log every interaction agents have with each other.
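One way such traceability might be approached is an append-only, hash-chained log of agent-to-agent interactions, where each entry embeds the hash of the previous one so that tampering is detectable. The `AuditLog` class below is a toy sketch under those assumptions, not a production auditing platform:

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only, hash-chained log of agent-to-agent interactions."""

    GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def record(self, sender: str, receiver: str, message: str) -> None:
        # Each entry commits to the previous entry's hash, forming a chain.
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"ts": time.time(), "sender": sender, "receiver": receiver,
                "message": message, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        # Recompute every hash; any edit to a past entry breaks the chain.
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "sender", "receiver", "message", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

The design choice here is tamper evidence rather than tamper prevention: the log does not stop an agent from misbehaving, but it guarantees that the record of what happened cannot be silently rewritten, which is the property auditors need.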
It's also important to consider that the decentralized nature of agentic AI systems may be especially difficult to reconcile, at least initially, with the need for strict accountability within the federal sector. Ensuring that autonomous agents operate inside defined ethical and legal boundaries will require building robust governance frameworks from the outset. The scalability of such systems must also be carefully managed to handle the vast amounts of data and complex interactions characteristic of federal operations.
There aren't yet production systems with hundreds of agents working together at once, but agentic AI is a revolutionary leap in innovation, and it is moving so fast that organizations should start experimenting with these tools now, in a sandbox environment, to test their capacity to understand and govern agents effectively. By implementing the right prototyping, testing, and risk mitigation strategies, organizations will position themselves to harness the expansive technical benefits of agentic AI while ensuring that these systems operate in ways that are ethical, transformational, and safe.