Early in my career, I worked on a program that endeavored to use neural networks to teach very early generation autonomous subs how to navigate on their own. That project failed to achieve its objective, but it clued me in to an important point about AI. The foundational vector and scalar math we were using for the navigation neural network was basically the same as the foundation for today’s AI systems. What we didn’t have was the requisite data storage capacity and compute power to adequately train the neural networks to achieve the desired results.
Today, the compute power and data storage capacities available to federal agencies are exponentially greater, and advances in GenAI have opened new pathways to innovation. To return to the challenge of self-navigating machines, in November 2024 MIT Technology Review reported that a trio of researchers had used GenAI models and a physics simulator to teach a robotic dog to climb stairs and clamber over a box without first training the robot on real-world data. This is one example of how AI, when smartly paired with other technologies, can radically redefine the art of the possible.
To scale AI so that it can be applied to the biggest challenges our nation faces, from managing the national debt to outpacing near-peer adversaries in the race for technological supremacy, the government should look beyond the algorithms and AI models and invest in the infrastructure that will enable AI to flourish. Just as networks of fiber optic cables laid the foundation for the internet boom, the following steps will position agencies to unleash the full power of AI.
- Prioritize data: Connect all enterprise networks and devices (e.g., satellites, drones) to collect and store as much operational technology (OT) data as possible. Additionally, expand the volume of information technology (IT) data that is retained. Both OT and IT data can and should be used for AI model training and simulation. The more clean, relevant data you feed a model, the better it can tune its parameters, and the more accurate and capable it becomes.
- Leverage software-defined environments: Apply software engineering practices to transform hardware-dependent systems into dynamic, software-defined environments. Certain organizations have used software-defined processes to automate complex tasks in enterprise resource planning systems. Now, it’s time to extend this concept to all aspects of physical and virtual deployments.
- Use digital twins: Create realistic virtual environments that can host simulations and testing, then use AI to optimize the performance of the systems they mirror (see the sketch after this list).
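To make the digital-twin idea concrete, here is a minimal sketch in Python. The cooling-unit model, its constants, and the random-search stand-in for the AI layer are all illustrative assumptions, not a production pattern; the point is that a twin lets you test hundreds of control settings virtually before touching the real system.

```python
import random

# Illustrative digital twin: a toy thermal model of a server-room cooling unit.
# The physics and every constant below are simplifying assumptions for this sketch.
def cooling_twin(fan_speed, hours=24):
    """Simulate worst-case temperature and energy use for a fan speed in [0, 1]."""
    temp, energy, worst_temp = 30.0, 0.0, 30.0
    for _ in range(hours):
        heat_load = 5.0 + 3.0 * random.random()   # synthetic workload heat
        cooling = 9.0 * fan_speed                  # cooling scales with fan speed
        temp += 0.1 * (heat_load - cooling)        # first-order thermal response
        energy += fan_speed ** 2                   # fan power grows nonlinearly
        worst_temp = max(worst_temp, temp)
    return worst_temp, energy

# Stand-in for the AI layer: random search over the twin for the lowest-energy
# fan speed that keeps the worst-case temperature at or below 32 degrees.
best = None
for _ in range(500):
    speed = random.uniform(0.2, 1.0)
    worst, energy = cooling_twin(speed)
    if worst <= 32.0 and (best is None or energy < best[1]):
        best = (speed, energy)

if best:
    print(f"recommended fan speed: {best[0]:.2f}, simulated energy cost: {best[1]:.1f}")
```

In practice the stand-in physics would be replaced by a validated model of the real asset, and the random search by a proper learning algorithm, but the workflow (simulate, score, select) stays the same.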
This paradigm can be understood in terms of something I call the modern technology flywheel. Imagine a frictionless enterprise, where every piece of information that’s collected feeds into a virtual machine. That virtual machine uses digital twins to train AI models through thousands or even millions of simulated outcomes. It then pushes those models out to edge devices or back into the cloud, where the models learn from real-world deployments and feed those insights back into the real and synthetic data environments. The sequence repeats over and over: each turn generates more data that improves the AI and the system software, and that new data optimizes the next turn of the wheel. By harnessing this intricate but achievable paradigm, government agencies can achieve the flexibility, scale, and acceleration essential to unleashing AI and tech at full speed.
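As a rough illustration of how those stages connect, the sketch below strings several turns of the flywheel together in Python. Every function name here is a hypothetical stand-in, and the "model" is deliberately trivial (a running mean); the aim is only to show how field data, synthetic data from the twin, and insights from deployment feed each successive turn.

```python
import random

# All function names and the toy "model" below are illustrative assumptions.
def collect_operational_data(n=100):
    """Stand-in for OT/IT data gathered from sensors, devices, and enterprise systems."""
    return [random.gauss(10.0, 2.0) for _ in range(n)]

def simulate_in_digital_twin(real_data, n_synthetic=1000):
    """Generate synthetic scenarios in the twin, seeded by real observations."""
    mean = sum(real_data) / len(real_data)
    return [random.gauss(mean, 3.0) for _ in range(n_synthetic)]

def train_model(samples):
    """Toy 'training': the model is just the mean of the combined data."""
    return sum(samples) / len(samples)

def deploy_and_observe(model, n=50):
    """Stand-in for edge or cloud deployment; returns new real-world observations."""
    return [random.gauss(model + 0.5, 2.0) for _ in range(n)]

# Each turn of the flywheel: collect, simulate, train, deploy, feed insights back.
real_data = collect_operational_data()
for turn in range(5):
    synthetic = simulate_in_digital_twin(real_data)
    model = train_model(real_data + synthetic)
    real_data.extend(deploy_and_observe(model))    # deployment insights feed the next turn
    print(f"turn {turn + 1}: model estimate = {model:.2f}, data pool = {len(real_data)} records")
```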
The individual components that drive the flywheel (real and synthetic data, software-defined environments, digital twins) have matured on different paths, which is why the flywheel’s outsized potential wasn’t fully recognized until recently. Its interdependencies are also tremendously complex. But each time the flywheel turns, performance gets better and the enterprise benefits, growing more efficient, more innovative, and less vulnerable to disruption.