Picture a pair of glasses, nondescript in appearance but purpose-built for frontline LEOs. When activated and worn, the glasses biometrically authenticate the wearer, granting secure access to advanced capabilities. Mission-specific data, such as suspect profiles, maps, license plates, or building floor plans, is pushed to the device, enabling offline operation during gaps in wireless coverage.
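To make the pattern concrete, the sketch below shows one way the authenticate-then-preload flow could be modeled in software. Every name here (MissionPackage, SmartGlasses, the hash check) is a hypothetical placeholder rather than a real device API; an actual product would match liveness-tested biometric templates inside secure hardware.

```python
from dataclasses import dataclass, field
import hashlib


@dataclass
class MissionPackage:
    """Mission data pushed to the glasses before deployment so the
    device keeps working through gaps in wireless coverage.
    (Hypothetical schema for illustration.)"""
    mission_id: str
    suspect_profiles: list[dict] = field(default_factory=list)
    license_plates: list[str] = field(default_factory=list)
    maps: list[bytes] = field(default_factory=list)
    floor_plans: list[bytes] = field(default_factory=list)


class SmartGlasses:
    """Capabilities stay locked until the wearer authenticates;
    mission data is then cached locally for offline use."""

    def __init__(self) -> None:
        self._unlocked = False
        self._cache: dict[str, MissionPackage] = {}

    def authenticate(self, biometric_sample: bytes, enrolled_hash: str) -> bool:
        # Stand-in check: a real device would match a liveness-tested
        # biometric template in secure hardware, not hash raw bytes.
        self._unlocked = hashlib.sha256(biometric_sample).hexdigest() == enrolled_hash
        return self._unlocked

    def preload(self, package: MissionPackage) -> None:
        if not self._unlocked:
            raise PermissionError("wearer not authenticated")
        self._cache[package.mission_id] = package  # now available offline
```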
Through a crisp, wide display, the LEO sees mission-relevant information overlaid on their field of vision, ranging from route guidance and traffic flow to the locations of suspects and team members, and is alerted to threats in the vicinity. During an operation, the glasses visually identify "friend or foe" at range, reducing the chance of friendly fire on fellow on- and off-duty LEOs. The glasses also estimate crowd size and density and identify key characteristics such as movement patterns. Facial recognition is performed on-device against the preloaded suspect profiles, eliminating inefficient round trips to remote servers while protecting the privacy of law-abiding citizens. A simple verbal command prompts the glasses to capture and digitally catalog data along with its surrounding context in time and space, information that supports future forensic analysis.
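On-device recognition of this kind is commonly implemented by comparing compact face embeddings against a preloaded gallery, so no imagery of non-matching faces ever leaves the device. The sketch below assumes an upstream model has already produced the embeddings; the function name and the 0.6 threshold are illustrative.

```python
import numpy as np


def match_face(embedding: np.ndarray,
               gallery: np.ndarray,
               suspect_ids: list[str],
               threshold: float = 0.6) -> str | None:
    """Compare one face embedding from the glasses' camera against a
    preloaded suspect gallery (shape: num_suspects x embedding_dim).
    Runs entirely on-device; non-matching faces are simply discarded."""
    # Normalize so dot products become cosine similarities.
    query = embedding / np.linalg.norm(embedding)
    refs = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = refs @ query
    best = int(np.argmax(scores))
    # Below the threshold, the face is treated as "no match".
    return suspect_ids[best] if scores[best] >= threshold else None
```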
With onboard scene intelligence, a pair of AI-powered smart glasses continuously maintains a temporal-spatial simulation of the world around it. The glasses stream delta updates of their local “world understanding” to C2, where they are fused with geographic information system data and other sources, such as drone and closed-circuit television (CCTV) feeds, to create a live, centralized digital twin of the area of coverage. This provides a unified view along with historical analytics and predictive simulation to support data-driven decisions, and the resulting central state is regularly synchronized back to LEOs, giving each individual real-time, swarm-like insight. Quickly attaching an add-on sensor lets the glasses detect beyond the visible spectrum, including the presence of chemical, biological, or nuclear material, so the LEO can see and instantaneously report their findings. All of this happens without a word being said.
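A minimal sketch of that delta-sync loop might look like the following, assuming JSON-serializable entity states and a simple last-writer-wins merge; a fielded system would need real conflict resolution, provenance tracking, and security. All class and field names here are hypothetical.

```python
import json
import time


class LocalWorldModel:
    """The glasses keep a local entity map and ship only what changed
    since the last sync; C2 fuses deltas from every device and pushes
    the merged digital twin back down."""

    def __init__(self, device_id: str) -> None:
        self.device_id = device_id
        self.entities: dict[str, dict] = {}  # entity_id -> latest state
        self._synced: dict[str, dict] = {}   # state as of the last upload

    def observe(self, entity_id: str, state: dict) -> None:
        self.entities[entity_id] = {**state, "ts": time.time()}

    def delta(self) -> str:
        """Serialize only entities that changed since the last sync
        (states are assumed to be JSON-serializable)."""
        changed = {eid: s for eid, s in self.entities.items()
                   if self._synced.get(eid) != s}
        self._synced.update({eid: dict(s) for eid, s in changed.items()})
        return json.dumps({"device": self.device_id, "updates": changed})

    def apply_central_state(self, payload: str) -> None:
        """Merge the fused state pushed back down from C2. Last-writer-
        wins by timestamp; a real system would need proper conflict
        resolution."""
        for eid, state in json.loads(payload)["updates"].items():
            if state["ts"] >= self.entities.get(eid, {}).get("ts", 0):
                self.entities[eid] = state
```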
The underlying data streams can be captured as a historical record, invaluable not only for after-action reports but also for training new recruits and synthesizing scenarios that have yet to occur.
In addition to this passive assistance, LEOs can actively issue voice queries, asking questions and giving directions such as: What is the last observed location of the suspect? Are there civilians in the area? Get me an aerial view of where 911 calls related to this incident are coming from. What is the quickest and safest route out of the area? With an understanding of both individual and group contexts, the glasses can respond in milliseconds with answers tailored to the situation at hand. LEOs can also request up-to-date policies and procedures, which can be difficult to recall during or after stressful events.
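As a rough illustration of how shared context shapes an answer, the toy router below substitutes keyword matching for real intent recognition; a production system would pair a language model with tool calls over the fused digital twin. The Context fields and canned responses are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Context:
    """Individual and group context available to the query handler."""
    wearer_location: tuple[float, float]
    team_locations: dict[str, tuple[float, float]]
    last_suspect_sighting: tuple[float, float] | None


def answer_query(transcript: str, ctx: Context) -> str:
    """Keyword matching stands in for intent recognition here, purely
    to show how context is folded into the response."""
    text = transcript.lower()
    if "last observed location" in text:
        if ctx.last_suspect_sighting is None:
            return "No confirmed sighting yet."
        lat, lon = ctx.last_suspect_sighting
        return f"Suspect last observed at {lat:.5f}, {lon:.5f}."
    if "civilians" in text:
        return "Checking fused drone and CCTV feeds near your position."
    return "Query not recognized; please rephrase or contact dispatch."
```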
The construction and deployment of such a platform lies within the realm of possibility. Commercially available smart glasses such as the Ray-Ban Meta collection, Snap Spectacles, or DigiLens Argo™ feature high-resolution electro-optical sensors and a Qualcomm Snapdragon® processor that can run AI models in parallel with a full-fledged physics-based simulation at 60 frames per second. They have the wireless connectivity for low-latency reachback to secure infrastructure, and they can already perform speech-to-text, text-to-speech, and computer vision while streaming voice and video. The next generations of these glasses will offer even more. Making them mission-effective comes down to stitching the underlying technologies together seamlessly to serve a purpose.
Law enforcement missions run the gamut of public safety, from facilitating the lawful movement of people and goods to disrupting illicit activity and responsively investigating innumerable tips and leads. Across that spectrum, the primary challenge boils down to data fusion and distribution, and to the user experience: any information relayed to a frontline LEO must be presented in a way that highlights relevant context and removes noise without becoming a distraction or a cognitive burden.
To be effective in the field, this platform will have to be built under tight size, weight, and power limits. LEOs will not wear glasses that are fragile, require frequent battery changes, or are tethered to cords that inhibit movement. Many LEOs are already overloaded with equipment: weapons, body cameras, first aid kits, lights, and more. Updates to the devices, client apps, private cloud microservices, and corresponding infrastructure must be seamless and scalable as utilization increases, making “software-defined everything” (SDE) a necessity. If the platform is not overwhelmingly and obviously effective, adoption will be difficult, if not impossible, to achieve.
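One way to read “software-defined everything” in this setting is that every layer of the stack, from device firmware to cloud microservices, is pinned in a single declarative manifest, so an update becomes a data change pushed through a staged rollout rather than a hand-run procedure. The sketch below is illustrative only; the field names are not a real product API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ReleaseManifest:
    """One declarative manifest pins every layer of the stack.
    (Illustrative fields, not a real product API.)"""
    firmware_version: str
    client_app_version: str
    microservice_images: dict[str, str]  # service name -> container tag
    rollout_fraction: float              # staged rollout; 0.05 = 5%


def wave_size(manifest: ReleaseManifest, fleet_size: int) -> int:
    """Number of devices updated in the current rollout wave."""
    return max(1, int(fleet_size * manifest.rollout_fraction))


canary = ReleaseManifest(
    firmware_version="2.4.1",
    client_app_version="1.9.0",
    microservice_images={"fusion": "fusion:7e1c", "twin": "twin:a94f"},
    rollout_fraction=0.05,
)
print(wave_size(canary, fleet_size=1200))  # 60 devices in the canary wave
```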
Furthermore, in an emergent situation like executing a warrant or pursuing a suspect, the glasses must be able to anticipate what the LEO needs. Integrating multimodal AI will significantly increase the value proposition by directly supporting task execution. For law enforcement missions, glasses that leverage AI's advanced analytical abilities to proactively identify what a specific situation demands (information about the surrounding area, requests for backup) rather than solely waiting for verbal commands will enhance LEOs' SA and accelerate decision making in critical situations.
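Anticipation could start as simply as a rule layer over fused multimodal signals that surfaces unprompted suggestions, with learned models replacing the rules over time. The signal names and thresholds below are invented for illustration.

```python
def proactive_prompts(signals: dict) -> list[str]:
    """Fused multimodal signals (vision, audio, location) trigger
    unprompted suggestions instead of waiting for a verbal command.
    Signal names and thresholds are invented for illustration."""
    prompts: list[str] = []
    if signals.get("crowd_density", 0.0) > 0.8:
        prompts.append("Dense crowd ahead; alternate route marked.")
    if signals.get("suspect_match_confidence", 0.0) > 0.9:
        prompts.append("High-confidence suspect match; request backup?")
    if signals.get("shots_detected"):
        prompts.append("Gunfire detected; nearest cover highlighted.")
    return prompts


# Example: during a pursuit, fused perception feeds the rule layer.
print(proactive_prompts({"crowd_density": 0.85, "shots_detected": True}))
```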