Good morning. As Jenny said, in our first panel we're gonna talk about artificial intelligence and how it applies to Space Domain Awareness. What I first want to do is introduce our three panelists, starting with Lieutenant Colonel Ashton Harvey. Colonel Harvey is currently the Chief Technology Officer at the National Reconnaissance Office, specifically in the NRO Ground Systems program office. Prior to that, he was a Service Chiefs Fellow at DARPA, and he's held many roles across the Department of Defense and the Air Force. He also holds a Ph.D. in engineering and operations research from George Mason University. Welcome, Colonel Harvey.

Next up is Major Sean Allen. The major is currently at Space Systems Command, where he has the honor of being the inaugural chief of the SDA TAP Lab. Prior to that, he was a mission director at the NSDC. He did a lot of work on hardware and software prototyping when he was at the Space Security and Defense Program, and he also worked a lot on OPIR and SAC systems. Welcome, Major Allen.

And our third panelist is Dr. Pat Biltgen. Dr. Biltgen is a leader in Booz Allen's artificial intelligence group, where they apply artificial intelligence across their defense, intelligence, and space clients. He spent more than 15 years in the defense industrial base working on lots of national security missions. We have the honor of noting that he just released his book on AI for defense and intelligence, which is out there on Amazon, so I'm sure we're all gonna quickly pick up a copy in the lobby or rush out to read it so we can talk to Pat about it. Dr. Biltgen holds a bachelor's, a master's, and a Ph.D. in aerospace engineering from Georgia Tech. Welcome, Dr. Biltgen. Thank you.

So, before we dive into this topic of artificial intelligence, I want to say it's fascinating to me that we're here talking about this. Growing up as a kid prior to the internet, prior to cell phones, mobile phones as we know them today, the only thing we ever heard about artificial intelligence was when we were watching Star Trek every week and heard Captain Kirk and the crew talk about this all-powerful computer with all this intelligence on the Enterprise. And so I find it amazing that we are living it now. Where they were foreshadowing in TV series and movies what artificial intelligence would do in space, this team, and the team across the nation, is making it happen. So we're gonna talk about what that making-it-happen is, and have a little bit of discussion of what it is and where we should be focusing.

So we're gonna dive in. For the first topic, I think we start with the basics, since this is a very complex topic and we only have a short amount of time. Major, maybe you could start our discussion by talking about the basic foundational needs that we have in the government with regards to AI, before we start diving into the solutions and the priorities.

OK. Thanks, appreciate that. Something I will say is I'm the only person here who doesn't have a Ph.D. I have done, you know, some technical program management, but I am an operator, so all of my perspectives are me sharing observations about how we implement AI from the operator's perspective. So, one question that we were talking about just before we came in: why don't we see widespread adoption of AI? You know, ChatGPT took off this last year, but this is not new.
What makes it hard to adopt in the operational context? Instead of giving you the list of reasons why I think it's hard to adopt: I saw something change when Colonel Raj Agrawal took over this last summer as the Space Delta 2 commander, the space domain awareness and battle management commander. He made one simple change in how he described kill chains. One of the buzzwords has been "let's get the kill chains to close"; we've said that for several years. He turned that around and said, my priority, to avoid operational surprise, is to detect the start of a kill chain. And for whatever reason, that small change in language has translated very well to software engineers, machine learning ops folks, and data scientists, who can say: I can do event detection. That's something that I can measure. So that's where I've been focused: what does it mean to avoid operational surprise by detecting the start of a kill chain, and then how can I break that down into a set of tasks that we could implement AI to help us with? So, is that getting at it?

No small challenge, right? Do any of the others want to add to that?

Well, Tony, one aspect would be, to Major Allen's point, if you're trying to detect those early precursors, the adversary is trying to prevent you from doing that. A lot of their signatures are very weak signatures, or they're trying to deceive you with certain things, and humans tend to have cognitive biases or preconceived ideas: this is what I think that means. And the enemy knows that too. So an area where AI can help in trying to identify that, back to the comment about big data, is that it's really good at finding latent patterns and weak signals. It may not always be right, but it can at least suggest those to the operator and say, here are five things that I thought were weird, and the operator will go, oh, you know what, I didn't notice four of those. And that's a very common thing that we see in almost every domain: the AI will find non-intuitive things, the human usually dismisses those as the AI being wrong, and in many cases it was actually just something we never imagined.

And in that case, Pat, or others, maybe you could talk about this. As I understand it, AI in some ways has the potential to be more powerful than the human brain, but it needs time, right? It needs data to learn, and that takes time. So how does that work for and against the desire we have to speed up and improve the kill chain and the speed of the mission, and at the same time be accurate?

So I'd like to share my thought on that. If you can't clearly define the objective functions that you're trying to train a model to detect, then more data is not gonna help you; more training time, more CPU hours, whatever, is not gonna help you. We need to think more clearly and be more specific about what the tasks are that are required for battle management. So I think one challenge in the Space Domain Awareness ops community is that it's complex. There are many, many things that we have to do correctly to achieve space superiority. But then there's a tendency for it to be all things to all people. And if SDA is all things to all people, getting more AI sprinkled in on top of that and hoping the operators are gonna adopt it is probably not an effective strategy.
So I think Colonel Agrawal's comment, that the way we are going to achieve space superiority and avoid operational surprise is to first detect the start of a kill chain, is a slightly different way of thinking about this problem, because now I can say: what is the finite list of ways I could be surprised? What are the specific attack vectors by which an adversary can come at me, right? And now we can start saying, to detect an event, what are those weak signals? Those may end up being indicators that an adversary is attempting to surprise me by mimicking a payload when he's actually a different type of system, or pretending to be debris when he's actually a payload. Those are things I think we can actually specify, get data for, train models on, and incorporate into battle management functions, right?

Yeah, I think that's a great point. So Sean is more on the operations side, and I've spent many hours on phone calls where he's out on the floor trying to work through different problems. I sit in a SPO and we build things, and so I try to think about how I create a problem that I can actually solve. Someone hands me a stack of nebulous requirements: we need more things, more faster, more better. How do I turn that into something I can turn around and give to folks like Pat and say, OK, here is a decomposed problem, here are objective measures, here's where you're going? So your example of Colonel Agrawal setting a good framework to scope a problem, to give it shape, and to decompose it into subportions makes them chewable, understandable problems that reduce the mental complexity of trying to frame the problem for someone. I don't have to solve all of SDA; I can focus on: how do I detect that this event is about to happen? How do I detect that a maneuver has happened? How do I classify that a maneuver is non-nominal? Building that systems engineering structure around decomposing the problem, and providing good ICDs and APIs between those problems, allows you to scope it down to a manageable size where people can really start to chew on it. And that's one of the things you've seen out of Space Systems Command and the NRO: there's been a lot of thought leadership in recent years trying to understand how to decompose that problem well and communicate it to our industrial base, so that folks like Pat or others out there can actually put real code behind it and turn it into approaches that automate things that are typically manual processes. And as we have trusted automated methods, we can then get to the point where we gain the operators' trust to institute very understandable AI approaches to solve those problems where it's appropriate.

Yeah, and Tony, I think that's a critical point. I'm glad that Major Allen used the word "sprinkle." That's a word that we use a lot: let's just sprinkle a little AI on it, let's just do AI. But to the colonel's point, it's very important. Systems engineering has fallen a little out of favor, because you're like, oh, it's got these processes and I've got to decompose these things; let's just prototype it and see what happens. There is a place for that, for getting things in front of the operator and saying, what do you think? But you spend a lot of cycles bringing the wrong rock if nobody can articulate the problem.
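Picking up Colonel Harvey's decomposition for a moment, here is a minimal sketch of what one of those chewable subproblems, "detect that a maneuver has happened," can look like in code: flag epochs where the observed-minus-predicted position residuals jump out of the recent noise. The function name, the 5-sigma threshold, and the window size are illustrative assumptions, not anything the panel specified.

```python
import numpy as np

def detect_maneuver(residuals_km, k=5.0, window=20):
    """Flag epochs where observation-minus-prediction position residuals
    (against a ballistic propagation) exceed k sigma of the trailing window."""
    flags = []
    for i in range(window, len(residuals_km)):
        baseline = residuals_km[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if abs(residuals_km[i] - mu) > k * sigma:
            flags.append(i)
    return flags

# Quiet tracking arc with an injected 2 km jump at epoch 60.
rng = np.random.default_rng(0)
res = rng.normal(0.0, 0.05, 100)
res[60:] += 2.0
print(detect_maneuver(res))  # -> [60], the onset of the change
```

The point of scoping it this way is exactly what the panel describes: the output is a measurable event detection with an objective measure (false alarms and misses against labeled arcs), not "sprinkle AI on SDA."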
So most of the systems that are successful in the Department of Defense are where the operators and the nerds can work together and say, hey, I got this requirement; it passed down through Colonel Harvey's SPO and was mistranslated 19 times from the original person you talked to. So now that we see it, what do we really mean by "maintain custody of an object"? Let's figure out what each word means. It sounds really pedantic to say we're gonna do a definition exercise on each word in that sentence. But then you understand: to maintain custody, do you need to see it all the time, or one shot every five minutes? Do you not want to lose it for an hour? Help me understand what this word means from an operational standpoint, because a lot of those nerds don't have the operational experience. We can imagine from a physics or math standpoint, this is what I think they mean, but those systems are never successful. You have to talk to the operators and say, show me what you're doing, and let me try to understand how I might make your life better.

And it sounds like what we're talking a lot about here is the challenge that both government and industry have in working together: to harness the technology push that's going on in artificial intelligence, but responsibly apply it in a rigorous methodology, so we can get incremental advances to our operational community to help the mission, right? Not just spend a lot of time and money pushing the edge and hoping something comes of it because we're learning; we also need tangible outcomes fast.

So I think that leads us into the next topic, to dive in just a little deeper. And that is: where do we see, right now and in the coming years, areas that you think AI is particularly suited for in Space Domain Awareness? You started talking a little bit about the fact that there's so much out there we want to collect on, and it'll be more and more as time goes on, and part of this will help us to make sense of it and then decide what to do on the back end. Can you talk a little bit about that? And maybe, Colonel Harvey, lead us off here: where do you see some specific areas, maybe not big requirements but smaller requirements we see evolving, that can point people that are trying to bring solutions to the table?

Yeah, absolutely. So I think Sean again had a really good insight there about defining the objective function. If the government is not clear with what we want, and I don't mean in words, I mean mathematically where we're trying to go, what are the numbers we're trying to hit, what are the things we're trying to do, folks will never really get there, right? You see that with sensor tasking: there's been a lot of research publicized at AMOS around sensor tasking. A lot of it has been focused on how you do the SP Tasker better, right? I want to maintain objects in space that aren't moving and, you know, not lose them. Well, that's not super interesting. We can do that. We have an algorithm that can do that.
There are a lot of other things in that sensor tasking world that don't have good algorithmic approaches. There are other tasking types that the 18th SDS has defined, and turning those into actual objective functions, weighing the value of going to look at something after it maneuvered to refine that state against the resources we spend to do characterization shots on an object that we're not quite sure is actually what it says it is, will help folks actually solve that. But until that happens, folks are gonna spin on it. We've seen a little bit of good research at AMOS where people have started on some of them, but not on all of them.

Moving away from the decision problem of sensor tasking, you look at classification. We talked a little bit about how AI algorithms are really good at looking at a lot of streaming data and saying, this is a change detection; this is not nominal; this is nominal. There are a lot of places where that can be a really good opportunity to apply AI techniques. And then, just because I feel like I'm required to talk about ChatGPT: there are a lot of opportunities to leverage large language models to look at unstructured data that otherwise we couldn't pass into an algorithm very easily, to potentially pull out insights. Whether that be news reporting in foreign languages, because the model can read the foreign language and I can't, or looking at Twitter posts, media, video, turning video into text that can then be searched, to look for other left-of-launch indicators that might tell me I need to spend more cycles looking at something.

Can I jump in? Absolutely. It's my opinion that if you want your innovative new research prototype AI thing to gain adoption in the operational setting, it has to be very clearly attached to a battle management function. If you can't articulate that in a short sentence, then the rest of the research paper is gonna go unread. So one place where I see missed opportunity is with regard to anomaly detection on orbit. I think there is huge opportunity to make use of existing algorithms, but they're not well integrated, because we haven't yet described to the operational community why this type of anomaly, when detected, gives you decision-making power in your protection of an asset. One example from these past three months in the lab: if somebody tells me that an object is stable or unstable, that's somewhat trivial and may not require machine learning. But it might, with ragged time series data from a heterogeneous network of uncalibrated optical sensors. If you want to know quickly, from sparse data, whether that target goes from unstable to three-axis stable, that change can tell me something very useful about the intent of the target, and then I can communicate to a battle manager: if objects on this list become stable, that may tell you the next proximity event is potentially hostile, right? So being able to answer those operational questions in that language, not in machine learning language, is gonna help people adopt your whatever-it-is. If you've got some cool long short-term memory thing or whatever that's doing neat ragged time series processing, rock on; tell me whether or not this target is potentially hostile and if I need to take action against it.
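As a sketch of what that unstable-to-stable tip-off could look like: a tumbling object usually shows a strong periodic signature in its brightness, and a three-axis stabilized one doesn't, so one simple check is whether the periodicity of an unevenly sampled light curve collapses after some epoch. Everything here, the photometric observable, the Lomb-Scargle test, the 0.5 power threshold, and the function names, is an illustrative assumption, not how the TAP Lab actually does it.

```python
import numpy as np
from scipy.signal import lombscargle

def tumble_power(t_s, mag, periods_s=np.linspace(5.0, 600.0, 400)):
    """Peak normalized Lomb-Scargle power of a ragged, unevenly sampled
    light curve; strong periodicity suggests a tumbling object."""
    freqs = 2.0 * np.pi / periods_s                    # angular frequencies
    power = lombscargle(t_s, mag - mag.mean(), freqs, normalize=True)
    return power.max()

def went_stable(t_s, mag, split_s, threshold=0.5):
    """True if the target looks periodic (tumbling) before split_s and
    aperiodic (three-axis stable) after it -- the change a battle
    manager would want flagged."""
    early = t_s < split_s
    return (tumble_power(t_s[early], mag[early]) > threshold and
            tumble_power(t_s[~early], mag[~early]) < threshold)
```

Note the output is phrased in the operator's language ("this target just stabilized"), with the machine learning buried inside the function, which is the adoption point being made above.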
So it sounds like, in that example, and he brought up ChatGPT, right, we're not talking at this point about what a lot of people are worried about: are we using AI to replace the human? We're talking about using AI to enable the human. We want our operators to be able to work on things that they should be spending time on, and let this AI engine start to really deal with the volume of the data and, as you said, help us with the back-end decision making, course-of-action development, and things like that. Pat, have you seen any other areas, beyond what we were just talking about, that you think really fit this near-term application of AI?

Well, you know, ChatGPT and I are very good friends, so thank you for asking the question. One of the things that's interesting: that is a class of algorithms called large language models that has enabled a lot of new applications. But large language models are probabilistic token generators. What they do is take language and say, I'm breaking these words up into tokens, which are pieces of words. And the big breakthrough, the "large" part, was that if you give it enough human language, it says, oh, I figured out how nouns and verbs are related, and I figured out how certain facts are related. And by the way, it doesn't always get it right. But it says, I can predict the next token in the sequence.

One of the things that surprises people is its ability to do things for which it hasn't been trained. I have an example in the book where I said, ChatGPT, here is an Excel file that contains ship movements; find transshipment events. A transshipment event is when two ships come together and offload something to transfer to each other. And it says, I found six transshipment events, and here they are. And you go, it shouldn't know what that is or how to do that analysis, yet it produces six events that are correct. And so the researchers have found it's predicting patterns that are not all based on language. So in space domain awareness, you could use this to take a series of motion patterns and treat them all as tokens, and then say, can you predict the next token, which would be the future position of this spacecraft? Now, we have physics-based ways of solving that problem. You go, hey, I know the math. But it's possible that these algorithms are going to find different math. Some of the companies that are doing motion prediction for self-driving cars, instead of doing the kinematics of here's my velocity, my acceleration, my angle, and I should be here, are treating it as a token prediction problem and using large language models to guess where the vehicle should be in the future. So that's an example of a domain where you go, but you shouldn't be using text-based language models to drive a car, and some researchers are going, well, they work. And as you know, there are a lot of things in our society where I don't know how that works, and someone tried to explain it to me, and it doesn't make any sense, but it does seem to work. I know that doesn't give us a good feeling as technical people. Like when Major Allen goes, can you explain to me how this would work? And I go, sir, it's preventing operational surprise. I have no idea how it's doing it, and I can't explain it. But have you felt surprised lately? And so that's kind of a weird way of looking at the problem.
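A toy illustration of that "motion as tokens" idea, with the caveat that the real systems Pat describes use transformer models, not this: discretize motion into a small vocabulary of step tokens, then predict the next token from counts. The bin size, class names, and example trajectory are made up for the illustration.

```python
from collections import Counter, defaultdict

def tokenize(positions, bin_size=0.1):
    """Turn a 1-D trajectory into integer 'step tokens' by binning the
    deltas -- the same move that lets a sequence model treat motion
    like language."""
    return [round((b - a) / bin_size) for a, b in zip(positions, positions[1:])]

class BigramMotionModel:
    """Tiny next-token predictor: P(next step | current step) by counting."""
    def __init__(self):
        self.successors = defaultdict(Counter)

    def fit(self, tokens):
        for cur, nxt in zip(tokens, tokens[1:]):
            self.successors[cur][nxt] += 1

    def predict(self, token):
        seen = self.successors.get(token)
        return seen.most_common(1)[0][0] if seen else None

# A repeating drift pattern: the model learns which step follows which.
tokens = tokenize([0.0, 0.1, 0.3, 0.4, 0.6, 0.7, 0.9])
model = BigramMotionModel()
model.fit(tokens)
print(model.predict(tokens[-1]))  # guesses the next step token
```

The bigram counter stands in for the "probabilistic token generator"; swapping it for a trained language model is what turns this toy into the motion-prediction approach described above.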
I mean, the colonel that checks my requirement is going, this is not going well for you right now. But we're entering a new domain where we're saying these are algorithms that are solving problems, and we may not be able to understand why. One last aside: there's a lot of chemistry that we don't understand. When you do chemistry in high school, you're like, I have this reaction and then it goes to this. It wasn't until I was in grad school, in a combustion course, where Professor Lieuwen at Georgia Tech was like, by the way, there are hundreds of precursor reactions that happen between those two things, and we can't actually observe them. We know that they're there, but we simplified this for you in undergrad and said this turns into this; there's a lot of other stuff going on that's unobservable by any sensor. And that may be true in medicine; that may be true in psychology; that may be true in how I fill out my expense reports. So all those could be totally new domains for us where AI could solve a problem. And we really like explainable AI; it's a great buzzword, it's a great concept, but we may have to realize: we can't explain it, but it works.

So, all three of you, that's a really good segue. We are pushing the boundary here, right? Even though in a lot of cases now we as humans are trying to develop AI to augment our human operations, we usually hold our human brain and analysis as the gold standard. How do we know, as we develop these AI solutions, with hundreds and thousands of contractors working with all different parts of the government on different solutions, how do we know, or how are we going to work towards, trusting it? And I don't mean trusting as in, you know, the ideas of Terminator; I mean trust that what we said it was going to do meets the outcome, especially when maybe that gold standard is the human. How do we now measure a machine against human performance? Can we talk a little bit about how we see both government and industry embarking on that? We've talked a little bit about the invention of AI, but how do we get into the operational side of, how do I know I can trust it, that I can step back and not be worried about what happens, maybe not tomorrow, but a month from now, or as it learns? Are we feeding that back?

So, yeah, I'll throw in an opinion. Even if you're doing something super novel, very interesting, you know, whatever, large language models for orbit determination, we do need to measure the outcome and evaluate the performance against some benchmark. And there are 50 years of good data science standard practices on how to do that. In the space domain community, I think a lot of the bottom of the data science hierarchy of needs goes unmet. In many cases, access to data: publicly available, expert-labeled data sets, so that people can throw some spicy new hotness against them and go, OK, I knew that there were six ship transshipment events in this data. How would you know that there are six unless an expert who knows the answer labeled it? Again, it's a supervised kind of technique, but I'm not the expert in AI. It seems likely that over the next few years we're still gonna see a lot of growth in supervised machine learning techniques, and our community has done a poor job, could do a lot better, in solving the bottom of this pyramid.
Let's go get all of the commercial data related to a finite set of class-labeled events: maneuvers, proximity operations, RF changes, attitude changes, launches, reentries, whatever those events happen to be, and make those easily accessible to the machine learning community with nice clean labels on them. I would argue that's a nice place to start. Then I may not actually care, or maybe "worry" is the right word, whether you're misusing some cool technology; at least I can measure the performance, and then I can tell you whether I'm gonna spend a nickel on it.

So the data is really important. Sorry, go for it. So the data is really important; you make a great point. This morning you made a post about the MNIST handwritten digit data set that came from back in the nineties. Almost all computer vision algorithms were advanced because of a data set called ImageNet, which was millions of images that were just scraped off the internet. But they were labeled using a whole taxonomy of words: it goes down animal, mammal, cat, and then what type of cat. The people that were labeling it had a set of words from a data set called WordNet that allowed them to label the images. That was a major undertaking, and you go, well, you gotta start somewhere. On the satellite imagery side, the government released a data set called xView, which is a data set of a million labeled objects in 60 different classes that almost all the satellite algorithms have been trained against. And so at some point you have to say, how much money are we spending trying to tune up algorithms against bad data, when we could just take the however long it takes, five years, to go create a gold standard data set, and then you're going to see that progress.

And by the way, if you don't believe me: Tesla built the world's largest machine learning supercomputer, and they're sucking data off millions of Teslas, sending it back to the mothership. Every time you're in autopilot and you go like this, and it goes boop and disengages autopilot, you just labeled the data: you said whatever the car was doing, the human did not think was correct. Their data says actually they're right more often than we are. Jury is still out on that. Literally, the jury's still out on that one. Is that also why my insurance bill keeps going up? It's you, Tony. Actually, to your original question, how do we trust it? You know, 50% of us are below the median at any task, so that means a couple of us up here are below-median drivers.

And then lastly, the trusting part: if you have that labeled data that is a gold standard that people have validated, that the algorithms can be run against, trust comes with time. And the really tricky thing about AI is, even when it's right 99.9% of the time, DARPA uses the phrase "statistically impressive but individually unreliable." You see the one example that's really obviously wrong and you're like, oh, this is never gonna work. But it was right so many times that you didn't even notice, and the one time it's really, really badly wrong, as if none of us have ever made a mistake, you go, ah, this is never gonna work. And when you have trained operators, they go, I just can't accept that. And now you're building your way back up from the pit of despair. Right? Yeah.
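As a sketch of what "measure the performance, then decide whether to spend a nickel" looks like once you have those expert labels: score a detector's claimed event times against truth with a matching tolerance. The 60-second tolerance and the numbers are invented for illustration.

```python
def score_events(predicted_s, truth_s, tol_s=60.0):
    """Precision/recall of detected event times against expert labels;
    a prediction within tol_s of an unmatched truth event counts as a hit."""
    matched, tp = set(), 0
    for p in predicted_s:
        hit = next((i for i, t in enumerate(truth_s)
                    if i not in matched and abs(p - t) <= tol_s), None)
        if hit is not None:
            matched.add(hit)
            tp += 1
    precision = tp / len(predicted_s) if predicted_s else 0.0
    recall = tp / len(truth_s) if truth_s else 0.0
    return precision, recall

# Expert labels say maneuvers at 100 s and 500 s; the model claims three.
print(score_events([90.0, 505.0, 900.0], [100.0, 500.0]))  # ~(0.667, 1.0)
```

This is the panel's point about supervised benchmarks in miniature: the spicy new hotness can be a black box, but the score against the gold standard is not.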
So, for one, that feels like my parents in that sense, right? The one thing you did wrong, they remember. Also, on the note of trust, I did make a note to review Pat's expense reports for unobservable data, so we'll get around to that. Actually, Jenny approves them all. Oh, there we go; they're probably fine then.

So I want to circle back. One of the first interactions I had when Sean stood up the TAP Lab was: hey, Sean, I think you're gonna do great things. Also, I spent a year-plus with a bunch of really smart optics people and really smart mathematicians trying to build a high-quality simulated data set where I knew ground truth. Because often I have great space data and I have no idea what ground truth is: we don't actually instrument a lot of the satellites, or I don't have access to the GPS receivers on board all the satellites, so I don't really know what truth is. And I have no GPS data for debris, so I'm just guessing what was actually going on out there. But with high-quality simulated data, there are tradeoffs; all models are wrong, some are useful, and hopefully this is a useful one. I really know what ground truth was in the simulation, because I made it. So my first thing was: take this. It's not safe to go alone; take this. To be a good partner, I need to be able to understand, as his folks are coming up with new, innovative ways of approaching problems, what that looks like on a data set that I've seen other people work on, where I know what the performance is.

And I think your point about ImageNet, about putting out challenge problems with good data that clearly define a problem, scope it down, and say, here's a whole bunch of data, I want you to do this really well, can really help small teams that can't afford all of that upstart cost, that can't build Tesla's supercomputers and get Pat to train it every day with his driving skills. It reduces that startup cost and allows them to work. So I think that is an area in the space domain awareness problem set where we have some underinvestment, and a little bit of that is because of classification. You know, just ask OSD's Dr. Plumb, right, the Assistant Secretary, about overclassification in the space world; he's made a bunch of statements about it. So it is a challenge. And we talked about two ways to get around it: commercial data is not beholden to DoD classification guides, and simulated data, if done right, can sidestep a lot of those problems. So there are ways around it. We just need to find a good, effective way of doing it and establish those problems that are tied to operator needs, that can then have good verification and validation behind them to build that operator trust.

Did you do an OG Legend of Zelda reference? Yes. I'm sure somebody in this room caught it.

So let me touch on one more topic before we get a chance to let the people in the audience, and virtually, ask us some questions. We've been delving into different aspects of the human role in this, so I wanna go down that path a little bit more.
So let's talk about the workforce. Like we saw over the last 10, 15, 20 years, where the services and the intelligence community had to really address how we evolve our workforce to deal with the new cyber technology, now we have artificial intelligence. How do we see this, on both the government and industry side, really pushing the boundaries of our workforce? Are we going to be able to keep up with it? Do we have the people? Can we recruit? Does it affect, you know, skill codes and things like that? Because we know the commercial world is pushing on it very heavily, and we know we have our own classified issues with the workforce and how that limits things. So can we talk a little bit about that part of the human element, as it relates to the workforce and applying it to the technology, especially when we're working in a sensitive topic area like SDA?

Yeah, I've got a thought on that. So I think those very technical skills, literacy in AI, software development, cloud computing, all of these various technical skills: even in technical fields, not everybody is, you know, an enterprise software engineer or a cloud computing expert. These are partitioned-out skill sets. So if we want adoption of AI technology, and we want trusted data sets that have been scrutinized and measured and built up rigorously, and I want workforce development, this sounds like a multidiscipline kind of activity, where I'm gonna have to have people who are physically co-located from very different backgrounds. So we're doing a three-month, it's called the Apollo Accelerator, but they're innovation cycles, right? Build a prototype, show it off, three months. In this first cohort, there was one gentleman fresh out of his undergrad who did not know what a RESTful API was but could solve, you know, quadratics and astrodynamics stuff with pen and paper for fun. There were DevOps guys; I had machine learning folks from national labs. And every single person on the team benefited very quickly by being in proximity with experts in something different. That sounds trivial, but making the incentives so that people want to show up in the same place and talk about something is a huge deal. If you don't have organic ways to grow cross-discipline teams, then you're gonna have to mandate it, and that may be very challenging.

Yeah, I think you make a great point there. So, being a guy who sits in a national program office, I always think T-shaped skills, right? You need to be deep in an area; you need to have an expertise that other people can rely on you for. But you need to build out an understanding of who's to your right and who's to your left sufficiently that you know how to work with them. Because I need coders who understand the domain they're working in. I need domain experts who understand how to work in a systems engineering process. I need systems engineering guys who understand that the thoughts and dreams they're putting into this Visio diagram have to get turned into code. And they all need enough understanding and appreciation of the other folks around them, and the humility, to know how to interact with them and ask the right questions at the right time. And then the connective tissue: I see the benefits of working closely together. As a guy living in DC with customers out in Colorado, I will say you can make virtual work. But there is a barrier.
And you need the right collaboration tools to help bring some of those barriers down, so that a conversation 2,000 miles away can still look like you're sitting across the table from somebody, can still feel organic and easy, and not like you're interrupting someone. So those are all important aspects. But yes, I do think fluency in the techniques, enough to understand what algorithmic approaches might be valid here, is a very basic thing. And you've seen it with him already, right? He says "I'm an operator," and then he starts talking about labeled data sets and large language models; he's obviously fluent in the techniques he's trying to operate around. I think that's important.

And do you see, coming from the program office world, also the need to, because we're talking about diversity, but diversity also includes leadership, right? Many of us that are in the leadership ranks are not, you know, that deep into the technology. So they need to also be educated, right? So that they can be part of this diverse team, making the right decisions and understanding this new technology as it comes up, as we're making programmatic decisions, right?

Yeah. I think, as a guy doing ground software, or just software development, in a place that's used to launching satellites, there's a culture shock there. Software takes about 2.5 seconds to field: I hit my little git push, I compile, and now I've launched a new software version. I can push it, I can check it, and go, no, that didn't work; oh well, let me go fix a few things. That paradigm is very different from someone who spends a decade building a satellite that needs to work the first time. So it's a very different paradigm, and there's a lot of communication you need to do to educate them on what your risk posture should be, based on what the cost of making a mistake is. Now, that being said, it's easy to push code; that doesn't mean you should push it directly to prod and then go out for the weekend. Of course, you know, you got everything done, it's 5:30 on a Friday: hit push, go to prod, wish Sean the best, hope his weekend shift goes well. That's a recipe for disaster. But at the same time, if I take the risk posture that every push to prod is a spacecraft launch, I'm not gonna innovate in cycles that close the loop with China.

So, Pat, I'd like to ask you to kind of wrap this up before we go to the audience. I think you have the perspective of coming from probably one of the largest industry leaders in artificial intelligence: how do you see the workforce challenge, and how are you maybe dealing with it?

Well, I'll echo Mr. Robinson's comment about the calculator, where it was, the calculator is gonna ruin math. And, you know, the freak-out that professors have about, oh, they're gonna use ChatGPT to write papers. And I kind of go, I don't know why we write papers anymore. And they go, hey, Pat, that's a dumb thing to say for a guy that just wrote a book. And you go, I know; on page four it says the reason why I'm writing this is I don't think people are gonna read books anymore. Because I actually think, if you have this companion that can answer all your questions, you can even be like, tell me a bedtime story, and it does. You can be like, hey, tell me everything about the James Webb Space Telescope, and it does.
And so I think that, especially for children today who are growing up with the calculator that's called AI, there's a magic box that will make text appear. OK, you still have to check it and make sure it's correct. You still have to check it against references. You still have to ask, is that my voice that's coming out of the box? But what I often tell our leaders in the intelligence community is: if you have someone that is 16 years old today, who will be entering your workforce when they're 22 or 23, and you sit them down and go, I need you to write me a 10-page paper, a 10-page prose report, on what's going on in Ukraine, they're gonna freak out. They're gonna go, I haven't written a 10-page paper ever. I haven't typed that many keys in sequence in my life. I would go to my friend and go, here's my prompt: I need a paper on Ukraine, and it needs to cover the following topics, and I need you to pull this reference, and I need a map of this city, and I need to know the major industries of the city, what it connects to, how it works. But that person is gonna know everything that they need to put into prose. And so I think a lot of our workflows are gonna change. The tradition of, you know, I mentioned expense reports, but I also have to do my annual performance appraisal, which the company requires, a giant wall of prose. And I won't say where that prose came from... I mean, I'm not going to say where that prose came from. But our expectations of the things that we produce, I think, will change. And to Major Allen's point, it's about operations; it's about avoiding surprise. I don't know that I need to write a five-page report that says here's how I avoided surprise today. I just need to avoid surprise.

So I do think that the young people entering the workforce are going to bring a new set of skills, a new appreciation for human-machine teaming. By the way, this happened before, in the Iraq war. There's a famous quote that says the Iraq war was fought in chat. The 18-year-olds that you deployed in 2003 were used to using chat, and they're like, oh, there's this thing called mIRC on SIPRNet, and I'm gonna chat to someone and say the drone is over here, here's what it's armed with, here's the target. And they're just chatting like they were at home playing video games. So I do think, Tony, you're gonna see that kind of transformation, where young people bring new skills in, and we have to try really hard not to squash them and say, you have to do it the way we did this in 1990. Because, by the way, the 1990s are just a thing that had Seinfeld and Friends, and that's all that people remember from that entire decade. Two great things, though. They will last forever.

So with that, thank you. We're gonna take time to turn to the audience for some questions, from here in the room and from our virtual audience. There are some mics that are gonna be passed around, and I have several questions posed here. The first is a two-part question; I'm gonna start with the second part. Are you concerned about warfighting-scenario and adversarial-activity myopia within the future mission development framework? OK. Well, I think we definitely heard that one, and I think we have our operational person here all ready to chomp at the bit on that one. So, Major. Can you repeat it slowly for me? Are you concerned about warfighting-scenario and adversarial-activity myopia within the future mission development framework?
Yeah, I'm not sure that I'm qualified. I mean, do we mean that we have myopia to adversary actions, like a bias toward interpreting their behavior? That kind of thing? Can we get the first part of that question?

The first part is: have operators been surprised by DRMs for, quote, being surprised, unquote, in the space warfighting domain, that have been developed by individuals and teams outside of the DoD?

So, you make a DRM, you build your system to achieve those goals, and it turns out it was a bad DRM; is that what we're getting at? Yeah, it's absolutely possible. And, I mean, multidiscipline is a huge deal here, right? The guy changing the gears on the truck isn't typically the one designing the architecture that he sits inside.

Well, I think on the question, too, maybe not necessarily its perfect intent, but we did touch on this earlier: this is a cycle of what we do on our side and how the adversary reacts. It's like, I call it the radar detector, and then the radar detector detector. Yeah.

So one example of interpreting that, Tony, would be: OK, we went to the same school that they did; we used the same book. So when you go, hey, what's the best way to do this, you go, well, if you turn to this page, here's the way to do this maneuver with the minimum amount of propellant. And you go, well, why do you think they're gonna do it with the minimum amount of propellant? That's the challenge with humans doing that. So an area for AI might be there. I would do this with the minimum amount of propellant, because only recently have we started to say dynamic space operations and freedom of maneuver, and what if I could refuel things? I like to conserve propellant because my vehicles are really expensive. But you go, is that what the other guy is doing? Maybe he doesn't care, because he can launch vehicles whenever he wants, or he can refuel them, or he doesn't care about burning up a whole tank. An AI engine could potentially get ahead of that with courses of action. If I was building an AI engine to mess with Sean, I would do things like that: it doesn't do things that make physics sense or economic sense; it does things that win.

Yeah, I want to jump in here now that I understand it better; I do have a strong opinion about this. Now we've got it going. So, I think, you know, time travel is not real, as far as I know, unless you guys have something going on in the back room. But there are limits to how an engagement can be performed; physics will constrain those things. That's a math problem. Now, the weapon system that I'm worried about, what is its performance? There's an envelope, and that's informed by intelligence, and I can make predictions and estimates, and maybe AI can help me with that. Assessing the intent, though, I feel like this is the trap. We should be very careful to test the null hypothesis, which is why I'm a little bit concerned about hyperfocus on anomaly detection. You can be a strange guy all day long; if you're not holding me at risk, there's no imminent use of force. So now we need to have a policy discussion, and I think those discussions have been ongoing for some time. But what are the norms of behavior for hostile intent? How do you evaluate them? And then we should be messaging those things very broadly,
to say there are certain behaviors that we do not tolerate, regardless of whether there's imminent use of force. And that's a discussion for leadership in the government, right?

So, by the way, Star Trek postulated AI before it was real; it also postulated time travel. So maybe we'll be back here in a few years talking time travel with everyone. Right. So, all right, do we have time for one more question here in the room? Oh, no stealing.

So history is kind of rife with examples. You guys talked about both new technology and operational surprise: the Trojan Horse, Germany bypassing the Maginot Line with fast armor, refitting merchant ships. How will AI help the operator identify those things that achieve operational surprise, that we don't know the tactic for, that we don't know the kill chain for? How will AI help identify new things we aren't expecting?

So this is really around the idea of really early indications work that we're not used to, right? Perfect for Colonel Harvey.

Yeah. So I'll say you achieved no operational surprise: when I saw you specifically raise your hand, I knew what it was gonna be. So, you know, we talked a little bit about how you don't get so anchored on, oh, I'm gonna do this minimum-fuel approach. I will call out a reach, and I think we're a long way from getting there in the space community: being able to throw AI at course-of-action determination type stuff. But you've seen it in some of the reinforcement learning approaches. Before large language models became a thing, everyone was excited about reinforcement learning to solve all the problems. There was a series of successes that eventually got to something called AlphaStar, which was DeepMind's approach to solving the StarCraft problem: a set of three asymmetric forces, where any two of them would be chosen at a time to go head to head on a complex map with fog of war, so you had partial observability. All kinds of aspects of that problem are analogous to reality, right? War is typically asymmetric; you have different forces, different capabilities; you have fog of war; you don't see everything; you know what something once looked like, but you don't know what it looks like right now. And what they did is they said, OK, here's how we think you should solve the problem, by giving it millions of playthroughs of how people have done it. And we have a lot of exercise data of how we think people have white-carded, you know, what red's gonna do and what blue's gonna do, putting that together to teach an algorithm how to start playing the game that is war. And once you have a foundation of letting it play the game, let it fight itself. Let it fight itself so many times that you start to have instances, like they showed in AlphaStar, where it performed moves that no human had yet been documented as having done. So it managed to find action space that was admissible by the game and that people had not previously explored. It wasn't doing things that people were incapable of doing; they limited it so it couldn't just act faster than people can act, because computers are really fast and people can only act at a certain Hertz rating, which is still crazy fast when you get to the really good experts. They didn't let it do superhuman things, but it did do things that humans had not done.
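Not AlphaStar, obviously, but here is a minimal sketch of the self-play mechanic being described: two epsilon-greedy copies of the same agent play a toy symmetric game against each other, and an unintuitive move can rise to the top purely because it keeps winning. The game, payoffs, and names are all invented for the illustration.

```python
import random

def self_play(actions, payoff, rounds=20000, eps=0.2, seed=1):
    """Epsilon-greedy self-play on a symmetric zero-sum game: both sides
    draw from a shared action-value table, and values are updated from
    the outcomes of the games they play against each other."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}   # running value estimate per action
    n = {a: 0 for a in actions}     # times each action was tried
    for _ in range(rounds):
        a = rng.choice(actions) if rng.random() < eps else max(q, key=q.get)
        b = rng.choice(actions) if rng.random() < eps else max(q, key=q.get)
        r = payoff(a, b)            # reward to the first agent
        for act, rew in ((a, r), (b, -r)):
            n[act] += 1
            q[act] += (rew - q[act]) / n[act]   # incremental mean update
    return max(q, key=q.get)

# Toy payoff where the 'expensive' feint beats the doctrinal direct burn:
# self-play surfaces it even though no playthrough ever demonstrated it.
rank = {"direct_burn": 0, "expensive_feint": 1}
payoff = lambda a, b: rank[a] - rank[b]
print(self_play(["direct_burn", "expensive_feint"], payoff))  # expensive_feint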
And I think that's at least one approach I've seen that might get us to the point where we start to find action space that the adversary could explore that we haven't explored, which could then give us a chance to learn how to defeat those unexplored actions. Once you've seen it in a simulation, once you've seen it in a playthrough, you can start building your indications. You can say, OK, this is what it took to set that up; these are the observables that could have come out of this; this is how I could have detected it. But you need an opportunity to see that event to start thinking through how you decompose that problem and turn it into a real alert that an operator isn't gonna look at and say, that's not gonna happen, that's not happening, I don't know what you're talking about. If I pop up an alert today that says this spacecraft is gonna do this crazy maneuver, and it's going to spend a ton of delta-V and circle past three different things and do something we've never seen before, the Sean that's working on the ops floor today is gonna go, that's a mistake, and he's gonna throw it away. He needs some kind of confidence that says, yeah, that's a real thing; I've seen that in exercises; I've seen that play out; I understand what this might be doing; I know what I should do now.

Well, that's a perfect wrap to this session. I want to thank you all for taking the time, and thank the panel for having such a quality discussion in a very little amount of time. So thank you all. Thank you.