[ ♪ exciting music ♪ ]

Catherine, could you start by explaining some of the differences between the legacy, traditional core AI and this emergent generative AI?

The path from traditional machine learning, supervised, unsupervised, and those concepts, to neural networks and some of these more rote, dare I say, types of AI, through to where we are today with gen AI was not so much a huge moonshot as it was iterative improvements. Everything we see in generative AI algorithms today, the way we train them, the way we fine-tune them, the way we build them, is based on fundamentals we already know, like loss functions and optimization. Transformers, and models like BERT, really changed the game in being able to retain memory and information, and those set the foundation for what we now call foundation models. These are large transformer architectures, like GPTs, that, because of the adoption of gen AI, now form the backbone of many, many variations of different types of gen AI.

So, Ed, let's go to you and pull the thread on that, and talk a little bit about the landscape of these large language models today. Can you give us a perspective on where we are?

Yeah, so as Catherine was mentioning, there are a lot of different architectures available as an initial backbone to build on and attach to. That could be using the architecture itself and training it from scratch on your own data, or it could be using a pre-trained model that's been released and fine-tuning it to your particular data set or problem. There's another part that also ties into what Alison mentioned earlier: the tokenizers.
Different models use different tokenizers, which allow or enable certain things more easily, like counting how many R's there are in "strawberry," or literally just counting numbers and being able to accurately count from 1 to 5 if you're going to do some numerical task. So there are choices to be made there that affect performance on your problem. Do you have the compute to run it? Llama's new big one, I think, is 400 billion parameters. Okay, you're going to need some muscle, some real compute, if you're going to run that. If you want to run it on your phone, that's not an option; you need to pick something smaller. So there are a lot of options to pick from along a Pareto frontier of what's important to me and what I need in order to get my problem solved, including all the online API options as well.

So Alison, let's talk a little bit about the risks around generative AI. With the executive order, the OMB guidance, and the NIST AI Risk Management Framework, there are a lot of concerns about these generative models. In particular, we've deployed many of these to our federal clients, and I'm curious about our empirical lessons. What have we learned? How have we thought about that risk challenge, and how do we mitigate those risks as we look toward adoption?

I think the risk is really dependent on the use case, and we can't apply a one-size-fits-all solution. And going back to hallucinations: there are some instances where hallucinations don't really matter, because you're not looking for factual outputs. You're looking for creativity and brainstorming, and in those cases you're not going to be optimizing on reducing hallucinations.
But when you're looking at, for example, policies and trying to really understand how to implement a policy correctly, you want to be as accurate as possible. In those cases, for those clients, we are doing model steering and using retrieval-augmented generation, or RAG, patterns in order to further ground the data. So there are other technical techniques that go beyond the standard framework for responsible practices that I think are really important in managing risk.

[ decoder noise ]
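The RAG pattern mentioned above can be sketched minimally. The retriever here is a toy keyword-overlap scorer, and the documents are invented; a real system would use embeddings and pass the assembled prompt to an actual LLM.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) pattern.
# The scoring and documents are toy examples, for illustration only.
def retrieve(query, documents, k=2):
    # Toy relevance score: number of words shared with the query.
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    # Ground the model by pasting retrieved policy text into the prompt.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the context doesn't "
        f"cover it, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Section 4: records must be retained for seven years.",
    "Section 9: travel reimbursement requires prior approval.",
    "Appendix B: glossary of terms.",
]
top = retrieve("how long must records be retained", docs)
print(build_prompt("How long must records be retained?", top))
```

The grounding comes from the prompt instruction plus the retrieved passages: the model is steered toward answering from the supplied policy text rather than from its own parametric memory, which is what reduces hallucinations on accuracy-critical questions.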