What does it mean to live a good life? How can AI help us flourish? These are questions that AI ethicists should make central to their work. Rather than weighing an AI system's potential benefits and risks in isolation, we should consider them in concert with one another. In fact, a more robust, and historically grounded, ethical calculus will focus on the net good that an AI system generates over its lifespan.

As we think about the future of AI ethics, the field should emphasize three questions. First, what is the maximal good an AI system can do? Second, what are the potential risks in its design? And third, how can we mitigate those risks to achieve that maximal good? The order of these questions is intentional: they shift our focus from harms to happiness and from failure to flourishing, opening up new missions for AI ethics to support.

After all, ethics was never merely about compliance, nor simply about the difference between right and wrong. In ancient times, it posed the overriding question of philosophy: how can we be happy and flourish? Revisiting this ancient question will ensure that the future of AI ethics is bright, useful, and critical to the advancement of society. In other words, AI ethics can help us live lives that are, indeed, well-lived. The field is just getting started.