Cara LaPointe is co-director of the Johns Hopkins Institute for Assured Autonomy, adjunct professor/senior fellow at Georgetown University’s Beeck Center for Social Impact and Innovation, and founder/CEO of Archytas, a strategic consulting firm. Her service in the Navy took her from underwater research engineering to providing the vision for that service’s efforts in unmanned and autonomous systems. She holds degrees from MIT/Woods Hole, Oxford and the United States Naval Academy.
Q: Can you define what you mean by assured autonomy?
A: You want systems that are safe and reliable, secure and robust and resilient, can be predictably integrated into the ecosystem, and are ethical and socially beneficial. We’re talking about systems that can be trusted to operate as they’re intended to operate, even in the face of robust adversarial attacks. With AI, you don’t always have graceful degradation; you can have a catastrophic event. We need to be really strategic and intentional about assessing what tools are needed and how we can help build the tools that don’t yet exist.
Q: What is your mission at the Institute for Assured Autonomy?
A: IAA works to ensure the safe, secure, reliable, ethical and predictable integration of autonomous systems into society, covering both research and applications. We talk about three pillars: technology, ecosystem, and policy and governance. There’s understanding the socio-technical, socio-economic ecosystem the technology is being integrated into. Policy and governance help technologies thrive and provide important guardrails against the potential negative impacts of technology, whether from unintended consequences or from adversarial and malicious attacks. We try to look at what I would call, as an engineer, “strategic feedback.” What are the thousands of things that have to be done to realize the future of autonomous systems as truly trustworthy contributors? Where are the gaps? How can we create synergies, break through silos and connect these communities?
Q: How does your work relate to inspection?
A: I focus on how we create the tools and methodologies of assurance: the tools that help you better design, develop, integrate, operate and protect these systems. For design and development, we have a team that’s looking at, okay, when I’m fixing a bug in the software, how do I inspect it to make sure I didn’t create new bugs? Frankly, there’s no way you’re going to be able to engineer all of the risk out of a system, so with AI-enabled systems today, it’s important to develop tools to monitor and govern systems while they’re being operated. Inspection is an important piece of a whole systems engineering discipline that’s rapidly developing but not fully developed yet.
Q: And with physical inspection?
A: I spent a lot of time in the Navy; some of the most challenging things are, say, ship husbandry and inspections of your prop and your shaft. More AI and autonomy give you more sophisticated tools. You have to think about autonomous systems within an entire ecosystem and how they’re going to interact. What are my bandwidth limitations? How can I be strategic about what data actually has to leave one vehicle and be shared with other vehicles or with humans? If you can’t trust these systems, if you can’t rely on them, you’re not going to use them.
Q: What about ethical considerations?
A: To me, as an engineer, technology ethics is the idea that you can make small, seemingly insignificant choices, and those decisions can end up having a resounding impact on people, communities and humanity, especially with digital tools that can effectively be replicated across the globe very quickly. So that’s the challenge for all of us: as we explore AI, we have to do it in a way that is ultimately beneficial for society.