As artificial intelligence (AI) races ahead, the balance between speed and safety has become critical. The challenge is clear: how can companies innovate rapidly without compromising the integrity and security of their AI systems? Erica Greene, Engineering Director at Yahoo; Shreya Rajpal, CEO of Guardrails AI; and Remy Thellier, Head of Growth and Strategic Partnerships at Vectice, recently shared their insights on this topic at the AI Quality Conference 2024 in San Francisco.
What Does AI Safety Mean in Practice?
AI safety is not a one-size-fits-all concept. Erica Greene, with nearly 15 years of experience in the industry, emphasizes that "AI safety is very context-dependent." The risks, she explains, vary widely across sectors—from the life-or-death stakes of self-driving cars to the more subtle, yet equally important, risks in media. "As we add more complex technology quickly, we have to be more thoughtful about it," she says. The underlying message is clear: AI safety is not just about preventing catastrophic failures but also about managing everyday risks in a responsible way.
In this context, Remy Thellier of Vectice argues that model documentation plays a pivotal role. Comprehensive documentation ensures that all aspects of an AI system, such as what the model is trained on, its intended use cases, performance metrics, and limitations, are meticulously recorded. This transparency is essential for building trust and enabling stakeholders to understand and manage the risks associated with AI deployment.
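To make this concrete, here is a minimal sketch of what such a documentation record might look like. This is an illustrative assumption only: the field names, metric values, and the check_documentation helper are invented for the example and are not Vectice's actual schema or product.

```python
# A minimal, illustrative sketch of the kind of model documentation the panel
# describes. All field names and values here are hypothetical examples.
model_card = {
    "model_name": "article-recommender-v3",          # hypothetical model
    "training_data": "clickstream logs, 2022-2024 (anonymized)",
    "intended_use": "Ranking news articles for logged-in readers",
    "out_of_scope_uses": ["medical or legal advice", "content moderation"],
    "performance": {"ndcg@10": 0.71, "coverage": 0.93},  # illustrative metrics
    "known_limitations": [
        "Under-represents non-English content",
        "Degrades for users with fewer than 5 interactions",
    ],
    "owner": "recommendations-team@example.com",
    "last_reviewed": "2024-06-01",
}

def check_documentation(card: dict) -> list[str]:
    """Return the required fields that are missing or empty."""
    required = ["training_data", "intended_use", "performance", "known_limitations"]
    return [field for field in required if not card.get(field)]

# A simple completeness check before a model is allowed to ship.
assert check_documentation(model_card) == []
```

Even a lightweight record like this gives reviewers, compliance teams, and downstream users a shared reference point for what the model is, and is not, meant to do.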
Shreya Rajpal, whose company focuses on making AI applications reliable and controllable, echoes this sentiment. She argues that safety must be tailored to specific use cases. "It's not just about concrete harms like cybersecurity risks," she says, "but also about ensuring factuality and avoiding bias in AI-generated outputs." In other words, AI safety is as much about the details as it is about the big picture.
From Traditional Machine Learning to LLMs: How Has the Approach to Safety Evolved?
Remy Thellier highlighted a major shift he is witnessing in large enterprises: from relying solely on traditional machine learning to also leveraging large language models (LLMs). This shift has significantly changed the landscape of AI development. According to Erica Greene, the change has been profound. "The thing that's changed predominantly is that executives want to launch things quickly." This rapid pace necessitates a different approach to ensuring safety, one that involves more proactive risk management and thorough evaluation of potential impacts.
Rajpal points out that the sheer breadth of tasks that LLMs can handle has made defining and ensuring correctness far more complex. "What correctness means or what it means for this AI system to do this task well is very ill-defined," she explains. This complexity calls for a more nuanced approach to safety, one that can adapt to the evolving capabilities and applications of AI.
Evaluating Risks: What Are the Key Considerations?
Thellier then refocused the discussion on what he sees as a crucial step: identifying and mitigating risks before they become problems. Greene advocates for a proactive approach, suggesting companies conduct a "pre-mortem" on their AI applications. "You need to define the harms, think about what could possibly go wrong, and essentially do a pre-mortem on the application." This forward-thinking strategy is about anticipating issues before they arise, rather than reacting to them after the fact.
Risk evaluation is a key use case at Vectice. Effective documentation enhances risk evaluation by providing a detailed record throughout the AI development process. By maintaining a clear and accessible history of all model-related activities, teams can conduct thorough pre-mortems, identifying potential risks early on and ensuring that all relevant data is available for review.
Rajpal agrees, emphasizing the need to tailor risk assessments to specific applications and domains. Developers must ask tough questions about the potential pitfalls of their AI systems. "You have to ground these criteria in data-driven evaluations," she says. In a world where AI systems can make mistakes that range from harmless to catastrophic, this data-driven approach is not just advisable—it’s essential.
Speed vs. Safety: Where Do You Draw the Line?
The tension between moving fast and ensuring safety is a familiar one in the tech world. Greene suggests that companies can have it both ways—at least to some extent. "You can get a demo going really fast, get as many eyes on it as possible, and start to define a rubric for evaluating whether it’s working or not," she says. "The key is not to let the excitement about new technology cloud the judgment about its readiness for deployment."
Rajpal offers a pragmatic approach: define success criteria early and ground them in data. "What does success look like for me?" she asks, urging product owners to think deeply about this question. In an industry where the difference between a successful product and a failed one can be razor-thin, this kind of clarity is invaluable.
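One way to put this advice into practice is to encode "what does success look like?" as explicit, data-driven acceptance criteria that are checked before launch. The sketch below is an assumption-laden example: the metrics, thresholds, and evaluation values are invented for illustration, not recommendations from the panel.

```python
# A minimal, illustrative sketch of turning success criteria into data-driven
# checks on a held-out evaluation set. Metric names, thresholds, and values
# are hypothetical examples, not guidance from the panelists.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, value: float) -> bool:
        # Compare the measured value against the threshold in the right direction.
        return value >= self.threshold if self.higher_is_better else value <= self.threshold

success_criteria = [
    Criterion("factual_accuracy", 0.95),                        # share of answers judged correct
    Criterion("toxicity_rate", 0.01, higher_is_better=False),   # share of flagged outputs
    Criterion("p95_latency_seconds", 2.0, higher_is_better=False),
]

# Metrics measured on a held-out evaluation set (illustrative values).
eval_results = {"factual_accuracy": 0.97, "toxicity_rate": 0.004, "p95_latency_seconds": 1.6}

failures = [c.name for c in success_criteria if not c.passes(eval_results[c.name])]
print("Ready to ship" if not failures else f"Blocked on: {failures}")
```

The point is less the specific numbers than the discipline: success is defined up front, measured against data, and a launch decision follows from the results rather than from enthusiasm for the demo.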
Who Should Be Involved in Risk Management?
Thellier advocated for bringing a broad set of stakeholders into the risk management conversation. When it comes to managing AI risks, it’s not just about the engineers. Rajpal stresses the importance of involving a wide range of stakeholders, including product owners, risk and compliance teams, and developers. "These groups must collaborate to identify and mitigate risks effectively," she says.
Thellier and his team at Vectice have also been working on solutions to this challenge: comprehensive documentation further facilitates collaboration by making information accessible to all stakeholders. Whether it's technical teams, legal advisors, or business leaders, everyone involved can easily review and contribute to the documentation, ensuring that risks are identified and managed collectively.
Greene takes it a step further, highlighting the value of involving legal and editorial teams, especially in industries like media. "Involving people with different backgrounds and open brainstorming sessions where concerns can be raised at any point in the project is crucial." In an era where AI can influence public opinion and even sway elections, the stakes are higher than ever.
Conclusion: A Complex but Manageable Balance
Balancing speed and safety in AI development is no small feat. But as Greene, Rajpal, and Thellier’s insights reveal, it’s not an impossible task either. With careful planning, proactive risk management, and the involvement of diverse stakeholders, companies can innovate quickly while still safeguarding the integrity of their AI systems. Efficient documentation processes are a key enabler of this balance, ensuring that safety protocols are integrated into the development process without hindering innovation. In an industry defined by rapid change, this balance is not just desirable—it’s essential.