Google Cloud’s comprehensive strategy for AI governance and risk management, which combines traditional and innovative methods, offers a valuable approach to the unique challenges AI poses. Here are three key best practices for establishing an effective AI governance framework and managing AI risks:
1. Define clear AI principles
Establish guiding AI principles that articulate the foundational requirements, priorities, and expectations for the organization’s approach to developing AI. These principles should explicitly identify use cases that are out of scope. The AI principles also offer a clear framework for consistently evaluating decisions and risks.
For example, our AI principles help us set the tone for managing AI risks by focusing on evaluating potential harms, avoiding creating or reinforcing bias, and providing guidance on how to securely develop and deploy AI systems. The principles help us develop technology responsibly and drive accountability and transparency.
“When conducting the NIST AI RMF and ISO/IEC 42001 assessment, it became immediately apparent that Google has been working on responsible AI use and development for a long time. While gen AI has really hit the headlines in the last 18 months, Google’s AI principles date back more than a decade and provide a mature foundation for AI development,” said Ian Walters, AI risk assessor, Coalfire.
2. Use existing foundations
We found that adapting current risk management processes to the needs of AI systems is more effective than standing up entirely new ones. By leveraging strong existing foundations, organizations can evaluate and address AI-related risks in line with their risk tolerance and within the broader context of existing threats. This leads to a more holistic risk management strategy and more consistent governance practices.
Integrating AI risks into current risk management practices increases their visibility across the organization and is essential to managing the specific risks of AI systems comprehensively. For example, a strong security framework helps organizations evaluate which traditional controls remain relevant and how they may need to be adapted or expanded to cover AI systems, as sketched below.
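To make this concrete, here is a minimal sketch of what extending an existing risk register with AI-specific adaptations of traditional controls might look like. The control identifiers, field names, and risk labels are illustrative assumptions for this example, not Google Cloud’s actual framework or any standard control catalog.

```python
# Hypothetical sketch: extending an existing risk register so traditional
# controls are augmented (not replaced) to cover AI systems. All names and
# mappings below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str          # identifier from an existing control catalog
    description: str         # what the traditional control covers
    ai_adaptation: str       # how the control is extended for AI systems
    ai_risks: list = field(default_factory=list)  # AI risks it now addresses

controls = [
    Control(
        control_id="AC-1",
        description="Access control for production data stores",
        ai_adaptation="Extend access reviews to training data and model artifacts",
        ai_risks=["training-data leakage", "unauthorized model access"],
    ),
    Control(
        control_id="CM-3",
        description="Change management for production services",
        ai_adaptation="Require risk review before deploying retrained models",
        ai_risks=["unvetted model behavior changes"],
    ),
]

def coverage_gaps(controls, known_ai_risks):
    """Return AI risks not yet covered by any adapted control."""
    covered = {risk for c in controls for risk in c.ai_risks}
    return sorted(set(known_ai_risks) - covered)

print(coverage_gaps(controls, [
    "training-data leakage",
    "prompt injection",
    "unvetted model behavior changes",
]))  # -> ['prompt injection']
```

Even a simple gap check like this illustrates the benefit of the approach: AI risks are evaluated against the same control inventory the organization already maintains, so uncovered risks surface within the existing governance process rather than in a parallel one.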
3. Adapt to an evolving landscape
Given the dynamic nature and complexity of AI technology and the evolving regulatory landscape, AI risk management practices must evolve continuously. Critical to this is building a multidisciplinary team that understands AI system development, implementation, monitoring, and validation. Additionally, risk assessments for AI systems need to account for evolving product use cases, data sensitivity, deployment scenarios, and operational support.
Identifying, assessing, and mitigating risks across the full AI lifecycle must be prioritized and consistently maintained. New challenges and potential threats will emerge, making adaptability and vigilance top priorities to help ensure the safety and security of AI systems.
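One way to keep reassessment systematic is to record the context an assessment depended on and trigger a new one when that context changes. The sketch below is a hypothetical illustration of that idea; the lifecycle stage names, fields, and example values are assumptions made for this example.

```python
# Hypothetical sketch: fingerprint the context an AI risk assessment depends
# on, so a change in that context signals that a reassessment is due.
# Stage names, fields, and values are illustrative assumptions.
from dataclasses import dataclass, asdict
import hashlib
import json

LIFECYCLE_STAGES = ["design", "data collection", "training",
                    "evaluation", "deployment", "monitoring"]

@dataclass(frozen=True)
class AssessmentContext:
    use_case: str           # evolving product use case
    data_sensitivity: str   # e.g., "public", "confidential", "restricted"
    deployment: str         # e.g., "internal tool", "customer-facing API"
    stage: str              # position in the AI lifecycle

    def __post_init__(self):
        if self.stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown lifecycle stage: {self.stage}")

def context_fingerprint(ctx: AssessmentContext) -> str:
    """Stable hash of the assessment context; a change signals reassessment."""
    payload = json.dumps(asdict(ctx), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

baseline = AssessmentContext("internal search assistant", "confidential",
                             "internal tool", "deployment")
current = AssessmentContext("customer-facing search assistant", "confidential",
                            "customer-facing API", "deployment")

if context_fingerprint(current) != context_fingerprint(baseline):
    print("Context changed: schedule a new AI risk assessment")
```

The design choice here is that reassessment is driven by recorded facts about the system rather than by calendar intervals alone, which keeps risk reviews aligned with how the product actually changes.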
Next steps
As AI frameworks and regulations continue to emerge and develop, we are working with governments, industry leaders, customers, and partners to ensure our AI systems meet the highest standards.
Our commitment to frameworks such as NIST AI RMF and ISO/IEC 42001, and our engagement with independent assessors such as Coalfire, help us enhance our AI governance practices. In the process, we are committed to sharing our learnings, strategies, and guidance so we can collectively build and deliver responsible, secure, compliant, and trustworthy AI systems.