
Frontier Model Forum


Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others.

To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.

The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models:

  • Identifying best practices: Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and safety practices to mitigate a wide range of potential risks.
  • Advancing AI safety research: Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection. There will be a strong initial focus on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
  • Facilitating information sharing among companies and governments: Establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.


Kent Walker, President, Global Affairs, Google & Alphabet said: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone.”

Brad Smith, Vice Chair & President, Microsoft said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step in bringing the tech sector together to advance AI responsibly and tackle the challenges so that it benefits all of humanity.”

Anna Makanju, Vice President of Global Affairs, OpenAI said: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies–especially those working on the most powerful models–align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well positioned to act quickly to advance the state of AI safety.”

Dario Amodei, CEO, Anthropic said: “Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”

