OpenMined contributed expertise on AI auditing and transparency to the global report summarizing the consultations among citizens and civil society actors ahead of the AI Action Summit.
As the world prepares for the AI Action Summit in Paris, OpenMined joins over 200 expert organizations in shaping a new vision for AI governance. The report, which gathered insights from thousands of citizens and hundreds of experts across five continents, highlights the critical need for robust AI oversight mechanisms – an area where OpenMined’s expertise in privacy-preserving AI evaluation has proven invaluable.
Building Trust Through Transparency
OpenMined’s contributions to the report emphasize the importance of creating standardized, verifiable ways to evaluate AI systems while protecting privacy and intellectual property. This approach aligns with the broader consultation findings, which identify transparency and accountability as key priorities for global AI governance.
Summit Deliverables
OpenMined’s submission inspired four key deliverables recommended by the report:
- Global AI Auditing Standards: The rapid deployment of AI systems has outpaced the development of standardized, transparent, and independent auditing mechanisms, leaving significant gaps in accountability and public trust. Existing practices lack comprehensive oversight across the AI lifecycle, increasing risks and undermining ethical compliance globally. International standards for AI auditing and monitoring could include pre-deployment risk evaluations, continuous monitoring of live AI systems, independent audits (including red-teaming exercises), safety reporting protocols, and standardized post-market impact assessments.
- AI Corporation Commitments Report Card: AI companies have made numerous global commitments to trustworthy AI at previous summits. However, no accountability mechanisms exist to verify that these commitments are being honored. OpenMined shared several approaches to verifying corporate commitments while protecting sensitive information.
- Collaborative Testing Framework for the AI Safety Institute Network: Following its November 2024 meeting in San Francisco, the International Network of AI Safety Institutes established priority areas that include driving scientific consensus and conducting evaluations and testing of frontier models. These commitments now need to be operationalized into concrete testing frameworks and protocols. To do so effectively, the AISI Network also needs a robust coordination arm that enables knowledge exchange and joint testing collaboration.
- AI Commons to Empower Citizens in AI Design: Current AI development is largely concentrated within the private sector, leading to concerns about transparency, accountability, and the potential for bias and harm. Citizens, especially those from marginalized communities, often lack opportunities to influence the development and use of AI systems that impact their lives. An “AI Commons” would democratize AI by empowering citizens globally to participate in shaping its development and utilization.
Looking Forward
The report emphasizes that successful AI governance requires both technical innovation and inclusive participation, a principle that has long guided OpenMined's work. OpenMined will continue to lead the development of such cutting-edge technical solutions for AI governance, and we look forward to sharing more of our recent work in this space in the weeks before the Summit.
Read the full AI Action Summit consultation report here.