Open Horizons: Exploring nuanced technical and policy approaches to openness in AI

We’re excited to see OpenMined’s work and technical approaches featured prominently in Open Horizons, a timely report published by Demos and Mozilla exploring balanced approaches to AI openness and transparency.

The report, which draws on a June 2024 expert workshop in which OpenMined’s Dave Buckley participated, outlines key recommendations for realizing the benefits of AI openness while managing its risks. In particular, it highlights technical governance methods that can enable meaningful transparency for both open and closed models.

Several OpenMined initiatives are featured as promising solutions. The report stresses the importance of external oversight, citing our collaboration with the Christchurch Call Initiative on Algorithmic Outcomes, in which we demonstrated how PySyft can facilitate secure external audits of production recommender systems at Microsoft’s LinkedIn and Dailymotion. It also argues for broader researcher access programmes, highlighting our ongoing collaboration with Reddit to build a privacy-preserving researcher access program with PySyft.
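The structured-access pattern behind these audits and researcher access programs can be illustrated with a minimal sketch. This is not the real PySyft API; all class and function names here are hypothetical, chosen only to show the core idea: the researcher submits a query, the data owner approves it, and only the approved aggregate result crosses the boundary, never the raw data.

```python
# Conceptual sketch (NOT the PySyft API; all names are illustrative)
# of structured access: raw records stay with the data owner, and
# only owner-approved, aggregate results are released.

class DataOwner:
    """Holds private data and gates what leaves its infrastructure."""

    def __init__(self, private_records):
        self._records = private_records  # never exposed directly
        self._approved = set()

    def approve(self, query_name):
        # In practice a human reviewer or policy engine vets the
        # submitted code before approval.
        self._approved.add(query_name)

    def run(self, query_name, query_fn):
        if query_name not in self._approved:
            raise PermissionError(f"query {query_name!r} not approved")
        # Only the computed result crosses the trust boundary.
        return query_fn(self._records)

# Researcher side: propose a query over data they can never download.
owner = DataOwner(private_records=[2, 9, 4, 7])
owner.approve("mean_score")
result = owner.run("mean_score", lambda rs: sum(rs) / len(rs))
print(result)  # the approved aggregate, not the records
```

An unapproved query, such as one attempting to dump the raw records, is simply refused, which is the property that makes external audits acceptable to the platform being audited.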

The technical governance approaches OpenMined advocated in our comments to the National Telecommunications and Information Administration’s (NTIA) Request for Comments on the Openness of AI Models are also featured. Specifically, the report discusses how secure enclaves can enable structured access to closed models while protecting intellectual property, and how retrieval-augmented generation (RAG) approaches could help partition model capabilities for safer model sharing. As we argue in our comments to the NTIA, combining these approaches can facilitate a shared governance model for AI that moves beyond the traditional open-vs-closed dichotomy.

We welcome the report’s vision of more nuanced approaches to AI transparency. Rather than viewing openness and safety as opposing forces, we need technical governance solutions that enable structured transparency: providing the benefits of openness while ensuring safety. We’re proud to see our approaches recognized as potential solutions to these critical challenges in AI governance. OpenMined remains committed to developing practical tools that help realize this vision, and we look forward to working with partners across the ecosystem to implement these ideas and build more transparent, accountable, and secure AI systems.
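The RAG-based capability-partitioning idea can be made concrete with a toy sketch. This is not a real system: the keyword lookup below stands in for vector search, and returning a retrieved passage stands in for conditioning an LLM on it. The point is only that a model which must ground its answers in a retrieval store loses any capability whose supporting corpus is withheld from that store.

```python
# Toy illustration (assumed design, not a production system) of
# partitioning capabilities via retrieval-augmented generation:
# the shared system can only answer from documents present in its
# retrieval store, so withholding a corpus removes a capability.

def retrieve(store, query):
    # Naive keyword overlap standing in for real vector retrieval.
    words = set(query.lower().split())
    return [doc for doc in store if words & set(doc.lower().split())]

def answer(store, query):
    context = retrieve(store, query)
    if not context:
        return "I don't have information on that."
    # A real system would condition an LLM on the retrieved context;
    # here we simply return the first retrieved passage.
    return context[0]

public_store = ["PySyft enables remote data science on private data."]
print(answer(public_store, "Tell me about PySyft"))
# A query whose supporting corpus was withheld gets the fallback.
print(answer(public_store, "synthesis route"))
```

Because the sensitive knowledge lives in the withheld datastore rather than in the shared weights, the deployer can share the system more openly while keeping dangerous or proprietary capabilities out of reach.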


