We’re excited to see OpenMined’s work and technical approaches featured prominently in Open Horizons, a timely report published by Demos and Mozilla exploring balanced approaches to AI openness and transparency.
The report, drawing on a June 2024 expert workshop in which OpenMined’s Dave Buckley participated, outlines key recommendations for pursuing the benefits of AI openness while managing its risks. In particular, it highlights technical governance methods that can enable meaningful transparency for both open and closed-source models.
Several OpenMined initiatives and approaches are cited as promising solutions. The report underscores the importance of external oversight, pointing to our collaboration with the Christchurch Call Initiative on Algorithmic Outcomes, in which we demonstrated how PySyft can facilitate secure external audits of production recommender systems at Microsoft’s LinkedIn and Dailymotion. The report also argues for broader researcher access programmes, highlighting our ongoing collaboration with Reddit to develop a privacy-preserving researcher access programme built on PySyft.
The technical governance approaches that OpenMined advocated in our comments to the National Telecommunications and Information Administration’s (NTIA) Request for Comments on the Openness of AI Models are also featured. Specifically, the report discusses how secure enclaves can enable structured access to closed models while protecting intellectual property, and how retrieval-augmented generation (RAG) approaches could help partition model capabilities for safer model sharing. As we argue in our comments to the NTIA, combining these approaches can facilitate a shared governance model for AI that moves beyond the traditional open vs closed dichotomy.

We welcome the report’s vision of more nuanced approaches to AI transparency. Rather than viewing openness and safety as opposing forces, we need sophisticated technical governance solutions that facilitate structured transparency – providing the benefits of openness whilst ensuring safety. We’re proud to see our approaches being recognized as potential solutions to such critical challenges in AI governance. OpenMined remains committed to developing practical tools that can help realize this vision, and we look forward to working with partners across the ecosystem to implement these ideas and create more transparent, accountable, and secure AI systems.