
Response to OSTP on National AI Priorities

OpenMined’s Policy Lead, Lacey Strahm, outlines an innovative approach to AI system evaluation in response to OSTP’s Request for Information on National AI Priorities.

As AI systems become increasingly prevalent in our lives, a critical challenge has emerged: How do we effectively evaluate AI systems for potential harms while respecting privacy, security, and intellectual property concerns? In a recent response to the United States Office of Science and Technology Policy (OSTP), OpenMined presents an innovative solution: remote auditing.

The Hidden Challenge of AI Oversight

Today’s AI systems can have far-reaching impacts that are often invisible to their creators. Consider a movie recommendation system: while its developers might track engagement metrics, they may have no way to know if their algorithm is inadvertently disrupting users’ sleep patterns. Even if this information exists (perhaps in a sleep-tracking app), privacy and competitive concerns make it difficult to share.

The problem extends beyond access to the AI systems themselves: it is about access to the information needed to understand their real-world impact on people's lives.

A New Approach: Remote Audits

OpenMined proposes a solution that allows meaningful oversight while protecting every party's interests: remote audits enable external evaluators to get answers about an AI system's impacts without requiring direct access to sensitive data or systems.

Here’s how it works:

  1. AI companies host their systems on secure servers exposed through specialized APIs
  2. Third parties can securely share relevant impact data
  3. Auditors develop and test their evaluation methods using mock data
  4. Once approved, these evaluation methods run on the real system and data
  5. Auditors receive verified answers to specific questions without accessing raw data
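
To make the flow above concrete, here is a minimal, self-contained Python sketch of the protocol. Every name in it (RemoteAuditServer, submit_audit, the sleep-tracking records) is a hypothetical illustration, not OpenMined's actual API; a real deployment would add authentication, explicit code review and approval by the data owner, sandboxed execution, and privacy protections such as differential privacy.

```python
# Illustrative sketch of the remote-audit flow; names are hypothetical,
# not a real OpenMined/PySyft API.
from typing import Callable, List


class RemoteAuditServer:
    """Hosts private data and runs approved audit code on the owner's side."""

    def __init__(self, private_data: List[dict], mock_data: List[dict]):
        self._private_data = private_data   # never leaves the server
        self.mock_data = mock_data          # shared freely with auditors

    def submit_audit(self, audit_fn: Callable[[List[dict]], float]) -> float:
        """Run an audit against the real data after a dry run on mock data.

        The auditor only ever receives the aggregate result, never raw rows.
        """
        # Step 3: the audit must first run on mock data, giving the data
        # owner a chance to review exactly what will be computed.
        audit_fn(self.mock_data)
        # Steps 4-5: once approved, the same code runs on the real data
        # and only the scalar answer is released.
        return audit_fn(self._private_data)


# Step 2: a sleep-tracking app shares (synthetic, for this sketch) impact data.
real_data = [{"user": i, "hours_slept": 7.5 - 0.02 * i} for i in range(100)]
mock_data = [{"user": i, "hours_slept": 8.0} for i in range(10)]

server = RemoteAuditServer(real_data, mock_data)


# Step 3: the auditor writes an evaluation against the mock schema.
def mean_sleep(records: List[dict]) -> float:
    return sum(r["hours_slept"] for r in records) / len(records)


# Steps 4-5: the approved audit runs remotely; only the answer comes back.
print(f"Average hours slept: {server.submit_audit(mean_sleep):.2f}")
```

The key property is that the private data never crosses the trust boundary: the auditor's question travels to the data, and only the verified answer travels back.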

Learning from Drug Development

OpenMined draws an illuminating parallel between AI evaluation and pharmaceutical trials:

  • Early Phase: Just as drugs are first tested in laboratory conditions, AI systems undergo initial testing in simulated environments
  • Real-World Testing: Like clinical trials, AI systems need real-world evaluation to understand their actual impacts
  • Continuous Monitoring: Similar to post-market drug monitoring, AI systems require ongoing monitoring to detect unexpected effects

The comparison is particularly apt because both AI auditing and drug trials focus on the same fundamental question: do people’s lives improve or worsen when they use this technology in the real world?

Benefits for All Stakeholders

This approach offers advantages across the board:

For AI Companies:

  • Protect proprietary systems and user privacy
  • Gain valuable insights about their products’ impacts
  • Demonstrate transparency and responsibility

For Auditors:

  • Access meaningful evaluation capabilities
  • Create verifiable results
  • Focus on impact rather than technical details

For Society:

  • Better understanding of AI systems’ effects
  • Early detection of potential harms
  • Balance between innovation and safety

Looking Forward

As the U.S. government continues developing its AI strategy, OpenMined’s proposal offers a practical path forward for meaningful oversight. Their approach builds on existing government initiatives like the Blueprint for an AI Bill of Rights while providing concrete mechanisms for implementation.

To effectively harness the benefits and mitigate the risks of AI, we must solve these access problems. Without proper oversight mechanisms, harmful effects of AI systems may go undetected and unaddressed.


For more detailed technical specifications and recommendations, read our full response to OSTP here.
