Response to the NTIA on AI Accountability

OpenMined’s Policy Lead, Lacey Strahm, outlines a pioneering approach to AI system accountability in response to NTIA’s request for comments.

Read the comments in full here.

As AI becomes increasingly integrated into our daily lives, one crucial question emerges: How do we ensure AI systems are accountable and operating as intended? In a recent response to the United States National Telecommunications and Information Administration (NTIA), OpenMined offers a compelling answer – structured transparency.

A New Paradigm for AI Auditing

Traditional approaches to AI auditing typically assume that auditors need direct access to raw data and systems. However, OpenMined proposes a fundamental shift in this thinking.

What auditors really want is access to answers, not data. They aren’t after billions of data points – they want to know whether an AI system shows concerning biases or causes harmful effects.

Structured Transparency

Structured transparency represents a new approach to AI auditing where external parties can verify AI system behavior without accessing sensitive raw data. Instead of debating whether to grant auditors access to vast datasets, the focus shifts to answering specific, targeted questions about system behavior.

Benefits for All Stakeholders

Structured transparency offers advantages for everyone involved:

For Auditors:

  • Faster access to the information they need
  • More straightforward approval processes
  • Increased opportunities for participation

For Platforms:

  • Protection of user privacy and intellectual property
  • Reduced security risks
  • Ability to demonstrate transparency while safeguarding sensitive information

For Regulators:

  • Easier oversight without managing sensitive data access
  • Support for a more diverse auditing ecosystem
  • Lower barriers to implementing effective oversight

Real-World Implementation

OpenMined has already demonstrated the practical value of this approach through several high-profile collaborations:

  • A partnership with Twitter to study political bias in their recommendation algorithms.
  • The Christchurch Call Initiative on Algorithmic Outcomes (CCIAO), involving collaboration with Microsoft, Dailymotion, New Zealand, and the United States.
  • Work with the United Nations Privacy Enhancing Technology Lab.

Technical Innovation Making It Possible

The key to structured transparency lies in advanced privacy-enhancing technologies (PETs). These tools allow auditors to prepare statistical analyses that run on an organization’s data, with the auditor receiving only the verified results – never the raw data itself.
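This query-in, answer-out pattern can be illustrated with a minimal sketch. Note that this is not OpenMined's actual PySyft API – the names here (`DataOwner`, `demographic_parity_gap`, `min_group_size`) are hypothetical, chosen only to show the shape of the interaction: the auditor authors a query, the data owner runs it locally behind a release policy, and only an aggregate answer crosses the boundary.

```python
# Illustrative sketch of structured transparency (not a real library API):
# the auditor submits a query; the data owner executes it on private
# records and releases only a scalar answer, never the records themselves.
from typing import Callable

Record = dict  # one row of the owner's private data


class DataOwner:
    def __init__(self, private_records: list[Record], min_group_size: int = 50):
        self._records = private_records        # never leaves this object
        self._min_group_size = min_group_size  # simple release policy

    def answer(self, query: Callable[[list[Record]], float]) -> float:
        """Run the auditor's query locally; release only the result."""
        if len(self._records) < self._min_group_size:
            raise PermissionError("cohort too small to release an aggregate")
        return query(self._records)


def demographic_parity_gap(records: list[Record]) -> float:
    """Auditor-authored query: gap in positive-outcome rates between groups."""
    def rate(group: str) -> float:
        rows = [r for r in records if r["group"] == group]
        return sum(r["approved"] for r in rows) / len(rows)
    return abs(rate("A") - rate("B"))


# The auditor learns one number about system behavior, not the data:
owner = DataOwner(
    [{"group": "A", "approved": 1}] * 40 + [{"group": "A", "approved": 0}] * 10
    + [{"group": "B", "approved": 1}] * 20 + [{"group": "B", "approved": 0}] * 30,
)
gap = owner.answer(demographic_parity_gap)
print(round(gap, 2))  # 0.4 (80% approval for group A vs. 40% for group B)
```

In production, the release policy in `answer` would be replaced by stronger PET machinery (for example differential privacy or secure computation), but the division of roles – auditor asks, owner computes, only answers flow out – is the same.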

While these technologies are still emerging, they are sufficiently mature to facilitate external access to internal AI systems, algorithms, and datasets. Policymakers should give them serious consideration when developing policy and regulation.

Looking Forward

As governments worldwide grapple with AI oversight, structured transparency offers a practical path forward that balances accountability with privacy and security concerns. OpenMined’s approach demonstrates that effective AI auditing doesn’t have to create trade-offs between transparency and other crucial considerations.


For a detailed exploration of these concepts, read our full response to the NTIA here.
