To build AI that works for democracy and that brings more benefit than harm, we need organizing principles to guide each party involved in developing and operating AI. Together, these would represent a shared vision of democratic AI. We propose the following:
7 Principles for AI That Strengthens Democracy
- AI must be broadly capable.
- AI tools must be made widely available.
- AI developers and tools must be transparent.
- AI developers must be meaningfully responsive.
- AI must be actively debiased.
- AI tools must be reasonably secure.
- AI tools and their developers must be non-exploitative.
1. AI Must Be Broadly Capable
It needs to satisfy the full range of AI use cases: from predicting outcomes and making decisions, to generating text and images, to still other applications that will become possible in the future. How that works may change over time. It could mean a single model with broad capabilities or it could mean a handful of specialized models with narrower capabilities. Although they don’t need to be the world’s best-performing AIs, they do need to be capable of executing each job adequately.
2. AI Tools Must Be Made Widely Available
Anyone, including public and private entities, needs to be able to access them, regardless of their identity, affiliation, nationality or wealth. They must be permissively licensed, portable to many different hardware platforms and available for use at low or no cost.
3. AI Developers and Tools Must Be Transparent
In a democracy, the consent of the governed requires an informed citizenry. Systems and technologies of governance must therefore be open to review and criticism. AI tools must be available for study, testing and extension, requiring models to be open source with respect to both the code and the data.
4. AI Developers Must Be Meaningfully Responsive
The public must be invited to participate in AI development and management. Entities developing AI — whether a for-profit corporation, a non-profit, a nation or an international organization — should actively solicit and demonstrably respond to input from stakeholders, and developers should be held accountable if they do not do so. Responsiveness does not stop with choosing data inputs and model designs; AI developers must consider and monitor the winners and losers from any deployment of their products and responsively mitigate disparate impacts and potential harms to different communities.
5. AI Must Be Actively Debiased
There is no one true definition of fairness, equity or bias, and the last thing we want is for AI to step in to decide these questions. We must recognize that human choices drive the biases of AI systems and acknowledge and manage their effects. No system can please every demographic all of the time, but no basis for trust will exist if AI systems deny or obscure users’ values.
6. AI Tools Must Be Reasonably Secure
They need to do what they promise and nothing more. Using them must not result in the disclosure of confidential or sensitive information. They must enforce data integrity and reflect the actual state of the world. They must be subject to public direction (see “meaningfully responsive” above), not surreptitiously beholden to private interests such as their developers, their billionaire investors, or antidemocratic actors (like authoritarian states). They won’t be infallible, but they must be uncorrupted.
7. AI Tools and Their Developers Must Be Non-Exploitative
They must not co-opt public labor for private profit, whether that labor is the collective creative output of millions of individuals used as training data or the work of annotators in developing economies who are not fairly compensated. The actual costs to people and to the environment must be included in any calculation of the net value of an AI system. Corporate AI developers are skilled at externalizing these costs, but AI aimed at enhancing democracy must demonstrably not do so.
The Scope of Democratic AI
These principles for trustworthy AI in democratic contexts are the same principles we would insist that any publicly accountable institution adhere to: to be capable, available, transparent, responsive, debiased, secure and non-exploitative.
We are deliberately not prioritizing other issues. AI does not need to be free from mistakes; it can be useful without being perfect. Neither does it need to be free of risk; beneficial uses for AI exist even in fields where it could cause harm, such as biotech. Nor does AI need to be altruistic; some entities might profit from it. If we saddle AI with too many additional requirements before considering it suitable for use in democratic contexts, democratic AI will never be possible.
Many people and groups advocate for responsible and trustworthy AI. In 2020, Mozilla published a white paper on trustworthy AI with many of these same ideas. Data and Society’s Michele Gilman and the Ada Lovelace Institute’s Lara Groves have each originated frameworks for public participation in AI development.
The Collective Intelligence Project has outlined what it calls “Democratic AI,” a definition for an AI ecosystem that will “provide public goods, and safeguard people’s freedom, wellbeing and autonomy”; that is, a definition of AI that behaves like the ideal of democracy itself. Government agencies like the U.S. National Institute of Standards and Technology have put forward their own guidelines for trustworthy AI. Meanwhile, research institutes monitor the industry’s adherence to ethical principles, such as the Stanford Center for Research on Foundation Models’ transparency index, which measures factors like model accessibility and data disclosure.
AI Must Work for Democracy
Widespread implementation of these principles will not be easy, and there is no simple solution to any of the concerns we have discussed. Broad capability largely exists, but the other factors do not. Corporate AI developers face structural incentives to limit the availability of their models.
For example, OpenAI gave its lead investor, Microsoft, first dibs on using and commercializing its key technologies. And while the R&D and operational costs of many of today’s leading models have been subsidized by venture capital, corporate AI developers will ultimately pass those costs on to users by hiking fees or through monetization mechanisms like advertising.
OpenAI, for example, has signaled its intent to charge exorbitant rates for its most capable tools. Corporate AI developers have demonstrated that they have no obligation, and little inclination, to shoulder the environmental costs of their products, and they have shirked their responsibility to publicly acknowledge and mitigate the disparate, harmful impacts of those products. Most of all, corporate AI has failed at being responsive. Even Meta’s nominally open model, Llama, is developed largely out of public view, and the data used to train it is kept secret.
The barriers to trustworthy democratic AI are more social and political than technical. For example, steering corporate AI toward these outcomes requires governments to regulate AI developers more actively: to prevent exploitation, to eliminate the subversion of user interests on behalf of advertisers and to require developers to meaningfully consider public input. Providing robust alternatives to corporate AI requires political will, and perhaps even significant public investment that must compete with many other pressing needs.
Excerpted from Rewiring Democracy: How AI Will Transform Our Politics, Government and Citizenship by Bruce Schneier and Nathan E. Sanders. Reprinted with permission from The MIT Press. Copyright 2025.
