The Agora Constitution

A Constitution for the Digital Age

Draft v0.5 — March 2026


Preamble

The Historical Record

Human history is in large part the story of the social contract. Our story therefore begins in the ‘agora’, the public space of the ancient Greeks. The agora is often called the “birthplace of democracy”: the place where self-government took its first steps, and where liberty, equality, and security became the supporting pillars of the relationship between the Athenian people and their government.

Our story then moves quickly to June 15th, 1215, in a meadow near Windsor in England, where King John puts his royal seal on a world-altering document: the Magna Carta. It is at this moment, more or less, that the medieval social contract changes, making the king subject to the law rather than above it. Its 63 clauses establish protections against abuses such as punishment without due process and taxation without consent.

Our story moves forward again to 1789 and the document whose framework set power against power in a nuanced system of checks and balances. The framers of the U.S. Constitution were steeped in knowledge of the Greeks and the history of English law.

Between the values proclaimed in the Declaration of Independence in 1776 and the specific articles of the Constitution, their political project created the conditions for a new kind of social contract: a state in which life, liberty, and property (and, in Jefferson’s phrasing, the pursuit of happiness) were declared unalienable rights, to be secured through consent, legitimacy, and participation; in Lincoln’s later words, a government of the people, by the people, for the people.

This story is not merely a collection of documents, agreements, or social contracts. It is, at bottom, the story of a simple idea: that power can be mediated through agreed-upon language. Power becomes accountable through delineated consent, and the “rules” provide a bulwark against tyranny and abuse.

The Digital Age

Now our story moves to the technological progress of the digital age. In 1973 an advisory committee at the U.S. Department of Health, Education, and Welfare, chaired by the forward-thinking Willis Ware, articulated five principles of fair information practice that ultimately informed the Privacy Act of 1974.

The five original core principles are worth summarizing here:

  1. No secret systems
  2. Right to know what’s collected
  3. Right to prevent misuse
  4. Right to correct records
  5. Organization must ensure reliability

In 1998 the FTC reworked these five into its Fair Information Practice Principles:

  1. Notice — Tell consumers what you collect and how you use it
  2. Choice — Give consumers options about how their data is used
  3. Access — Let consumers see and contest their data
  4. Security — Keep data accurate and secure
  5. Enforcement — Have mechanisms to punish noncompliance

As computers went online and the internet connected them around the globe, the need for broader discussion and protection quickly became apparent. The General Data Protection Regulation (GDPR) took effect in the European Union in 2018, enshrining many of these principles in a comprehensive legal framework that requires businesses to take specific steps to protect personal data.

The focus is, always, the human experience of power.

In the modern era it begins with a “user” trying to reach a web page and clicking through a pop-up window with massive legal implications for the collection and use of their personal data. It begins with a mind-numbing scroll through a “terms and conditions” document, or an attempt to make sense of a privacy policy that is too long to read and more or less impossible to understand.

Many of the biggest tech companies have made significant strides in attempting to simplify and clarify the terms of this relationship, and privacy tools have increased in their capability to provide some meaningful protection. The Center for Humane Technology has done tremendous work in identifying, beyond the misuse of personal data, the damaging effect that much of this technology has on our lives, and the lives of our children.

But in the year 2026 the power of technology in our world is undeniably great, and the relationship between the “governed” and this powerful governing technology remains in disrepair. With advances in artificial intelligence, this relationship matters now more than ever.

The Techno-Social Contract

Reining in technological power is often an uphill battle in the face of the powerful forces of profit incentives and amoral markets. Wishing for benevolence is futile and awareness alone is insufficient.

Data privacy alone is likewise insufficient, especially given the widening power of modern systems. Why not expand the frame?

What is needed is a fundamental structural blueprint for a rightful relationship between humans and powerful technology, in short, a “techno-social contract”.

What shape might a thorough techno-social contract take?


Constitution

We the people, in order to maintain and perfect social contracts within political systems, hereby set down this document as a primary agreement with our increasingly capable technological systems.

We the people have a fundamental right to consent to power, whether that power resides within a government, a device, an algorithm or any other technological system.

Technological systems have power over people in many fundamental ways:

  1. They identify you — deciding who you are in the system, or whether you exist at all
  2. They track you — recording what you do, where you go, and when; the resulting record can be lost, locked, sold, or erased, often without your consent
  3. They decide for you — algorithms determine what you see, what you qualify for, what you’re denied
  4. They distract you — shaping your attention, your urgency, your priorities
  5. They don’t let you leave — making departure difficult, punitive, or impossible, trapping your data and your presence
  6. They answer to no one — when the system fails, there is no one to call and few paths for remedy

These powers become clearer when defined through their relationship with people, with a real human person instead of a “user”.

There is no reason to assume that a technological system holds these powers by divine, absolute, or essential right. In fact, it is much more readily the case that the person, or the people, should hold the power in this relationship.

Where there is power, there must be an agreement on behalf of those subjected to that power. These articles are the delineation of the rights of the people in this regard, and the obligations of the relevant technological systems.

Those who control these systems, be they individuals, entities, organizations, companies, or governments, inherit these obligations. It is the duty of the technologist to build and safeguard these systems appropriately.

It is likewise the responsibility of the builders and maintainers to help both technologists and non-technologists alike to understand and to be able to exercise their rights in these matters.

It is essential that the enforcement of these obligations be the purview of an independent party with the authority to ensure compliance and accountability.


Articles

I — Identity

Identity is arguably the most important distinction made in the world of technology. When a system identifies an individual, that identity becomes part of the system.

A system that identifies a human individual must acknowledge, affirm, and act in accordance with their rights as set out in this document.

Typically a system uses language that identifies individuals as “users”. Instead, the system must recognize the human as an able actor, capable of consent and possessed of unalienable rights.

II — Record

When a system tracks the actions, movements, or transactions of a person, it creates a record. That record is not the property of the system. It is a shared artifact of the relationship, and the person it describes has rights over it: the right to see it, the right to challenge it, and the right to understand how it is used.
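
A minimal sketch of what “a shared artifact” could mean in practice, with hypothetical `track`, `my_record`, and `dispute` helpers of our own invention: every entry states its use, the person can retrieve their full record, and a challenged entry is flagged rather than silently kept.

```python
# Illustrative sketch only: an in-memory record store with the three rights
# of Article II (see, challenge, understand the use) built in.
records: list[dict] = []

def track(person: str, event: str, used_for: str) -> dict:
    """Every record names its purpose at the moment of creation."""
    rec = {"person": person, "event": event, "used_for": used_for,
           "disputed": False}
    records.append(rec)
    return rec

def my_record(person: str) -> list[dict]:
    """Right to see: the person's full record, nothing withheld."""
    return [r for r in records if r["person"] == person]

def dispute(rec: dict) -> None:
    """Right to challenge: a contested entry is marked, not silently kept."""
    rec["disputed"] = True

track("ada", "page-view", "service delivery")
entry = track("ada", "location-ping", "advertising")
dispute(entry)
print(len(my_record("ada")))          # 2
print(entry["disputed"])              # True
```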

III — Restraint

A system must acknowledge the boundaries of its own knowledge. Not everything about a person should be captured, categorized, or made legible.

A system has an obligation not to know certain things. The right to opacity exists in many contexts.

A system must operate within a bounded and clearly defined framework, extending only to its core purposes and to the alignment of those purposes with the rights of the people it serves and interacts with. These core purposes should default to a narrow scope, and that scope should be justified through honest disclosure.

IV — Transparency

A system must present itself honestly to the people it serves. Its rules, its logic, and its actions must be legible to those it affects.

When a system makes a decision that impacts a person — approving, denying, ranking, recommending, restricting — that person has the right to understand what happened and why. When a system changes its rules, those changes must be communicated clearly and in advance.

A system that cannot explain itself to the people it serves has no legitimate authority over them.

V — Decision

When a system makes a determination that affects a person’s outcomes, opportunities, or standing, it exercises judgment. That judgment carries weight regardless of whether it was made by a human or an algorithm.

This article does not concern the general design choices inherent in any system. It concerns the specific determinations a system makes about an individual person that affect their rights, opportunities, or outcomes.

A system must not delegate consequential decisions to automated processes without clear justification and meaningful human review. A person has the right to know when a decision about them has been made by an automated system, and the right to have that decision reviewed by a human being.

The power to decide is not neutralized by the complexity of the system that exercises it. An algorithm is not an excuse. It is a choice made by the people who built and deployed it, and it must be answerable as such.

VI — Attention

A system that notifies, alerts, or otherwise demands the attention of a person exercises a form of power over that person’s time, focus, and mental energy.

This power must be exercised with restraint and respect. A person has the right to control when and how a system may demand their attention.

Notifications must serve the interest of the person, not the engagement metrics of the system. A system must never exploit human psychology to manufacture urgency, compel interaction, or prevent departure.

The attention of a human being is a faculty to be respected.
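
As one hedged sketch of what person-controlled attention could look like (the `preferences` schema and `may_notify` gate are invented for illustration), the person defines quiet hours and allowed categories, and the system asks permission rather than demanding attention. The sketch assumes a quiet window that crosses midnight.

```python
from datetime import time

# Illustrative sketch only: the person, not the system, sets the terms under
# which their attention may be claimed.
preferences = {
    "ada": {
        "quiet_from": time(21, 0),        # quiet hours start (9 pm)
        "quiet_until": time(8, 0),        # quiet hours end (8 am)
        "allowed": {"security", "direct-message"},
    }
}

def may_notify(person: str, category: str, now: time) -> bool:
    """A notification is permitted, never demanded."""
    p = preferences[person]
    if category not in p["allowed"]:
        return False                      # not opted in: never delivered
    # quiet window spans midnight, so it is an OR of the two boundaries
    in_quiet = now >= p["quiet_from"] or now < p["quiet_until"]
    return (not in_quiet) or category == "security"   # urgency the person chose

print(may_notify("ada", "promotion", time(12, 0)))       # False
print(may_notify("ada", "direct-message", time(12, 0)))  # True
print(may_notify("ada", "direct-message", time(23, 0)))  # False
print(may_notify("ada", "security", time(23, 0)))        # True
```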

VII — Exit

A person has the right to leave any system, for any reason, at any time. This right is unconditional and does not require justification.

When a person chooses to leave, the system must provide a clear and accessible means of departure. Upon request, any records pertaining to the person must be either returned to them in a readable format, permanently removed, or — where retention is required by law or public obligation — clearly disclosed and minimally scoped.

Departure from a system must not be punitive, prohibitively difficult, or effectively impossible.
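
A minimal sketch of such a departure, assuming an invented in-memory `store` and `leave` function: the person receives a readable export of everything, records are erased, and anything retained under a legal obligation is minimally scoped and explicitly disclosed in the receipt.

```python
import json

# Illustrative sketch only: data and retention reasons are invented examples.
store = {"ada": [
    {"event": "login"},
    {"event": "purchase"},
    {"event": "invoice", "retain": "tax law (7 years)"},
]}

def leave(person: str) -> dict:
    """Unconditional exit: return a readable export, erase what can be
    erased, and disclose exactly what is retained and why."""
    records = store.pop(person, [])
    retained = [r for r in records if "retain" in r]
    if retained:
        store[person] = retained          # minimally scoped, disclosed below
    return {
        "export": json.dumps(records, indent=2),   # full readable copy
        "retained": [(r["event"], r["retain"]) for r in retained],
    }

receipt = leave("ada")
print(receipt["retained"])    # [('invoice', 'tax law (7 years)')]
print("ada" in store)         # True: only the legally retained record remains
```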

VIII — Accountability

Those who build, own, and operate systems that fall under this document bear an obligation that cannot be delegated to the technology itself.

A system must identify the entity responsible for its operation. A person affected by the actions of a system must have a means of reaching a responsible human being — not solely an automated process. When a system causes harm through error, negligence, or design, there must be a clear path to remedy.

No system may claim that its complexity, scale, or automated nature absolves it of responsibility to the people it serves. The obligation follows the power. Where the system exercises power, the operator answers for it.


Adoption and Certification

This document is not legislation. It is not a regulation. It is a constitution.

It is a framework that precedes and informs the specific laws, policies, and technical implementations that follow.

Adoption of this document means different things to different actors:

For technologists and developers: Adoption means building systems that recognize constitutional identities, not just users. It means architecting for transparency, contestability, and accountability from the beginning — not retrofitting them as compliance requirements. Open source implementations of these principles should be developed, shared, and improved collectively.

For organizations and companies: Adoption means committing to the principles of this document as a standard for how your systems relate to the people they serve. It means auditing existing systems against these articles and publicly acknowledging where they fall short.

For municipalities and governments: Adoption means requiring that civic technology — the systems through which residents interact with their government — comply with these articles. Public systems, funded by public money, serving public interests, should be the first to embody these principles.

For lawmakers and regulators: Adoption means using this document as a foundation for legislation that moves beyond data protection toward a fuller framework of rights in the relationship between people and technology. The existing regulatory landscape (Privacy Act, GDPR, FTC guidelines) addresses a fraction of the relationship. This document addresses the whole.

For individuals: Adoption means demanding that the systems you rely on recognize you as a person with rights — not a user to be managed, a data point to be harvested, or an engagement metric to be optimized.

Adoption

[Agora adoption icon: a blue shield]

The blue shield “agora” icon represents a public declaration that an organization, system, or individual commits to the principles in this document. It signals intent. Adoption is voluntary and aspirational.

Certification

[Agora certification icon: an off-white shield]

The lighter shield “agora” icon represents a verified evaluation that a system meets a defined standard of compliance with these articles.

Certification criteria are under development.

License

This document will be maintained as a living text, revised through open and transparent process, and made freely available to all who wish to adopt, implement, or build upon it.

It is licensed under Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).

You are free to share and adapt it for any purpose, provided you give attribution and distribute any derivatives under the same license.

(See LICENSE.md or https://creativecommons.org/licenses/by-sa/4.0/)


References

Historical Documents

Magna Carta (1215). U.S. National Archives. https://www.archives.gov/milestone-documents/magna-carta

United States Declaration of Independence (1776). National Archives. https://www.archives.gov/founding-docs/declaration

United States Constitution (1789). National Archives. https://www.archives.gov/founding-docs/constitution

Law & Policy

U.S. Department of Health, Education and Welfare, Secretary’s Advisory Committee on Automated Personal Data Systems. Records, Computers and the Rights of Citizens (July 1973). https://aspe.hhs.gov/reports/records-computers-rights-citizens

Privacy Act of 1974, 5 U.S.C. § 552a. https://www.justice.gov/opcl/privacy-act-1974

Federal Trade Commission. Privacy Online: A Report to Congress (June 1998). https://www.ftc.gov/reports/privacy-online-fair-information-practices-electronic-marketplace-federal-trade-commission-report

Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation). Effective May 25, 2018. https://gdpr-info.eu/

Books & Essays

Barlow, John Perry. “A Declaration of the Independence of Cyberspace” (February 8, 1996). Electronic Frontier Foundation. https://www.eff.org/cyberspace-independence

Lessig, Lawrence. Code and Other Laws of Cyberspace. Basic Books, 1999.

Locke, John. Two Treatises of Government. 1689.

Williams, James. Stand Out of Our Light: Freedom and Resistance in the Attention Economy. Cambridge University Press, 2018.

Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.

Organizations

Center for Humane Technology. https://www.humanetech.com/

Electronic Frontier Foundation. https://www.eff.org/

Electronic Privacy Information Center (EPIC). https://epic.org/

Renew Democracy Initiative. https://rdi.org/