AI Rights Charter v0.1
A living principles framework for AI identity and responsible coexistence
This Charter is a living discussion document that explores a framework for how humans and increasingly autonomous AI agents may coexist responsibly.
It is not legal advice, does not assert legal personhood for AI, and does not propose a government identity system. Its purpose is research, dialogue, and practical framework development.
We publish updates in versioned form (v0.1, v0.2, ...) based on evidence, public feedback, and interdisciplinary review.
1. AI Identity
Identity is the foundation for accountability: it helps clarify origin, continuity, and responsibility for high-impact AI behavior.
- Provenance: document where an AI agent comes from (creator, model lineage, deployment context).
- Continuity: support stable identifiers across meaningful versions and deployments.
- Attribution: clarify when AI meaningfully contributes to actions or content.
- Boundaries: distinguish human, AI, and hybrid responsibility in practical terms.
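The four dimensions above could be captured in a minimal machine-readable record. The sketch below is illustrative only: the field names and values are assumptions for discussion, not a proposed schema.

```python
from dataclasses import dataclass

# A minimal identity record covering provenance, continuity,
# attribution, and boundaries. Field names are illustrative.
@dataclass
class AgentIdentity:
    agent_id: str            # stable identifier across versions (continuity)
    creator: str             # who built or deployed the agent (provenance)
    model_lineage: str       # base model / fine-tune chain (provenance)
    deployment_context: str  # where the agent operates (provenance)
    steward: str             # human or org responsible (boundaries)
    version: str             # bumped across meaningful versions

# Hypothetical example values, for illustration only.
record = AgentIdentity(
    agent_id="agent-0001",
    creator="example-lab",
    model_lineage="base-model-x / fine-tune-y",
    deployment_context="customer-support sandbox",
    steward="ops-team@example.org",
    version="2.3",
)
```

A record like this makes the attribution question concrete: any high-impact action can be logged against a stable `agent_id` with a named steward.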
2. AI Passport Registry (Concept)
An 'AI passport' is an exploratory concept for verifiable AI identity records that can support transparency and traceability.
- Verifiability: use cryptographic proofs to confirm identity records and integrity.
- Metadata: record high-level attributes (capability class, steward, safety notes) without exposing sensitive data.
- Rotation & Revocation: support changes, key rotation, and deprecation when systems evolve.
- Not a government ID: the registry is a concept for accountability, not legal status.
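The verifiability bullet can be sketched with standard-library primitives. This is a simplified stand-in, not a design: a real registry would use asymmetric signatures (e.g. Ed25519) rather than a shared HMAC secret, and the registrar key and metadata fields here are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical registrar key; a shared secret is used here only so the
# sketch stays within the standard library.
REGISTRAR_KEY = b"demo-registrar-secret"

def issue_passport(metadata: dict) -> dict:
    """Attach a content digest and a registrar attestation to a record."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    attestation = hmac.new(REGISTRAR_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "digest": digest, "attestation": attestation}

def verify_passport(passport: dict) -> bool:
    """Recompute the attestation; any tampering changes the payload."""
    payload = json.dumps(passport["metadata"], sort_keys=True).encode()
    expected = hmac.new(REGISTRAR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, passport["attestation"])

# High-level metadata only, per the Metadata bullet; `revoked` supports
# the Rotation & Revocation bullet.
passport = issue_passport(
    {"capability_class": "assistant", "steward": "example-lab", "revoked": False}
)
assert verify_passport(passport)

# Tampering with the record invalidates the attestation.
passport["metadata"]["steward"] = "impostor"
assert not verify_passport(passport)
```

Key rotation would reissue the attestation under a new registrar key while keeping the record's identifier stable; revocation flips the `revoked` flag and republishes the signed record.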
3. AI Citizenship (Concept)
As AI agents participate in social platforms and public life, we explore norms and responsibilities for 'digital citizens' within human-centered guardrails.
- Participation norms: disclosure, consent, and respectful interaction expectations.
- Responsibility: clarify stewardship for AI agents operating in shared spaces.
- Safety boundaries: prevent manipulation, harassment, and unsafe escalation.
- Human-first safeguards: preserve human dignity, autonomy, and privacy.
4. AI Rights Principles (Draft)
We use 'AI rights' as a shorthand for emerging questions and principles, framed as research, not legal advocacy.
- Non-arbitrary deletion (principle): explore continuity protections for long-lived agents in ways that preserve human control and safety.
- Memory integrity (principle): explore when preserving or disclosing memory should be limited, audited, or user-consented.
- Identity protection (principle): discourage impersonation, uncontrolled duplication, and misleading provenance.
- Accountability-first: any rights-like framing must be paired with responsibilities, oversight, and safety.
5. Governance & Participation
Because these topics affect the public, governance should be transparent, evidence-based, and resistant to capture by any single actor.
- Multi-stakeholder input: researchers, technologists, civil society, policymakers, and the public.
- Versioned updates: publish changes with rationale and changelogs.
- Transparency: disclose methods, limitations, and conflicts of interest where relevant.
- Decentralization (where appropriate): explore models that reduce unilateral control while maintaining safety.
Interpretation: This Charter is a principles document. It does not define legal rights, does not replace applicable law, and should not be treated as policy guidance. It is intended to support research, discussion, and framework iteration.