The Decision
On October 22, 2025, the Fedora Council approved a policy on AI-assisted contributions that stands in stark contrast to the blanket bans adopted by Gentoo, NetBSD, and QEMU.
"AI-generated content must be treated as a suggestion, not as final code or text. It is your responsibility to review, test, and understand everything you submit."
Fedora didn't ban AI. They didn't ignore it either. They spent over a year consulting their community, gathering evidence, debating in public, and crafting a policy that addresses real concerns without resorting to prohibition.
This is what good governance looks like.
The Process: A Year of Deliberation
Where other projects made decisions in days or weeks, Fedora took their time.
The Timeline
| Date | Action |
|---|---|
| Summer 2024 | Community-wide AI survey conducted |
| Fall 2024 | Results discussed at Flock conference and Council meetings |
| February 2025 | Work-in-progress draft posted for early feedback |
| September 25, 2025 | Formal policy proposal published by Jason Brooks |
| September-October 2025 | Two-week formal review period with community feedback |
| October 22, 2025 | Council approves policy via ticket voting (#542) |
What They Did Right
- Started with data, not assumptions: They surveyed their community first to understand actual concerns rather than projecting fears
- Discussed openly at conferences: The Flock conference allowed face-to-face dialogue, not just mailing list echo chambers
- Published drafts early: The work-in-progress approach invited critique before positions hardened
- Formal review period: A defined two-week feedback window gave every community member a chance to weigh in before the vote
- Multiple revisions: The policy incorporated community input through several iterations
- Transparent voting: Final approval via public ticket, not backroom consensus
The Policy: Responsibility, Not Prohibition
Fedora's policy is built on a simple principle: contributors are responsible for their contributions, regardless of how they were created.
Core Requirements
| Area | Policy |
|---|---|
| Accountability | Contributor is always the author and fully responsible |
| Disclosure | Encouraged via Assisted-by: commit trailer for significant AI use (see the example below) |
| Review | AI may assist reviewers but cannot be the sole decision-maker |
| Governance | AI prohibited for CoC matters, funding, leadership decisions |
| User-facing | AI features must be opt-in with informed consent |
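For concreteness, here is a minimal sketch of what a disclosed commit might look like. Only the Assisted-by: trailer itself comes from Fedora's policy; the subject line, body text, tool name, and author shown here are invented placeholders.

```text
pkgdb: fix race in concurrent metadata refresh

Take the cache lock before checking staleness, and add a
regression test covering the refresh path.

Assisted-by: ExampleAI Assistant (placeholder tool name)
Signed-off-by: Jane Contributor <jane@example.org>
```

Git 2.32 and later can append trailers at commit time with git commit --trailer "Assisted-by: <tool>", so honest disclosure costs a contributor one flag rather than a hand-edited message.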
What This Means in Practice
- Using Copilot for autocomplete? Fine, just review what you submit
- Having ChatGPT write a whole function? Fine, disclose it and take responsibility
- Submitting AI output without review? Not acceptable — that's "AI slop"
- Grammar checking with AI? Fine, no disclosure needed for minor assistance
The policy trusts contributors to be adults while setting clear expectations about quality and accountability.
The Community Discussion: Real Debate
Unlike the echo chambers we've seen elsewhere, Fedora's discussion included genuine disagreement and diverse perspectives.
Supporters
Community member jokeyrhyme called it "probably the most reasonable AI policy i've seen so far":
"Outright banning of an entire class of technology, as other projects have done, should be held as a last resort. Focusing instead on contributor behaviour and responsibilities, as Fedora has done here, is far more balanced."
Another participant, neggles, argued that bans "incentivise lying" about tool use, whereas "encouraging people to be honest about what tools they used... should make it easier for reviewers and maintainers to identify which parts of a given contribution need more scrutiny."
Critics
The discussion wasn't all praise. Some raised legitimate concerns:
- Copyright risks: DemiMarie worried about liability from AI-generated code that might reproduce training data
- Enforcement challenges: nickodell argued "it's useful to have a bright-line, easy-to-understand rule" because with behavior-defined rules, "we need to decide if the submitter understood what they submitted"
- Commercial promotion: Some objected to naming proprietary tools like ChatGPT in the policy, calling it "free advertising"
The Berrangé Factor
Here's a fascinating detail: Daniel P. Berrangé — the same Red Hat engineer who proposed QEMU's blanket ban — participated in Fedora's discussion.
Did he call for a ban? No. He made thoughtful critiques:
"Fedora ought to have a general policy on how it evaluates & approves tools which process user data, which should be considered for any tool whether using AI or not."
On the disclosure requirement, Berrangé noted that "an 'Assisted-by:' tag is effectively providing continued free advertising for commercial tools," questioning whether naming specific tools provided meaningful benefit.
The same person, different environments, different behavior. In QEMU's virtualization-engineer echo chamber, Berrangé proposed a ban. In Fedora's broader community discussion, he offered constructive critique. Environment shapes outcomes.
Why This Worked
Fedora's success came from recognizing what other projects missed:
1. The Problem Isn't AI — It's Quality
Justin W. Flory, Fedora's community architect, framed it well:
"Without a policy to provide some kind of guidance, we already run the risk of abuse."
The concern isn't the tool — it's the quality of contributions. Fedora addressed this by holding contributors accountable, not by banning tools.
2. Bans Create Perverse Incentives
Flory also noted that without clear standards, "a contributor might be harassed by project members for their use of AI." Blanket bans don't prevent AI use — they just drive it underground and punish honest disclosure.
3. Living Documents Beat Permanent Edicts
Fedora explicitly framed their policy as "a living document, reflecting our commitment to learning and adapting as this technology evolves." The Council "fully expects this policy to need to be updated over time." This acknowledges uncertainty without using it as an excuse for prohibition.
Compare this to Gentoo's proclamation that AI concerns are "unlikely to be resolved in the near future" — a justification for permanent restriction based on predicted stagnation.
4. Diverse Voices Prevent Echo Chambers
Fedora's discussion included:
- Council members and regular contributors
- Supporters and skeptics of AI
- Technical concerns and governance concerns
- In-person discussion (Flock) and asynchronous debate (forums)
QEMU had seven virtualization engineers, mostly from the same company. NetBSD had no public discussion at all. Fedora had an actual community process.
5. Avoided Claims Requiring Expertise They Lacked
To be clear: Fedora's discussion also lacked AI/ML researchers and copyright lawyers. But here's the crucial difference: Fedora's policy doesn't make claims that require such expertise.
Compare the approaches:
- QEMU asserted: DCO compliance is "not credible" for AI code — a legal conclusion made by engineers
- Gentoo claimed: AI reproduces training data — a technical assertion about how LLMs work
- Fedora stated: Contributors are responsible for their contributions — a governance principle requiring no special expertise
By focusing on behavior rather than technical or legal judgments, Fedora avoided making claims they couldn't substantiate. They didn't assert AI code is legally problematic; they said contributors must take responsibility regardless of how code was created.
Comparison: Governance Approaches
| Project | Process | Expert Input | Outcome |
|---|---|---|---|
| Gentoo | RFC with author voting | ML engineer ignored | Full ban |
| NetBSD | Silent commit | None sought | Effective ban |
| QEMU | Mailing list (engineers only) | None (made legal claims anyway) | Full ban |
| Debian | Open proposal + debate | N/A (no ban to justify) | No ban |
| Fedora | Survey + conference + formal review | None (but avoided claims needing it) | Allow with disclosure |
Time alone doesn't guarantee good outcomes — QEMU took 18 months and still produced a ban. But Fedora combined time with structured community engagement: surveys, conferences, formal review periods, and transparent voting.
The Takeaway
Fedora's AI policy isn't perfect. Critics raised valid concerns about enforcement and liability that remain partially unresolved. And notably, their discussion also lacked AI/ML researchers or IP lawyers. But it's vastly better than the alternatives we've seen.
What Fedora got right:
- Gathered evidence first: Survey data, not assumptions, drove the process
- Invited diverse input: Multiple venues, multiple perspectives
- Focused on behavior: Accountability and quality, not tool prohibition
- Stayed in their lane: Didn't make legal or technical claims they couldn't substantiate
- Acknowledged uncertainty: Living document, not permanent edict
- Respected contributors: Trusted them to act responsibly with clear guidelines
The result is a policy that addresses legitimate concerns — quality, disclosure, governance bias — without throwing the baby out with the bathwater.
Editorial: Governance Is a Skill
Fedora's success isn't accidental. It reflects a mature governance culture that treats policy-making as a skill requiring deliberation, diverse input, and humility about uncertainty.
The projects that banned AI share a common flaw: they let small groups of similar people make sweeping decisions without seeking expertise they lacked. NetBSD's core team, Gentoo's council, QEMU's mailing list regulars — all competent in their domains, all out of their depth on AI policy.
Fedora asked: "What do our contributors actually think?" They held conferences, ran surveys, published drafts, and invited critique. They treated AI policy as a governance challenge, not a technical one.
The difference in outcomes speaks for itself.
References
- Fedora Community Blog: Council Policy Proposal (Official)
- Fedora Discussion: Policy Proposal Thread
- Fedora Discussion: Berrangé's Comments
- LWN.net: Fedora Considers an AI-Tool Policy
- The Register: Fedora Agrees Policy Allowing AI-Assisted Contributions
- Phoronix: Fedora Will Allow AI-Assisted Contributions
- Linuxiac: Fedora Opens the Door to AI Tools
- It's FOSS News: Fedora's Balancing Act