Two Paths Diverged
When AI coding tools emerged, the open source world faced a choice: ban the technology or adapt to it. The responses split along a revealing line.
Major foundations — Linux Foundation, Apache Software Foundation, Eclipse Foundation — chose transparency over prohibition. They convened legal committees, coordinated across organizations, and produced policies allowing AI with disclosure requirements.
Individual projects — Gentoo, NetBSD, QEMU, libvirt — chose blanket bans. Small groups of engineers, without legal expertise or cross-project coordination, prohibited AI-generated contributions entirely.
The difference isn't just in outcomes. It's in how decisions were made.
The Foundation Approach
Linux Foundation: Setting the Standard
The Linux Foundation didn't just create its own Generative AI Policy — it actively encouraged coordination across the entire open source ecosystem.
"Code or other content generated in whole or in part using AI tools can be contributed to Linux Foundation projects."
The policy treats AI-generated code like any other contribution: subject to peer review, licensing compliance, and quality standards. No special prohibition. No moral panic.
Crucially, the Linux Foundation reached out to other foundations to develop common approaches — a stark contrast to individual projects making isolated decisions.
Apache Software Foundation: Legal Committee Deliberation
In July 2023, the ASF Legal Committee published comprehensive AI guidance after careful deliberation.
Key participants:
- Roman Shaposhnik — VP of Legal Affairs, authored the announcement
- Henri Yandell — recognized for key work on the guidance
- ASF Legal Committee members
Shaposhnik acknowledged the complexity:
"The criteria for what is 'your original creation' is pretty easy to understand if you literally just typed a page of code in a fit of inspiration, but aside from that the line gets pretty blurry pretty quickly."
The ASF explicitly credited the Linux Foundation for "encouraging all the various Open Source Foundations to start thinking about this area in common terms and agreeing on policies that wouldn't be too far from each other."
Result: Allow AI contributions with Generated-by: disclosure tags. Contributors remain responsible for compliance, but the tool itself isn't banned.
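To make the disclosure concrete, a commit message following this kind of policy might look like the sketch below. Only the Generated-by: tag name comes from the ASF guidance; the subject line, description, and the value after the colon are hypothetical, shown only to indicate where the tag sits.

```
component: add retry handling to the upload client

The retry loop was drafted with an AI coding assistant, then
reviewed, adapted, and tested by the contributor.

Generated-by: <name and version of the AI tool used>
```

Because the disclosure is an ordinary commit trailer, reviewers can see it in the normal review flow without any new tooling.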
Eclipse Foundation: Guidelines with IP Due Diligence
The Eclipse Foundation published guidelines in April 2024, emphasizing that "AI serves as a tool to enhance efficiency."
Executive Director Mike Milinkovich later announced a collaboration with the Open Source Initiative on AI policy:
"As AI reshapes industries and societies, there is no more pressing issue for the open source community than the regulatory recognition of open source AI systems."
Result: Allow AI contributions with IP due diligence, code scanning for training data matches, and committer accountability.
The Linux Kernel: Torvalds Cuts Through
The Linux kernel — the Linux Foundation's flagship project — had its own debate at the 2025 Maintainers Summit. The discussion reveals how experienced leadership handles AI concerns differently than isolated project maintainers.
The Participants
| Person | Role | Position |
|---|---|---|
| Sasha Levin | NVIDIA, stable co-maintainer | Pro-AI, initiated discussion |
| Linus Torvalds | Creator of Linux | "Just another tool" |
| Greg Kroah-Hartman | Stable maintainer | Pragmatic, noted "slop" problem |
| Ted Ts'o | Ext4 maintainer | Risks pre-date LLMs |
| Steve Rostedt | Ftrace maintainer | Raised copyright concerns |
| Konstantin Ryabitsev | Kernel.org | Worried about proprietary dependency |
What Triggered the Debate
Sasha Levin demonstrated at Open Source Summit that an LLM-generated patch had been merged into kernel 6.15. One maintainer reacted:
"It appears (from comments below) that it does indeed have a slight bug. Which I would have caught if I had know this was 100% generated, as I would not have trusted the patch as much as I did thinking Sasha did the work."
This sparked a broader discussion about disclosure and policy.
Torvalds' Position
Linus Torvalds dismissed the hand-wringing about AI documentation. On using policy documents to prevent "AI slop":
"The AI slop people aren't going to document their patches as such. That's such an obvious truism that I don't understand why anybody even brings up AI slop. So stop this idiocy. The documentation is for good actors, and pretending anything else is pointless posturing."
On taking an anti-AI stance in kernel documentation:
"I don't want some kernel development docs to take either stance" on whether AI will revolutionize or harm software engineering.
His bottom line:
"It's why I strongly want this to be that 'just a tool' statement."
Torvalds has separately acknowledged that AI "is going to change the world" but expressed frustration with the hype cycle, describing the AI industry as "90 percent marketing and ten percent reality."
The Outcome
- No ban on AI-assisted contributions
- Soft Assisted-by: tag suggested (not enforced); see the example commit message after this list
- Human accountability remains mandatory
- "Purely machine-generated patches without human involvement are not welcome"
- Focus on code review quality, not tool prohibition
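For comparison, a kernel patch following the suggested (but not enforced) convention might carry trailers like those in the sketch below. The Signed-off-by: line is the kernel's existing DCO sign-off, which remains the accountability mechanism; the subject, description, and Assisted-by: value are hypothetical, since the summit did not settle on an exact format.

```
subsystem: fix off-by-one in range check

An LLM-based tool produced the initial diff; the submitter verified it
against the reproducer and takes responsibility for the change.

Assisted-by: <name and version of the AI tool used>
Signed-off-by: Jane Developer <jane@example.org>
```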
The Individual Project Approach
While foundations deliberated with legal committees and cross-organization coordination, individual projects took a different path.
The Bans
| Project | Process | Participants | Deliberation time |
|---|---|---|---|
| Gentoo | RFC vote (author voted) | 7 council members | ~2 weeks |
| NetBSD | Silent commit | Unknown | None |
| QEMU | Mailing list | ~7 virtualization engineers | ~18 months |
| libvirt | Copied QEMU | 3 maintainers | 1 day |
What They Had in Common
- No legal expertise: Engineers making legal conclusions about DCO compliance
- No AI expertise: No ML researchers consulted on how LLMs actually work
- No cross-project coordination: Each project acted in isolation (or copied another)
- Small, homogeneous groups: Similar people reaching similar conclusions
- No evidence of harm: Bans were preemptive, not responses to actual problems
The Critical Difference: Legal Committee vs. Engineers
The foundations had something the individual projects lacked: legal affairs leadership.
| Organization | Who Made the Decision |
|---|---|
| Linux Foundation | Legal team |
| Apache Software Foundation | Legal Committee (VP of Legal Affairs) |
| Eclipse Foundation | Legal committee + Board |
| Gentoo | Council (developers) |
| QEMU | Mailing list (virtualization engineers) |
| libvirt | 3 maintainers |
When QEMU's Daniel Berrangé wrote that DCO compliance is "not credible" for AI code, he was making a legal conclusion. He's a virtualization engineer, not a lawyer. He may be right, he may be wrong — but he doesn't actually know.
The ASF's Roman Shaposhnik, by contrast, is the VP of Legal Affairs. When the ASF Legal Committee says contributors can satisfy their obligations with reasonable diligence, that's an informed legal judgment.
Cross-Foundation Coordination
Perhaps the most striking difference is coordination. The ASF explicitly acknowledged:
"[The Linux Foundation provided] leadership encouraging all the various Open Source Foundations (the ASF included) to start thinking about this area in common terms and agreeing on policies that wouldn't be too far from each other."
This is why the major foundations have remarkably similar policies:
- All allow AI-generated contributions
- All require contributor accountability
- All encourage disclosure (tags, commit messages)
- All emphasize existing review processes
Meanwhile, the individual projects that banned AI didn't coordinate with anyone — except libvirt copying QEMU's policy verbatim.
The Outcomes Compared
| Aspect | Foundations | Ban Projects |
|---|---|---|
| Policy | Allow with transparency | Full prohibition |
| Expertise | Legal committees | Software engineers |
| Coordination | Cross-foundation alignment | Isolated or copy-paste |
| Adaptability | "Living documents" | Fixed prohibitions |
| Enforcement | Existing review processes | Unenforceable bans |
| Real-world issues | None reported | "Am I tainted?" confusion |
What Torvalds Understood
Linus Torvalds' dismissal of the AI documentation debate reveals a key insight that the ban projects missed:
"The documentation is for good actors, and pretending anything else is pointless posturing."
The projects that banned AI assumed their policies would prevent bad contributions. But as Torvalds noted, bad actors won't disclose their AI use regardless of policy. The bans only affect honest contributors — the "good actors" who would follow rules anyway.
The foundations understood this. Their policies focus on quality and accountability, not tool prohibition. If code passes review and the contributor takes responsibility, the tool used to create it is irrelevant.
The Lesson
The split between foundations and individual projects reveals a fundamental truth about governance: expertise matters.
The foundations had:
- Legal committees to evaluate IP and licensing implications
- Cross-organization coordination to develop common approaches
- Professional governance structures for policy development
- Experience balancing innovation with risk management
The ban projects had:
- Engineers making legal judgments outside their expertise
- Isolated decision-making without external input
- Ad-hoc processes (or no process at all)
- Reactive policies driven by fear rather than evidence
The result? The organizations with professional governance produced nuanced, adaptable policies. The projects without it produced blanket bans that their own maintainers now question.
Editorial: Governance Infrastructure Matters
This comparison isn't about whether AI code is good or bad. It's about how open source communities make decisions.
The Linux Foundation, Apache, and Eclipse didn't allow AI contributions because they're AI cheerleaders. They allowed them because their legal experts concluded that existing frameworks — contributor accountability, peer review, licensing compliance — already address the relevant concerns.
Gentoo, NetBSD, QEMU, and libvirt didn't ban AI because they did better analysis. They banned it because small groups of engineers, operating without legal expertise or external coordination, made sweeping decisions based on their own risk assessments.
The foundations asked: "How do we adapt our existing processes to handle this?"
The ban projects asked: "How do we make this go away?"
One approach treats AI as a governance challenge requiring professional expertise. The other treats it as a threat to be eliminated. The foundations' approach scales. The bans create confusion, drive honest contributors away, and solve nothing.
Open source projects that lack foundation-level governance infrastructure should take note: when facing novel challenges, the answer isn't to have your engineers play lawyer. It's to recognize when you're out of your depth and seek appropriate expertise.
References
- Linux Foundation: Generative AI Policy (Official)
- ASF: Legal Committee Issues Generative AI Guidance
- ASF: Generative Tooling Guidance (Official)
- Eclipse Foundation: AI Usage Guidelines (Official)
- LWN: Toward a Policy for Machine-Learning Tools in Kernel Development
- LWN: Supporting Kernel Development with Large Language Models
- Phoronix: Torvalds on AI Slop
- OSI-Eclipse Foundation AI Policy Collaboration