The Decision
In May 2024, the NetBSD project quietly updated its commit guidelines to classify code generated by Large Language Models as "tainted." The change was announced via the project's Mastodon account on May 15, 2024, just one month after Gentoo's more publicized ban.
The Policy
"Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core."
Unlike Gentoo's outright prohibition, NetBSD's policy theoretically allows AI-generated code, but only with explicit written permission from the core team and "an in-depth audit of the code's heritage." In practice, this high bar amounts to an effective ban.
The Process: No Public Debate
Perhaps the most striking aspect of NetBSD's policy is how it was implemented. Unlike Gentoo, which had a formal RFC posted to the mailing list and a council vote with named participants, NetBSD's change appeared without any visible public debate.
There was no RFC. No mailing list discussion. No recorded vote. The policy simply appeared in the commit guidelines, announced via social media. The individual(s) who authored or championed the change are not publicly identified.
This opacity makes analysis difficult. We cannot examine the arguments that were made, the objections that were raised (if any), or how the decision was reached. The NetBSD community was presented with a fait accompli.
The Core Team
The NetBSD Core Group — the seven-member body responsible for technical management — presumably made this decision. The current members are:
- Christos Zoulas
- Chuck Silvers
- Martin Husemann
- Matthew Green
- Rin Okuyama
- Robert Elz
- Taylor R. Campbell
However, we cannot confirm which members supported the policy, whether there was internal debate, or if any dissented. The decision emerged from a black box.
The Rationale: Licensing Concerns
NetBSD's justification focuses narrowly on licensing and copyright — a notable contrast to Gentoo's broader ideological objections. The policy extends NetBSD's existing framework for "tainted code," which has long required that contributors verify the licensing of any code they didn't write themselves.
The BSD License Problem
NetBSD uses the permissive BSD license, which creates a specific vulnerability. As The Register noted:
"Accidentally incorporating GPL code into a BSD codebase is a problem: it would mean either relicensing existing code, or totally replacing it — neither of which they have the manpower to do."
Because LLMs are trained on vast amounts of code under various licenses — including GPL code from Linux and other copyleft projects — NetBSD fears that AI-generated output could inadvertently incorporate GPL-licensed patterns, contaminating their BSD codebase.
Attribution Requirements
A NetBSD representative, commenting on the policy, confirmed that the motivation is copyright rather than code quality; the BSD license requires proper attribution for derivative works, something LLM-generated code cannot provide:
"Our policy has nothing to do with reliability or safety, but rather copyright ('tainted' refers specifically to illegal code)."
This narrow framing — legal compliance rather than quality or ethics — distinguishes NetBSD's approach from Gentoo's more expansive objections.
Comparison: NetBSD vs. Gentoo
| Aspect | NetBSD | Gentoo |
|---|---|---|
| Date | May 15, 2024 | April 14, 2024 |
| Process | Quiet guideline update, no public debate | RFC, mailing list discussion, council vote |
| Primary Rationale | Licensing/copyright only | Copyright, quality, and ethics |
| Exceptions | Allowed with core team written approval | No exceptions |
| Tone | Bureaucratic, legalistic | Ideological, sometimes hostile |
| Identifiable Author | No | Yes (Michał Górny) |
The "Tainted Code" Framework
NetBSD's approach is clever in one respect: by classifying AI code as "tainted," they avoided creating a new category. The commit guidelines already had provisions for code of uncertain provenance:
"If you commit code that was not written by yourself, double check that the license on that code permits import into the NetBSD source repository, and permits free distribution. Check with the author(s) of the code, make sure that they were the sole author of the code and verify with them that they did not copy any other code."
AI-generated code simply became another type of code that fails this provenance check. This framing is less dramatic than Gentoo's approach, but leads to the same practical outcome.
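Read as a decision procedure, the extension is small. The sketch below is an illustrative model only (NetBSD publishes no such tooling, and the field names are invented for this example); it shows how LLM output slots into the existing provenance check as one more condition that routes a contribution to core:

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    self_written: bool           # committer wrote the code themselves
    license_compatible: bool     # license verified as importable and redistributable
    sole_author_confirmed: bool  # author confirmed no copied code
    llm_generated: bool          # produced by Copilot, ChatGPT, Code Llama, etc.
    core_written_approval: bool  # explicit written sign-off from core

def may_commit(c: Contribution) -> bool:
    """Model of the guideline's provenance check, as quoted above."""
    if c.llm_generated:
        # Presumed tainted: only prior written approval by core clears it.
        return c.core_written_approval
    if c.self_written:
        return True
    # Third-party code: license and sole authorship must both check out.
    return c.license_compatible and c.sole_author_confirmed
```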
Community Reception
Without a public debate process, there's limited record of community sentiment. However, external commentary was mixed:
Supporters
- The licensing concern is legally legitimate — LLMs do train on GPL code
- Erring on the side of caution protects the project from future legal exposure
- The exception pathway shows pragmatism — it's not a blanket ideological ban
Critics
- Enforcement is impossible — there's no reliable way to detect AI-generated code
- The same argument could apply to human developers who learned from GPL codebases
- Tools like Copilot function as "fancy autocomplete" — where's the line?
One Hacker News commenter was skeptical that AI-generated code would even survive review:
"I barely trust Copilot to do more than the most simple autocompletes. No way 99.9% of AI gen would pass proper peer review."
The "Friendly Reminder" Interpretation
Some defenders argued the policy is "not a ban, just a friendly reminder" that NetBSD takes copyright seriously. This charitable interpretation suggests the policy is more about signaling expectations than enforcing restrictions.
However, this raises the question: if it's just a reminder, why the specific enumeration of AI tools? Why not simply reiterate the existing tainted-code rules? The explicit mention of Copilot, ChatGPT, and Code Llama suggests something more than a general reminder.
The Enforceability Problem
Like Gentoo's policy, NetBSD's ban faces a fundamental problem: it cannot be enforced. As one commenter observed:
"There is no quantifiable way to identify if code is AI written, other than the general 'vibe'."
This creates the same perverse incentive as Gentoo's policy: honest contributors who disclose AI assistance face barriers, while those who stay silent face none. The policy rewards concealment.
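The only automated check a project could realistically deploy makes this concrete. The hook sketched below is hypothetical (NetBSD has announced no such tooling, and the disclosure markers are assumptions); it scans commit messages for voluntary AI-disclosure markers, which means it can only ever catch contributors who tell the truth:

```python
#!/usr/bin/env python3
"""Hypothetical commit-message hook illustrating the enforceability gap.

It can only match voluntary disclosure markers in commit metadata;
the code itself carries no reliable signal of AI authorship.
"""
import re
import sys

# Markers a *cooperative* contributor might leave behind. An
# uncooperative one simply omits them, and the hook sees nothing.
DISCLOSURE_PATTERNS = [
    re.compile(r"co-authored-by:.*copilot", re.IGNORECASE),
    re.compile(r"generated (?:by|with) (?:github copilot|chatgpt|an? llm)",
               re.IGNORECASE),
]

def discloses_ai(message: str) -> bool:
    """Return True if the commit message voluntarily discloses AI assistance."""
    return any(p.search(message) for p in DISCLOSURE_PATTERNS)

if __name__ == "__main__":
    message = sys.stdin.read()
    if discloses_ai(message):
        print("rejected: message discloses LLM-generated code (presumed tainted)")
        sys.exit(1)
    # An undisclosed AI-written patch sails through: the check can only
    # see what contributors choose to say about their own commits.
    sys.exit(0)
```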
Editorial: A More Defensible Position?
NetBSD's AI policy is both better and worse than Gentoo's.
Better: The narrow focus on licensing is at least a coherent legal argument. Unlike Gentoo's scattershot objections (copyright, quality, energy use, "enshittification"), NetBSD identifies a specific concern, license contamination, and addresses it. The exception pathway acknowledges that an audit might resolve the concern for specific contributions.
Worse: The lack of public process is troubling. Gentoo's RFC was flawed, but at least it existed. Community members could respond, objections were recorded, and the decision-makers were identified. NetBSD offers none of this transparency. A policy that affects all contributors appeared without explanation of who decided it, what alternatives were considered, or how objections would be handled.
The fundamental problem remains: Both policies punish honesty, cannot be enforced, and make distinctions (AI-assisted vs. human-written) that are increasingly meaningless as AI tools become ubiquitous in development workflows.
NetBSD's approach is less ideological than Gentoo's, but the quiet implementation raises its own concerns. Open source projects that make policy in the dark shouldn't be surprised when their communities question whether the "open" label still applies.
A Pattern of Vacuum-Chamber Governance
NetBSD's AI ban shares a striking similarity with Gentoo's: both were made without consulting anyone with actual expertise in the technology being banned. Neither project engaged machine learning engineers, AI researchers, or legal scholars specializing in AI copyright before making their decisions.
In Gentoo's case, an ML engineer (Martin-Kokos) who participated in the mailing list discussion was effectively ignored. NetBSD didn't even have a public discussion where experts could weigh in. Both decisions were made in a vacuum — small groups of developers ruling on technology they don't work with, based on assumptions rather than evidence.
This pattern — technical governance conducted without technical expertise — produces policies that feel principled but lack grounding in reality. The licensing concern sounds reasonable until you ask: has anyone actually found GPL-derived code in LLM output that was incorporated into a BSD codebase? Has any lawsuit been filed? Has any legal expert confirmed this risk is material rather than theoretical?
The answer, as far as public records show, is no. Both bans are preemptive strikes against imagined threats, made by people who decided they knew enough to act without consulting those who know more.
The Irony of "Dictators"
There's a revealing irony in how different open source projects have handled AI policy.
Linus Torvalds is often called a "Benevolent Dictator for Life" — the single decision-maker who controls what goes into the Linux kernel. Critics sometimes use "dictator" as a pejorative, implying autocratic or arbitrary rule.
Yet Torvalds has taken a markedly different approach to AI. Rather than issuing edicts, he's been openly skeptical of AI hype while remaining pragmatic about potential uses:
"I hate the hype cycle so much that I really don't want to go there... In five years, things will change, and at that point we'll see what of the AI is getting used every day for real workloads."
— Linus Torvalds
No ban. No RFC declaring AI tools forbidden. Just patient observation and willingness to wait for evidence. The "dictator" is showing more restraint and openness than the ostensibly democratic councils and core teams.
Meanwhile, Gentoo's elected council and NetBSD's core team — structures designed to prevent dictatorial rule — have issued top-down prohibitions without meaningful community input, without expert consultation, and in NetBSD's case, without even public debate. The democratic structures produced more authoritarian outcomes than the dictatorship.
Perhaps "dictator" is the wrong label to worry about. What matters isn't the governance structure but whether leaders — whatever their title — make decisions thoughtfully, transparently, and with appropriate humility about what they don't know. By that measure, the so-called dictator is governing better than the committees.
FreeBSD and Debian: Different Paths
It's worth noting that not all BSD or Linux projects followed this path:
- FreeBSD has taken a more cautious approach, investigating the issue without implementing a ban. Their existing guidelines indicate a middle ground where certain AI uses are acceptable while direct code generation requires scrutiny.
- Debian explicitly declined to join Gentoo and NetBSD in banning AI code, demonstrating that major projects can reach different conclusions on the same evidence.
The variety of responses suggests that AI code policies are not as obvious as NetBSD and Gentoo imply. Reasonable projects disagree.
References
- NetBSD Commit Guidelines (Official)
- NetBSD Core Group Members (Official)
- NetBSD Mastodon Announcement
- Hacker News Discussion
- The Register: Gentoo and NetBSD ban 'AI' code, but Debian doesn't
- OSnews: NetBSD bans use of Copilot-generated code
- Hackaday: NetBSD Bans AI-Generated Code From Commits
- Slashdot: NetBSD Bans AI-Generated Code
- Linuxiac: NetBSD's New Policy - No Place for AI-Created Code
- FreeBSD Forums: NetBSD vs AI-Generated Code
- LWN.net: Linus Torvalds' Benevolent Dictatorship
- Futurism: Creator of Linux Trashes AI Hype