The Contrast: Linux Kernel vs. Gentoo

In April 2024, the Gentoo Linux project made headlines by becoming the first major distribution to ban AI-generated code contributions. The council voted 6–0 in favor (with one abstention and one member absent), citing copyright concerns, quality issues, and ethical objections. No technical analysis was conducted. No AI-related incidents had occurred in the project.

Meanwhile, Linus Torvalds — creator and maintainer of the Linux kernel itself, the very foundation that Gentoo and every other Linux distribution depends on — has taken a notably different approach. No bans. No moral panic. Just the same pragmatic, results-oriented thinking that has guided Linux development for over three decades.

This contrast is striking: a distribution that packages the kernel banned AI tools outright, based on theoretical fears, while the kernel project itself continues to evaluate contributions on quality, not on which tools produced them.

"90% Marketing, 10% Reality"

At Open Source Summit Europe in Vienna in September 2024, Torvalds offered his characteristically blunt assessment of the AI landscape:

"I think the whole tech industry around AI is in a very bad position and it's 90 percent marketing and ten percent reality."

— Linus Torvalds, Open Source Summit Europe, September 2024

But Torvalds isn't dismissive of AI itself — he's dismissive of the hype surrounding it. He drew a clear distinction between the technology's potential and the industry's behavior:

"I think AI is really interesting and I think it is going to change the world and at the same time I hate the hype cycle so much that I really don't want to go there, so my approach to AI right now is I will basically ignore it."

His solution? Wait and see: "In five years things will change and at that point we'll see what of the AI is getting used for real workloads."

"It's Hilarious to Watch"

At Open Source Summit North America in Seattle (April 2024), Torvalds and longtime collaborator Dirk Hohndel discussed AI's current capabilities in their traditional "fireside chat." Hohndel offered his own skeptical take, describing AI as "autocorrect on steroids." Torvalds actually pushed back on this characterization, suggesting AI is more than that and comparing it to the evolution of programming tools such as assemblers and compilers.

Yet Torvalds remained amused by the hype cycle:

"It's hilarious to watch. Maybe I'll be replaced by an AI model!"

— Linus Torvalds, Open Source Summit North America, April 2024

His summary: "Let's wait 10 years and see where it actually goes before we make all these crazy announcements."

Hope for AI in Code Review

While skeptical of hype, Torvalds sees genuine potential for AI in kernel development — specifically in helping maintainers catch bugs:

"I hope. I hope, because that's certainly one of the areas which I see them really being able to shine, to find the obvious stupid bugs."

— Linus Torvalds, on AI assisting with code review, Open Source Summit Japan, December 2023

When Hohndel asked about the risk of AI "hallucinations" producing buggy code, Torvalds offered a perspective grounded in decades of kernel maintenance:

"I see the bugs that happen without AI every day. So that's why I'm not so worried. I think we're doing just fine at making mistakes on our own."

— Linus Torvalds, Open Source Summit Japan, December 2023

This comment encapsulates Torvalds's pragmatism: AI isn't uniquely dangerous — humans already produce plenty of bugs. The solution is good review processes, not tool bans.

The Upside: NVIDIA's Kernel Involvement

Torvalds has noted at least one concrete benefit from the AI boom. At Open Source Summit China in Hong Kong (August 2024), he observed:

"When AI came in, it was wonderful, because Nvidia got much more involved in the kernel."

For years, Torvalds had publicly criticized NVIDIA for poor Linux support — famously giving the company the middle finger on camera in 2012. The AI revolution, which depends heavily on NVIDIA GPUs, created business incentives for the company to improve its kernel contributions.

As he noted: "NVIDIA has gotten better at talking to Linux kernel developers and working with Linux memory management" because it needs Linux to run large language model workloads efficiently.

This illustrates Torvalds's outcomes-focused thinking: he doesn't care about ideology or corporate motivations — he cares about whether code gets better and hardware works properly.

AI as Just Another Tool

At the Open Source Summit in Seoul, South Korea (November 2025), Hohndel raised the contentious question of AI's impact on programming careers. Torvalds's answer was measured:

"AI is just another tool, the same way compilers free people from writing assembly code by hand, and increase productivity enormously but didn't make programmers go away."

— Linus Torvalds, Open Source Summit Seoul, November 2025

He expressed hope for a future where AI becomes "less hyped and more like the everyday reality that nobody talks constantly about" — integrated into workflows without the accompanying drama.

Vibe Coding: "Fairly Positive" With Caveats

By late 2025, Torvalds had begun to engage more directly with AI coding tools. In November 2025, he described himself as "fairly positive" about vibe coding — the practice of describing what you want in natural language and letting AI generate code.

But he was careful to distinguish between use cases:

"[Vibe coding] may be a horrible, horrible idea from a maintenance standpoint."

— Linus Torvalds, Open Source Summit Seoul, November 2025

Torvalds sees vibe coding as potentially valuable for helping newcomers "get computers to do something that maybe they couldn't do otherwise" — but not for production systems that require long-term maintenance. This nuanced position avoids both uncritical embrace and blanket rejection.

The AudioNoise Project: Torvalds Tries Vibe Coding

In January 2026, Torvalds put his words into practice. He published AudioNoise, a GPLv2-licensed open-source project for digital audio effects, continuing his hobby of building guitar pedals. The project quickly gained attention, accumulating over 4,000 GitHub stars.

The project's README contained an admission that made tech headlines worldwide:

"Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters — and that's not saying much — than I do about python. It started out as my typical 'google and do the monkey-see-monkey-do' kind of programming, but then I cut out the middle-man — me — and just used Google Antigravity to do the audio sample visualizer."

— Linus Torvalds, AudioNoise README, January 2026

Google Antigravity is Google's AI-powered IDE announced in November 2025 alongside Gemini 3. Torvalds used it to generate Python visualization code for his audio project — a task outside his primary expertise.
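
For readers curious about what such a tool involves, here is a minimal sketch of an audio sample visualizer in the same spirit. This is not code from the AudioNoise repository; it is an illustration only, assuming numpy, scipy, and matplotlib are installed, that loads a WAV file and plots its waveform and magnitude spectrum:

    # Minimal audio sample visualizer (illustrative sketch, NOT AudioNoise code)
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile

    def visualize(path):
        rate, samples = wavfile.read(path)            # sample rate in Hz, PCM data
        if samples.ndim > 1:                          # mix stereo down to mono
            samples = samples.mean(axis=1)
        samples = samples / np.max(np.abs(samples))   # normalize to [-1, 1]
        t = np.arange(len(samples)) / rate            # time axis in seconds

        fig, (ax_wave, ax_spec) = plt.subplots(2, 1, figsize=(8, 6))
        ax_wave.plot(t, samples, linewidth=0.5)
        ax_wave.set(title="Waveform", xlabel="Time (s)", ylabel="Amplitude")

        freqs = np.fft.rfftfreq(len(samples), d=1 / rate)
        ax_spec.semilogy(freqs, np.abs(np.fft.rfft(samples)))  # magnitude spectrum
        ax_spec.set(title="Spectrum", xlabel="Frequency (Hz)", ylabel="Magnitude")

        fig.tight_layout()
        plt.show()

    if __name__ == "__main__":
        visualize("sample.wav")                       # hypothetical input file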

This is classic Torvalds: hands-on experimentation rather than theoretical pontification. The AudioNoise project isn't kernel code — it's a hobby project where maintenance concerns matter less. He tried the technology, formed opinions based on actual experience, and shared his honest assessment.

The Linux Kernel's Approach to AI Code

Unlike Gentoo's blanket ban, the Linux kernel community is developing practical guidelines for AI-assisted contributions. Sasha Levin of NVIDIA, a co-maintainer of the Linux LTS kernels, proposed documentation and configuration files for AI coding assistants.

The key elements under discussion:

  • Disclosure, not prohibition: Contributors must disclose AI assistance using "Co-developed-by" tags in commit messages (see the example after this list)
  • No Signed-off-by for AI: The legal certification (Developer Certificate of Origin) must come from a human, not an AI
  • Unified configuration: Standard rules for Claude, GitHub Copilot, Cursor, and other AI assistants working on kernel code
  • Quality standards unchanged: AI-assisted code must meet the same review standards as any other contribution
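
To make the disclosure convention concrete, here is a hypothetical commit message in the proposed style. The subject line, changelog text, tool string, and contributor are invented for illustration; only the placement of the tags reflects the proposal:

    foo: fix use-after-free in foo_remove()

    Clear the stale pointer before releasing the device so that the
    interrupt handler cannot dereference freed memory.

    Co-developed-by: Claude <tool/version string>
    Signed-off-by: Jane Developer <jane.developer@example.com>

The "Co-developed-by" line discloses the AI assistant; the "Signed-off-by" line, which carries the Developer Certificate of Origin, must come from the human contributor.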

"AI Slop Is NOT Going to Be Solved with Documentation"

When discussing the proposed AI guidelines, Torvalds was characteristically direct about their limitations:

"The AI slop issue is *NOT* going to be solved with documentation, and anybody who thinks it is either just naive, or wants to 'make a statement'."

— Linus Torvalds, January 2026

He elaborated: "The AI slop people aren't going to document their patches as such. That's such an obvious truism." His point: documentation is for "good actors" who want to follow guidelines — bad actors will ignore it regardless.

This is why Torvalds prefers treating AI as "just a tool" rather than creating special AI-focused policies: "I do *not* want any kernel development documentation to be some AI statement."

Why This Matters

The divergence between Torvalds's pragmatism and Gentoo's prohibition illustrates two fundamentally different approaches to technological change:

The prohibitionist approach (Gentoo) assumes AI is uniquely dangerous and requires preemptive bans. It prioritizes ideological consistency over practical outcomes and treats tool choice as a moral issue.

The pragmatic approach (Linux kernel) evaluates contributions on merit regardless of origin. It trusts existing review processes, adapts guidelines as needed, and focuses on outcomes rather than inputs.

Neither approach is inherently right — projects have autonomy to set their own policies. But the kernel's approach arguably better reflects open source's foundational principles: contributions should be judged on quality, barriers to participation should be minimized, and technical decisions should be evidence-based.

The Ongoing Challenges

Torvalds and the kernel community aren't blind to AI's problems. Kernel maintainers have reported an uptick in low-quality submissions that appear AI-generated — patches that look superficially correct but call kernel APIs that don't exist or misunderstand the system's architecture.

Torvalds has also noted that AI crawlers have been "very disruptive to a lot of our infrastructure" as they scrape kernel.org source code, and that maintainers increasingly receive "bugs and security notices that are... made up by people who misuse AI."

The solution, in the kernel community's view, isn't to ban AI but to maintain rigorous review standards and require disclosure. Bad patches get rejected whether AI-generated or not. The tool used is irrelevant to quality assessment.

Lessons from Torvalds's Approach

Torvalds's handling of AI offers lessons for anyone navigating technological change:

  • Separate hype from substance. "90% marketing, 10% reality" means most claims are overblown — but 10% is still real and worth understanding.
  • Experiment before legislating. Torvalds tried vibe coding himself before forming strong opinions. Direct experience beats theoretical speculation.
  • Context matters. AI for a hobby Python visualizer is different from AI for production kernel code. Blanket policies miss this nuance.
  • Trust your processes. Good code review catches bad code regardless of origin. If your review process works, you don't need to police tool choices.
  • Focus on outcomes. NVIDIA improved kernel support because of AI economics. The motivation matters less than the result.
  • Maintain perspective. "We're doing just fine at making mistakes on our own." AI isn't uniquely dangerous — it's another tool with tradeoffs.
  • Be honest about limitations. Documentation won't stop bad actors. Policies for good actors; quality review for everyone.

A Model for Thoughtful Leadership

In an era of polarized AI discourse — where one must either embrace AI as humanity's savior or condemn it as an existential threat — Torvalds offers a third way: pragmatic engagement tempered by healthy skepticism.

He doesn't pretend AI will revolutionize everything, but he doesn't pretend it's worthless either. He's critical of hype while remaining open to genuine utility. He experiments personally while cautioning against production use. He adapts kernel guidelines while maintaining quality standards.

The Value of Listening to Experts

One crucial difference between Torvalds's approach and Gentoo's ban: engagement with actual AI/ML expertise.

Torvalds doesn't operate in a vacuum. The kernel project includes contributions from NVIDIA engineers deeply involved in AI/ML development. Sasha Levin, who proposed the kernel's AI guidelines, works at NVIDIA and has hands-on experience with AI tools. Torvalds regularly interacts with engineers from companies at the forefront of AI development — not just as corporate representatives, but as kernel contributors who understand both sides.

Contrast this with Gentoo's process. When the council proposed its AI ban, an actual machine learning engineer, Martin-Kokos, raised substantive objections on the mailing list. He pointed out:

  • The ban was unenforceable
  • Other organizations (OpenStreetMap, Wikipedia) had developed nuanced policies for automated contributions
  • Not all AI tools have the ethical problems the ban claimed to address
  • Ethically trained models such as Project Bergamot existed

His conclusion: "Banning all tools, just because some might be not up to moral standards, puts the ones that are, in a disadvantage."

The council ignored him. They passed the ban anyway, voting 6–0 (one abstention, one absent). No ML or AI experts sat on the council. The one person with relevant expertise who spoke up was overruled by developers with no demonstrated knowledge of the technology they were prohibiting.

This is the difference between governance informed by expertise and governance driven by ideology. Torvalds listens to people who actually work with AI. He changes his views based on evidence — praising NVIDIA after years of criticism because its behavior changed. He experiments personally before forming strong opinions.

The Gentoo council, by contrast, operated as an echo chamber. The proposal came from within the council, was voted on by the council, and passed unanimously despite external expert objection. This isn't thoughtful governance — it's groupthink with the authority to make policy.

Conclusion: Wisdom Through Engagement

This approach has served Linux well for over 30 years. The kernel remains one of the most successful collaborative software projects in history, precisely because it prioritizes practical outcomes over ideological purity — and because it listens to people with relevant expertise rather than dismissing them.

The vibe-coded AudioNoise visualizer may never influence kernel development. But the mindset it represents — curious experimentation, willingness to engage with new tools, openness to expert input, and refusal to panic — offers a model for how technical communities might navigate the AI era with wisdom rather than fear.

Leaders who engage with technology directly, who listen to experts, and who remain open to changing their minds make better decisions than those who rule from ivory towers based on theoretical concerns. Torvalds embodies the former. The Gentoo council, unfortunately, demonstrated the latter.

References