AI assistance when contributing to the Linux kernel
by hmokiguess on 4/10/2026, 6:35:21 PM
https://github.com/torvalds/linux/blob/master/Documentation/process/coding-assistants.rst
Comments
by: qsort
Basically the rules are that you can use AI, but you take full responsibility for your commits, and the code must comply with the license.

That's... refreshingly normal? Surely something most people acting in good faith can get behind.
4/10/2026, 8:00:56 PM
by: oytis
How is one supposed to ensure license compliance while using LLMs which do not (and cannot) attribute sources having contributed to a specific response?
4/11/2026, 9:51:20 AM
by: ninjagoo
    > Signed-Off ...
    > The human submitter is responsible for:
    >   Reviewing all AI-generated code
    >   Ensuring compliance with licensing requirements
    >   Adding their own Signed-off-by tag to certify the DCO
    >   Taking full responsibility for the contribution
    > Attribution: ... Contributions should include an Assisted-by
    > tag in the following format:

Responsibility assigned to where it should lie. Expected no less from Torvalds, the progenitor of Linux and Git. No demagoguery, no BS.

I am sure that this was reviewed by attorneys before being published as policy, because of the copyright implications.

Hopefully this will set the trend and provide definitive guidance for the many devs who saw not only the utility of AI assistance but also the acrimony from some quarters, and were left fence-sitting.
4/10/2026, 9:44:17 PM
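The trailer scheme quoted above can be sketched as a commit-message footer. This is an illustration, not text from the policy: the subject line, agent name, model version, and submitter are all hypothetical placeholders.

    example: fix NULL-pointer check in hypothetical driver

    (commit body describing the change)

    Assisted-by: some-agent:model-version
    Signed-off-by: Jane Developer <jane@example.org>

The point of the format is that the Signed-off-by line still comes from a human, who thereby certifies the DCO regardless of what tool produced the code.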
by: ipython
Glad to see the common-sense rule that only humans can be held accountable for code generated by AI agents.
4/10/2026, 8:00:35 PM
by: sarchertech
This does nothing to shield Linux from responsibility for infringing code.

This is essentially like a retail store saying the supplier is responsible for eliminating all traces of THC from their hemp when they know that isn't a reasonable request to make.

It's a foreseeable consequence. You don't get to grant yourself immunity from liability like this.
4/10/2026, 8:31:43 PM
by: KaiLetov
The policy makes sense as a liability shield, but it doesn't address the actual problem, which is review bandwidth. A human signs off on AI-generated code they don't fully understand, the patch looks fine, it gets merged. Six months later someone finds a subtle bug in an edge case no reviewer would've caught because the code was "too clean."
4/11/2026, 5:34:44 AM
by: newsoftheday
> All code must be compatible with GPL-2.0-only

How can you guarantee that when AI has been trained on a world full of code under multiple licenses, and even on closed-source material used without the copyright owners' permission? I confirmed that with several AIs just now.
4/10/2026, 8:16:12 PM
by: dataviz1000
This is discussed in the Linus vs Linus interview, "Building the PERFECT Linux PC with Linus Torvalds". [0]

[0] https://youtu.be/mfv0V1SxbNA?si=CBnnesr4nCJLuB9D&t=2003
4/10/2026, 8:06:16 PM
by: dec0dedab0de
> All code must be compatible with GPL-2.0-only

Am I being too pedantic if I point out that it is quite possible for code to be compatible with GPL-2.0 and other licenses at the same time? Or is this a term that is well understood?
4/10/2026, 8:19:22 PM
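The compatibility point above is in fact routine in the kernel tree: SPDX license expressions allow dual licensing, so a single file can be offered under GPL-2.0 and another license at once. A sketch (the file and comment text are hypothetical; the SPDX expression syntax is real):

    // SPDX-License-Identifier: GPL-2.0-only OR MIT
    /*
     * example.c - hypothetical dual-licensed file
     *
     * Recipients may take this code under either license; choosing
     * the GPL-2.0-only branch satisfies the kernel's requirement.
     */

So "compatible with GPL-2.0-only" is not the same as "licensed exclusively under GPL-2.0-only".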
by: feverzsj
Linux is funded by all these big companies. Linus couldn't block AI pushes from them forever.
4/11/2026, 4:58:47 AM
by: gnarlouse
I wonder if this is happening because Mythos
4/11/2026, 6:00:01 AM
by: KhayaliY
We've seen in the past, for instance in the world of compliance, that if companies/governments want something done, or make a mistake, they just have a designated person act as a scapegoat.

So what's preventing lawyers/companies from having a batch of people they use as scapegoats, should something go wrong?
4/10/2026, 10:46:58 PM
by: zxexz
I like this. It's just saying you have responsibility for the tools you wield. It's concise.

Side note: I'm not sure why I feel weird about having the string "Assisted-by: AGENT_NAME:MODEL_VERSION" [TOOL1] [TOOL2] in the kernel docs source :D. Mostly joking. But if the Linux kernel has it now, I guess it's the inflection point for... something.
4/11/2026, 5:45:09 AM
by: themafia
> All contributions must comply with the kernel's licensing requirements:

I just don't think that's realistically achievable, unless the models themselves can introspect on the code and detect any potential license violations.

If you get hit with a copyright violation under this scheme, I'd be afraid they'd hammer you for negligence, given how obvious the issue is.
4/10/2026, 9:58:30 PM
by: lowsong
At least it'll make it easy to audit and replace it all in a few years.
4/10/2026, 9:06:02 PM
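The audit imagined above is mechanical, because the policy records AI involvement as a structured commit trailer. A sketch using git's built-in trailer parsing (the trailer key comes from the policy; run inside the repository you want to audit):

```shell
# List commits that carry an Assisted-by trailer, with the recorded tool.
# %(trailers:key=...,valueonly) prints only that trailer's value, so
# commits without the trailer yield a line containing just the hash,
# which the awk filter drops.
git log --format='%h %(trailers:key=Assisted-by,valueonly)' |
    awk 'NF > 1 { print }'
```

Replacing the flagged code later would then start from exactly this list of commits.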
by: bharat1010
Honestly kind of surprised they went this route -- just 'you own it, you're responsible for it' is such a clean answer to what feels like an endlessly complicated debate.
4/11/2026, 4:13:49 AM
by: martin-t
This feels like the OSS community is giving up.

LLMs are lossily-compressed models of code and other text (often mass-scraped despite explicit non-consent) which almost always carries licenses requiring attribution, and very often other conditions. Just a few weeks ago a SOTA model was shown to reproduce non-trivial amounts of licensed code [0].

The idea of intelligence being emergent from compression is nothing new [1]. The trick here is giving up on completeness and accuracy in favor of a more probabilistic output which

1) reproduces patterns of training data, and interpolates between them, while not always producing verbatim copies, and

2) serves as a heuristic when searching the solution space, further guided by deterministic tools such as compilers and linters; the models themselves quite often generate complete nonsense, including made-up syntax in well-known mainstream languages such as C#.

I strongly object to anthropomorphising text transformers (e.g. "Assisted-by"). It encourages magical thinking [2] even among people who understand how the models operate, let alone the general public.

Just like stealing fractional amounts of money [3] should not be legal, violating the licenses of the training data by reusing fractional amounts from each should not be legal either.

[0]: https://news.ycombinator.com/item?id=47356000
[1]: http://prize.hutter1.net/
[2]: https://en.wikipedia.org/wiki/ELIZA_effect
[3]: https://skeptics.stackexchange.com/questions/14925/has-a-programmer-ever-embezzled-money-by-shaving-fractions-of-a-cent-from-many-b
4/10/2026, 8:25:16 PM
by: NetOpWibby
inb4 people rage against Linux
4/10/2026, 10:38:06 PM
by: shevy-java
Fork the kernel!

Humans for humans!

Don't let skynet win!!!
4/10/2026, 8:28:25 PM
by: baggy_trough
Sounds sensible.
4/10/2026, 7:54:26 PM
by: spwa4
Why does this file have an extension of .rst? What does that even mean for the file format?
4/10/2026, 9:16:25 PM
by: bitwize
Good. The BSDs should follow suit. It is unreasonable to expect any developer not to use AI in 2026.
4/10/2026, 7:47:39 PM
by: the_biot
Linux has fallen. Linus Torvalds is now just another vibe coder. I give it less than a year, or maybe a month, until Linux gets vibe-coded patches approved by LLMs.

Open source is dead, having had its code stolen for use by vibe-coding idiots.

Make no mistake, this is the end of an era.
4/10/2026, 9:25:58 PM
by: rwmj
Interesting that coccinelle, sparse, smatch & clang-tidy are included, at least as examples. Those aren't AI coding tools in the normal sense, just regular, deterministic static-analysis / code-generation tools. But fine, I guess.

We've been using "Co-Developed-By: <email>" for our AI annotations.
4/11/2026, 7:05:26 AM