I have no unique perspective to add other than an obvious question: If the PR is low quality, why not just close/reject it? Does it matter if it's AI assisted or not?
OptionOfT 12 hours ago [-]
Because PRs with AI need to be reviewed with a lot more scrutiny, simply because AI is good at generating code that looks good, but isn't necessarily correct.
So now you're looking at a PR that at face value looks good, but doesn't reflect the author's skill and understanding of the subject.
Meaning now you shift more work to the owners of the codebase, as they have to go through those verification steps.
whattheheckheck 11 hours ago [-]
Just deprioritize it and make the MR openers do more verification.
rendaw 6 hours ago [-]
What sort of verification?
schwede 6 hours ago [-]
One reason is that AI can create PRs at a scale that can simply overwhelm maintainers, not to mention drown out non-AI PRs.
chjj 14 hours ago [-]
That means all AI code would simply be rejected. This saves time.
spoiler 14 hours ago [-]
If AI writes a for loop the same way you would... Does it automatically mean the code is bad because you—or someone you approve of—didn't write it? What is the actual argument being made here? All code has trade-offs; does AI make a bad cost/benefit analysis? Hell yeah it does. Do humans make the same mistakes? I can tell you for certain they do, because at least half of my career was spent fixing those mistakes... Before there was ever an LLM in sight. So again... What's the argument here? AI can produce more code, so more possibility for fuck-ups? Well, don't vibe code with "approve everything"... like, what are we even talking about? It's not the tool, it's the users, and as with any tool there's going to be misuse, especially with new and emerging ones lol
chjj 14 hours ago [-]
If this is your opinion, I ask you: are you okay with AI reviewing the PRs as well, or do you prefer a human to do it?
Think carefully before responding.
spoiler 13 hours ago [-]
I don't know why you have to qualify your sentence with "think carefully before you respond"; it makes it seem like you're setting up some rhetorical trap... But I'll assume it's in good faith. Anyway...
I don't mind if a review is AI-assisted. I've always been a fan of the whole "human in the loop" concept in general. Maybe the AI helps them catch something they'd normally miss or gloss over. Everyone tends to have different priorities when reviewing PRs, and it's not like humans don't have lapses in judgement either (I'm not trying to anthropomorphise AI, but you know what I mean).
My stance is the same about writing code. I honestly don't mind if the code was written in `ed` on a Linux-powered toaster from 2005 with a 32x32 screen, or if they wrote it using Claude Code 9000.
At the end of the day, the person who's submitting the code (or signing off a review) is responsible for their actions.
So, in a round-about way, to answer your question: I think AI as part of the review is fine. As impressive as its output can sometimes be, it can be both impressively good and impressively bad. So no, relying only on AI for review is not enough.
chjj 2 hours ago [-]
You should use AI.
ray_v 9 hours ago [-]
It sounds like what you'd send to an LLM lol.
"Think carefully, make no mistakes."
chjj 2 hours ago [-]
Yeah, it never works though, as you can see from this example.
ds82 2 hours ago [-]
Is there a counter petition?
The author of the PR is a long-time Node.js contributor and conference speaker. He explicitly states: "I've reviewed all changes myself."
In the end it's a question of whether you trust him to submit a useful, well-reviewed PR. It doesn't matter if it was created using AI or not.
pan69 14 hours ago [-]
> A 19k lines-of-code Pull Request was opened in January, 2026.
Such a PR should be rejected simply because of the sheer size of it, regardless of AI use. Seriously, who submits a 19k-line PR? Just make many small ones.
spoiler 14 hours ago [-]
The PR touched a lot of internals, including module code, and it mirrors the fs APIs. So yes, it was big, but the commit history was largely clean, followed a development story, and it was tested. The code quality was decent too. I didn't review all of it, though, because I don't have a personal stake in this.
I suggest EVERYONE in this thread go read the GitHub PR in question. There are some good arguments for and against AI, and what it means for FOSS... But good lord, you will have to sift through the virtue-signalling bullshit and have patience for the constant moving of goalposts. @indutny explains their views in that thread.
tracker1 14 hours ago [-]
How would you go about breaking up this particular set of functionality into smaller PRs, exactly? It's meant to introduce a virtualized file system... the size is dictated by the feature itself.
Also, no mention at all regarding the test coverage, or impact if any on existing code paths specifically.
ramon156 4 hours ago [-]
There's multiple features, not just VFS.
tylerchilds 14 hours ago [-]
On the one hand, agreed
On the other hand, I haven't, and I believe many of us have never, paid Node any money, so it feels weird to dictate their approach.
If they allow AI in Node it just might do a full rewrite into Rust, Go or Elixir ;)
mtndew4brkfst 14 hours ago [-]
Well, survivorship bias means that Elixir is loudly populated by AI maximalists now. Just go look at the last several years' worth of US/EU ElixirConf talk schedules; it's maybe a third of each cohort, including keynote slots.
bhttrrrrrt 11 hours ago [-]
How is that survivorship bias
mtndew4brkfst 9 hours ago [-]
Because people who otherwise enjoyed working with Elixir but don't want to participate in or support that kind of environment have mostly left as the trend became clear. So the folks who are sticking around are the ones who are neutral-to-positive on AI. This means that explicitly or implicitly surveying that group for opinions on AI's place in development work, such as while designing a conference schedule, is going to miss most of the voices that might once have objected. It will continue to skew harder towards favoring AI in the future, as most of the possible sources of more-critical opinions leave.
That to me seems to match the definition of survivorship bias quite well?
thedevilslawyer 8 hours ago [-]
Maybe selection bias.
bwestergard 14 hours ago [-]
This is how I would deal with the problem if I maintained node: "Please, use your tokens and experimental energies to port to Rust and pass the following test suite. Let us know when you've got something that works."
vova_hn2 13 hours ago [-]
I don't see how such policies can possibly do more good than harm.
A person who posts slop for whatever reason, or runs bots that post slop, will simply ignore them.
An honest person who cares about the quality of their contribution and genuinely wants to improve the project will be more limited in their choice of tools.
So this policy only serves to limit honest contributors, while doing absolutely nothing to stop spammers/slop-posters.
ramesh31 15 hours ago [-]
This is a silly reactionary response. Where is the line? Can I use AI to look up APIs? Write documentation? What if I write a function and ask AI to test it? What if I manually implemented an idea that I thought about after chatting with AI a few weeks ago?
Stop treating this like it's going to go away. We need actual solutions for the FOSS community that make reviewing AI assisted work tractable.
tredre3 11 hours ago [-]
> Stop treating this like it's going to go away. We need actual solutions for the FOSS community that make reviewing AI assisted work tractable.
I don't think it should be up to reviewers and maintainers to put in the work to figure that one out. You want to "disrupt" the open-source pipeline? Fine, then you must propose a solution for the problems that your disruption is now causing.
Come up with a system so that I, a maintainer, can review a large volume of AI-generated PRs where the contributor often has neither the inclination nor the skills of assessing the quality of what they're proposing.
The system must be effective at preventing me from wasting time on very obvious slop, and it must also work offline and be free, because most maintainers are unpaid volunteers.
If you can offer that solution, I'm sure more projects would be open to giving carte blanche to AI-authored PRs.
canmi21 8 hours ago [-]
[dead]
huflungdung 14 hours ago [-]
[dead]
graphememes 14 hours ago [-]
Honestly, this is a small pebble, but it feels like a ripple in the reasons why Node.js is losing to Bun and others.
johnny22 11 hours ago [-]
Bun has Claude Code-generated commits as we speak (as robobun).
manwe150 9 hours ago [-]
Which does call into question the future stability or quality of Bun. As much as I don't think Node.js should ban AI, the commit messages and code quality of some recent robobun AI commits looked like hallucinated slop to me.
rglover 10 hours ago [-]
I can see the good intention in this move, but it's not realistic. The genie isn't going back in the bottle, so the priority shouldn't be artificial limits, but more emphasis on review and sets of eyes required to sign off on a merge.