antirez @antirez.bsky.social

Why the "declare AI help in your PR" is useless, courtesy of 3000 years ago logic: - If you can't tell without the disclaimer, it's useless: people will not tell. - If you can tell, it's useless: you can tell. Just evaluate quality, not process.

aug 24, 2025, 5:20 pm • 32 4

Replies

Brian Slesinsky @skybrian.bsky.social

I'm reminded of the AI images that look fine at first glance, but after someone points out the error you can't unsee it. It seems like if people are forewarned, they will notice more?

aug 24, 2025, 10:13 pm • 0 0 • view
Just a passive observer @escapetherace.bsky.social

AI is making PR reviews a lot more laborious. People I've worked with for a decade, whose work I can usually trust implicitly, now need me to review line by line to make sure AI slop isn't baked in and causing problems. Disclosure at least tells me whether I'm reviewing a trusted source or a tainted one.

aug 25, 2025, 11:52 am • 0 0 • view
antirez @antirez.bsky.social

The problem is real; it's the solution that is totally useless. I'm not denying the problem. But it's self-evident when a PR is a mass of AI-generated code. At the same time, AI can help refine a PR and produce *better* code. Also, contributors usually build trust over time.

aug 25, 2025, 11:57 am • 2 0 • view
Just a passive observer @escapetherace.bsky.social

The middle ground is the problem. With a large PR, separating the wheat from the chaff becomes exponentially harder when there is no trust: you're basically having to treat people with 20+ years under their belt like interns, because there might be an AI-created issue buried in otherwise high-quality code.

aug 25, 2025, 12:01 pm • 0 0 • view
Just a passive observer @escapetherace.bsky.social

I'd at least like a heads-up on which parts have potential AI issues. There should absolutely be a standard for commenting AI-assisted code...

aug 25, 2025, 12:02 pm • 0 0 • view
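As a concrete (and purely hypothetical) sketch of what such a commenting standard could look like, assuming a plain grep-able tag rather than any existing convention:

```python
# AI-ASSISTED: block below drafted with an LLM and hand-reviewed by the committer.
# The tag is invented for illustration; reviewers could grep for it to focus extra scrutiny.
def clamp(value: float, low: float, high: float) -> float:
    """Clamp value into the [low, high] range; ordinary, hand-checkable code."""
    return max(low, min(high, value))
```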
Just a passive observer @escapetherace.bsky.social

Use AI for unit tests, sure. As long as it's not asserting that 1 == 1, I really don't give much of a shit. But the functional side needs to be pristine: I'm never going to get away with explaining to stakeholders that a major bug was an AI problem. I'd get sacked, and rightly so.

aug 25, 2025, 12:04 pm • 2 0 • view
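To make the "asserting that 1 == 1" point concrete, a minimal sketch (function and test names invented for illustration) contrasting a tautological test with one that actually exercises behavior:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Toy function under test, invented for this example."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_useless_tautology(self):
        # The kind of generated test the thread objects to: it passes no matter what the code does.
        self.assertEqual(1, 1)

    def test_actual_behavior(self):
        # A meaningful test: it fails if the discount math or rounding regresses.
        self.assertEqual(apply_discount(200.0, 15), 170.0)

if __name__ == "__main__":
    unittest.main()
```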
karashiiro @karashiiro.moe

I don't think you would need to do that; the problem is always the developer. If the developer generated that code and saw fit to ship it, it's their fault, full stop. The fact that they chose to delegate the work does not mean they get to delegate the responsibility.

aug 25, 2025, 3:45 pm • 2 1 • view
Just a passive observer @escapetherace.bsky.social

Absolutely. And the responsibility is mine (platform owner). AI just makes my job a LOT harder, as I can't trust anything pushed by people I can usually trust unless the AI code is flagged. Stakeholders don't care who did it; they care about who is in the firing line when shit starts to roll downhill.

aug 25, 2025, 4:11 pm • 1 0 • view
karashiiro @karashiiro.moe

I think that trust is convenient, but has always been a poor tool to leverage. Most of the time in the service teams I'm on, operational issues (from code or otherwise) are caused by using trust as a shortcut — verifying people's work is difficult, but there's never been a real substitute for it.

aug 25, 2025, 4:24 pm • 1 0 • view
karashiiro @karashiiro.moe

With that said, there is something different about AI in that people who should know better fool themselves into thinking they can trust the tools, but that's just exposing the flaw in relying on trust to begin with.

aug 25, 2025, 4:25 pm • 1 0 • view
Just a passive observer @escapetherace.bsky.social

Can't disagree there. I'm in a rare situation where every developer contributing to my platform knows their specific business domain better than me, but I know the platform better than them, so it's highly collaborative, and trust is a primary tenet of that collaboration.

aug 25, 2025, 4:28 pm • 1 0 • view
Patricio Del Boca @pdelboca.me

We had this debate in the project I help maintain, and I argued similarly. From a process perspective there's no difference between a PR generated with AI, a PR by a human copy-pasting from Stack Overflow, a PR by a junior contributor, etc. It's all code that needs to be carefully reviewed.

aug 24, 2025, 10:34 pm • 4 0 • view
Peter Bhat Harkins ✔️ @push.cx

Are you offering to evaluate all my PRs of unknown quality and origin?

aug 24, 2025, 6:30 pm • 0 0 • view
CraftLife @craftlinks.bsky.social

He doesn't need to evaluate all of them to know whether they're shit or a hit 😉

aug 24, 2025, 8:47 pm • 0 0 • view
creativesustain.bsky.social @creativesustain.bsky.social

I’ll just be over here judging everyone who does use AI

aug 25, 2025, 1:57 pm • 0 0 • view