Why the "declare AI help in your PR" is useless, courtesy of 3000 years ago logic: - If you can't tell without the disclaimer, it's useless: people will not tell. - If you can tell, it's useless: you can tell. Just evaluate quality, not process.
Why the "declare AI help in your PR" is useless, courtesy of 3000 years ago logic: - If you can't tell without the disclaimer, it's useless: people will not tell. - If you can tell, it's useless: you can tell. Just evaluate quality, not process.
I'm reminded of the AI images that look fine at first glance, but once someone points out the flaw you can't unsee it. It seems like people notice more when they're forewarned?
AI is making PR reviews a lot more laborious. People I've worked with for a decade, whose work I can usually trust implicitly, now need me to review line by line to make sure AI slop isn't baked in and causing problems. Disclosure means I'm either reviewing a trusted source or a tainted one.
The problem is real; it's the proposed solution that's totally useless. I'm not denying the problem, but it's self-evident when a PR is a mass of AI-generated code. At the same time, AI can help refine a PR and produce *better* code. And contributors usually build trust over time.
The middle ground is the problem. With a large PR, separating the wheat from the chaff becomes exponentially harder when there is no trust. You're basically having to treat people with 20+ years under their belt like interns, because there might be an AI-created issue buried in high-quality code.
I'd at least like a heads-up on which parts have potential AI issues. There should absolutely be a standard for commenting AI-assisted code...
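Something like this, purely as a hypothetical sketch; the marker format, field names, and function are invented for illustration, not an existing standard:

    # AI-ASSISTED-BEGIN tool=example-assistant reviewed-by=<none yet>
    def normalize_email(address: str) -> str:
        """Lower-case and strip whitespace from an email address."""
        return address.strip().lower()
    # AI-ASSISTED-END

The point is just that a reviewer could grep for the marker and know which regions deserve line-by-line scrutiny.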
Use AI for unit tests, sure; as long as it's not asserting that 1 == 1, I really don't give much of a shit. But the functional side needs to be pristine. I'm never going to get away with explaining to stakeholders that a major bug was an AI problem. I'd get sacked, and rightly so.
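To be concrete about the 1 == 1 point, a minimal sketch; the function under test is invented for illustration:

    import unittest

    def apply_discount(price: float, rate: float) -> float:
        """Hypothetical function under test: price after a percentage discount."""
        return price * (1 - rate)

    class TestDiscount(unittest.TestCase):
        def test_tautology(self):
            # The useless kind: passes no matter what the code does.
            self.assertEqual(1, 1)

        def test_actual_behaviour(self):
            # Exercises the function: a 10% discount on 200 should yield 180.
            self.assertEqual(apply_discount(200, 0.10), 180)

    if __name__ == "__main__":
        unittest.main()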
I don't think you would need to do that; the problem is always the developer. If the developer generated that code and saw fit to ship it, it's their fault, full stop. The fact that they chose to delegate the work does not mean they get to delegate the responsibility.
Absolutely. And the responsibility is mine (platform owner). AI just makes my job a LOT harder, as I can't trust anything pushed by people I can usually trust, unless AI code is flagged. Stakeholders don't care who did it; they care about who is in the firing line when shit starts to roll downhill.
I think that trust is convenient, but it has always been a poor tool to leverage. Most of the time on the service teams I'm on, operational issues (from code or otherwise) are caused by using trust as a shortcut: verifying people's work is difficult, but there's never been a real substitute for it.
With that said, there is something different about AI: people who should know better fool themselves into thinking they can trust the tools. But that just exposes the flaw in relying on trust to begin with.
Can't disagree there. I'm in a rare situation where every developer contributing to my platform knows their specific business domain better than I do, but I know the platform better than they do, so it's highly collaborative, and trust is a primary tenet of that collaboration.
We had this debate in the project I help maintain, and I argued similarly. From a process perspective there's no difference between a PR generated with AI, a PR by a human copy-pasting from Stack Overflow, a PR from a junior contributor, etc. It's all code that needs to be carefully reviewed.
Are you offering to evaluate all my PRs of unknown quality and origin?
He doesn't need to evaluate all of them to know whether they're shit or hit 😉
I’ll just be over here judging everyone who does use AI