Pavel🐀 @spavel.bsky.social

You might have heard AI boosters say: "it's okay that this tech is wrong half of the time, because the human operator 'partnering' with the LLM will catch errors." Except no they fucking won't. The actual context in which these systems are deployed makes checking the AI's "work" impossible.

aug 24, 2025, 6:41 pm • 277 73

Replies

Shirley Siluk @ebishirl.bsky.social

This is a great line re: "human in the loop": "the loop closed when the output was generated."

aug 24, 2025, 9:36 pm • 4 1
scott-macshlibber.bsky.social @scott-macshlibber.bsky.social

If AI needs people to be doing the work, what's the point of AI?

aug 24, 2025, 9:39 pm • 5 0
R. Eric VanNewkirk @sotsogm.bsky.social

My thought exactly. If you have to ride herd on it so closely, you might as well do the work yourself in the first place.

aug 26, 2025, 3:59 am • 1 0
Smitha @smithakp.bsky.social

That Cybermen image… 🥲

aug 25, 2025, 12:07 am • 0 0
liz @catboss.bsky.social

@tressiemcphd.bsky.social I bet there’s no way you’ll see this, but it may be helpful for your current Discourse Cycle!

aug 25, 2025, 9:14 pm • 1 0
Pavel🐀 @spavel.bsky.social

I still need to find a good place to cite the "AI is mid" article. It's a very good explanation of why, after the astronomical costs of LLMs, the benefits are really not all that.

aug 25, 2025, 11:04 pm • 1 0
Pavel🐀 @spavel.bsky.social

Thanks to @tante.cc @davidgerard.co.uk @ruthmalan.bsky.social @charity.wtf @jamtoday.bsky.social @justinsheehy.bsky.social @acuity.design @jenson.org @swardley.bsky.social @klingebeil.bsky.social @hazelweakly.me and everyone else who's not yet on bsky for the writing featured in this week's Picnic.

aug 24, 2025, 6:45 pm • 21 1
Hazel Weakly @hazelweakly.me

Thanks for the shoutout! I liked your article and it was a great overview with a lot of fun links :)

aug 25, 2025, 6:39 am • 3 0
Pavel🐀 @spavel.bsky.social

Also before anyone else says it: "more like human in the poop, am I right"

aug 24, 2025, 6:55 pm • 13 0
Matteo Gabriele @matteogabriele.bsky.social

aug 25, 2025, 9:28 am • 1 0
Bingo Lingfucker @jamtoday.bsky.social

I'm famous!

aug 24, 2025, 6:53 pm • 4 0
Pavel🐀 @spavel.bsky.social

Haha yes I finally got around to using the link, although I ultimately decided not to attribute it to Bingo Lingfucker in the article text itself.

aug 24, 2025, 6:55 pm • 4 0
Bingo Lingfucker @jamtoday.bsky.social

Ahahaha! It will change again as soon as I think of something funny.

aug 24, 2025, 6:58 pm • 2 0
Toronto Will @torontowill.bsky.social

Good post. I had a few thoughts from the headline, but you captured all of them, and more. It isn't like checking your own work for errors, or even that of a colleague; you're facing off against a supercomputer specifically designed to make you believe it's always correct.

aug 24, 2025, 9:05 pm • 6 1
Audacious Ness @audaciousness.bsky.social

Very good points. I like to point out that most people have little or no experience with editing/QA compared to their experience with writing/creating. And thorough testing and editing take so much time and strong attention to detail. Retraining people from generators into supervisors is a huge task.

aug 26, 2025, 2:13 pm • 1 0
kamarad evpok @evpok.love

time to boost @olivia.science's paper again

aug 24, 2025, 7:00 pm • 12 3
Mark Madsen @markmadsen.bsky.social

I’m working on a paper/ presentation on how to use the slop generator to aid thinking about second order effects and subvert the process of injecting bad AI by using bad AI. The advocates are unlikely to see how this will limit their efforts.

aug 24, 2025, 7:17 pm • 1 0
Mark Madsen @markmadsen.bsky.social

Sure they’ll catch the errors. Look at how well it worked on YouTube.

aug 24, 2025, 7:09 pm • 1 0