avatar
Adam Becker @adambecker.bsky.social

These machines are not thinking. They're just extruding homogenized thought-like product. And it's not a very good substitute.

aug 8, 2025, 6:22 pm • 39 6

Replies

avatar
Adam Becker @adambecker.bsky.social

The internal contradiction here is really a perfect illustration of what's going on. It's just producing text strings without regard for their meaning. And it won't produce the string "bluebberies" because that appears nowhere (or almost nowhere) in its training data.

aug 8, 2025, 6:24 pm • 13 0 • view
avatar
Adam Becker @adambecker.bsky.social

But it will be confidently wrong, because it doesn't actually know anything aside from statistical weights on character strings, not even about the internal structure of those very same character strings!

aug 8, 2025, 6:26 pm • 15 0 • view
avatar
Annwen @tarrog71.bsky.social

Yet we call it "artificial intelligence". Maybe we should call it "word strings" or "word streams".

aug 10, 2025, 12:44 am • 0 0 • view
avatar
Alice Roberts @epistrophee.bsky.social

It's really frustrating that there aren't better visual metaphorical representations of what is actually happening when GPT "talks". If people could see random words within a set being scooped up and put into logical places by a confused robotic arm, they wouldn't be as impressed

aug 8, 2025, 6:41 pm • 0 0 • view
avatar
Lillard's Maximum Perception Control @junipersbird.bsky.social

Madlibs

image
aug 8, 2025, 7:00 pm • 1 0 • view
avatar
Alice Roberts @epistrophee.bsky.social

Yes. A robot arm filling in madlibs in a dark room

aug 8, 2025, 7:07 pm • 0 0 • view
avatar
Lillard's Maximum Perception Control @junipersbird.bsky.social

By grabbing words out of buckets labeled "nouns," "verbs," "adjectives," etc.

aug 8, 2025, 7:09 pm • 1 0 • view
avatar
Alice Roberts @epistrophee.bsky.social

Yes. You could even show it grabbing from specific subject buckets for types like "fish nouns" and "bird adjectives", but they're still just things to the Thing

aug 8, 2025, 7:11 pm • 1 0 • view
avatar
pulsarcube.bsky.social @pulsarcube.bsky.social

I'm not so sure this is still true with the "reasoning" models. ChatGPT-5 uses multiple models and routes prompts to one of them based on estimated complexity. If you explicitly prompt it to use a chain-of-thought model, it can answer this particular question correctly. bsky.app/profile/puls...

aug 8, 2025, 7:13 pm • 0 0 • view
avatar
pulsarcube.bsky.social @pulsarcube.bsky.social

No, it's not human reasoning. But it can do certain things (like fact checking itself and backtracking) that humans also do when attempting to solve a problem.

aug 8, 2025, 7:15 pm • 0 0 • view
avatar
Peter Hall @peterha2l.bsky.social

I don’t understand how people can believe this is true and also believe that the AI community is making serious strides toward AGI.

aug 9, 2025, 12:13 am • 0 0 • view
avatar
Adam Becker @adambecker.bsky.social

Yeah. I mean, that's a lot of what my new book is about. But it's really quite something to watch people fall into this nonsense in real time. 😬

aug 9, 2025, 3:19 am • 1 0 • view