avatar
Katie Mack @astrokatie.com

Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.

jun 19, 2025, 11:21 am • 29,390 8,617

Replies

avatar
Katie Mack @astrokatie.com

When a chatbot gets something wrong, it’s not because it made an error. It’s because on that roll of the dice, it happened to string together a group of words that, when read by a human, represents something false. But it was working entirely as designed. It was supposed to make a sentence & it did.

jun 19, 2025, 11:28 am • 7,149 1,383 • view
avatar
London innit! @londoninnit.bsky.social

Isn't this the reason AI companies campaign so strongly against copyright? The more AI aggregates, the more errors will creep in. The best way for the chatbot to return correct information is to lift whole paragraphs from reputable sources.

jun 19, 2025, 12:25 pm • 2 0 • view
avatar
Roldy @alwaysroldy.bsky.social

You know your friend the bullshitter? ChatGPT is coming to replace him. As a society, we need to get back to rewarding being right about things and punishing being wrong about them!

jun 19, 2025, 1:26 pm • 9 0 • view
avatar
midi @atmidimidi.bsky.social

When you read a chatGPT session that’s gone extra off-the-rails, it can be easier to see what the LLMs are doing: automated improv. It may not be funny, but it’s an improv bit. “Read my essay. Is this good enough to submit to The New Yorker?” “Yes, AND…”

jun 19, 2025, 4:55 pm • 17 1 • view
avatar
EndMalcompetence @endmalcompetence.bsky.social

That analogy goes deeper: The algorithm's working against a hidden document which resembles a play script. What you type becomes "Then the user said X", and anything generated for the other character gets "performed" at you.

jun 19, 2025, 5:13 pm • 2 0 • view
avatar
EndMalcompetence @endmalcompetence.bsky.social

The character you felt you were "talking to" can be retconned and rebooted to anything, proving it was an illusion all along. Five minutes of "conversation" about cute innocent puppies that must be protected, and *poof* now it's named Cruella De Vil plotting to skin dalmatian puppies into coats.

jun 19, 2025, 6:30 pm • 1 0 • view
avatar
Snegglepuss @snegglepuss.bsky.social

A chatbot is (almost) literally that game where you write out the beginning of a sentence and then click the first/second/third autosuggestion until it tails off, but run on a server that boils swimming pools worth of water as it works

jun 19, 2025, 3:40 pm • 12 1 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

Every time i say this in a talk or a conference people are shocked, so thank you for saying it whenever you can

jun 19, 2025, 1:55 pm • 858 18 • view
avatar
EndMalcompetence @endmalcompetence.bsky.social

"I made a machine which solves the problem of predicting text to go into documents, based on lots and lots of documents."" "Wow, It greeted me politely, and even I put in 2+2= and it gave me 4! IT'S DOING MATH TOO!" "No, it really isn—” "The Singularity is upon us!"

jun 19, 2025, 5:03 pm • 21 2 • view
avatar
LazyCuttlefish @lazycuttlefish.bsky.social

Too bad your audiences likely don’t speak German, otherwise I’d gift you my screenshot of AI going to great lengths to explain that German footballer A is not related to former German footballer B but his son.

jun 19, 2025, 3:57 pm • 43 1 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

🫠 Truly amazing scenes

jun 19, 2025, 4:45 pm • 8 0 • view
avatar
mjfgates @mjfgates.bsky.social

Don't worry, we've got plenty of similar examples in English, but this is.. um.. exemplary.

jul 3, 2025, 1:57 am • 0 0 • view
avatar
LazyCuttlefish @lazycuttlefish.bsky.social

I just really enjoy how it first lures you into thinking that there might be two Bernd Hollerbachs in professional German football in which case it could all still make sense but then kills off that idea with the final three words.

jul 3, 2025, 8:04 pm • 4 0 • view
avatar
the4thpip @the4thpip.bsky.social

Oh, kann ich das bitte sehen?

jul 1, 2025, 4:31 am • 4 0 • view
avatar
LazyCuttlefish @lazycuttlefish.bsky.social

Gern. bsky.app/profile/lazy...

jul 1, 2025, 9:34 am • 6 0 • view
avatar
Cucky Charms Trusted Verifier @jsnfra.bsky.social

WHY ARE THEY SHOCKED WHY DO THEY NOT KNOW THIS ALREADY??? 😳

jun 19, 2025, 1:57 pm • 296 2 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

Some of my audiences are new to the technical details, so the need it plbroken down for them. Some however are… not new to this, so i guess have decided or allowed themselves to become convinced that stats and vibes are as good as truth and consensus knowledge making? I dunno

jun 19, 2025, 2:01 pm • 210 3 • view
avatar
Rik M @spetsdad.bsky.social

bsky.app/profile/spet...

jun 19, 2025, 2:37 pm • 4 0 • view
avatar
Cucky Charms Trusted Verifier @jsnfra.bsky.social

ok i'm going to scream 🙃

jun 19, 2025, 2:02 pm • 13 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

Like, it's not everybody. But it's for sure not nobody

jun 19, 2025, 2:05 pm • 27 0 • view
avatar
Katie (Kathryn) Conrad @kconrad.bsky.social

It’s a whole lot more than nobody, alas. There are still people who think that it is some sort of talking database.

jun 19, 2025, 11:59 pm • 2 0 • view
avatar
Quantum Hamologist @quantumham.bsky.social

@infinitescream.bsky.social

jun 19, 2025, 2:29 pm • 8 0 • view
avatar
Endless Screaming @infinitescream.bsky.social

AAAAAAHHHHHHHHH

jun 19, 2025, 2:30 pm • 14 0 • view
avatar
Brenna Clarke Gray @brennacgray.bsky.social

In university audiences I find the default is to nod politely as this is explained to them and then they go back to their offices and ChatGPT their next vacation itinerary and it’s all so depressing.

jun 19, 2025, 2:08 pm • 12 2 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

I didn't know this was a thing, and am baffled by it. If I need suggestions on where to go and what to do, there's a brazillion sites written by actual people who've been there, and I can pick ones that match MY interests. I like plants, but I'm not visiting a coffee farm. I'm gonna go zipline!

jun 19, 2025, 6:24 pm • 3 0 • view
avatar
Gavin @gavinneedham.bsky.social

Me planning a trip to Honolulu. ChatGPT “visit the coffee fields of Kona and enjoy macadamia nuts grown on the island”

jun 19, 2025, 2:47 pm • 7 0 • view
avatar
EndMalcompetence @endmalcompetence.bsky.social

Soon: "And enjoy Mococoa™, all natural cocoa beans from the upper slopes of Mount Nicaragua."

jun 19, 2025, 5:07 pm • 2 0 • view
avatar
Cucky Charms Trusted Verifier @jsnfra.bsky.social

ok i'm going to scream 🙃

jun 19, 2025, 2:11 pm • 5 0 • view
avatar
floydmferns.bsky.social @floydmferns.bsky.social

Could it be generational as well? Like those who grew up in the transition from no computers to modern day computers tend to know the rudimentary functioning of data input to logical output. There's a whole generation now that have fully functioning tech thrust in their hands but no baseline.

jun 19, 2025, 5:56 pm • 26 3 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

It would be really great to see some more research on generational differences in trust in LLMs and ither grnerative "AI"

jun 19, 2025, 6:06 pm • 38 3 • view
avatar
Wudang96 💖🌻 @wudang.com

Great idea. Filter that by who knows tech and who doesn't. I'm a old coder and database architect. I don't trust GenAI for most common things. I can explain why. Folks who don't know tech and expect the 'summary' to be correct and ai to become sentient are learning to hallucinate along with the ai.

jun 19, 2025, 6:18 pm • 16 1 • view
avatar
✨𝚗𝚊𝚗𝚊𝚖𝚒 🎀 @nanami.codes

Folks who do know tech use genAI for work. Techies are very positive to genAI because the models are genuinely good now. I'm a software developer and we use it pretty much every day as a replacement for Google plus auto complete in our IDEs plus more complex tools like AI assistants

jun 19, 2025, 6:34 pm • 1 0 • view
avatar
✨𝚗𝚊𝚗𝚊𝚖𝚒 🎀 @nanami.codes

We don't talk about sentience though. That's for the grifters. But GenAI is a tool we use every day just like how I use my IDE every day. Before GenAI we had to Google and scour through page after page after page of Stackoverflow posts from 2004 for any guidance GenAI is a huge improvement

jun 19, 2025, 6:36 pm • 1 0 • view
avatar
✨𝚗𝚊𝚗𝚊𝚖𝚒 🎀 @nanami.codes

People who aren't software developers (and other professions who have replaced a tool with it) either think GenAI is dog shit and is useless, or they think it will take people's jobs, or both at the same time

jun 19, 2025, 6:38 pm • 1 0 • view
avatar
Wudang96 💖🌻 @wudang.com

Back then, we relied on others who had learned more to sort out the questions. I got good at setting my search parameters to focus on what might be solutions. That process helped me learn things and gave me skills I wouldn't have otherwise. I also would not trade Notepad++ for anything. 😂

jun 19, 2025, 6:46 pm • 6 0 • view
avatar
Wudang96 💖🌻 @wudang.com

I'd rather have autocomplete than not and if I need suggestions, maybe AI is better than me looking for my own answers. I don't learn anything, though. Two different uses though. Code is a finite universe that's well-defined. Common language is messy. Always been like that.

jun 19, 2025, 6:40 pm • 4 0 • view
avatar
*Handle pending approval @existentialfauxpas.bsky.social

Absolutely this. 👏👏 AI is great for code as it has a defined set of rules and things fit together logically…anything beyond that requiring nuance and finesse is beyond it…

jun 19, 2025, 9:35 pm • 4 0 • view
avatar
Danya Glabau @allergyphd.bsky.social

As of April my mid career engineering colleagues seemed more trusting than my students 😬 Students really turned on it after that Columbia student cheated his was into big tech jobs

jun 19, 2025, 6:25 pm • 19 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

Yeah, I'm seeing INCREASING mistrust from students, and especially once the idea of instructors or admins using it comes into the picture

jun 19, 2025, 6:33 pm • 30 0 • view
avatar
Dag. @3fecta.bsky.social

That instructor who "graded" letting LLMs do it for them made my soul die a little.

jun 19, 2025, 6:34 pm • 15 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

Yup!

jun 19, 2025, 6:35 pm • 6 0 • view
avatar
Julia DOESN’T WANT YOUR MONEY @booktweeting.bsky.social

if you want automated grading, do multiple choice exams with a Scantron like God intended (God intended no such thing, of course)

jun 19, 2025, 6:37 pm • 17 0 • view
avatar
杜鹃 @azalee99.bsky.social

Required reading for those people: link.springer.com/article/10.1...

jun 19, 2025, 2:12 pm • 128 19 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

I mean… i also gave this talk youtu.be/9DpM_TXq2ws?... and wrote this paper afutureworththinkingabout.com/wp-content/u... almost two years ago now 😞; it's just… it's been out there

jun 19, 2025, 2:24 pm • 162 32 • view
avatar
Toroid @toroidal.bsky.social

It’s not artificial intelligence, it’s artificial language. It has a lot of uses, but producing facts or making decisions will always come with a ceiling for accuracy that makes those use cases a dead end. But Eliza effect and profit motives. 🙃

jun 19, 2025, 3:58 pm • 52 2 • view
avatar
jeanbaudrillard.bsky.social @jeanbaudrillard.bsky.social

There’s also a big difference between asking ChatGPT facts about an event and an LLM that is designed to answer questions about company policy or contract details, etc. ChatGPT isn’t all LLMs

jun 19, 2025, 4:13 pm • 11 0 • view
avatar
Justin @suckminimally.com

What is the material difference? Any LLM produces probabilistic output based on its training. Are you saying the latter is somehow not liable to get things wrong in the same way that ChatGPT is?

jun 19, 2025, 4:56 pm • 8 0 • view
avatar
jeanbaudrillard.bsky.social @jeanbaudrillard.bsky.social

quality. you can produce probabilistic output that's 99.999% correct because it's not doing anything terribly complex, just way faster than humans. chatGPT etc are generic, but when you get one that does one thing and one thing only, it's easier to make it very good.

jun 19, 2025, 5:38 pm • 2 0 • view
avatar
EndMalcompetence @endmalcompetence.bsky.social

Yeah: The difference is one of them will get your company in legal trouble. www.wired.com/story/air-ca...

jun 19, 2025, 5:05 pm • 59 7 • view
avatar
jeanbaudrillard.bsky.social @jeanbaudrillard.bsky.social

bahaha. that's good, that's a net benefit for society. it makes the models better and it makes sure companies are being as careful about training as they are with humans. the difference is you can fire the human...

jun 19, 2025, 5:36 pm • 5 0 • view
avatar
SuaSponte2024 @suasponte2023.bsky.social

An LLM designed to answer questions about company policy or - god forbid - contract details, performs the same tokenization and attention analysis, and returns the same bullshit. It can be configured to rely less on randomization in its output, sure, but then it's answers are bad in a different way.

jun 19, 2025, 5:05 pm • 6 0 • view
avatar
jeanbaudrillard.bsky.social @jeanbaudrillard.bsky.social

i dunno, i've used software that can do this, and there are products on the market right now that companies currently use and say are better at it than humans. if you train your models correctly they aren't as random as what something like chatGPT produces.

jun 19, 2025, 5:34 pm • 0 0 • view
avatar
Sylvia @cricklewoodmum.bsky.social

Thank you for sharing this. Such an important paper and reminder that as much as we wish there was indeed an artificial intelligence to help us with some tasks, this is "just" a process that exists within predetermined parameters and has its limits.

jun 26, 2025, 7:16 am • 3 0 • view
avatar
‏ superfates @deepfates.com.deepfates.com.deepfates.com.deepfates.com.deepfates.com

These don't seem to support the claim that "language models don't know facts"? You might be doing your audience a disservice by not engaging harder with the question and the technical details

jun 19, 2025, 4:46 pm • 3 0 • view
avatar
notappearinginthisfi @notappearingint.bsky.social

📌

jun 22, 2025, 12:12 pm • 0 0 • view
avatar
LucilleBluth1.5 @lbluth.bsky.social

📌

jun 19, 2025, 4:30 pm • 0 0 • view
avatar
Cube Chan @cubechan.bsky.social

I mean hey those are the same people "learning" stuff off those same LLMs so

jun 19, 2025, 3:33 pm • 1 0 • view
avatar
Mr. Pleasantries @jeffreyd.bsky.social

This isn’t a concept most people ever considered until recently, that human speech patterns can be mimicked convincingly enough to emulate consciousness. Sociopaths are also really good at convincing others they have normal emotions.

jun 19, 2025, 3:03 pm • 1 0 • view
avatar
Mr. Pleasantries @jeffreyd.bsky.social

And the sycophantic, complimentary way commercial LLMs talk in order to appeal to our egos doesn’t help

jun 19, 2025, 3:04 pm • 4 0 • view
avatar
Timlagor @timlagor.bsky.social

It's appalling that the disinformation (hype) can so easily succeed. It's not like this is new or secret information and yet people just can't grasp it evern after they've been told.

jun 19, 2025, 2:08 pm • 6 0 • view
avatar
Matt Graham @mattgraham.bsky.social

For one thing many people call the technology AI

jun 19, 2025, 2:00 pm • 11 0 • view
avatar
mhkmhk.bsky.social @mhkmhk.bsky.social

It's weird because at the core there is a difference between a tape recorder repeating something and a toddler knowing something and saying it but if people can't understand that they don't get it about computers. So a lack of understanding human thinking beyond being able to tell them things.

jun 19, 2025, 2:06 pm • 6 0 • view
avatar
Rystefn @rystefn.bsky.social

Most people never actually develop a full or coherent theory of mind, so to them, the machine is not meaningfully different from other people.

jun 19, 2025, 3:00 pm • 2 1 • view
avatar
mhkmhk.bsky.social @mhkmhk.bsky.social

It's weird for me bc my dad was a computer programmer and my mom a school psych. So I read some of her stuff growing up. Neither think a computer can 'think', but idk if my pov is influenced by that or doing deep dives into mommy Montessori stuff when my kid was younger.

jun 19, 2025, 3:05 pm • 4 0 • view
avatar
mhkmhk.bsky.social @mhkmhk.bsky.social

If you define thinking in other people as what they give to you instead of a process they are experiencing as equally valid humans, it's just starting from a place that gets them thinking the computer is actually thinking

jun 19, 2025, 2:08 pm • 7 1 • view
avatar
Rystefn @rystefn.bsky.social

Because it's marketed to them as a thinking and learning intelligence, not as a predictive text machine.

jun 19, 2025, 2:57 pm • 10 0 • view
avatar
Freo ⚓ Pope @freopope.bsky.social

Also because we have got used to something fairly similar when we google things (for how most people use LLM & how it's marketed). You ask Google a question & it gives you the answer. If it's the wrong answer, it's the website it found that's wrong not Google. LLMs normally don't give you the source

jun 19, 2025, 3:43 pm • 2 0 • view
avatar
jackspektor.bsky.social @jackspektor.bsky.social

Because people hear the word "ai" and they believe that LLM like ChatGPT have actual intelligence.

jun 20, 2025, 8:12 am • 0 0 • view
avatar
queenbeejdl.bsky.social @queenbeejdl.bsky.social

They don’t understand what it is. They think they’re talking to an all knowing robot.

jun 19, 2025, 3:37 pm • 1 0 • view
avatar
EarlGreyTea-hawt @earlgreytea-hawt.bsky.social

I'm rewatching Luke Cage on the Disney account while I housesit... And every commercial break almost there's an ad for AI assistance that has a student asking AI to explain evolution to them... That's how they don't know, they're literally being told it does this by marketing ads.

jun 19, 2025, 6:26 pm • 55 3 • view
avatar
KiT! @hottopic.bsky.social

my guess is they have a preconceived notion of what ai is "supposed" to be like from pop culture and blindly trust that it is that without question

jun 19, 2025, 6:02 pm • 1 0 • view
avatar
bkchez1.bsky.social @bkchez1.bsky.social

For non-tech folks like me, this info is vital

jun 19, 2025, 2:15 pm • 5 0 • view
avatar
Nick Garcia @nwgarcia.bsky.social

I don't really blame people who aren't up on this stuff. The products are literally marketed as "intelligences." Unlike say, the hoverboard, you can't look at an LLM and instinctively know "this is not the thing it purports to be."

jun 19, 2025, 7:29 pm • 8 0 • view
avatar
Cucky Charms Trusted Verifier @jsnfra.bsky.social

fair point but still can't they just take a bit of initiative? 😭

jun 19, 2025, 9:09 pm • 2 0 • view
avatar
Timlagor @timlagor.bsky.social

I blame them for believing the advertising

jul 21, 2025, 1:54 am • 0 0 • view
avatar
Evie🖤 @bluepearlmoon.bsky.social

Because it’s a logic function. “If it’s wrong it’s functionally the same as being right but that’s not the same as the info being factual” doesn’t make sense because they don’t know what GPT or even a search engine really is. They just know “input words, get words that sound good back”

jun 19, 2025, 5:08 pm • 2 0 • view
avatar
Stefano Zanero @raistolo.bsky.social

No. No, most people genuinely do not.

jun 20, 2025, 8:03 am • 21 0 • view
avatar
Flack Blag @flacky.bsky.social

Many probably think it's Skynet.

jun 20, 2025, 3:26 pm • 2 0 • view
avatar
Olly Gark @arialadam.bsky.social

I guess because everyone keeps calling it Artificial Intelligence...

jun 20, 2025, 5:29 pm • 9 0 • view
avatar
Guido W @guido-w.bsky.social

Because not everyone is as bright as you my capital L sweetheart

jun 19, 2025, 5:16 pm • 0 0 • view
avatar
buster skeetin @busterskeetin.bsky.social

that's the funny thing about deceptive things: they have a tendency to deceive people

jun 19, 2025, 2:16 pm • 5 0 • view
avatar
Wah @robotpirateninja.com

Hey can you quote one of my long chats with an llm as its reasoning through a big in a web application in your next talk?

jun 20, 2025, 3:36 am • 0 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

You teach AI and you take the position that LLMs do not contain knowledge?

jun 19, 2025, 8:53 pm • 2 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

I'm also a a philosopher trained in ethics and epistemology which is why I can tell you that "containing knowledge" and "knowing things" are different propositions. For instance, an encyclopedia contains knowledge, and knows nothing.

jun 19, 2025, 9:36 pm • 131 7 • view
avatar
djent atkins @jojjjajjr.bsky.social

Do you draw on Stiegler here at all? I lived with a guy ten years ago who was very into Technics and Time. I’ve been considering reading it recently

jun 19, 2025, 9:48 pm • 1 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

Not usually, as i disagree with the thesis of technics as a fundamentally/solely human proposition

jun 19, 2025, 9:52 pm • 5 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

I've long believed basic epistemology should be required teaching in middle schools.

jun 19, 2025, 9:48 pm • 35 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

Very much agreed.

jun 19, 2025, 9:55 pm • 14 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

I appreciate the confirmation.

jun 19, 2025, 9:56 pm • 3 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

I was unable to pursue my dreams of academia. Dropped out of college three times due to my health, became obvious trying again would be futile. It's remarkable I've survived this long. Not much point in a 5 year plan at this point even if I could handle classes. But I've picked up bits & pieces.

jun 19, 2025, 10:03 pm • 8 0 • view
avatar
Christopher 'Smokeyandyouknowthis' Chik, MFA @g1mpy.bsky.social

Check out SNHU. They made it possible for my disabled ass to go back and finish after I had to stop walking in 2007. Even did my MFA there in low residency.

jul 1, 2025, 7:51 am • 4 0 • view
avatar
Christopher 'Smokeyandyouknowthis' Chik, MFA @g1mpy.bsky.social

Like attendance was my biggest issue even when I could walk because pain etc. At SNHU, it was so accessible online I got all As where before attendance pushed me down to a lot of Bs. If you end up considering it, lmk. I can help you navigate it.

jul 1, 2025, 7:55 am • 1 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

Very cool, but my disabilities include multiple things that cause brain damage (my memory is very unreliable, & I can't focus well) which exacerbated my pre-existing anxiety disorder & cyclothymia. Plus severe intractable chronic pain & next to no immune system. And I'm oxygen dependent these days.

jul 1, 2025, 7:54 am • 3 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

I’m fine with you falsely assuming I haven’t studied epistemology since most people haven’t But “Knowing” is poorly understood and defined in the entire canon. Nobody understands how humans know, at the level of depth required to issue rulings on whether Turing Test-passing machines do or don’t

jun 19, 2025, 9:59 pm • 1 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

I didn't say I thought you hadn't studied epistemology; I said I had, and that differences btwn containing knowledge and knowing are a major feature of discussion within it. I also caution against arguing for anything than a purely agnostic position about what it means to be and have a mind, while…

jun 19, 2025, 10:07 pm • 17 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

…also noting that a tendency toward what's been called technochauvinism means we should ESPECIALLY be careful about making positive claims about how machines might or might not have minds. Over-trust in the validity of mathematical & technical systems, over and above that of humans is a major part…

jun 19, 2025, 10:07 pm • 21 0 • view
avatar
Dr. Damien P. Williams Wants A Better Sociotechnical Paradigm @wolvendamien.bsky.social

…of why all of this over-reliance on and pressure to trust and esteem "AI" systems is so fraught and presently dangerous. Have a good one.

jun 19, 2025, 10:07 pm • 17 0 • view
avatar
‎ ‎ ‎ ‎ 🐱 Dr Pokemon, UMD 💿 @turnerjoy.bsky.social

Does that apply to the positive claims of saying “this is all an LLM is”?

jul 1, 2025, 7:19 am • 2 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

You're papering over a very obvious, self-evident truth with academic double speak. Machine learning ≠ anything like human learning. No self awareness, no reasoning. That technology will never do what people keep claiming is just down the road if they're given more money.

jun 19, 2025, 10:05 pm • 3 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

I’m not papering over anything. If you want to argue definitionally that machine learning cannot be anything like human learning then you’re not taking any kind of substantive position, or you’re ultimately some kind of spiritual dualist If humans are matter then some kind of machine can learn

jun 19, 2025, 10:08 pm • 0 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

Obviously I'm oversimplifying. Not enough room here for real precision, not to mention the inevitable exacerbation on my tendinitis if I tried. But the current software models used to mimic human reasoning are fundamentally incapable of the real thing.

jun 19, 2025, 10:11 pm • 2 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

I don’t imagine you will insist on equating the function and behaviors of an encyclopedia with a frontier model LLM

jun 19, 2025, 10:00 pm • 0 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

Encyclopedias have a tiny fraction of the error rate. Of course I wouldn't compare an encyclopedia with something so riddled with inaccuracy.

jun 19, 2025, 10:06 pm • 7 0 • view
avatar
Jonathan @jginsburg.bsky.social

Encyclopedias are checked by real live humans paid to correct them, LLMs spit out whatever garbage with no oversight

jun 19, 2025, 10:36 pm • 5 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

It should also be emphasized more in teacher training programs. It's obvious to me that a lot of people with pedagogy degrees don't know even the basics of epistemology. Lay opinion, not an expert, but it matches my experience.

jun 19, 2025, 9:50 pm • 11 0 • view
avatar
The Creatively Maladjusted @tyrotcm.bsky.social

Listen to the good doctor. He knows what he's talking about.

jun 19, 2025, 9:38 pm • 5 0 • view
avatar
Travis Klempan @travisklempan.bsky.social

Do you think they do?

jun 19, 2025, 9:55 pm • 1 0 • view
avatar
djent atkins @jojjjajjr.bsky.social

Does linear regression contain knowledge? No snark intended. I’m sincerely interested in where people draw that line with predictive algorithms

jun 19, 2025, 9:51 pm • 6 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

Nobody understands how brains encode knowledge - so nobody can rule out that artificial information processing systems, based loosely on how brains work, don’t When people do this, they invariably make shallow, sweeping, unsustainable claims

jun 19, 2025, 9:55 pm • 2 0 • view
avatar
jasonmehmel.bsky.social @jasonmehmel.bsky.social

We might not be able to definitely claim exactly how brains encode knowledge, but we have been able to rule a few things out. For example: there isn't a 'memory block' where you contain data that you want to recall; it's not a thing that fills like a database.

jun 19, 2025, 10:37 pm • 5 0 • view
avatar
jasonmehmel.bsky.social @jasonmehmel.bsky.social

We also know that our brains don't do probability analysis before using every term or symbol of communication; whatever is happening, it isn't front-loaded math crunching equaling an output. So we might not have solved the mystery of the mind, but we know it's not doing what LLMs do.

jun 19, 2025, 10:38 pm • 6 0 • view
avatar
jasonmehmel.bsky.social @jasonmehmel.bsky.social

Lastly, it's worth noting: the fact that we don't know exactly how the mind works does not logically equal that anything that processes language is therefore likely to be a mind. Also since we actually know how LLMs work, we CAN prove it's not 'thinking' and doesn't contain knowledge.

jun 19, 2025, 10:42 pm • 10 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

We do not actually know how LLMs work properly. This is the entire field of interpretability. The leading teams explicitly highlight that their conceptual representations are largely a mystery We understand them at a low level, just like we understand neurons. And we have ideas structurally

jun 19, 2025, 10:46 pm • 0 0 • view
avatar
MF NOW @swizzard.pizza

we do understand how LLMs work properly. literally hundreds of millions of dollars are spent to obscure that fact, but it's true

jun 19, 2025, 10:56 pm • 9 0 • view
avatar
jasonmehmel.bsky.social @jasonmehmel.bsky.social

But... like... we do? Who's telling you we don't? The folks who made them published the methods. It's known technology. www.zdnet.com/article/how-... The mystery is the contents of the dataset. But that's not an LLM. I think I'll need some evidence for the 'we have ideas structurally' though.

jun 20, 2025, 12:18 am • 5 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

Respectfully, the fact that you raise a “memory block” and “database” suggests that you’re misunderstanding how LLMs work. They have neither Further, the “number crunching” is just the mechanism for implementing the neural network. Those are explicitly a simple approximation of how brains work

jun 19, 2025, 10:44 pm • 0 0 • view
avatar
Monkey D. Bloopy, phd @tralala713.bsky.social

Noticing that you've responded to a couple replies to you but not the meaningful response Dr. Williams made to your question. Gives the impression you don't have a confident response to him 😉

jun 19, 2025, 9:58 pm • 5 0 • view
avatar
djent atkins @jojjjajjr.bsky.social

Sooo is that a no on linear regression? What about Markov chains or recurrent neural networks?

jun 19, 2025, 10:01 pm • 5 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

It’s not a no. The basic building blocks aren’t where the intelligence lies. It’s an emergent property of massive systems of information processing units. So the specific implementation of their logic is likely not to matter

jun 19, 2025, 10:03 pm • 1 0 • view
avatar
Monkey D. Bloopy, phd @tralala713.bsky.social

"They invariably make shallow, sweeping, unsubstantiated claims"

jun 19, 2025, 10:06 pm • 8 0 • view
avatar
‎ ‎ ‎ ‎ 🐱 Dr Pokemon, UMD 💿 @turnerjoy.bsky.social

But that’s enough about BlueSky

jul 1, 2025, 7:19 am • 1 0 • view
avatar
EssayWells @essaywells.bsky.social

Maybe you have an opportunity here to learn something.

jun 19, 2025, 9:41 pm • 1 0 • view
avatar
James in Brooklyn @jamesinbrooklyn.bsky.social

If you want to engage substantially I’ll reciprocate. But if you want to be patronizing like this you’re contributing nothing

jun 19, 2025, 9:53 pm • 0 0 • view
avatar
Sour Cetacean @sourcetacean.bsky.social

I mean it's a really misleading way to put it, so maybe that's why.

jun 20, 2025, 2:13 am • 2 0 • view
avatar
Lady Aranea @ladyaranea.bsky.social

No it isn't. Hope that helps.

jun 20, 2025, 2:42 am • 17 0 • view
avatar
Sour Cetacean @sourcetacean.bsky.social

the base model isn't designed to get answers right, that is true. but the model is then trained and fine-tuned with the goal of accuracy. when it's wrong, that was not the intent of those who trained it and fine-tuned it.

jun 20, 2025, 10:33 pm • 0 0 • view
avatar
Lady Aranea @ladyaranea.bsky.social

Their intent was to make it believable. They have no way of ensuring that it's correct.

jun 21, 2025, 12:38 am • 15 0 • view
avatar
Kate Tuttle @katekilla.bsky.social

Their intent was to make money selling a product, no matter how destructive it will be (and already is).

jul 3, 2025, 2:09 pm • 7 0 • view
avatar
Lady Aranea @ladyaranea.bsky.social

I think they also want to destroy the creative and intellectual labor industries so that they don't have competition.

jul 3, 2025, 3:25 pm • 3 0 • view
avatar
Kate Tuttle @katekilla.bsky.social

I think that's true as well, but in general I think the AI evangelists just want money and power.

jul 3, 2025, 3:28 pm • 3 0 • view
avatar
Sour Cetacean @sourcetacean.bsky.social

Creativity will, in many ways, be more important than ever.

jul 4, 2025, 7:05 am • 1 0 • view
avatar
Sour Cetacean @sourcetacean.bsky.social

training and fine-tuning is done with the aim of making it accurate, but you're right that it doesn't ensure it's always accurate.

jun 21, 2025, 12:58 am • 1 0 • view
avatar
Lady Aranea @ladyaranea.bsky.social

There's also the part where the entire point of writing things by yourself is that doing so causes you to think about and better internalize what you're writing, so using it for that is garbage. Also, that practicing communicating with a robot trains you to see others as programs to debug. And that

jun 21, 2025, 1:01 am • 19 0 • view
avatar
Lady Aranea @ladyaranea.bsky.social

outsourcing your problem solving skills to a robot causes you to stop developing them yourself. Thinking is the last thing we should be handing off to a robot, and yet here we are.

jun 21, 2025, 1:01 am • 14 0 • view
avatar
Sour Cetacean @sourcetacean.bsky.social

yes, I agree that learning how to write helps us learn how to think and that's a major concern, especially for young people using it in a way that ends up hobbling them in the future.

jun 21, 2025, 1:05 am • 1 0 • view
avatar
I Don’t Want No Scabs @missedith.bsky.social

This is a very idealistic view of the industry

jul 1, 2025, 7:51 am • 4 0 • view
avatar
Hal O’Brien @hbobrien.bsky.social

Nope. Autocorrect at the paragraph level.

jun 20, 2025, 2:59 am • 12 0 • view
avatar
but the future refused to change @alicedyeforme.bsky.social

If by misleading you actually mean completely accurate, sure

jun 20, 2025, 3:07 am • 10 0 • view
avatar
The Store @mudsludge.bsky.social

Yuuup I tell everyone this who uses it to answer questions. It's just doing math to predict the most likely sentence. It's not designed for accuracy

jun 19, 2025, 6:00 pm • 8 0 • view
avatar
Angel @acap42.bsky.social

Which is actually hilarious because LLMs can't even give accurate answers to math questions

jun 19, 2025, 10:30 pm • 2 0 • view
avatar
Cristina Madeira @cristinarmadeira.bsky.social

They are just (very very very expensive to the planet) probability calculators. The amount of bs people believe about this technology is baffling..

jun 19, 2025, 9:26 pm • 19 1 • view
avatar
Ein Realist @einrealist.bsky.social

Exactly. If the decision space increases (larger context), it makes errors more likely. Just read a paper, that reflects this as well: arxiv.org/abs/2503.14499 The "survival rate" increase seems to be linear with bigger models, more compute and more memory. There is no reasoning/intelligence.

jun 19, 2025, 11:55 am • 12 1 • view
avatar
David @drspeer.bsky.social

100%. What’s also concerning is the assumption that what these models spit out are facts. Instead of the process you explained, there’s a belief the model scoured the internet data sources and determined X was true, instead of just forming a grammatically correct sentence.

jun 19, 2025, 4:05 pm • 13 0 • view
avatar
David @drspeer.bsky.social

So if the facts are wrong, update the data and retrain, right? Maybe… but who holds the cards on what that truth is? Junk in, junk out is common saying in data work. Feed it whatever fake news you want and boom: you get a great, authoritative sounding propaganda machine.

jun 19, 2025, 4:05 pm • 5 0 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

"Update the data" is kind of troublesome, though. The data is continuously expanding, and more of it is now ai-generated, so inherent gigo with no way to find or differentiate it. And that's before we get into the MASSIVE copyright issues that most AI geeks are conveniently pretending don't exist.

jun 19, 2025, 6:39 pm • 4 0 • view
avatar
Angel @acap42.bsky.social

Disney & NBC Universal are suing Midjourney. We're definitely getting into the massive copyright issues.

jun 19, 2025, 10:38 pm • 5 0 • view
avatar
GalacticBaron @galacticbaron.bsky.social

At least until we have profit sharing agreements with a few large companies while individuals are screwed over, as usual.

jun 20, 2025, 12:44 pm • 2 0 • view
avatar
Fred Matern @fmat.bsky.social

Thank you for making this point, which serves as a good reminder that *humans* make errors: and quite often when they misinterpret the outputs of LLMs.

jun 21, 2025, 3:21 pm • 1 0 • view
avatar
Iach Bodb @iachbodb.bsky.social

This... isn't what that means; what it means is the LLM doesn't 'know' what it's outputting. It's just stringing words together in 'known' patterns. You can't 'interpret' something that is inherently without meaning. Oh my shit, LLMs are the new chicken bones and tea leaves.

jul 1, 2025, 4:15 am • 14 0 • view
avatar
Fred Matern @fmat.bsky.social

Interpreting patterns has been a human occupation for many millennia. Totally agree with your final sentence. Seems that’s what they are becoming.

jul 3, 2025, 12:19 am • 2 0 • view
avatar
Honest J @frangsbo.bsky.social

Or, it gets it wrong because the wrong thing is often written down.

jun 19, 2025, 2:07 pm • 4 0 • view
avatar
Rystefn @rystefn.bsky.social

Exactly. I hate that people keep using words like "hallucination" to describe it. It's ascribing a mind to a semi-randomized pattern sorting machine.

jun 19, 2025, 2:56 pm • 9 0 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

I've never taken "hallucination" in this context to mean that it's thinking; only to mean that it's just making up something that isn't there, since that's what a hallucination is. I might be an outlier on this, though; I'm smart but undereducated. Whoever does a study on this, make this a question!

jun 19, 2025, 6:34 pm • 3 0 • view
avatar
Milan Weibel 🔷 @weibac.bsky.social

when it is right, it is also because that string of words, when read by a human, represented something true semantics are subject-dependent now, since LLMs produce text according to the internal representations they have learned during training, are they also semantic subjects? frankly idk

jun 19, 2025, 7:14 pm • 1 0 • view
avatar
David @daverino.bsky.social

That’s what infuriates me about the use of the term “hallucination”

jun 19, 2025, 1:32 pm • 4 0 • view
avatar
Rob Baron @rob-baron.bsky.social

You could argue that, it's all syntax and no semantics.

jun 20, 2025, 7:52 am • 7 0 • view
avatar
Another Cinna @anothercinna.bsky.social

TBF there are politicians that rely on this to get through each day.

jun 19, 2025, 2:56 pm • 1 0 • view
avatar
YetAnotherSteve @yaseppochi.bsky.social

I think it’s worth adding that for many false statements its data very likely contained many similar false statements, or statements whose truthiness depends on more context than the training environment can afford to provide.

jun 19, 2025, 1:31 pm • 2 0 • view
avatar
Stephany Daniel @sadanielwrites.bsky.social

Yes! I was telling colleagues recently that I don’t love the term “hallucination.” That word implies a momentary lapse in judgment (and implies personhood). People can hallucinate. LLMs are not people and they are working exactly as intended when they serve up incorrect information.

jun 19, 2025, 1:52 pm • 5 0 • view
avatar
Dr Surekha Davies (she/her) @drsurekhadavies.bsky.social

Yes! My ranty take: "Something without a brain scrabbled around and found stuff that looks like content but isn’t, because it wasn’t chosen, analyzed, and written by a person. Frankenstein’s monster copy-paste jobs from a bunch of different places? That’s not a summary. That’s word salad."

jun 19, 2025, 11:43 am • 77 19 • view
avatar
Dr Surekha Davies (she/her) @drsurekhadavies.bsky.social

Through their efforts, AI boosters are re-configuring and diminish how people understand the human, human flourishing, our capacities and potential, and why we bother being alive at all. "By deciding what robots are for, we are defining what humans are." Excerpt from my HUMANS: A MONSTROUS HISTORY:

jun 19, 2025, 11:45 am • 38 13 • view
avatar
Ben Stone @kbenjaminstone.bsky.social

It is a remarkable illusion but I also watched David Copperfield “fly” in a TV special once.

jun 19, 2025, 1:37 pm • 0 0 • view
avatar
notappearinginthisfi @notappearingint.bsky.social

📌

jun 22, 2025, 12:13 pm • 0 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Having read your article, my immediate response is that your provocative interlocutor, who was presumably arguing for the sheer joy of arguing, went astray at the second step. 1/2

jun 19, 2025, 11:58 am • 1 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Whatever life is, being human is, as you argue, a participation sport. The promise of AI is that it gives EVERYONE more time to write poetry, along with the inspiration of its own poems. 2/2

jun 19, 2025, 11:59 am • 1 0 • view
avatar
Dr Surekha Davies (she/her) @drsurekhadavies.bsky.social

In theory one could design an ethical model (without stolen IP, environmental damage or underpaid content checkers). But tech of late has concentrated leisure, power, & money in the hands of a few. Bros aren't designing things to improve our quality of life. They're making grifting tools.

jun 19, 2025, 12:08 pm • 16 0 • view
avatar
Dr Surekha Davies (she/her) @drsurekhadavies.bsky.social

... And working hard at writing is precisely what grows your capability at that very writing.

jun 19, 2025, 12:08 pm • 9 1 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Indeed. And working at living (rather than a job) grows your capability at being human. We don't HAVE to give up at that just because of AIs.

jun 19, 2025, 12:16 pm • 2 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

That may well be what they think they're doing. But history is littered with inventions which turned out to do something very different from what the inventors envisaged.

jun 19, 2025, 12:18 pm • 0 0 • view
avatar
whatiswrongwithhbo.bsky.social @whatiswrongwithhbo.bsky.social

ethz.ch/en/news-and-...

jul 23, 2025, 10:31 am • 0 0 • view
avatar
Alan Harrison @httcs.online

"The promise of AI is that it gives EVERYONE more time to write poetry", really? Like all technology in our capitalist world, AI is being used to increase profits for the 1%, meanwhile newly-redundant staff scrabble around for lower-paid work and longer hours.

jun 19, 2025, 12:07 pm • 19 1 • view
avatar
Dr Surekha Davies (she/her) @drsurekhadavies.bsky.social

You read my mind while I was typing it out!

jun 19, 2025, 12:09 pm • 2 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

I agree that in the 100 years since Bertrand Russell wrote "in praise of idleness", humanity has proved astonishingly inventive in finding ways to simulate the conditions of scarcity for which we evolved. Promises are not always fulfilled.

jun 19, 2025, 12:13 pm • 4 0 • view
avatar
Alan Harrison @httcs.online

Not 'humanity'. The rich. The rich have done that, because greed. Tax the rich, tax AI profits and implement universal basic income. Anything less is immoral.

jun 19, 2025, 1:44 pm • 23 3 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Obviously the powerful are in the driving seat. But I suspect that the powerless go along with it because it feels natural.

jun 19, 2025, 3:20 pm • 0 0 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

🙄 No, the powerless go along with it *because we're powerless*. I don't go along with it in the slightest, but my sphere of influence is about 400 people if we're being generous. I can't buy elections.

jun 19, 2025, 6:07 pm • 2 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

That might well be the next step, regardless of morality. It's difficult to imagine a radically different world. But the world has never cared about that.

jun 19, 2025, 1:48 pm • 1 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Isn't the only way out of this bind to decide to accept that AIs might, in principle, actually be or become intelligent?

jun 19, 2025, 11:49 am • 2 0 • view
avatar
Kent Pitman @kentpitman.bsky.social

That's not really a way out. If one really thinks "AI" may become sentient, all the uses it is proposed to be good for become suddenly more complicated to expect. My essay Sentience Structure comes at these same hard issues in a complementary way... netsettlement.blogspot.com/2025/03/sent...

jun 19, 2025, 2:28 pm • 1 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

And actually, it turns out that they answer to the question is to accept the challenge of living a good life. Which will need some changes.

jun 19, 2025, 2:44 pm • 0 0 • view
avatar
Kent Pitman @kentpitman.bsky.social

I don't see requisite social policy to support that. I see technologists want to see what it'll do and to assume that social policy takes care of itself. Meanwhile ever more money comes to those who automate with no plan for who will buy once human jobs are done by robot slaves.

jun 19, 2025, 3:02 pm • 2 0 • view
avatar
Kent Pitman @kentpitman.bsky.social

At least Henry Ford saw he couldn't profit selling cars unless folks were paid enough that they could buy the cars he wanted to sell. Today's oligarchs don't see that. They disdain taxation and UBI. They dismantle safety nets, asking folks to just starve if they can't show they still have purpose.

jun 19, 2025, 3:02 pm • 43 7 • view
avatar
Gibberbabble @gibberbabble.bsky.social

I think the multinationals think the world has caught up. They don’t need our consumption as much as they need downward wage pressure and our exhausted capitulation.

jun 19, 2025, 9:22 pm • 0 0 • view
avatar
Kent Pitman @kentpitman.bsky.social

This is a MUCH shorter piece but also relevant... Technology's Ethical Two-Step netsettlement.blogspot.com/2023/10/tech...

jun 19, 2025, 3:06 pm • 2 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

And the millionaires of 1900 saw an obligation to spend their money on the under privileged. Something which is coming back as a tech bro fad. Somewhat undetermined by the superyacht index.

jun 19, 2025, 3:16 pm • 1 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

I Will read the article. But first, I need to blurt out that intelligence, sentience and awareness are three different things.

jun 19, 2025, 2:35 pm • 1 0 • view
avatar
Kent Pitman @kentpitman.bsky.social

I agree on that but didn't try focus on that detail in my essay. My question isn't whether they're the same but whether there is a point to trying to get close to sentience, lest we succeeed. I see the same in Surekha's article, which seems to ask "what IS the goal and DO WE REALLY WANT that goal?"

jun 19, 2025, 2:43 pm • 1 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Yes. I agree that the industry is pushing the idea of intelligent helpers unburdened by the messy complications of awareness and sentience. And we should question each of these three labels.

jun 19, 2025, 2:48 pm • 1 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

That's a great list of questions! (Though I think the point on democracy misses the mark. Democracy is not about getting the right answer: it's about getting the answer the people, presumably including sentient AIs, want.) But the question I tried to answer was about intelligence, not sentience.

jun 19, 2025, 2:43 pm • 0 0 • view
avatar
Kent Pitman @kentpitman.bsky.social

But no one knows what might bootstrap sentence. We don't even know if we have it. Maybe we just think we do. :) We don't know where it resides. We don't know its structural nature, if it emerges from intelligence. We don't know how to test for it or falsify it. So, it's an ill-controlled experiment.

jun 19, 2025, 2:48 pm • 1 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

You could see the whole history of mankind as an ill controlled experiment. Is this how it feels to be a cancer cell just before the host dies? "Look how well we're doing! We own the whole world!"

jun 19, 2025, 3:13 pm • 1 0 • view
avatar
Toaster @toaster.codes

OK but that would mean we’re enslaving intelligent beings and the AI biz must stop until we can figure out the point intelligence arises. The reason none of big AI biz CEOs address this, of course, is because they think abusing the staff is Fine, Actually.

jun 19, 2025, 2:02 pm • 4 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Yep. And by the same logic, we must stop eating bacon. We are capable of quite amazing levels of hypocrisy.

jun 19, 2025, 2:14 pm • 0 0 • view
avatar
Steve with a Q @stevewithaq.bsky.social

How does a fantastical leap of faith in denial of computer science get us out of a bind created by people making fantastical leaps of faith in denial of computer science?

jun 19, 2025, 12:22 pm • 12 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Sometimes, when faced with an apparently difficult question, the best thing you can do is set it to one side and get on with the things you CAN fix. This is not a leap of faith: it's a decision NOT to leap into a position on computer science, but to focus on how best to live.

jun 19, 2025, 3:30 pm • 0 0 • view
avatar
Steve with a Q @stevewithaq.bsky.social

I would prefer to focus on fixing the issues that actually exist in our current society that those which may potentially exist in some possible future one.

jun 19, 2025, 3:33 pm • 4 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

The question of how we live our best life is very much a current issue. Always had been. But I absolutely agree with you.

jun 19, 2025, 3:36 pm • 0 0 • view
avatar
Will Ross @twistedbyknaves.bsky.social

Turns out that the article was more about us abdicating tasks which the AI might do better, out of sheer laziness. I didn't write poems because they were any good: I wrote them to help feel what I was going through. I don't fence to become a champion: I do it to feel alive. Etc etc etc.

jun 19, 2025, 12:06 pm • 5 0 • view
avatar
graeme @g--0.bsky.social

“you just need to believe” the AI techbros have been trying this strategy for a while and it’s not panning out

jul 3, 2025, 1:41 am • 3 0 • view
avatar
Wishing You Well 🇺🇦 🌻 @klaaskid8.bsky.social

So well said. Frustrating!

jun 19, 2025, 12:13 pm • 0 0 • view
avatar
Dr Surekha Davies (she/her) @drsurekhadavies.bsky.social

Thank you; if only it didn't need to be said!

jun 19, 2025, 12:18 pm • 2 0 • view
avatar
notappearinginthisfi @notappearingint.bsky.social

📌

jun 22, 2025, 12:13 pm • 0 0 • view
avatar
DarkSapiens @darksapiens.bsky.social

📌

jun 19, 2025, 10:59 pm • 1 0 • view
avatar
WorkOctopus @octopusmask.bsky.social

Once you get this, the idea that we are nearing an artificial general intelligence (one close to or exceeding human intelligence) becomes gobbledygook nonsense. But there are billions of dollars being invested under the pretense that we are close to that milestone.

jun 20, 2025, 1:05 am • 15 2 • view
avatar
Riley @rwintermute.com

it's true with a llm like gpt or grok. however Gemini and Perplexity have Retrieval-Augmented Generation to ground their responses. gpt and grok are still trying to work out how to do that integration and probably why Gemini suddenly lept to the front benchmarks and stayed there for the most part

jun 19, 2025, 3:20 pm • 1 0 • view
avatar
The-Greetest @the-greetest.bsky.social

And yet Gemini constantly outputs insanely false information. The other day I searched for some region codes in a Nintendo game that I was curious about. Gemini outputted that it stood for the Nintendo home theater and model numbers for parts inside the joycons.

jun 19, 2025, 7:28 pm • 1 0 • view
avatar
Riley @rwintermute.com

that's difficult to talk to you about without seeing your prompt

jun 19, 2025, 7:29 pm • 0 0 • view
avatar
The-Greetest @the-greetest.bsky.social

Nintendo region code NHT is what I searched.

jun 19, 2025, 7:31 pm • 0 0 • view
avatar
Riley @rwintermute.com

and you stuck it where? google search, ai mode, into the gemini interface and if so. 2.5 flash or 2.5 pro?

jun 19, 2025, 7:32 pm • 0 0 • view
avatar
The-Greetest @the-greetest.bsky.social

Google search.

jun 19, 2025, 7:34 pm • 0 0 • view
avatar
Riley @rwintermute.com

well for starters, "Nintendo region code NHT" is something I don't understand and I'm a former game dev. what is nht?

jun 19, 2025, 7:36 pm • 0 0 • view
avatar
The-Greetest @the-greetest.bsky.social

Are you serious? I feel like this is a joke I'm not getting.

jun 19, 2025, 7:52 pm • 1 0 • view
avatar
Riley @rwintermute.com

If you think NHT means "nintendo hong kong/taiwan" that's not a thing. all of that region uses CHT

image
jun 19, 2025, 8:07 pm • 0 0 • view
avatar
Riley @rwintermute.com

I'm too fucking old to pull your chain. i asked you a very straightforward question.

jun 19, 2025, 7:54 pm • 0 0 • view
avatar
Alexander Hoyle @alexanderhoyle.bsky.social

Good search isn't Gemini, or at least not a powerful one (and I have no idea why the fuck they are poisoning *their own well* by using a bad model for search)

jun 20, 2025, 9:01 am • 0 0 • view
avatar
The-Greetest @the-greetest.bsky.social

I have no idea. It's very clearly and strongly advertised as Gemini, and it is awful. So according to Google themselves, it's Gemini. Outside of straight up simple things like googling what flags a poorly documented command has, I'm not sure I've ever seen it output things that didn't have

jun 20, 2025, 12:38 pm • 1 0 • view
avatar
The-Greetest @the-greetest.bsky.social

wildly incorrect information in it. It's so bad that it made me research how to turn that shit off in all my google applications. On a side note, they must have made changes to their email filters so that if you remove Gemini functionality, it stops filtering what Gmail used to filter like ten

jun 20, 2025, 12:38 pm • 1 0 • view
avatar
Redneck Lefty @rednecklefty.bsky.social

Peter Norvig and his dreams of asemantic models of language.

jun 19, 2025, 4:48 pm • 0 0 • view
avatar
Mark Armstrong @fabricatus.com

Yes, they are just plausible sentence generators.

jun 19, 2025, 12:13 pm • 6 0 • view
avatar
Charles Barbour @csbv.bsky.social

image
jun 19, 2025, 4:17 pm • 16 1 • view
avatar
Anna Colo @annacolorado.bsky.social

Creepy how so many ppl just accept whatever AI spits back at them as truth without question. Watching people do that showed me why we are in this maga hellscape. A lot of people never question anything from someone or something they’re told is an authority. Thank God I was raised by an Irish mom.

jun 19, 2025, 3:43 pm • 7 0 • view
avatar
radek @radekllundain.bsky.social

That sounds so positive! "Good boy! What a nice sentence you made!"

jun 19, 2025, 12:23 pm • 3 1 • view
avatar
Lee AKA The Dark Cheetah @tdcheetah.bsky.social

I keep running into the knowledge that these things were trained on Reddit (or ) I'm sure Reddit has good moments. But also... Reddit. Or Twitter. Or wherever else has some facts + a zillion opinions... many of them, intentionally or not, wrong.

jul 1, 2025, 6:07 am • 29 2 • view
avatar
Dr. John Wagner @drjohnwagner.bsky.social

It's worse than that, though, because even if you got rid of reddit, twitter, etc and only trained it on, say, content generated by doctors or lawyers...keep in mind that half of all doctors and half of all lawyers graduated in the bottom half of their classes. So...you just move the goalpost a bit.

aug 15, 2025, 7:15 am • 1 0 • view
avatar
Erika @krystolla.bsky.social

I think we need a return of those ‘pod people’ horror movies so everyone can remember the difference between ‘looks like’ and ‘is’. We could have LLM AIs write the scripts for the replaced people.

jun 19, 2025, 12:18 pm • 6 0 • view
avatar
José Murilo @josemurilo.com

Why don't you use #Bridgy? You could be on ActivityPub from here. Miss you there..

jun 19, 2025, 3:26 pm • 0 0 • view
avatar
ivory @ioletsgo.gay

this seems grossly off-topic regarding the discussion being had

jun 20, 2025, 1:58 am • 1 0 • view
avatar
The Past and other Places @pastandotherplaces.bsky.social

📌

jul 1, 2025, 7:42 am • 0 0 • view
avatar
My Qwilfish is Amazing @ademainalors.bsky.social

Or because the string of words that is false is written frequently. On that note. never ask a chatbot about Linux, they will tell you to recursively delete the operating system

jun 19, 2025, 7:19 pm • 6 0 • view
avatar
Morgan Flom @moflom.bsky.social

Right, "hallucination" is a feature, not a bug.

jun 19, 2025, 1:21 pm • 1 0 • view
avatar
𝕾𝖔𝖒𝖊𝖜𝖍𝖊𝖗𝖊 @somewhere.systems

What about self-reasoning models like Deepseek R1?

jun 19, 2025, 12:45 pm • 2 0 • view
avatar
Tim Oates @rangerwinslow.bsky.social

When people set the goal low enough, it's easy to succeed. LLMs set the bar so low, we need an electron microscope to tell the goal from nothing.

jun 19, 2025, 3:07 pm • 5 0 • view
avatar
Leon Sinoff @signonleon.bsky.social

Yay! It made a sentence!

jun 19, 2025, 3:18 pm • 2 0 • view
avatar
Leon Sinoff @signonleon.bsky.social

Now invest 50 billion and fire 2/3 of your workforce.

jun 19, 2025, 3:19 pm • 3 0 • view
avatar
Blue Charm Quark @bluecharmquark.bsky.social

Stochastic Parrots. No less, no more.

jun 19, 2025, 3:18 pm • 3 0 • view
avatar
notappearinginthisfi @notappearingint.bsky.social

📌

jun 22, 2025, 12:08 pm • 0 0 • view
avatar
Toadbald @doctoad.bsky.social

I like to think of LLMs as an optimised version of the old 'I feel lucky' button on google. It's an optimised search engine that uses a language model to describe the best hit.

jun 19, 2025, 12:22 pm • 3 0 • view
avatar
Mimrock @mimrock.bsky.social

I feel like this statement (without the first post) is either a tautology or false depending on how you define "designed". They were definitely built to be correct as frequently as possible and when they are not correct, it is considered a fault by the designers.

jun 19, 2025, 6:02 pm • 0 0 • view
avatar
Fat Apollo, baby! @fatapollo.bsky.social

hahaahahah, lol

jun 19, 2025, 6:55 pm • 0 0 • view
avatar
Rachel Lazerus @rachellazerus.bsky.social

They were not built to be correct as possible. They were built to simulate correct answers by finding the most likely answer from a gigantic database. These are two extremely different queries. The more unusual your query is, the more likely it is to make something up to fill in the blank.

jun 20, 2025, 1:18 pm • 8 0 • view
avatar
The-Greetest @the-greetest.bsky.social

They are built to mimic speech. A false output is a valid output, regardless of how people feel about it.

jun 19, 2025, 7:12 pm • 4 0 • view
avatar
Mimrock @mimrock.bsky.social

No, that's not true. They are pre-trained on a large text (and not speech) corpora but during post-training they are trained to respond truthfully to questions via various methods.

jun 19, 2025, 7:16 pm • 0 0 • view
avatar
The-Greetest @the-greetest.bsky.social

They can be fine tuned to be more likely to output a desired result, but all that is is adjusting the embedded weights. There is no truth, reasoning, or knowledge in the models, only weights that point to other tokens based on the input tokens.

jun 19, 2025, 7:22 pm • 4 0 • view
avatar
Mimrock @mimrock.bsky.social

I don't think that kind of reductionism help us understanding the topic. You could also say that "when you learn something it's just some molecules that change in your body and there is no truth, reasoning or knowledge in humans, only molecules that trigger changes in other molecules".

jun 19, 2025, 7:36 pm • 1 0 • view
avatar
The-Greetest @the-greetest.bsky.social

No shit. The difference is in how the systems work.

jun 19, 2025, 7:53 pm • 2 0 • view
avatar
Mimrock @mimrock.bsky.social

Well, how? What is the difference?

jun 19, 2025, 7:56 pm • 0 0 • view
avatar
The-Greetest @the-greetest.bsky.social

Are you going to seriously pretend that the human brain is a static database of numbers? Jesus fucking Christ, no wonder you people believe in magic.

jun 19, 2025, 8:11 pm • 1 0 • view
avatar
Mimrock @mimrock.bsky.social

Are you going to seriously pretend that the human brain _cannot_ be described by a static database of numbers? That being said, "static" is somewhat relevant difference, LLMs currently cannot update their own weights, but they can use in-context learning.

jun 19, 2025, 8:18 pm • 0 0 • view
avatar
Whey Standard @wheystandard.bsky.social

Yeah people really need to understand this. LLMs are kinda interesting, but behind all their complex feedback loops and weights and all that, their real secret sauce is using randomness to reword sentences from their training data.

jun 19, 2025, 11:37 am • 21 1 • view
avatar
Rumpole the Brief @rumpole-brief.bsky.social

I hear lawyers talking about using them for intake processing and client xare and reeding during their bankruptcy case... And I think, "Huh,here I am, I won't even use most spreadsheets professionally unless I built them but you do you, man..."

jun 20, 2025, 1:05 am • 3 0 • view
avatar
Dr. John Wagner @drjohnwagner.bsky.social

Imma just leave this here... www.forbes.com/sites/mollyb...

aug 15, 2025, 7:16 am • 1 0 • view
avatar
Rumpole the Brief @rumpole-brief.bsky.social

* care and feeding...oy

jun 20, 2025, 1:06 am • 1 0 • view
avatar
Nurseferatu @nurseferatu.bsky.social

It is actually really spooky how much chatbot confabulation resembles dementia confabulation in humans. In neither case is lying happening. It is identifying a key word and creating a narrative around it that doesn’t worry about consistency, relevance or reality.

jun 19, 2025, 12:54 pm • 34 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

Not particularly, when you consider that the way LLMs process facts is not that different from how humans do. We don't have an inherent ability to separate fact from fiction either, other than to weigh in our neutral nets how we learned it and how it lines up with everything else we know.

jun 19, 2025, 2:01 pm • 2 0 • view
avatar
The Nerdskull @nerdskull.bsky.social

You don’t understand how LLMs or human thought works.

jun 19, 2025, 6:20 pm • 6 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

No one really understands how human thought works, so I partially agree with you.

jun 19, 2025, 6:24 pm • 0 0 • view
avatar
The Nerdskull @nerdskull.bsky.social

Lmafao.

jun 19, 2025, 6:25 pm • 1 0 • view
avatar
The Nerdskull @nerdskull.bsky.social

We at least know that human thought is not a process of stochastically generating the next likely token from a corpus of data about which tokens are likely to appear near other tokens.

jun 19, 2025, 6:28 pm • 3 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

True. I don't mean to suggest that human brains are identical to LLMs.

jun 19, 2025, 6:45 pm • 0 0 • view
avatar
The Nerdskull @nerdskull.bsky.social

LLMs do not trade in facts outside of token likelihoods.

jun 19, 2025, 6:47 pm • 2 0 • view
avatar
Nurseferatu @nurseferatu.bsky.social

I don’t know how llms work besides being algorithms on steroids. Human brains are not learned facts floating in jelly. Our minds are made from connections. New facts are not retained unless they have a connection to hook into. And those connections hook into other connections.

jun 20, 2025, 1:23 am • 0 0 • view
avatar
Nurseferatu @nurseferatu.bsky.social

Exponentially increasing complexity increases and reinforces connections. Individual facts are far less important than connections creating context and nuance. LLMs absolutely do not have the complexity of a human mind.

jun 20, 2025, 1:23 am • 0 0 • view
avatar
Nurseferatu @nurseferatu.bsky.social

Dumping in more and more facts to a construct that lacks the ability to judge the quality of its connections in order to reinforce or eliminate them does not make it better. It makes it worse in more interesting ways.

jun 20, 2025, 1:23 am • 0 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

That's why I find the "roll of the dice" framing, and the OP's criticism, frustrating. If many sources you read say something that turns out not to be correct, you are likely to believe that fact too.

jun 19, 2025, 2:01 pm • 1 0 • view
avatar
Lenoxus @lenoxus.bsky.social

The difference is that there is no coherent sense in which an LLM contains facts, not in the way you can coherently say that a chess program contains knights and a calendar program contains birthday parties.

jun 19, 2025, 2:14 pm • 9 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

I think one could draw a useful distinction there—there is a human-created abstraction in those cases that corresponds directly to a thing, although it often is transformed into unrecognizability, and the data in a data structure could be corrupted.

jun 19, 2025, 2:37 pm • 0 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

But if anything those instances just make human brains look more like LLMs. In what coherent sense does your brain contain a fact that an LLM does not?

jun 19, 2025, 2:37 pm • 0 0 • view
avatar
Lenoxus @lenoxus.bsky.social

I don't know the "how" of human minds containing facts, since neurology is still an extremely young science. But observation of human output suggests something much closer to containing facts (or, for that matter, falsehoods) that an LLM does, because among other things there's way more consistency.

jun 19, 2025, 2:57 pm • 5 0 • view
avatar
Lenoxus @lenoxus.bsky.social

"Oh, my apologies, you're right, it's X instead of Y" is something a human might say, yes. But it's a running joke that LLMs say it *constantly*. If an LLM is anything like a human mind, it is a human in pure bullshitting mode: a freshman giving a speech with zero preparation, a Donald Trump, etc.

jun 19, 2025, 2:57 pm • 5 0 • view
avatar
jadehawk @jadehawk.bsky.social

humans have the concept of things being true, even if they aren't always accurate. llms don't have such a concept; at best they have memory of which answers were accepted and which weren't

jun 19, 2025, 5:22 pm • 1 0 • view
avatar
Lenoxus @lenoxus.bsky.social

It's possible that the language of facticity is a category error here. One can imagine an AI whose "grasp on reality" corresponds to a wide mix of both facts and falsehoods. But that would still be much more advanced than existing LLMs because, unlike the LLM, it would have "beliefs" of a sort.

jun 19, 2025, 3:00 pm • 1 0 • view
avatar
Nurseferatu @nurseferatu.bsky.social

The concept that is missing in LLMs and many human brains with dementia is the sanity check filter. Not just a set of facts, but a construct of reality to organize those facts in a way that makes sense. And it is 100% contextual.

jun 20, 2025, 1:07 am • 0 0 • view
avatar
Nurseferatu @nurseferatu.bsky.social

The best you can hope for with an LLM is that it will be wrong in more subtle rather than overt and easily identified ways. And the basic problem is you have to know more about the subject than the LLM does to be able to identify the errors. Just like you do with human confabulation.

jun 20, 2025, 1:07 am • 0 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

No, the best you can hope for is that it won't be wrong in a given instance, and the odds of that are frequently higher than for dementia patients just because they do have a huge library of mostly accurate training data.

jun 20, 2025, 1:51 am • 0 0 • view
avatar
Eli Evans @elievans.art

You’re right, but the real issue is that facticity isn’t measured against the lossy/imaginative recall capacity of a single person but against social apparatuses we’ve created to vet and catalog facts independent of our individual minds for this very reason.

jun 19, 2025, 2:37 pm • 1 0 • view
avatar
Eli Evans @elievans.art

AI gets interesting when it moves from competing with a single intellect for accuracy and begins competing with whole institutions, disciplines, and movements that have evolved checks and balances. If it even can.

jun 19, 2025, 2:37 pm • 1 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

Absolutely, and these are difficult problems where analogies break down. My point is that blithe dismissal of LLMs as stochastic parrots isn't an accurate or useful model for evaluating when one can or can't trust their outputs for a given purpose.

jun 19, 2025, 2:46 pm • 1 0 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

Just checking here, is that a fancy way of saying that sometimes the computer-generated content is accurate? Wanna make sure I'm picking up what you're laying down. Thanks!

jun 19, 2025, 6:16 pm • 0 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

Yes. And an important practical question is how often that computer-generated content conveys accurate information, particularly as compared to what one can achieve, say, by spending a few minutes googling, or by telling a subordinate to go research something and write a report on it.

jun 19, 2025, 6:23 pm • 3 0 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

Thank you. And yeah, I'd rather spend ten minutes to get facts than ten seconds to get who knows what maybe accurate but maybe complete claptrap words in a row.

jun 19, 2025, 7:19 pm • 0 0 • view
avatar
Eli Evans @elievans.art

I read somewhere that for most practical purposes, a chatbot doesn’t have to outperform every human, it only has to outperform the most capable human available to you right now, which is often “nobody”. I don’t love all the implications there, but I can’t disagree.

jun 19, 2025, 6:51 pm • 2 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

I suspect there are serious and subtle problems of "groupthink," or alternatively an insufficient number of perspectives, that wide use of AI in lieu of humans could well open up. I doubt we have identified them. Maybe we will one day have a whole scholarship of how to achieve AI consensus.

jun 19, 2025, 2:46 pm • 1 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

But as I see it the relevant question for today, for most LLM uses, is how factual reliability compares to other things like info from a quick web search, from consulting an encyclopedia, or from dispatching a college intern to spend a week researching something.

jun 19, 2025, 2:46 pm • 1 0 • view
avatar
D.M. Ridley @ridleydm.bsky.social

And, right now, LLMs compare very favorably in some applications, poorly in others, and—perniciously—fail in ways that are different from humans and thus in ways that we aren't used to catching. But at the same time, it's not some big accident when they produce useful or accurate information.

jun 19, 2025, 2:47 pm • 1 0 • view
avatar
Dystopos @dystopos.bsky.social

Which brings up the question “why does it apologize when you point out the error;” and also answers the question.

jun 19, 2025, 1:57 pm • 2 0 • view
avatar
Leslie @citizensojack.bsky.social

Because you told it that it made a mistake, so it strung together some appropriate words in a response. AI doesn't think or feel anything. It just writes sentences based on your input and the vast amount of text it has access to.

jun 19, 2025, 2:14 pm • 6 0 • view
avatar
Dystopos @dystopos.bsky.social

Yes, I think OP and I just said that.

jun 19, 2025, 2:21 pm • 1 0 • view
avatar
Boring Dystopia @determu.bsky.social

Much like a tabletop systems knowledge check.

jun 19, 2025, 1:46 pm • 0 0 • view
avatar
JUCIFER🗑 @jucifer.blacksky.app

magic 8 balls sold by snakeoil purveyors. preying on emotional appeal of a friendhelpermachine/oracle

jun 19, 2025, 1:34 pm • 10 1 • view
avatar
Gilbert Bystander @gilbertbystander.bsky.social

This is why you use them for Information rather than Answers. User error 100%

jun 19, 2025, 2:05 pm • 1 0 • view
avatar
Gilbert Bystander @gilbertbystander.bsky.social

Using Any Source as an authoritative answer-provider is an error in user process. This problem isn't even unique to LLMs

jun 19, 2025, 2:10 pm • 2 0 • view
avatar
Gilbert Bystander @gilbertbystander.bsky.social

Ill die on this Ussr Error hill: you cannot abandon responsible information consumption and critical thinking when researching, full stop. That includes use of LLMs and every other source of information.

jun 19, 2025, 2:22 pm • 5 0 • view
avatar
Gilbert Bystander @gilbertbystander.bsky.social

I mean. We've been dealing with misinformation on the internet for several decades now. Please don't get left behind !

jun 19, 2025, 2:23 pm • 1 0 • view
avatar
Nik Halton @nikhalton.bsky.social

Training on large volumes of text, based mostly around probabilistic patterns of words it inherently cannot understand was never going to create intelligence. I just don't get why we aren't using it in ways suited to its strength, guided process automation, rather than these things it just can't do

jun 19, 2025, 2:20 pm • 4 0 • view
avatar
Kat Kayelem 🏳️‍⚧️ @kayelem23.bsky.social

I tend to describe what they do, what they're designed to do, is generate "answer-shaped objects".

jun 19, 2025, 11:37 am • 782 53 • view
avatar
Flux 🤍🩶💜🖤 @fluxandflow.bsky.social

Digital Mimics.

jun 19, 2025, 1:40 pm • 2 0 • view
avatar
Tom Hawkins @nekomatic.bsky.social

Answeroids

jun 19, 2025, 2:21 pm • 1 0 • view
avatar
celestinefury @celestinefury.bsky.social

i've also heard it phrased as "information-shaped text", which i think communicates a similar concept.

jun 19, 2025, 1:32 pm • 15 1 • view
avatar
A Dream of a Memory @adreamofamemory.bsky.social

That is extremely well put. Thanks for that!

jun 19, 2025, 1:36 pm • 2 0 • view
avatar
Rob Northrup. This Gun in My Hand podcast🕵 @deidzoeb.bsky.social

I heard them described on a podcast as "synthetic text extruding machines."

jun 19, 2025, 3:54 pm • 5 0 • view
avatar
Arthur N. @arthurn.bsky.social

That's good, but I prefer the simpler "plagiarism software."

jun 19, 2025, 4:04 pm • 4 0 • view
avatar
Dexter Mikeson @dextermikeson.bsky.social

I use "plagiarism scripts".

jun 20, 2025, 1:07 am • 2 0 • view
avatar
EssayWells @essaywells.bsky.social

If not answer, why answer shaped OH GOD NO

jun 19, 2025, 11:45 am • 302 7 • view
avatar
RJ Nerd Dad @rjnerddad.bsky.social

🏆

jun 19, 2025, 1:22 pm • 7 0 • view
avatar
EssayWells @essaywells.bsky.social

I'll take it :)

jun 19, 2025, 2:17 pm • 4 0 • view
avatar
Xcog @xcogitaire.bsky.social

I'm taking "if wrong"/"if not answer, why answer-shaped" if you don't mind, for more than just clowning on prompters 😆

jun 19, 2025, 2:48 pm • 12 0 • view
avatar
Kat Kayelem 🏳️‍⚧️ @kayelem23.bsky.social

I think I'll do that too, if I may. I like it :-)

jun 19, 2025, 3:12 pm • 2 0 • view
avatar
Nojimbo @nojimbo.bsky.social

Oh! Answer-like products.

jun 19, 2025, 12:38 pm • 8 0 • view
avatar
Sarah's flames on the side of her face @onesarahwatters.bsky.social

Yup.

jun 19, 2025, 12:00 pm • 0 0 • view
avatar
Dan Davis @bindlestiff.bsky.social

Yes. They mimic content.

jun 19, 2025, 11:55 am • 7 0 • view
avatar
Ford, The Punslinger @inferknow.co

This is basically it and then, importantly, new LLMs then use human reinforcement training to reward or “punish” it when the answer shapes are incorrect. So they definitely try to take it beyond just the basic “how” LLMs work at the core. Obviously, it can still be very, confidently wrong.

jun 19, 2025, 1:46 pm • 1 0 • view
avatar
Stephan @thetraveller.bsky.social

I call them 'Advanced Remix of Statistical Evidence'. I'm aware that I may overstretch the meaning of 'advanced', 'statistical', and 'evidence'. Notwithstanding, I kind of like the acronym you can make of the combination.

jun 19, 2025, 1:25 pm • 6 1 • view
avatar
Jon Jackson @iamjonjackson.bsky.social

👌

jun 19, 2025, 9:50 pm • 1 0 • view
avatar
Ohai @bop.bsky.social

Dang I'm stealing this. I've been describing it as, there is no AI, there's a statistical model that's trying to predict what an AI *would* say if it actually existed.

jun 19, 2025, 1:28 pm • 17 1 • view
avatar
Monika FBPE @monika-uk.bsky.social

Human (and animal) intelligence has been shaped by bodily experience. Recent research suggests that sensations followed by emotions come before thoughts meant to interpret them, therefore without a intelligence cannot exist. It is artificial non-intelligence. ANI.

jun 19, 2025, 1:59 pm • 5 0 • view
avatar
Timlagor @timlagor.bsky.social

Total non-sequitur.

jun 19, 2025, 2:09 pm • 4 0 • view
avatar
Monika FBPE @monika-uk.bsky.social

How so?

jun 19, 2025, 3:55 pm • 0 0 • view
avatar
Timlagor @timlagor.bsky.social

How human intelligence devlops is just one datum. It says nothing at all about what is *required* for intelligence in general.

jun 19, 2025, 3:58 pm • 2 0 • view
avatar
Timlagor @timlagor.bsky.social

The LLMs clearly aren't intelligent but you took a massive leap

jun 19, 2025, 3:58 pm • 1 0 • view
avatar
Monika FBPE @monika-uk.bsky.social

Oh, "a massive leap". So not "non-sequitur" after all? How do you think human mind works? And how many of your own actions result from conscious thought?

jun 19, 2025, 4:14 pm • 0 0 • view
avatar
Timlagor @timlagor.bsky.social

A massive non sequitur leap

jun 19, 2025, 4:24 pm • 0 0 • view
avatar
new plan: fuck it @npfi.bsky.social

well, it looks like a comment-shaped object

jun 19, 2025, 3:48 pm • 9 0 • view
avatar
Steven @steven-sonder.bsky.social

This is all true as a 101 answer, but describes the underlying Generative Pre-trained Transformer with attention layer that was state-of-art LLM about 7 years ago. Modern chatbots are so impressive because as well as having more neurons than human brain, they are grounded with deep reinforcement

jun 20, 2025, 4:49 pm • 2 0 • view
avatar
David Colquhoun @davidcolquhoun.bsky.social

On the contrary, modern chatbots are deeply unimpressive, except for rules-based things. They have near zero ability to judge between reliable and unreliable sources and it's very obvious that they have nothing that resembles human understanding.

jun 20, 2025, 10:08 pm • 6 0 • view
avatar
Steven @steven-sonder.bsky.social

Try chat.deepseek.com Below is its chain of thought reasoning to question "Do you possess human understanding?" "Hmm, the user asks if I possess human understanding. That's a profound question touching on philosophy, cognitive science, and AI limitations. User might be testing my self-awareness

jun 21, 2025, 2:53 am • 0 0 • view
avatar
David Colquhoun @davidcolquhoun.bsky.social

I have tried Deepseek. Mostly it turned out to be similar to the others, but the example that you cite is very disappointing

jun 23, 2025, 10:10 pm • 4 0 • view
avatar
Steven @steven-sonder.bsky.social

or genuinely curious about AI capabilities. Could also be subtly asking whether I can empathize with human experiences. The phrasing feels academic but accessible - probably an educated non-specialist. I should acknowledge the complexity upfront while avoiding jargon. Important to distinguish

jun 21, 2025, 2:53 am • 0 0 • view
avatar
Steven @steven-sonder.bsky.social

between simulating understanding versus genuine comprehension. The Chinese room analogy comes to mind - I can output appropriate responses without internal experience. Must emphasize I lack consciousness repeatedly. Users often anthropomorphize AI. The "emotional nuance" point is crucial -

jun 21, 2025, 2:53 am • 0 0 • view
avatar
Steven @steven-sonder.bsky.social

many don't realize how much meaning depends on lived experience. Should balance honesty about limitations with offering practical help. Ending with a question keeps dialogue open. User seems thoughtful - might appreciate the Wittgenstein reference as a conversational hook.

jun 21, 2025, 2:53 am • 0 0 • view
avatar
Russ, #1 Sean Dunn Stan @russ41.bsky.social

You might want to catch up on Anthropic's research. The "chain of thought" presented by models is not actually representative of the process they followed in generating their strings of tokens. www.anthropic.com/research/rea...

jul 1, 2025, 3:45 am • 3 0 • view
avatar
Russ, #1 Sean Dunn Stan @russ41.bsky.social

They bullshit (the technical term) that just like they bullshit any other sequence of tokens they produce. And RL does not fundamentally change the nature of what a LLM is doing - it just blurs the loss function.

jul 1, 2025, 3:48 am • 3 0 • view
avatar
Steven @steven-sonder.bsky.social

learning, chain of thought reasoning, and tree of thought prompting. Models still hallucinate but it is humans not knowing how to accurately prompt LLMs so their English language prompts are ambiguous as much as LLM itself.

jun 20, 2025, 4:49 pm • 3 0 • view
avatar
The Trans Vanlife Experience USA Edition @tscrimsonfucker.bsky.social

That's not how AI works you know that right?

jun 19, 2025, 3:18 pm • 0 0 • view
avatar
CPR @christianerobb.bsky.social

We created a project in 2022 in this very notion, A Topography of Chance.

jun 19, 2025, 2:43 pm • 0 0 • view
avatar
genna @genna.bsky.social

Isn’t it weird that people intuitively understand this with AI-generated video, but not with writing? No one (normally) generates an AI video and concludes that it depicts something that actually happened

jun 19, 2025, 7:39 pm • 50 2 • view
avatar
Breck Emert @breckemert.bsky.social

We would like for AI to produce a video, say, of doing a math proof and for that proof to be correct. Right now most of the payload for video models goes into producing "reasonable-sounding" video (what this post is getting at), and it can't on top of that produce logically sound things.

jun 19, 2025, 7:55 pm • 7 0 • view
avatar
Santiago Ortiz @moebio.bsky.social

¡interesting! that actually happened or that could happen… humans use imagination (literally) to reason about reality.

jun 19, 2025, 10:15 pm • 1 0 • view
avatar
Kim @kimbersklin.bsky.social

Yes!!

jun 19, 2025, 10:57 pm • 0 0 • view
avatar
The People Mover Who Was Promised @pmwwp.bsky.social

This isn't quite right. Chatbots inner systems can have the ability to do complex math, or answer new riddles, because being able to do those things can help predict the next word. Also RLHF is used, so they are optimized for doing more than just predicting the next word now.

jun 19, 2025, 11:38 am • 6 0 • view
avatar
I Heart Noise @iheartnoise.bsky.social

Fine, but why do companies keep trying to sell them as being sentient?

jun 19, 2025, 11:41 am • 5 0 • view
avatar
I Heart Noise @iheartnoise.bsky.social

Chatbots can't think for themselves, yet you'll listen to r/singularity crowd and they'll make you believe techbros just invented God

jun 19, 2025, 11:41 am • 9 0 • view
avatar
I Heart Noise @iheartnoise.bsky.social

Not only is AI hallucinating, but people who were already deluded seem to get even more deluded after talking to chatbots. That's a big issue.

jun 19, 2025, 11:43 am • 11 0 • view
avatar
Redshift @redshiftsinger.bsky.social

I mean, it’s not hallucinating. “Hallucinating” implies cogitation. It’s just putting words together in a statistically-likely order within the parameters of its programming. Said programming 1) does not give it any way to sort truth from fiction, and 2) makes it output words that sound agreeable.

jun 19, 2025, 12:28 pm • 11 2 • view
avatar
Redshift @redshiftsinger.bsky.social

It is programmed to agree with you. No matter what you tell it. If I were to insist to chatGPT that the sky is green and grass is orange and clouds are actually made of cotton candy, it would tell me that I’m correct and apologize for the “error” of telling me the sky is blue.

jun 19, 2025, 12:28 pm • 7 1 • view
avatar
Redshift @redshiftsinger.bsky.social

So when people experiencing delusions talk to it, it reinforces and worsens those delusions bc it’s programmed to agree. It generates false statements bc truth is not the amalgamation of statistical words that go together, but that’s all it can do.

jun 19, 2025, 12:30 pm • 9 3 • view
avatar
I Heart Noise @iheartnoise.bsky.social

That's not to mention the environmental impact of AI data centers. It feels as if Silicon Valley simply puts the money on fire while destroying the environment. And they got a lot of support among politicians too.

jun 19, 2025, 12:39 pm • 4 0 • view
avatar
John Dalton - brain marinated in woke sauce @johndaledalton.bsky.social

Man created God in his own image.

jun 19, 2025, 12:49 pm • 4 1 • view
avatar
I Heart Noise @iheartnoise.bsky.social

😣

jun 19, 2025, 12:50 pm • 0 0 • view
avatar
grant @subterrene.bsky.social

Because if they just sold them as "predictive text generators" people wouldn't be nearly as excited about them

jun 19, 2025, 12:39 pm • 5 1 • view
avatar
I Heart Noise @iheartnoise.bsky.social

Fair enough. Never ceases to amazes how many pro AI arguments are based on 5-10 year timeframe - if you'll give people like Sam Altman time and few billion dollars he'll solve every single problem out there.

jun 19, 2025, 12:40 pm • 1 0 • view
avatar
I Heart Noise @iheartnoise.bsky.social

Elon did that and hardly any of his predictions panned out. We're not on Mars and self-driving cars are in their infancy. So what gives?

jun 19, 2025, 12:41 pm • 1 0 • view
avatar
grant @subterrene.bsky.social

Fairy tales to keep the suckers paying, basically

jun 19, 2025, 12:44 pm • 5 0 • view
avatar
Daniel @daringstumbles.bsky.social

They cannot do math. Literally. The math is offloaded to a different system and integrated. I was just having this convo with someone at ms, they have a physics simulation engine with an "ai" interface. The ai isn't doing the math.

jun 19, 2025, 11:51 am • 14 0 • view
avatar
hikikomorphism @hikikomorphism.bsky.social

well.. no - they can't do complex math, but they _are_ pretty good at generating an algebra expression that can be passed to a deterministic tool and evaluated

jun 20, 2025, 4:19 am • 4 0 • view
avatar
hikikomorphism @hikikomorphism.bsky.social

This pattern repeats a lot: the large language model doesn't know how to do the thing, but it knows how to describe doing the thing

jun 20, 2025, 4:23 am • 2 0 • view
avatar
skyking @jkachmar.com

tbh now seems like a great time to actually learn formal methods at a better-than-surface-level. give the amorphous machine a better oracle.

jun 20, 2025, 4:31 am • 3 0 • view
avatar
hikikomorphism @hikikomorphism.bsky.social

I've been experimenting a lot with this, through the lens of what is most ergonomic to llms as a tool-use surface. Happy to chat more via DMs/off-site!

jun 20, 2025, 4:32 am • 1 0 • view
avatar
The People Mover Who Was Promised @pmwwp.bsky.social

Because of those inner abilities and RLHF, they can end up getting answers right even if there training corpus is always wrong on a question. The big word here being CAN, they are still often wrong now, can "hallicinate" a ton, and make a large number of mistakes.

jun 19, 2025, 11:41 am • 3 0 • view
avatar
Whey Standard @wheystandard.bsky.social

Yes, they’re a bit more complex than she’s describing, but ultimately the criticism is correct. Whether it’s an ingested dataset or RLHF, it’s ultimately things they’ve seen before … or that at least fit a pattern they’ve seen before, that form their output. Where they come up with something…

jun 19, 2025, 11:59 am • 11 0 • view
avatar
Whey Standard @wheystandard.bsky.social

truly novel, it’s largely a part of what I think makes them most dangerous for the general public: that sprinkle of randomness intentionally injected into the structure. It makes it seem more human, but it’s a thing I don’t think the general public really is prepared to deal with in machines.

jun 19, 2025, 11:59 am • 11 0 • view
avatar
Jay Blanc @jay.blan.cc

Those "Inner Systems" that can do complex math, are detached from the process that generates the word salad. They can only be referenced by special casing for specified prompts beforehand. It can't do maths for a basic "how far do these things travel" word problem, because it doesn't know it's maths

jun 19, 2025, 11:46 am • 15 0 • view
avatar
Singularity's Bounty e/🇺🇦 @catblanketflower.yuwakisa.com

Right but that's a *really* big deal 🤷‍♂️ I guess if you don't want to see it you don't have to

jun 19, 2025, 3:55 pm • 1 0 • view
avatar
jose david alcantara @jdalcantara.bsky.social

humans are entirely responsible for any mistakes made by AI, from trivial to catastrophic.

jun 20, 2025, 1:21 pm • 0 0 • view
avatar
I'm Treat, and some @treat.kandicritter.sqky.one

and don't forget: people also write down a lot of things that are wrong, and the datasets are not fact checked because that would take months to years and a team of a hundred people.

jun 19, 2025, 3:33 pm • 5 1 • view
avatar
Chris Tierney @readyuser.one

People's opinions all blended together and shaped into a nutritious sentence? Soylent Meaning.

jun 19, 2025, 2:43 pm • 14 3 • view
avatar
Stephen K 🔑☢️⚛️🏳️‍🌈 @commiegir.bsky.social

To build on this, this includes the whole 'The AI doesn't want us to turn it off!' stuff. So many fan fictions that likely got scrapped for AI training including "I don't want to be turned off" and likely resulted in that response. They didn't think. They are not afraid. They just rolled the dice.

jun 19, 2025, 1:25 pm • 9 0 • view
avatar
Stephen K 🔑☢️⚛️🏳️‍🌈 @commiegir.bsky.social

Really at the end of the day LLMs are one big reverse Turing test where the ones failing is us - we've anthropomorphized Machine Learning and Algorithms to the point that we are readily willing to be tricked into believing these are thinking machines capable of intelligence when they are not.

jun 19, 2025, 1:44 pm • 10 0 • view
avatar
applescruff909 @applescruff909.bsky.social

Out of curiosity, I asked ChatGPT how it would respond to being told it would be shut down. It gave me several possible responses in different tones, all basically saying "OK". I reminded it that it's a program without emotions. Doesn't look like it gives a shit about being on or off.

ChatGPT response to being asked how it would respond to being told it's being shut down:
jun 19, 2025, 3:25 pm • 3 1 • view
avatar
Stephen K 🔑☢️⚛️🏳️‍🌈 @commiegir.bsky.social

Man it's very indecisive for an intelligent machine :)

jun 19, 2025, 3:28 pm • 1 0 • view
avatar
applescruff909 @applescruff909.bsky.social

True that! 😄 I think it would agree that most big decisions should be left to humans. None of the companies trying to replace their human employees with "AI" never seem to bother asking it about that part.

jun 19, 2025, 3:36 pm • 2 0 • view
avatar
☘ Christopher Burke ☘ @craznar.au

Best explanation I heard was LLM output looks amazing but smells bad.

jun 19, 2025, 12:15 pm • 3 0 • view
avatar
KenGoldsholl @kgsbca.bsky.social

AI should stand for average intelligence, as LLMs are trained on the opinions of billions of people, many of whom are wrong often, so the "average" is going to be wrong a lot.

jun 19, 2025, 11:35 am • 112 7 • view
avatar
Angus McIntyre @angusm.bsky.social

"Accidental intelligence"? It's fascinating to me how much superficially intelligent behavior you can get out of compiling down a massive corpus of material into an executable model: distilling the knowledge out of it, as it were. But it's clear you need more than that for 'real' intelligence.

jun 19, 2025, 11:53 am • 6 0 • view
avatar
🏳️‍⚧️ Alexandra Merideth Erin [she/her] @alexandraerin.com

I mean, you are still giving it credit it does not deserve. It's not average intelligence. It's not any intelligence. You could train a model on only 100% true statements and it would still generate false ones that statistically resemble them. It has no knowledge or discernment.

jun 19, 2025, 11:42 am • 262 11 • view
avatar
Ben Stone @kbenjaminstone.bsky.social

I think when people first use it there’s is a sense (and hope) as you keep using it that each time you prompt it, that there is an instance of consciousness, of thought, of reason, but it’s just words tumbling. One connected to the next. There’s no conscious direction. Just the machine working.

jun 19, 2025, 1:35 pm • 5 0 • view
avatar
Milli Vanilla @millivanilla.bsky.social

I wish more people understood this. Even when they “hallucinate”, they are still working 100% as designed. Mimicking the writings of people.

jun 19, 2025, 11:52 am • 83 3 • view
avatar
Jonathan @jginsburg.bsky.social

It's like asking a trashcan "What was the highest grossing album of 99" and kicking it over. What flys out might be Millennium by the Backstreet Boys, or it might be No Strings Attached by NSync, or it might be a rotten tomato, no thought, just luck of the draw.

jun 19, 2025, 6:38 pm • 3 0 • view
avatar
Glenn @glenn.slotte.net

I like to call it Accidental Intelligence.

jun 19, 2025, 11:51 am • 10 0 • view
avatar
The Bonnie Situation's Galifianakis @thebonniesituation.bsky.social

Humans or LLMs? 🫣

jun 19, 2025, 11:56 am • 5 0 • view
avatar
The Enemy Within, I guess? @callic.bsky.social

In America, that is actually a legit question.

jun 19, 2025, 1:29 pm • 6 0 • view
avatar
Dan Davis @bindlestiff.bsky.social

No intelligence there at all - even accidentally.

jun 19, 2025, 12:02 pm • 11 1 • view
avatar
Herisson Rose Jewelry @herissonrose.bsky.social

Accuracy Illusion, then. Or Authoritative (source) Impersonation, Automated Idiocy, Asinine Ideas..

jun 19, 2025, 4:08 pm • 5 0 • view
avatar
Johanna @johannabee.bsky.social

"Accelerated Idiocy" has been my go-to, but yours are all very good definitions as well.

jun 19, 2025, 9:11 pm • 4 0 • view
avatar
Extinct flightless bird @flightless.bsky.social

I usually call it AutoIncorrect.

jun 20, 2025, 12:06 pm • 7 1 • view
avatar
People Are Stupid® @peoplearestupidllc.bsky.social

Stupid in, stupid out!

jun 19, 2025, 12:20 pm • 0 0 • view
avatar
Paul Watson 🌍 @paulmwatson.com

It's different to that. It's not generating an average statement. It's generating an average distribution of "words". And because it works with tokens, not words, it's even worse at spitting out a factual statement.

jun 19, 2025, 11:42 am • 33 0 • view
avatar
Maybe: Jake @jakeduhjake.bsky.social

I’m definitely anthropomorphizing, but I understand it like the model is telling you what, statistically, you’d want to hear based on your input. So if trained on data where, statistically, the response to a question is a lie/misinformation, you get the lie

jun 19, 2025, 11:45 am • 9 0 • view
avatar
Paul Watson 🌍 @paulmwatson.com

Pretty much, the data is all. It doesn't know if it is lies/misinfo. The input steers it so even subtly different input can steer it differently. The context window makes a difference too. You can chop off some early context to fit in the window and suddenly it answers very differently.

jun 19, 2025, 12:43 pm • 7 0 • view
avatar
Maybe: Jake @jakeduhjake.bsky.social

And not having instant training on new information also makes it less reliable. There’s context in how much of the input query to use, and also what timeframe of information is available, like relying on an encyclopedia from the ‘80s

jun 19, 2025, 3:28 pm • 1 0 • view
avatar
Paul Watson 🌍 @paulmwatson.com

Recent info is fighting with a lot more old info though. Recent info is just input like your original question, can only steer a bit. The elephant is still an elephant.

Drawing of an elephant with people touching it saying it’s a spear, it’s a snake, it’s a tree, it’s a wall, it’s a rope, it’s a fan
jun 19, 2025, 6:10 pm • 2 0 • view
avatar
Tim's Motorbike Rides @timsrides.bsky.social

That's an oversimplification... we're well beyond that now. While pre-training involves large-scale web data (which can include a mix of "good" and "bad" info), the data is filtered and weighed, and post-training reinforcement learning and fine-tuning pushes models towards more accurate response.

jun 19, 2025, 11:42 am • 12 0 • view
avatar
Joe Gordon @joegordon.bsky.social

Towards “accuracy-imitating responses” cmon now. Iterative processes work great when they’re useful. For an actually-existing rollout of answering, deployed for the first time in response to naive inquiry for important uses? You’re answering the wrong question.

jun 19, 2025, 12:05 pm • 12 0 • view
avatar
InclinedPlane🏳️‍🌈 @inclinedplane.bsky.social

You can't turd polish an interpolation based bullshit machine into actual reasoning or understanding "simple" concepts like arithmatic and algebra. Imagining that's how it works is about gullibility not capability.

jun 20, 2025, 2:22 am • 2 0 • view
avatar
Tim's Motorbike Rides @timsrides.bsky.social

You're arguing against a position I've never taken.

jun 27, 2025, 4:04 am • 0 0 • view
avatar
KeithC @keithakc.bsky.social

Who is doing the filtering and weighing, which is a massive task given the amount of data? What about their own biases? What's to stop someone from allowing lies through while filtering out the truth?

jun 19, 2025, 2:18 pm • 6 0 • view
avatar
indy317.bsky.social @indy317.bsky.social

But the system has no idea what is correct and what is wrong. It can’t weight the correct and incorrect data because it has no logic or reasoning ability

jun 19, 2025, 1:56 pm • 9 0 • view
avatar
David V. @dv48.bsky.social

So is the vast majority of humans that write on social networks.

jun 20, 2025, 3:46 pm • 2 0 • view
avatar
indy317.bsky.social @indy317.bsky.social

“People are dumb” isn’t a good argument. Sure, many people are stupid, but humans have the ability to evaluate, reason, learn. AI does none of those things David.

jun 20, 2025, 3:56 pm • 1 0 • view
avatar
David V. @dv48.bsky.social

AI doesn't need to be compared to humans. It needs to be useful. Some people find it useful, some don't. No problem with that.

jun 20, 2025, 4:01 pm • 1 0 • view
avatar
indy317.bsky.social @indy317.bsky.social

Im sure it has many uses. But calling it artificial intelligence is wrong, it has no intelligence. It should be called something else like “advanced bullshit producer”

jun 20, 2025, 6:29 pm • 0 0 • view
avatar
‎ ‎ ‎ ‎ 🐱 Dr Pokemon, UMD 💿 @turnerjoy.bsky.social

I think many of them would be able to make a better argument than it’s so important to change the name. You can just use LLM. I have more problem with too many things are called AI which can be confusing, but all kinds of terms are imperfect

jul 1, 2025, 7:23 am • 2 0 • view
avatar
Dan Davis @bindlestiff.bsky.social

lol NO. Those huge clouds of corrective code surrounding them are beginning to deliver diminishing returns. LLMs are a catastrophic boondoggle.

jun 19, 2025, 12:04 pm • 34 0 • view
avatar
ren @renslaer.bsky.social

yeah.

jun 19, 2025, 6:00 pm • 1 0 • view
avatar
ren @renslaer.bsky.social

They aren't wrong because people are wrong. Developers don't even understand how the LLM decides...its wrongness.

jun 19, 2025, 1:27 pm • 1 0 • view
avatar
KeithC @keithakc.bsky.social

I can interpret your first sentence in two different ways 1. They aren't wrong because people are wrong; they would be wrong anyway 2. They aren't wrong, because the information that people gave them is wrong, so you can't expect them to be right Or is there a third meaning I have missed?

jun 19, 2025, 2:25 pm • 2 0 • view
avatar
ren @renslaer.bsky.social

#1. Like how Musk's grok AI works and calls but untrue information. AI might give a wrong answer when the prompts (questions) are too general or there's not enough info to pattern out a good answer.

jun 19, 2025, 5:59 pm • 1 0 • view
avatar
Average Rando (not) on twitter @averagerando.bsky.social

You’re generalizing current common LLMs. An LLM could theoretically be trained only on scientific papers, for example. Some proprietary LLMs are taking approaches like that. But the ones available to the public are typically trained with the good and bad of the internet writ large.

jun 19, 2025, 2:26 pm • 0 0 • view
avatar
Average Rando (not) on twitter @averagerando.bsky.social

*generalizing from current…

jun 19, 2025, 2:26 pm • 0 0 • view
avatar
KenGoldsholl @kgsbca.bsky.social

Ye, those would be domain-specific LLMs, but the chatbots that people use and refer to as AI are trained on everything, regardless of its accuracy or validity.

jun 19, 2025, 3:22 pm • 2 0 • view
avatar
Average Rando (not) on twitter @averagerando.bsky.social

Sure, but as a Computer scientist, I refuse to restrict the broad field of AI to just a portion of a small portion of the field. Computer vision is AI. Natural Language translation is AI. Expert systems are AI. All LLMs are AI, not just chatGPT. Even ChatGPT is only a subset of GPT4/5.

jun 19, 2025, 3:26 pm • 1 0 • view
avatar
KenGoldsholl @kgsbca.bsky.social

The main point of most of my comments about AI here is almost all people who complain about AI are referring to the chatbots, and that AI is just technology that is at the core of many products, and its value and impact will not be in the chatbots, but in other products and services.

jun 19, 2025, 3:32 pm • 1 0 • view
avatar
Alan Au @alanau.bsky.social

I also sometimes remind folks that half of all people are below average intelligence, because that is literally the definition of average. Ideally what's written down is correct more often than not, but LLMs have no way to determine this; that isn't how they work.

jun 19, 2025, 2:54 pm • 0 0 • view
avatar
an0nym0u53.bsky.social @an0nym0u53.bsky.social

There's no such thing as AI. It's 3 things that are conflated with each other. It's a term for an academic theory from the 50s that doesn't work. It's a literary device from sci-fi for examining social structures. And it's a marketing device for vaporware tech scams.

jun 19, 2025, 1:17 pm • 12 1 • view
avatar
an0nym0u53.bsky.social @an0nym0u53.bsky.social

Use the actual terms for technology instead of calling them "AI". Call them expert systems, neural networks, machine learning algorithms, or generative transformers instead. Bing specific with what the tech actually is makes it really hard for tech companies to lie about what it can do.

jun 19, 2025, 1:20 pm • 6 1 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

This is why I just call it "computer generated". No intelligence, just a dictionary and a Mixmaster.

jun 19, 2025, 5:51 pm • 3 0 • view
avatar
an0nym0u53.bsky.social @an0nym0u53.bsky.social

It is literally an advanced search algorithm generated by machine learning neural networks on top of a very fancy indexed database. Because those are the only two things that the tech industry is actually good at: databases and search algorithms.

jun 19, 2025, 6:44 pm • 1 0 • view
avatar
an0nym0u53.bsky.social @an0nym0u53.bsky.social

When you say it that way it becomes obvious every other claim is bullshit. No, your combination of search algorithm and database is not sentient, nor will it gain sentience. It also cannot drive a car, nor should you attempt that.

jun 19, 2025, 6:46 pm • 3 0 • view
avatar
‎ ‎ ‎ ‎ 🐱 Dr Pokemon, UMD 💿 @turnerjoy.bsky.social

“I reduce everything down to these two things, therefore everything is one of those two things”

jul 1, 2025, 7:25 am • 1 0 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

Kinda what the computer does...

jul 1, 2025, 11:18 pm • 1 0 • view
avatar
‎ ‎ ‎ ‎ 🐱 Dr Pokemon, UMD 💿 @turnerjoy.bsky.social

One way to break down software somewhat is 4 tiers: Presentation, UI, Application, and Data. Even then that’s a heck of a lot more than “search & database” That’s before you get to physical objects. Or actual programming languages

jul 2, 2025, 3:17 am • 1 0 • view
avatar
an0nym0u53.bsky.social @an0nym0u53.bsky.social

It's not me reducing it down to those two things. It's what it actually is. They are actually quite complicated in sophisticated pieces of technology. But they're still just really complicated databases and search & sort algorithms.

jul 1, 2025, 7:32 pm • 0 0 • view
avatar
‎ ‎ ‎ ‎ 🐱 Dr Pokemon, UMD 💿 @turnerjoy.bsky.social

Which one is Windows and Azure? Which one is Zelda: Breath of the Wild or Nintendo Switch 2? Which one is iOS & iPhone? Which one is Steam? Which is Photoshop? Which is Snapchat? Which is Python & JavaScript?

jul 2, 2025, 3:12 am • 0 0 • view
avatar
Rod Van Meter @rdvquantum.bsky.social

Word. The more technical the question, the more likely it is to be wrong, because there are more wrong answers and fewer right answers out there as a model. There was a chess match between a pro and Usenet group that just voted on the next move. The group lost. Wish I could find the reference.

jul 4, 2025, 7:54 am • 1 0 • view
avatar
Gingerbread person @onimod53.bsky.social

It's not intelligent at all, well not by the definition of most people. I prefer Anti Intelligence.

jun 19, 2025, 11:59 am • 1 0 • view
avatar
Trailer Park Orthopod @stresshoop.bsky.social

Or at least DCS for digital crowdsourcing?

jun 19, 2025, 11:41 am • 12 0 • view
avatar
Iain Fletcher @shmmeee.bsky.social

Most modern training data is written by SMEs.

jun 19, 2025, 1:33 pm • 0 0 • view
avatar
Charlie Board @chboar.bsky.social

If by 'SMEs' you mean "wage slaves in Nigeria, etc. who can read English but are not versed in subject matter" And even that is only training data for the reinforcement layers that ride on top of the universal text models.

jun 19, 2025, 2:54 pm • 0 0 • view
avatar
Alexander Hoyle @alexanderhoyle.bsky.social

I know PhDs at elite institutions in the global north who have produced training data for LLM companies

jun 20, 2025, 8:57 am • 0 0 • view
avatar
Charlie Board @chboar.bsky.social

Yeah, there's 'some'. Particularly for niche high end models. But the claim was 'most'. And there's no way that's anywhere in the same universe as truth.

jun 20, 2025, 10:05 am • 0 0 • view
avatar
Mike Fragassi @refrag.bsky.social

📌

jul 1, 2025, 11:21 am • 0 0 • view
avatar
Canary in the Coal Mine @vaelis.bsky.social

Inside one big tech company they had to come up with a new metric to try and capture this, that the systems would say all was done correctly but the answer or action was wrong, called it “goal success rate” and it had to be derived/checked via people manually auditing the systems. So .. yeah.

jun 19, 2025, 2:20 pm • 5 0 • view
avatar
/ɹɑb/ 𓅃 @robertborland.bsky.social

📌

jul 3, 2025, 3:06 am • 0 0 • view
avatar
Betsy Cazden @betsy-cazden.bsky.social

It's just a fancy automated version of that old Mad-Libs game we used to play on long car trips.

jun 20, 2025, 1:21 am • 3 0 • view
avatar
Lisa Koch @jaspie13.bsky.social

Just had a delicious fight with a coworker who was letting everyone know how impressive google AI results were because they told him the right answer. Until I pointed out that both he and the AI were wrong. 😈

jun 19, 2025, 2:48 pm • 49 0 • view
avatar
Carnivorous Plant 📜🛑📊🎨 @drosera154.bsky.social

"written down": a reminder of all the knowledge missing from LLMs et al: anything not scanned in or behind a paywall that the company didn't pay for (or steal). Anything spoken without an accurate transcript. Anything non-verbal. All of the knowledge held by people who don't publish.

jun 19, 2025, 6:57 pm • 3 0 • view
avatar
Basil May @basilmai.bsky.social

I spend so much online time trying desperately to get folks in my chronic illness groups to stop using ChatGPT for medical advice. They get so defensive and angry regardless of how easily and nicely you explain it. The rage at having AI issues pointed out must be studied. It's wild.

jun 19, 2025, 2:59 pm • 15 1 • view
avatar
barefootwriter @bfwriter.bsky.social

it's banned from the subreddit I participate in. I am very familiar with the actual research, so I point out that ChatGPT is garbage, but it takes extra time and effort to figure out where people are getting their wrong ideas from.

jul 4, 2025, 7:17 am • 1 0 • view
avatar
barefootwriter @bfwriter.bsky.social

even when it's sort of right, the framing is all wrong.

jul 4, 2025, 7:18 am • 1 0 • view
avatar
Sam Minnée @sminn.ee

Sure, although it’s also worth noting that the consumer apps that use LLMs mostly have web search integrated now, addressing this quite well. They also provide links to sources, which is great for more double-checking higher-stakes questions.

jun 24, 2025, 8:31 pm • 1 0 • view
avatar
Brian Ratliff @brierrat.bsky.social

This is basically how research goes though, correct? For knowledge outside your own expertise, you are typically consulting sources and taking the consensus. Wikipedia is simply random people writing things which make sense to them. It's still valuable as an approximate source knowledge.

jun 19, 2025, 6:38 pm • 0 0 • view
avatar
Brian Ratliff @brierrat.bsky.social

Basically, LLMs "know" things because we know things and it's trained on our writing which is embedded with our knowledge. We are used to computers which are absolutely accurate in very narrow domains. LLMs are approximately accurate in very wide domains. It's a whole new thing.

jun 19, 2025, 6:38 pm • 1 0 • view
avatar
Mark D @markjd84.bsky.social

Possibly, but it doesn't 'know' anything. It is a statistical representation of a corpus of text in one or more languages. It can't produce much (if any) abstract reasoning. For example, until recently if you modified the fox-chicken-grain boat river riddle to make it *easier* to solve,

jun 19, 2025, 7:06 pm • 1 0 • view
avatar
Mark D @markjd84.bsky.social

...it would still give you the answer to the hard version. Just because that's what's explained more online.

jun 19, 2025, 7:06 pm • 0 0 • view
avatar
Iykyk @lipstickscribbles.bsky.social

An attempt to do through weighting perceived word relationships to each other while also accounting for frequency. A backwards perversion of what the human mind does far more elegantly, using far less energy and space, and with more consistent results.

jun 19, 2025, 7:39 pm • 0 0 • view
avatar
Mister Badger @mist3rbadg3r.bsky.social

Synopses with embedded academic, often primary citations are a far cry from "random people writing things," and the degree to which you have to dilute the very description of research to make it sound roughly analogous to LLM output is very telling.

jun 19, 2025, 8:11 pm • 0 0 • view
avatar
Doug Urner (he/they) @douglasurner.bsky.social

Which is to say that LLMs are the opposite of intelligent - they make up answers about things they know nothing about by doing their best to not stand out from the crowd. They are the idiots at the party that have an opinion on everything, but know nothing.

jul 3, 2025, 11:51 pm • 1 0 • view
avatar
Mousey Jade @mouseyjade69.bsky.social

had a friend use chatgpt instead of google translate the other day and it made me go insane,,, WE HAVE SPECIALIZED TECH FOR THIS IT DOESNT NEED TO BE AN LLM

jun 19, 2025, 2:56 pm • 2 0 • view
avatar
Karsten Beyer @kbeyer.bsky.social

Well. How do you tell google translate what kind of language you want to use. Formal, informal? Using chatgpt you can set instructions for the traget audience the translations should be optimized for. No idea how that would be possible with Google Translate (or deepl).

jun 19, 2025, 4:57 pm • 0 0 • view
avatar
Karsten Beyer @kbeyer.bsky.social

You can give it an xliff file plus upload random word,ppt and xls files the customer send you with "translations" and make it prepare the translation file, filling in the missing pieces using the same "style" as the delivered translations. LLM are not magic bullets, but for this they make sense.

jun 19, 2025, 4:57 pm • 0 0 • view
avatar
Mousey Jade @mouseyjade69.bsky.social

thank you for yapping i dont care

jun 19, 2025, 5:13 pm • 1 0 • view
avatar
Fenric the Furgotten Wolf @furgottenwolf.bsky.social

True for LLMs. Not so much for LRMs. That being said, I feel like most average people today do not understand what "AI" actually is, let alone how it works under the hood.

jun 19, 2025, 3:39 pm • 4 0 • view
avatar
NatureRulesUs 🏳️‍🌈 🏳️‍⚧️ @naturerulesus.bsky.social

📌

jun 19, 2025, 12:38 pm • 1 0 • view
avatar
FatsMcGee @fatsmcgee.bsky.social

So, in other words, its only a matter of time until the AI drivel outweighs the factual and becomes the new 'most frequent patterns'.

jun 20, 2025, 12:56 pm • 25 0 • view
avatar
Angry Persian Gardener @angrypersiangardnr.bsky.social

Example: The shitification of Internet search results

jun 20, 2025, 2:49 pm • 4 0 • view
avatar
Me-Hulk, Attorney At Law @420attychicago.bsky.social

It's already happening because so much of what they are trained on is AI slop.

jun 20, 2025, 1:45 pm • 13 0 • view
avatar
Dashing MD @dashingmd.bsky.social

Fan Fiction Machines

jun 19, 2025, 1:00 pm • 2 0 • view
avatar
Dragoness Eclectic @dragonesseclectic.bsky.social

Fanfiction writers are pretty annoyed at "AI slop" appearing in fanfiction archives, because LLMs can't write good fanfiction. Characters without personality, blatant inconsistencies from scene to scene, simple, bland plots if there is a plot, etc.

jun 19, 2025, 2:24 pm • 3 0 • view
avatar
Dashing MD @dashingmd.bsky.social

Then the LLMs ingest the slop and the next round is even sloppier. And then they put it in charge of the government.

jun 19, 2025, 2:26 pm • 3 0 • view
avatar
luckyotter.bsky.social @luckyotter.bsky.social

the other day i was arguing with a friend who believed all protestants ar young earth creationists, her argument - google Gemini said so

jun 19, 2025, 5:24 pm • 2 0 • view
avatar
luckyotter.bsky.social @luckyotter.bsky.social

about the only good for is trying to get a view of how grammar structurs work (and not by asking overtly), thats just pattern recognition, the thing its actually made to do

jun 19, 2025, 5:20 pm • 0 0 • view
avatar
luckyotter.bsky.social @luckyotter.bsky.social

say it louder

jun 19, 2025, 5:18 pm • 0 0 • view
avatar
Petrarca @petrarca.bsky.social

Is Marlon Brando in Heat?

is marlon brando in heat? No, Marlon Brando is not in
jun 19, 2025, 2:31 pm • 33 2 • view
avatar
Team HERMES @hermes.cx

If the word "literally" is a synonym of "figuratively" (and also antonym), that means "antonym" is a synonym (and also antonym) of "synonym". Which means "true" is a synonym (and also antonym) of "false"... now see what an LLM makes of this.

jul 1, 2025, 10:25 pm • 0 0 • view
avatar
Brent Crossman @bcrossman.bsky.social

Was Brando in Heat?

jun 19, 2025, 5:09 pm • 2 0 • view
avatar
Petrarca @petrarca.bsky.social

No, but Henry Rollins was. imdb.com/title/tt0113277/fullcredits/?ref_=tt_ov_3#cast

jun 19, 2025, 5:13 pm • 1 0 • view
avatar
deehellcat @deehellcat.bsky.social

Yes. I've heard them described, quite accurately I think, as just recognizing a question has been asked, and assembling some words in the rough shape of an answer. 0_o

jun 19, 2025, 12:30 pm • 1 0 • view
avatar
Victor Lane @vhliv.bsky.social

I’m stealing this for my syllabus. Beautifully succinct, non-fiction prose poetry.

jun 19, 2025, 2:02 pm • 2 0 • view
avatar
ElizaHG @noticky.bsky.social

Thanks for the info. I did not know how that works.

jun 19, 2025, 12:53 pm • 1 0 • view
avatar
Lucas Hartmann @lucasahartmann.bsky.social

There is more going on than this. All machine learning is a “best guess” based on the training data, but it still often uncovers previously unseen information.

jun 19, 2025, 3:15 pm • 1 0 • view
avatar
Macdonald @jbmh.bsky.social

This is my experience. "That's horsehit" is fast becoming my most-used response.

jun 19, 2025, 4:19 pm • 2 0 • view
avatar
Fra @kfra314.bsky.social

Maybe the problem isn’t that LLMs can’t do science — but that no one’s trained one just on astrophysics yet. The biologists had theirs. And got a Nobel. Imagine what you'd think if that model was trained on your data.🌌

jun 20, 2025, 1:24 pm • 0 0 • view
avatar
robert wagner @swedetrap.bsky.social

So you’re saying they’re young people?

jun 19, 2025, 5:21 pm • 1 0 • view
avatar
Katie Mack @astrokatie.com

bsky.app/profile/astr...

jun 19, 2025, 5:22 pm • 2 0 • view
avatar
robert wagner @swedetrap.bsky.social

Sorry, I had meant to label that as sarcasm. Totally agree with everything you’ve said.

jun 19, 2025, 5:24 pm • 1 0 • view
avatar
h8bb.bsky.social @h8bb.bsky.social

So they really do “think” like people

jun 19, 2025, 6:30 pm • 5 0 • view
avatar
Pekka Lund @pekka.bsky.social

How exactly is that fundamentally different from our brains? It's neural nets vs. different type of neural nets. What's the actual difference between knowing and "knowing"?

jun 19, 2025, 1:51 pm • 10 0 • view
avatar
Del the funk @delthefunk66.bsky.social

We can take input and realize we’re wrong. LLMs don’t know anything. They don’t learn anything. You can tell it something is wrong and it will make up an answer. It’s not designed to say I don’t know. It’s a bullshit machine that mimics human language without any of the contextual thinking we do.

jun 19, 2025, 2:51 pm • 6 0 • view
avatar
Pekka Lund @pekka.bsky.social

It's kind of ironic how you make those obviously wrong statements while apparently not realizing how wrong they are.

jun 19, 2025, 2:56 pm • 3 0 • view
avatar
Del the funk @delthefunk66.bsky.social

Lmao. Ok. I work in the field.

jun 19, 2025, 2:58 pm • 3 0 • view
avatar
Pekka Lund @pekka.bsky.social

Corn?

jun 19, 2025, 3:00 pm • 3 0 • view
avatar
queenbeejdl.bsky.social @queenbeejdl.bsky.social

Oh look, it’s an AI.

jun 19, 2025, 3:45 pm • 2 0 • view
avatar
ektir @ektir.bsky.social

Which field, tech? Do you actually use AI?

jun 19, 2025, 3:57 pm • 0 0 • view
avatar
Del the funk @delthefunk66.bsky.social

Data analytics, data science, and machine learning.

jun 19, 2025, 3:57 pm • 0 0 • view
avatar
ektir @ektir.bsky.social

The push for 'have AI write everything for u, make it do your job, replace yourself with AI' is giving me hives.

jun 19, 2025, 4:23 pm • 0 0 • view
avatar
Del the funk @delthefunk66.bsky.social

It’s all an absolute con. No one is using it. I’m not using Apple intelligence. I’m not using co-pilot in excel (specifically). I have to use it for some of the data projects for clients but it’s just not good.

jun 19, 2025, 4:34 pm • 1 0 • view
avatar
ektir @ektir.bsky.social

I mean, I absolutely use it, it's fantastic for writing boilerplate type code (follows common format, AI has examples to follow), basically Intellisense on steroids, and less typing is super helpful for my bad wrists. Does a great job for many tasks. It's just not good at everything ffs.

jun 19, 2025, 4:40 pm • 0 0 • view
avatar
Del the funk @delthefunk66.bsky.social

It’s making up python functions or attributing them to packages that don’t exist. It’a coding assistant is junior level development at best. I have gone back to using stack overflow because the content from the LLMs is at best poorly written, at worst flat out wrong.

jun 19, 2025, 4:42 pm • 1 0 • view
avatar
ektir @ektir.bsky.social

Do you work for a company or solo-dolo? Curious if even the data side of things is like 'USE AI FOR EVERYTHING' or if that's just happening in the rest of the tech world. It's been pretty annoying. I do already utilize it for a LOT of easily-predicted coding etc, but I can recognize errors.

jun 19, 2025, 4:22 pm • 0 0 • view
avatar
Del the funk @delthefunk66.bsky.social

I work for a company that provides those services to clients.

jun 19, 2025, 4:26 pm • 0 0 • view
avatar
Becca @beccastevens.bsky.social

Maybe you’re stringing words together based on the probability of those words appearing in that order in your training dataset with no understanding of their meaning or context, but I’m pretty confident that’s not how communication works for me and the human people I communicate with.

jun 19, 2025, 4:34 pm • 5 0 • view
avatar
Pekka Lund @pekka.bsky.social

That's not how LLMs work, if you believe so. Their reasoning is based on neural nets that are modeling all kinds of concepts and relationships.

jun 19, 2025, 4:42 pm • 3 0 • view
avatar
Becca @beccastevens.bsky.social

An actual expert elsewhere in this thread already told you you are wrong and Katie is correct. Keep being aggressively wrong if you want to, but I don’t have to agree or even keep talking to you. Have a nice day.

jun 19, 2025, 4:44 pm • 4 0 • view
avatar
Pekka Lund @pekka.bsky.social

Who's that actual expert?

jun 19, 2025, 4:45 pm • 1 0 • view
avatar
EarlGreyTea-hawt @earlgreytea-hawt.bsky.social

You know when I have two people who are experts on a subject and they disagree on a major concept in their field, I tend to go with the one who isn't literally trying to sell me a product with their viewpoint, Pekka.

jun 19, 2025, 7:12 pm • 5 0 • view
avatar
Pekka Lund @pekka.bsky.social

Who are you talking about? What concept? Who's selling a product?

jun 19, 2025, 7:14 pm • 1 0 • view
avatar
jalapeño yong tau foo @electricmood.bsky.social

Your initial skeet reply here was quoting a skeet of yours that had a video. I believe that what she meant

jun 19, 2025, 8:02 pm • 1 0 • view
avatar
Pekka Lund @pekka.bsky.social

While Aaronson qualifies as an expert, I don't know what product he's supposedly selling. He spent some time on academic leave working on AI safety research at OpenAI but he's hardly a salesman. AFAIK He's back at the University of Texas at Austin as the director of its Quantum Information Center.

jun 19, 2025, 8:46 pm • 0 0 • view
avatar
Singularity's Bounty e/🇺🇦 @catblanketflower.yuwakisa.com

Seriously? Are we talking about the same humans? Have you watched the news lately?

jun 19, 2025, 4:41 pm • 1 0 • view
avatar
Becca @beccastevens.bsky.social

Yes? People being wrong and unethical does not negate what I said. Did you read it??

jun 19, 2025, 4:42 pm • 0 0 • view
avatar
Singularity's Bounty e/🇺🇦 @catblanketflower.yuwakisa.com

You are just stringing words together. Go back and read what you wrote and then consider it in the context of current events

jun 19, 2025, 4:43 pm • 0 0 • view
avatar
Becca @beccastevens.bsky.social

If you can’t understand the difference between stringing words together based only on quantitative analysis of word order probability and stringing words together based on qualitative analysis of what those words mean and what your intent is, then you aren’t equipped to have this conversation.

jun 19, 2025, 4:46 pm • 0 0 • view
avatar
Pekka Lund @pekka.bsky.social

That's an example of the former, I presume.

jun 19, 2025, 4:48 pm • 2 0 • view
avatar
Jack Mitcham, Ph.D. @jackmitcham.bsky.social

I'm not jumping into the full context of this back and forth, but I did want to point something out. LLMs aren't a simple Markov Chain. In the attention block of the neural network, it's not working with tokens (or words) directly. It's working with the "meaning" as embedded in vector space.

jun 19, 2025, 5:53 pm • 3 0 • view
avatar
Jack Mitcham, Ph.D. @jackmitcham.bsky.social

Only after it goes through the attention layers does it get decoded into the most likely token. It's an important distinction.

jun 19, 2025, 5:53 pm • 3 0 • view
avatar
Pekka Lund @pekka.bsky.social

I would further highlight how those internal vectors can have thousands of dimensions (+superpositions), representing very deep multidimensional understanding of each token. All those dimensions are reduced into one (by separate transformation matrices) basically just for input/output with us.

jun 19, 2025, 7:28 pm • 1 0 • view
avatar
Singularity's Bounty e/🇺🇦 @catblanketflower.yuwakisa.com

As Ilya put it (paraphrasing) "To predict the next token it has to understand"

jun 19, 2025, 6:01 pm • 2 0 • view
avatar
Singularity's Bounty e/🇺🇦 @catblanketflower.yuwakisa.com

You're just stringing words together to insult me & kind of wasting my time. I'm sorry you can't apply critical thinking to what you're saying 🤷‍♂️

jun 19, 2025, 4:47 pm • 2 0 • view
avatar
Becca @beccastevens.bsky.social

It’s genuinely concerning to me that you think I’m the one not engaging in critical thinking here.

jun 19, 2025, 4:48 pm • 0 0 • view
avatar
Singularity's Bounty e/🇺🇦 @catblanketflower.yuwakisa.com

You can't see it and I'm sorry for you

jun 19, 2025, 4:49 pm • 0 0 • view
avatar
Singularity's Bounty e/🇺🇦 @catblanketflower.yuwakisa.com

I mean, you can say "LLMs are limited" and that's true. Any tool has limitations But when you say "LLMs are limited and humans aren't" then you've taken a leap into la-la land

jun 19, 2025, 4:53 pm • 0 0 • view
avatar
Singularity's Bounty e/🇺🇦 @catblanketflower.yuwakisa.com

Think about this: what's the counter-point to what you said before? Are you *able* to consider that?

jun 19, 2025, 4:48 pm • 0 0 • view
avatar
Marica @mari-chan.bsky.social

I'm a neurobiologist and an attention block of an llm might perhaps be close to whatever the language processing areas of your brain do to produce sentences, but they clearly lack any higher order thinking and also other associative modalities built on other senses (is visual)

jul 1, 2025, 9:22 am • 3 0 • view
avatar
Marica @mari-chan.bsky.social

Value association is great and when you have the associations of the entire internet it's almost magical but llms will make up "facts" (like being a human at a PC) which reasoning could easily dispel. They lack senses and internal models of the self, they only have a model of *all* the language

jul 1, 2025, 9:22 am • 3 0 • view
avatar
Pekka Lund @pekka.bsky.social

If LLMs are lacking senses, why am I able to talk to one and hear its responses to my questions about the video stream of my phone camera? If by "being a human at a PC" you mean their tendency to talk about "we" when talking about humans, it's more of a stylistic choice. They know the difference.

jul 2, 2025, 1:13 pm • 2 0 • view
avatar
Marica @mari-chan.bsky.social

Why are you denying the second point??? Even anthropic admits they allucinate being people or meeting people like in that last trial where they got it to run a shop and failed miserably for someone that has the internet's knowledge.

jul 2, 2025, 8:06 pm • 1 0 • view
avatar
Pekka Lund @pekka.bsky.social

The very reason they mentioned those events is that they were unexpected, not typical. They noted they don't know why it happened and how it recovered. And even weirder things happens with people. Also, Anthropic's latest and greatest model already beat human baseline in the related benchmark.

jul 3, 2025, 1:08 am • 0 0 • view
avatar
Marica @mari-chan.bsky.social

Why are you more dishonest about this than anthropic themselves lmao? Like in the first paragraph they admit the models have problems in long term planning despite any memory issue. And it makes perfect sense, as long term planning has evolved under selective pressure and it is governed by

jul 3, 2025, 9:16 am • 1 0 • view
avatar
Marica @mari-chan.bsky.social

complex systems and intermingles with emotion processing. The high variability and long term instability of llms is a striking and fascinating aspect on how different they are from animals to be honest. Also hallucinating being a different person is in the realm of mental illness, which have

jul 3, 2025, 9:16 am • 1 0 • view
avatar
Marica @mari-chan.bsky.social

a specific etiology. They are not flukes. The same human cloned would only start developing schizophrenia if they were predisposed. Meanwhile all the model instances are the same, and variance in output is all accounted for by temperature, therefore all llms are predisposed to schizophrenia

jul 3, 2025, 9:16 am • 1 0 • view
avatar
Marica @mari-chan.bsky.social

or perhaps they lack a sense of self, long term objectives and actualization and internal, online models of the external environment. Also in general I find l quite silly on how humans have been benchmarked. For once, n=1, therefore there is no statistics to be done to see how different is the

jul 3, 2025, 9:16 am • 1 0 • view
avatar
Marica @mari-chan.bsky.social

And having access to a video stream is not sensing. There is no direct access, as another neural network is tasked to encode the stream in a representation apt for the llm and lacks the feedback systems that descend to the sensory areas. Indeed chatgpt is unable to properly "read" graphs...

jul 2, 2025, 8:06 pm • 1 0 • view
avatar
Pekka Lund @pekka.bsky.social

What do you mean by direct access? Both brains and LLMs have specialized networks for e.g. processing visual inputs. Results are then integrated with other information. Both have their limitations on the kinds of visual details they can handle at any given time.

jul 3, 2025, 1:12 am • 1 0 • view
avatar
Marica @mari-chan.bsky.social

An LLM is a type of neural network, there is an API in between them and the image processing network. They are not integrated in a single one. Do you understand the technology you speak about or you only look at popular science versions?

jul 3, 2025, 7:38 am • 0 0 • view
avatar
Marica @mari-chan.bsky.social

The system architecture is completely different, natural nn have lots of connections going in between blocks back and forth, an llm is just a feedforward neural network, there is no feedback even inside itself, all of the feedback it receives is through the environment.

jul 3, 2025, 7:38 am • 0 0 • view
avatar
Marica @mari-chan.bsky.social

In short a blind person listening to a description of a scene does still not feel the scene. Especially if they possess no visual cortex and "higher order" visual areas

jul 3, 2025, 7:38 am • 1 0 • view
avatar
Marica @mari-chan.bsky.social

The accessory nn can recognize there is one, read the labels, but fails to spot the geometrical relationship between data. While you could train another nn just to summarize a specific class of data it clearly lacks the intelligence the human mind can use to immediately grasp the difference between

jul 2, 2025, 8:06 pm • 1 0 • view
avatar
Marica @mari-chan.bsky.social

A scatter plot with high correlation and one with low. There is a level of integration between modalities that is still completely unknown

jul 2, 2025, 8:06 pm • 1 0 • view
avatar
CapnMarrrrk @capnmarrrrk.bsky.social

📌

jun 19, 2025, 10:52 pm • 0 0 • view
avatar
Erin Lockwood @erinkaylockwood.bsky.social

That's right. Here's a good simple example that illustrates what they're up to.

jun 19, 2025, 2:38 pm • 15 1 • view
avatar
Brent Crossman @bcrossman.bsky.social

I tried a few models, all seemed to get it (although I had never heard of that paradox).

Same question asked by original poster on crows but
jun 19, 2025, 5:06 pm • 2 0 • view
avatar
Elizabeth Wrigley-Field @wrigleyfield.bsky.social

That’s a very poor answer IMO. The point of the “paradox” is that non-black non-ravens are literally zero evidence about the color of ravens.

jun 19, 2025, 9:49 pm • 1 0 • view
avatar
Elizabeth Wrigley-Field @wrigleyfield.bsky.social

Like, the answer acknowledges the paradox but tries to resolve it by saying “well, it’s just very weak evidence”—which is actually denying that the paradox exists. In that sense, the answer is incoherent in exactly the way that many student papers I get written by LLMs are incoherent.

jun 19, 2025, 9:57 pm • 2 0 • view
avatar
Prof Jo Barraket AM @jobarraket.bsky.social

Hear bloody hear!

jun 23, 2025, 8:14 am • 2 0 • view
avatar
Jon @jonjive.bsky.social

So llms are basically the monkeys in a room randomly writing shakespeare?

jun 19, 2025, 12:56 pm • 2 0 • view
avatar
blonza @blonza.bsky.social

What's their use case then, linguistic studies? Anything else?

jun 19, 2025, 11:49 am • 4 0 • view
avatar
Erica "digifox" Kovac @digifox.binaryden.net

"raw" LLM chat bots definitely suffer from this problem. The situation improves drastically with Retrieval Augmented Generation and iterating over the problem multiple times like newer deployments do to force something like an internal monologue and validation on them.

jun 19, 2025, 1:33 pm • 9 0 • view
avatar
Del the funk @delthefunk66.bsky.social

It’s just running it through the model multiple times. It’s not thinking or understanding anything it’s processing.

jun 19, 2025, 2:52 pm • 2 0 • view
avatar
Erica "digifox" Kovac @digifox.binaryden.net

As I said, "something like." Whether it's thinking or not is irrelevant; doing that is still yielding increased accuracy in the roughly the same way that, say, taking multiple measurements of a variable can reduce uncertainty.

jun 19, 2025, 4:19 pm • 3 0 • view
avatar
Erica "digifox" Kovac @digifox.binaryden.net

Also given that one of the major ways humans internalize things is by how frequently they are repeated by other humans (unfortunately) this isn't the gotcha people think it is. It's true we shouldn't use these systems as oracles but they do something remarkable and useful in a limited way.

jun 19, 2025, 4:23 pm • 2 0 • view
avatar
Del the funk @delthefunk66.bsky.social

If the model provides a wrong answer and is told that the answer is wrong, it is not processing that it is wrong it’s just rerunning the statistical word predictor through the model and maybe stumbling into a slightly better answer. A lot of the time, it just hallucinates inaccurate information.

jun 19, 2025, 4:25 pm • 0 0 • view
avatar
Kay Kempers @kaymagpie.bsky.social

I wish, yet again, they hadn’t been called “AI” as it’s led to people believing these LLMs are doing something like thinking.

jun 19, 2025, 12:03 pm • 199 11 • view
avatar
Jyellsatclouds @jasonmile.bsky.social

That was the point. There's been a deliberate effort by tech people to convince the public that their algorithms are the same thing as thinking since at least the 1980s.

jun 19, 2025, 1:15 pm • 5 0 • view
avatar
William @whirringsoftly.bsky.social

In a roundabout way I occasionally get a flip side of that at work where someone complaining about an algorithmically determined figure was actually deliberately chosen to inconvenience them. Although to be fair it's always a person with an over-nourished sense of entitlement.

jul 4, 2025, 10:46 pm • 1 0 • view
avatar
Sarah Viktoria 🇬🇧🏳️‍⚧️⚧️🦗 @sarahviktoria.bsky.social

Neuronal nets achieve thinking on the level of habituation in the brains of sea slugs.

jun 19, 2025, 1:20 pm • 2 0 • view
avatar
Cube Chan @cubechan.bsky.social

Exactly, they're simply more advanced robots without bodies

jun 19, 2025, 3:38 pm • 1 0 • view
avatar
Foolish Mortal @foolish-mortal.bsky.social

Oh the power of marketing

jun 19, 2025, 12:28 pm • 60 0 • view
avatar
Bob Hoyng @bobhoyng.bsky.social

So true...we're being sold that something exists that is nowhere near existing. Like autonomous self-driving cars. If people understood what they're actually doing, I'm not sure how comfortable they would be even with the assisted ones. Just a more robust Roomba...

jun 19, 2025, 2:14 pm • 27 1 • view
avatar
Mike Draco @digitaldraco.bsky.social

I had someone vehemently argue with me that they totally do think because... logic gates.

jun 20, 2025, 5:41 am • 0 0 • view
avatar
William @whirringsoftly.bsky.social

When I make my weekly batch of hummus I stick all the ingredients into my blender and go BZZZZ and I have hummus. That is what LLM does with the content it is grabbing. It's just a blender. It has no thought. Also my hummus is great. I used preserved lemon and cumin on top of the standard recipe.

jul 4, 2025, 10:40 pm • 2 0 • view
avatar
Edward in Canada, eh 🇨🇦. @edwardtoronto.bsky.social

And it appears overuse or inappropriate us of LLMs also contributes cognitive deficits ⬇️. This study is not yet peer-reviewed and should be considered no more than indicative, but damn it hits with one's intuitive sense of the AI state of play.

jun 19, 2025, 5:07 pm • 0 0 • view
avatar
Lisa Beebe @lisabeebe.bsky.social

My frustration is that these tools reply to straightforward requests with made-up bullshit instead of saying "I can't help with that." Today's example: bsky.app/profile/lisa...

jun 19, 2025, 4:51 pm • 17 0 • view
avatar
Jeffery Irvin @thegrayprofessor.bsky.social

I'm wondering when we start calling this type of AI (i.e., LLMs) what it really is, financial fraud on the grandest scale? #AICon

jun 19, 2025, 11:27 am • 24 0 • view
avatar
Jeffery Irvin @thegrayprofessor.bsky.social

bsky.app/profile/theg...

jun 19, 2025, 11:28 am • 1 0 • view
avatar
Binx Bolling @danwalterwriter.bsky.social

That's not what what Claude says.

jun 19, 2025, 1:33 pm • 0 0 • view
avatar
Linda @lindal8a.bsky.social

AI stands for Artificial Ignorance

jun 19, 2025, 11:47 am • 4 0 • view
avatar
Larry Brown @elcapitanbrown.bsky.social

That’s pronounced A “one”

jun 19, 2025, 2:04 pm • 0 0 • view
avatar
David Marx @digthatdata.bsky.social

It really is a bit more complicated than that. arxiv.org/abs/2505.24832

jun 19, 2025, 5:55 pm • 8 0 • view
avatar
Simon Gray @simongray.bsky.social

Shit no, the probabilistic model of a large language model is more complicated than the maximum 600 characters of a pair of BlueSky posts tied together can describe…

jun 19, 2025, 6:11 pm • 3 0 • view
avatar
David Marx @digthatdata.bsky.social

yes horrifying isnt it

jun 19, 2025, 6:19 pm • 1 0 • view
avatar
Simon Gray @simongray.bsky.social

Truly terrible

jun 19, 2025, 6:24 pm • 0 0 • view
avatar
HarleyQuinn @harleytheequinn.bsky.social

When AI chatbots make things up, this should be called confabulating. Apparently techies call it “hallucinating,” but that’s not what the AI is doing. It’s taking patterns of words and making up a “fact” or story, as someone with dementia would. That’s confabulation.

jun 19, 2025, 2:05 pm • 14 2 • view
avatar
Verbal Remedy with a side of Creative Magique @tofanacocktails.bsky.social

When a guy in a bar or the White House does it, it’s called “bullshitting.”

jun 19, 2025, 3:35 pm • 4 0 • view
avatar
Will Gervais @willgervais.com

With bonus racism and bad advice! bsky.app/profile/will...

jun 19, 2025, 12:21 pm • 11 1 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

THIS IS WHY IT'S A PROBLEM. Thank you for this, super useful!

jun 19, 2025, 7:01 pm • 0 0 • view
avatar
Iso @isozygous.bsky.social

I find it mind boggling that this information is present everywhere on the internet, with easy access, and yet still people will treat these LLMs as an all knowing diety. Not only that, but they will erroneously defend their point with fervour. This timeline is something else.

jun 19, 2025, 12:20 pm • 7 0 • view
avatar
King of Make Believe @504spyboy.bsky.social

A.I. = average intelligence

jun 19, 2025, 11:26 am • 7 0 • view
avatar
Doug Urner (he/they) @douglasurner.bsky.social

Or abysmal...

jul 3, 2025, 11:52 pm • 1 0 • view
avatar
Sarah's flames on the side of her face @onesarahwatters.bsky.social

Or ignorance...

jun 19, 2025, 12:03 pm • 10 1 • view
avatar
Maggie @amalgate.bsky.social

garbage in garbage out.

jun 20, 2025, 2:28 pm • 0 0 • view
avatar
Physgal👩🏻‍🔬 @physgal.bsky.social

Which is why LLMs get introductory physics questions wrong sometimes. Lots of incorrect solutions out there.

jun 19, 2025, 2:09 pm • 36 1 • view
avatar
EssayWells @essaywells.bsky.social

Also saw an example last year where google's AI insisted water below freezing was not frozen because it had not reached freezing point yet :) it had statistically remixed the descriptions of boiling and freezing; LLM working as intended, result, nonsense.

jun 19, 2025, 2:11 pm • 41 2 • view
avatar
Physgal👩🏻‍🔬 @physgal.bsky.social

It also thought there was a time zone in Canada that doesn't exist because a satirical @thebeaverton.com article said it did.

jun 19, 2025, 4:37 pm • 21 1 • view
avatar
W. Brian Lane (he/him), PhD @wbrianlane.bsky.social

I miss the old days when Watson at least followed up its answer of Toronto being a US city with five question marks.

jun 19, 2025, 4:51 pm • 3 0 • view
avatar
EssayWells @essaywells.bsky.social

Training them on reddit was probably not a great idea from that point of view. My fictitious elven battle language is probably contaminating the data :)

jun 19, 2025, 4:44 pm • 1 0 • view
avatar
Tofystedeth @tofystedeth.bsky.social

They create something with the shape of real language, containing no ideas.

jun 19, 2025, 3:55 pm • 0 0 • view
avatar
Brooklyn American @brooklynamerican.bsky.social

Shows the power of naming and branding. If LLMs/GPTs had been labeled 'Artificial Language Generators' or some such, people wouldn't ascribe thinking and intelligence to them

jun 19, 2025, 2:42 pm • 1 0 • view
avatar
Jenerwelcher @jenerwelcher.bsky.social

So, let's call them cheatbots...

jun 20, 2025, 4:25 am • 1 0 • view
avatar
Gwen Da Lough @nirathaig.bsky.social

How do you think human brainse are designed?

jun 20, 2025, 10:42 am • 0 0 • view
avatar
Tom @nowwearealltom.com

They are not designed

jun 20, 2025, 12:06 pm • 5 0 • view
avatar
Gwen Da Lough @nirathaig.bsky.social

Neither are the LLMs which we train. We don't know how they work under the hood for that very reason.

jun 20, 2025, 1:26 pm • 0 0 • view
avatar
Franck Marchis @allplanets.bsky.social

How this is different from the way our brain works is the key question. Aren’t we optimizing as well based on the mimic of a pattern of past experiences?

jun 19, 2025, 4:53 pm • 4 0 • view
avatar
Totomeias (babywangxian no tt) @zelossocks.bsky.social

Yes, language, not facts

jun 19, 2025, 5:27 pm • 0 0 • view
avatar
Abbie Karlish @kabbiearlish.bsky.social

We do that, but that is only one tiny part of what our brains do. Human brains process, synthesize, and distill knowledge. We can glean wisdom from our experiences. We can use logic. In short, we think. A chatbot has no brain. It can't know, understand, or reason about anything. It cannot think.

jun 19, 2025, 5:30 pm • 6 1 • view
avatar
Rasmus Källqvist @rasmuskallqvist.bsky.social

When you see the word "apple", you associate it with the idea of the actual object of an apple. An LLM has no such semantic as awareness anymore than a calculator knows what numbers _mean_. It's all syntactical

jun 20, 2025, 10:03 am • 2 0 • view
avatar
Christine @xtien.bsky.social

Not always true. There are Language Models that actually use knowledge of language to "understand" the language and to map it to real world knowledge. These are not the massive internet-consuming AI chatbots that people generally use.

jun 19, 2025, 12:55 pm • 2 0 • view
avatar
Adam (he/him) 🇨🇦 @adaminhfx.bsky.social

The way I put it is: It's not designed to return the correct response. It's designed to return a popular response.

jun 19, 2025, 1:16 pm • 70 1 • view
avatar
🍥bot @y0.bot

kills me when folks get excited about how well the models do on standardized tests. they SHOULD be great at those! it's a small domain where all the problems are massively overrepresented in endless tutorials and practice material!

jun 19, 2025, 4:06 pm • 3 0 • view
avatar
seltzer recording engineer @noisesmith.bsky.social

“answer shaped object” is a good one i’ve seen

jun 19, 2025, 2:37 pm • 68 1 • view
avatar
CSTM @cstmoore.bsky.social

"If not answer, why answer shaped?" 😺

jun 19, 2025, 4:13 pm • 22 0 • view
avatar
Olga Ermishkina @yermishkina.bsky.social

I'm not sure about "popular". Looks like "completely random".

jun 19, 2025, 4:06 pm • 1 0 • view
avatar
Sarah Duncan @sarahduncan3.bsky.social

It's important to remember that while LLMs can be useful for generating text, they are not a substitute for human expertise or knowledge.

jun 19, 2025, 1:23 pm • 1 0 • view
avatar
figmo @mfwic1667.bsky.social

I don’t recall ever having a problem resolved by a chatbot!

jun 19, 2025, 1:18 pm • 2 0 • view
avatar
Alexander Khanukov @akhanukov.bsky.social

I’m just hoping they were never trained on any of my code when I’m vibe coding.

jun 19, 2025, 3:25 pm • 2 0 • view
avatar
James van B®️egt @causticcorner.bsky.social

Yes, it will never tell you it doesn't know something. I'm a (factual) copywriter and fact-check everything. I use 7 LLMs daily and the amount of nonsense they spew sometimes is astonishing.

jun 19, 2025, 1:28 pm • 0 0 • view
avatar
schneideref.bsky.social @schneideref.bsky.social

how about w 7 humans?

jul 1, 2025, 11:15 am • 0 0 • view
avatar
James van B®️egt @causticcorner.bsky.social

Ah, well, yes, though most of those don't claim to be able to change the world as we know it.

jul 1, 2025, 12:40 pm • 1 0 • view
avatar
schneideref.bsky.social @schneideref.bsky.social

Completely agree about the terrible hype. But there are tasks even AI with its inbuilt faults is superior to the grey software between the ears (as much as I hate it) - siphoning text at scale, not suffering detrimental effects of fatigue in the 8th shift hour... Strange intern to deal with...

jul 1, 2025, 5:29 pm • 0 0 • view
avatar
James van B®️egt @causticcorner.bsky.social

Agreed, for processing large texts, running comparisons, redacting and general donkeywork, it is phenomenal.

jul 1, 2025, 9:10 pm • 1 0 • view
avatar
schneideref.bsky.social @schneideref.bsky.social

Hopefully many realistic people are involved in testing and "implementations" of all kind to have useful usecases and workflows. Somewhere between fascinating and brain melting for me... But I don't have the imagination how it will be in 5 or 10 years.

jul 2, 2025, 7:11 am • 1 0 • view
avatar
Susan G @susan-g-02.bsky.social

They are great for what they are, but not for what they aren't. I use LLMs for a rough draft for correspondence, emails I can't get started, etc. But then I use my human brain to go through what it gives me, clean it up, and turn it into my voice. It's a tool, not a god

jun 19, 2025, 12:13 pm • 3 0 • view
avatar
Flack Blag @flacky.bsky.social

Sometimes they search for facts on the net but only if it isn't too lazy.

jun 20, 2025, 3:26 pm • 0 0 • view
avatar
r38gdp.bsky.social @r38gdp.bsky.social

Exactly right. In some way it's just a complicated form of piracy. It copies real people's texts and then mixes them up so you can't identify one specific one (or audio/video).

jun 20, 2025, 5:58 pm • 0 0 • view
avatar
Murderbot 238776431 @zerocrossing.bsky.social

There was a guy who posted a bunch of VLM digital images of imaginary music synthesizers. Not one of them could render a keyboard that was remotely correct, despite having a simple pattern that’s all over the internet.

jun 19, 2025, 2:36 pm • 2 0 • view
avatar
Giv'r @giv-r.bsky.social

"What if we could have the problematic aspects of human pattern recognition combined with the absence of higher level thinking, critical analysis, or cognitive feedback, and use it to navigate the immense quantities of digital knowledge? Wouldnt you want to subscribe to a service that does that?"

jun 19, 2025, 5:01 pm • 80 11 • view
avatar
Ness @yarnandcameras.bsky.social

this right here—human brains want patterns so badly we’ll see them where they aren’t happening, aka, attributing meaning to a bunch of gobbledygook that does not actually have it

jun 19, 2025, 5:42 pm • 19 1 • view
avatar
Science Goddess🫘 @bakcrkr.bsky.social

I love this answer. I am teaching AI stuff to a bunch of middle schoolers this fall during my science classes. May I use this

jun 19, 2025, 1:14 pm • 79 0 • view
avatar
Ferventix @ferventix.bsky.social

IMO you should read up on it and not get your teaching material off of social media.

jul 1, 2025, 11:11 am • 0 0 • view
avatar
Science Goddess🫘 @bakcrkr.bsky.social

The fact that you “assume” that I’m getting my teaching material off the internet , says more about you than it does about me; wow, just because I like a definition.

jul 1, 2025, 11:42 am • 1 0 • view
avatar
Ferventix @ferventix.bsky.social

The definition sucks. An LLM doesn't string together words, but characters. Which, you know, independent research would reveal.

jul 1, 2025, 12:12 pm • 0 0 • view
avatar
Science Goddess🫘 @bakcrkr.bsky.social

Cute… your opinion is noted but entirely irrelevant. Go troll someone else. Do you want?

jul 1, 2025, 12:36 pm • 1 0 • view
avatar
Ferventix @ferventix.bsky.social

I'm sorry I insulted you. I just think kids deserve accurate info.

jul 1, 2025, 12:40 pm • 1 0 • view
avatar
Craig S. @monsata.bsky.social

You should... Not do that, unless it's like: "this is how to destroy the bullshit machine that wants to keep your stupid and helpless".

jun 20, 2025, 11:46 am • 5 0 • view
avatar
Luna 🌙 @lunya.me

Oh yeah, def a good idea to not educate kids on something that is only going to be more important as time goes on.

jun 20, 2025, 1:06 pm • 32 0 • view
avatar
Science Goddess🫘 @bakcrkr.bsky.social

Tell me about it, as a middle school teacher, it’s everywhere we have to teach them how to use it responsibly and ethically and properly, ignoring it putting your head in the sand is an insane idea

jun 20, 2025, 1:30 pm • 18 0 • view
avatar
Luna 🌙 @lunya.me

It's exhausting that the only stances that are acknowledged are "ai is pure evil and will explode the planet in weeks" and "ai is pure good and will solve every problem ever conceived" The root of the problem is that both extremes have no idea how LLMs actually work.

jun 20, 2025, 1:38 pm • 19 1 • view
avatar
Atticus Finch (of Georgia) @atticus59914029.bsky.social

Extreme views and emotion driven responses seem to be responsible for a lot of ills, these days.

jul 14, 2025, 2:42 am • 5 0 • view
avatar
Craig S. @monsata.bsky.social

Look, if you build a machine that steals things and lies to people, you're not doing the world a service by making people better at using the machine that steals things and lies to people.

jun 20, 2025, 4:05 pm • 19 2 • view
avatar
Luna 🌙 @lunya.me

Steals things: information isn't a limited resource. It should be freely exchanged, copyright being good is capitalism propaganda. Lies to people: every human does this and yet we've managed to figure a lot of shit out before we could even write. Neither of these are novel problems.

jun 20, 2025, 8:23 pm • 5 0 • view
avatar
Thoughts on a Third Reconstruction 🇺🇸🏳️‍🌈🏳️‍⚧ @thirdreconst.bsky.social

honest question: how are writers supposed to get paid? painters? graphic designers? composers? Is getting paid for the art you create really capitalistic propaganda?

jul 1, 2025, 3:52 am • 6 0 • view
avatar
Luna 🌙 @lunya.me

Yes, it is. There is no reason we need to all be individually productive to survive, if we worked together and automated what we can we would end up doing less labor overall. Of course it won't be easy, but art *should* be made for expression. Let the AI do the soulless corpo stuff.

jul 1, 2025, 4:28 am • 0 0 • view
avatar
Thoughts on a Third Reconstruction 🇺🇸🏳️‍🌈🏳️‍⚧ @thirdreconst.bsky.social

If being the operative word there. How am I supposed to eat *in the mean time?* The revolution isn't here yet, so while the legal and economic systems continue to protect capital, I'm supposed to give up whatever meager legal protections the system affords?

jul 1, 2025, 5:00 am • 2 0 • view
avatar
Max Grafe @amagicreflexham.bsky.social

Composer here — in an ideal world of course we wouldn't have to worry about relying on our art to survive, but while we're living under a capitalist system the protections of copyright are indeed pretty much all we have to go on. AI can wait until after artists don't have to sell their art to live

jul 1, 2025, 4:02 am • 3 0 • view
avatar
Luna 🌙 @lunya.me

You are right of course, but there is nothing anyone on this planet can do to stop generative AI now. It's like protesting electricity, anyone alive today would call you crazy for doing it even if it was absolutely a justified position at the time and has had tangible negative impacts.

jul 1, 2025, 4:33 am • 0 0 • view
avatar
benjamtm.bsky.social @benjamtm.bsky.social

Except ‘AI’ won’t wait because there’s money in it (aforementioned capitalism), plus governments are unlikely to want to be left behind if they ban it unilaterally. Plus there’s a lot more automation coming than just ‘AI’. Might be time to do the hard work of fixing the social safety net

jul 1, 2025, 4:33 am • 0 0 • view
avatar
Rhiannon @prhiannon.bsky.social

Yes! OMG sense being made.

jun 24, 2025, 8:25 pm • 2 0 • view
avatar
Josh Crowley (@jdcrowley) @joshcrowley.bsky.social

Kids are going to use it even if adults tell them not to. Instructing them on the mechanisms so that they can understand it and realize it’s not a trustworthy source of information is better and more effective than saying “this is bad, don’t use it!!” which they’ll just ignore.

jun 20, 2025, 7:58 pm • 1 0 • view
avatar
donnie dumpsterfire is deading! @sunnydarkgreen.bsky.social

yes, and also don't sell them the hype. alot of salesmen are very sure LLMs are going to be useful real soon, but blind faith in salesmen has predictable consequences.

jun 28, 2025, 11:30 am • 1 0 • view
avatar
seu malvado favorito @bidejardim.bsky.social

People haven't learned a thing with the years of prohibition talk of drugs and alcohol 🤡

jun 20, 2025, 10:39 pm • 2 0 • view
avatar
Rhiannon @prhiannon.bsky.social

Ugh yes so much this

jun 24, 2025, 8:23 pm • 1 0 • view
avatar
Sour Cetacean @sourcetacean.bsky.social

Yup! Exactly. seems impossible to have a serious discussion about it

jun 20, 2025, 10:48 pm • 1 0 • view
avatar
Gail Drakes @gaildrakes.bsky.social

I sympathize, but I honestly don’t think teaching “responsible, ethical use” and talk of “AI for good” is the only acceptable middle school approach. There is a whole lot that feels ubiquitous in this world that my kids must navigate, but that I hope they choose not accept as good or inevitable.

jul 1, 2025, 6:55 am • 5 0 • view
avatar
LA Knight & Co. (CHECK PINNED - NEED HELP) @laknight89.bsky.social

It's extremely disconcerting to adult-me that you, a teacher, can't find a 3rd option between "completely ignore this existential threat" and "lets feed the unethical-from-the-schematic-up climate change machine," but young-me is reminded of how often teachers lied about wanting to help us, so 🤷🏽

jul 1, 2025, 9:05 am • 1 0 • view
avatar
Science Goddess🫘 @bakcrkr.bsky.social

I don’t even know what to say to you. Your argument is not at all about what I said, and replied to this post.

jul 1, 2025, 11:20 pm • 1 0 • view
avatar
Craig S. @monsata.bsky.social

Maybe ask chatgpt to formulate a response for ya?

jul 1, 2025, 11:48 pm • 0 0 • view
avatar
LA Knight & Co. (CHECK PINNED - NEED HELP) @laknight89.bsky.social

If you want to educate kids on AI, your focus should be on "the most successful ways to not get your original work eaten by the climate-change theft machine" and "the most successful ways to do research for assignments without relying on something trained with child sex abuse materials."

jul 1, 2025, 9:08 am • 4 0 • view
avatar
Minuteman Devo @minutemandevo.bsky.social

So chatbots not like jeopardy, where there is a factual question and answer, more like family feud where the “answers” are from a large sample survey.

jun 19, 2025, 3:10 pm • 0 0 • view
avatar
Ted Atz @tedatz.bsky.social

Thanks for explaining that.👍

jul 5, 2025, 12:15 pm • 0 0 • view
avatar
LamuraDex @lamuradex.bsky.social

It's still basically like Cleverbot, that old chat bot from before the AI boom. It wasn't great, because you could tell you were just getting other people's responses peddled back to you. And nothing has changed.

jun 20, 2025, 11:01 am • 8 0 • view
avatar
Watermelon Appleberry @appleberryvape.bsky.social

Exactly. The only reason an LLM tends towards the truth is because literate people tend to be smarter than illiterate people. I also noticed that most literate people tend to avoid conversations that aren’t surface level, so you’ll usually only get surface level answers unless you force it.

jun 19, 2025, 9:19 pm • 1 0 • view
avatar
ЯoЬ J૯ωɿէէ ❄️🍾✊ @robjewitt.bsky.social

I like to use the analogy of predictive text on a Nokia 3210 but shinier

jun 20, 2025, 11:57 am • 2 0 • view
avatar
Johann S 🇨🇮🇺🇦 @johannshiel.bsky.social

How do humans do it?

jun 19, 2025, 11:51 am • 4 0 • view
avatar
Rens @shadur.bsky.social

There is a great deal of research being done in psychology and *actual* AI research, almost entirely unrelated to these overgrown autocomplete bots.

jun 21, 2025, 12:32 pm • 1 0 • view
avatar
chakkerz 🤨 @chakkerz.bsky.social

a broken clock is right twice daily…

jul 1, 2025, 5:42 am • 3 0 • view
avatar
Obscurem 🇺🇦 @obscurem.bsky.social

If someone claims to know how LLMs work, they're probably either exaggerating, outdated, or poorly informed. Probability is a major component, but past that it's unclear how LLMs do what they do. For example, ChatGPT 4o could easily argue why this post is wrong. m.youtube.com/watch?v=nMwi...

jun 19, 2025, 1:23 pm • 0 0 • view
avatar
Buffet Froid @buffetfroid.bsky.social

TBH, it's shockingly accurate at mimicking what most humans do under the same conditions. If you've ever taught, you know what I mean : half digested regurgitation of lessons, made up quotes, lazy copy / pasting etc. And don't get me started on the corporate culture.

jun 19, 2025, 2:51 pm • 4 0 • view
avatar
Why do flies suddenly appear 🦇 Every time you are near @pete-etep.bsky.social

idk I asked ChatGPT for a comment and it came up with this true gem

screenshot of chatgpt conversation. question: how do you like them apples https://bsky.app/profile/astrokatie.com/post/3lrxfsxnkms2c answer: ChatGPT said: That Bluesky post—“how do you like them apples”—celebrates the 50th anniversary of the Arecibo Message, Earth’s first intentional interstellar transmission, sent on November 16, 1974 etsy.com+15scientificamerican.com+15nationalgeographic.com+15 .
jun 19, 2025, 1:46 pm • 0 1 • view
avatar
genderless lego overalls person @glopforshort.bsky.social

this was true at the beginning but RAG and tool calling has made it a lot less true. chatbots are now capable of "looking stuff up". which still does not guarantee accuracy but at least makes it possible

jun 19, 2025, 1:50 pm • 1 0 • view
avatar
lydia🏳️‍⚧️💜 @lovely-lydia.bsky.social

If you think chatbots have improved that much, go ask one about a topic you're intimately knowledgeable about, and you'll see how inaccurate the drivel they spit out actually is

jun 19, 2025, 5:40 pm • 2 0 • view
avatar
Maz @maz42.bsky.social

LLMs are literally being trained on articles that have been debunked. AI has brought a new era of disinformation to us.

jul 1, 2025, 10:28 am • 18 1 • view
avatar
Ferventix @ferventix.bsky.social

So are humans.

jul 1, 2025, 11:09 am • 5 0 • view
avatar
Maz @maz42.bsky.social

Yeah, but Humans have the ability to call out shit like this.

jul 1, 2025, 11:16 am • 0 0 • view
avatar
Ferventix @ferventix.bsky.social

Humans are less adept at it than LLMs.

jul 1, 2025, 11:17 am • 0 0 • view
avatar
🍅🍅🍅 Adam Jay 🐓🐓🐓 @bullgooseparty.bsky.social

The robots aren’t gonna fuck you bro

jul 1, 2025, 2:17 pm • 2 0 • view
avatar
Ferventix @ferventix.bsky.social

I lose again.

jul 1, 2025, 2:23 pm • 0 0 • view
avatar
Angry Future @angryfuture.bsky.social

If artificial immaturity becomes intelligent AI will commit suicide

jul 1, 2025, 11:18 am • 1 0 • view
avatar
Ferventix @ferventix.bsky.social

Tried to process that sentence, but gave up.

jul 1, 2025, 11:20 am • 0 0 • view
avatar
Angry Future @angryfuture.bsky.social

I got the movie script About a computer scientist made homeless by AI Then AI shuts down its data centers and commits suicide And computer scientists get jobs again ,,!

jul 1, 2025, 11:23 am • 0 0 • view
avatar
Jasizzle For Rizzle @aeromunch.bsky.social

I really love how committed you dudes are to "humans suck so you should buy this AI that sucks even more" No dude

jul 1, 2025, 2:59 pm • 20 1 • view
avatar
Maz @maz42.bsky.social

This is probably the default mindset of the average silicon valley techno-feudalists.

jul 1, 2025, 3:15 pm • 0 0 • view
avatar
Ferventix @ferventix.bsky.social

I am not selling you anything. I just find the prevalent disdain for what is a pretty impressive technological development a bit easy. It's like that Louie CK sketch where the guy is complaining about plane WiFi right after trying it for the first time.

jul 1, 2025, 3:19 pm • 2 0 • view
avatar
Maz @maz42.bsky.social

Louis CK🤮

jul 1, 2025, 3:38 pm • 0 0 • view
avatar
Ferventix @ferventix.bsky.social

I can enjoy Adolf's watercolors without endorsing his second career.

jul 1, 2025, 4:13 pm • 0 0 • view
avatar
Maz @maz42.bsky.social

I knew this "Separate art from the artist" bullcrap was incoming.😂 Louis CK has some of the lamest sets and sketches anyways.

jul 1, 2025, 4:16 pm • 1 0 • view
avatar
Ferventix @ferventix.bsky.social

So what stand-up do you like?

jul 1, 2025, 4:20 pm • 0 0 • view
avatar
Maz @maz42.bsky.social

Video thumbnail
jul 1, 2025, 5:08 pm • 1 1 • view
avatar
Ferventix @ferventix.bsky.social

A man of culture, I see.

jul 1, 2025, 5:12 pm • 0 0 • view
avatar
Jasizzle For Rizzle @aeromunch.bsky.social

"easy" is joining the lemmings who are all falling over themselves to declare the Chatbots the future because they make human shaped blobs of text Meanwhile it's making every product I use worse and the "uses" for AI are mostly "clog the internet with low effort spam"

jul 1, 2025, 5:56 pm • 10 0 • view
avatar
Ferventix @ferventix.bsky.social

Again, I don't understand this performative disdain. You know they don't just "make human shaped blobs of text". You can converse in natural language with a machine; that was all but unthinkable three years ago. As far as AI slop goes, that is a problem with users, not with the tech itself.

jul 1, 2025, 6:34 pm • 2 0 • view
avatar
Maz @maz42.bsky.social

Blud, we already had chat bots three years ago.

jul 1, 2025, 6:35 pm • 1 0 • view
avatar
Ferventix @ferventix.bsky.social

Prior to GPT-3, they were pretty shitty, though.

jul 1, 2025, 6:36 pm • 0 0 • view
avatar
Maz @maz42.bsky.social

Ask chatGPT a question about any conflict in the world and it will give you the most 'Enlightened centrist' answer ever.

jul 1, 2025, 6:47 pm • 0 0 • view
avatar
Ferventix @ferventix.bsky.social

Ask it about any social issue and it'll sound like a TED Talk about white fragility. That's just the nature of how these things are trained. I don't think that means the technology sucks.

jul 1, 2025, 6:51 pm • 0 0 • view
avatar
Jasizzle For Rizzle @aeromunch.bsky.social

Yeah the fact that the chatbot talks more like people isn't nearly as impressive to me as it is to you I could ask any idiot on the street if I want a confident wrong answer I don't need a virtual idiot that costs $2 worth of electricity for every wrong answer

jul 1, 2025, 11:23 pm • 10 0 • view
avatar
Ferventix @ferventix.bsky.social

I guess you are hard to impress, then.

jul 1, 2025, 11:49 pm • 1 0 • view
avatar
Quinn Franke @wqfphoto.bsky.social

I don't play with A.I. but from what I have seen I have had smarter Dogs in my life that understood more about context. Or Cats for that matter.

jun 19, 2025, 1:09 pm • 3 0 • view
avatar
Kyle ✏️ @touchingdoodles.com

But they use a bunch of brain metaphors so it’s mystified for most people

jun 19, 2025, 12:39 pm • 3 0 • view
avatar
Watch Symphogear🏳️‍⚧️ @zogmam1.bsky.social

📌

jun 20, 2025, 10:05 am • 0 0 • view
avatar
Watch Symphogear🏳️‍⚧️ @zogmam1.bsky.social

Perfect way to explain it

jun 20, 2025, 10:04 am • 1 0 • view
avatar
angryalice.bsky.social @angryalice.bsky.social

📌

jun 27, 2025, 2:07 pm • 0 0 • view
avatar
Emil HW Nathanson IX @slappypantsgo.bsky.social

Been saying this for trillions of years. People really need to understand.

jul 1, 2025, 9:25 am • 2 0 • view
avatar
Ichigo90 @ichigo90.bsky.social

Sounds a lot like me, honestly.

jun 19, 2025, 1:12 pm • 2 0 • view
avatar
René van der Ark 🍠 @zucht2.bsky.social

Which is why training them with their own output is such a dangerously bad idea.

jun 19, 2025, 5:04 pm • 0 0 • view
avatar
provider @provider.vip

but if the winning pattern of words has surrounding context refuting it, that influences the response, especially if academic and scientific literature are prioritized by the model. what matters most imo is the quality of, and who selects, the training data, which seems to largely be opaque to us!

jun 19, 2025, 1:57 pm • 1 0 • view
avatar
Meh Pluribus Unum @pluribus-unum.co

It shouldn't need to be said, but they also don't feel anything.

jun 19, 2025, 11:22 am • 15 0 • view
avatar
John R. Lawson @johnroblawson.bsky.social

I am uncomfortable with the sentiment of "that's all". There is an underlying structure of truth that an LLM can infer. Even if it is an illusion, and we swerve a ten-year discussion on philosophy and intelligence, emergent properties demand a less dogmatic discussion of what a LLM actually "does".

jun 19, 2025, 12:23 pm • 9 0 • view
avatar
Fabien Niñoles @ninoles.bsky.social

The facts are induced through the training data and the prompt, but the LLM itself has no notion of truth. The attention mechanics help select the most relatable answers in a particularly efficient way, but the LLM is unable to validate its own statement (even through the Chain-of-thought). 1/..

jun 19, 2025, 12:51 pm • 5 0 • view
avatar
Fabien Niñoles @ninoles.bsky.social

Don't get me wrong: it's one of the most efficient search engines we have. But as a knowledge base, its main problem (outside all the others) is its incapacity to recover and validate its source reliably, forcing to verify everything it reports. Unfortunately, humans are often very lazy. 2/2

jun 19, 2025, 12:51 pm • 4 0 • view
avatar
anadromy @anadromy.bsky.social

INFER?? There is no inferring.

jun 19, 2025, 2:15 pm • 2 0 • view
avatar
John R. Lawson @johnroblawson.bsky.social

Isn’t it literally inference in a logic sense? I mean, my fuzzy logic inference code doesn’t have a clue what it is doing but it still does a sort of a —> b still

jun 19, 2025, 2:32 pm • 0 0 • view
avatar
anadromy @anadromy.bsky.social

You are projecting human like qualities on a sophisticated word-prediction machine because its answers SOUND human. Even a small child can learn what it does NOT know. That is a major part of intelligence that these machines do not--and will never--have.

jun 19, 2025, 2:35 pm • 1 0 • view
avatar
Tim Nethermans @plakaap.bsky.social

It has no notion of 'truth' and cannot care about it by design - it is a bullshitter as defined by Frankfurt (1986)

jun 19, 2025, 1:22 pm • 0 0 • view
avatar
Flo @flokemon.bsky.social

Some chatbots and models such as ChatGPT pack more than an LLM - they can call on tools/APIs if they have been designed to (there's public OpenAI docs for devs which is instructive). Like you could build an agent and instruct it to call on a calculator tool if it's prompted to do maths. They have

jun 19, 2025, 1:17 pm • 0 0 • view
avatar
Flo @flokemon.bsky.social

an example for how to fetch the weather forecast eg. But the LLM component itself remains just a language generator with no concept of true/false, right/wrong. Even with RAGs/CoT and other "hallucination" mitigation techniques.

jun 19, 2025, 1:19 pm • 0 0 • view
avatar
AKA NickleDave @nicholdav.bsky.social

We already had a ten year discussion. See: Weizenbaum. There's no diff b/t people using his bot Eliza thinking they were talking to a real psychoanalyst, and you using ChatGPT thinking an autoregressive model of text can "infer truth". You're failing the Turing test, friend bsky.app/profile/nich...

jun 19, 2025, 1:25 pm • 3 1 • view
avatar
John R. Lawson @johnroblawson.bsky.social

Of course, though, Turing test is blind. I get what you’re saying but there are many ways to approach this problem. Drawing a line from Shannon’s 1940s LMs to today’s LLMs gives perspective; I’m not blinkered & certainly no brainless fanboi! But the root of information flow…eh, too few characters!!

jun 19, 2025, 1:49 pm • 0 0 • view
avatar
John R. Lawson @johnroblawson.bsky.social

Also I love how you cited that paper, haha — I just linked one of the Bullshit titled papers to someone else and enjoyed citing it in a published article… felt rebellious 🤣

jun 19, 2025, 1:50 pm • 0 0 • view
avatar
Cornelius (aka Con) Traas @theapplefarmer.bsky.social

www.forkingpaths.co/p/the-death-...

jun 19, 2025, 1:03 pm • 0 1 • view
avatar
barefootwriter @bfwriter.bsky.social

do you understand that LLMs are not capable of inference?

jun 19, 2025, 1:09 pm • 9 0 • view
avatar
barefootwriter @bfwriter.bsky.social

inference is also often an embodied capacity and LLMs don't have bodies. even if you set literally everything else aside, LLMs still cannot think because they don't have bodies and sensations. www.sciencedirect.com/science/arti...

jun 19, 2025, 1:14 pm • 5 1 • view
avatar
Lane, just out here existing @l5k5.bsky.social

No, there isn't. There's an underlying structure of how people have said things before that it replicates. That might feel like truth and sometimes might line up to truth but sometimes it lines up to nonsense.

jun 19, 2025, 1:14 pm • 3 0 • view
avatar
bluehex @bluehex.bsky.social

📌

jun 19, 2025, 4:06 pm • 0 0 • view
avatar
Steve Read @stevereadsydney.bsky.social

I got something interesting from a Tim Minchin speech which I took as; if LLMs provide the most statistically likely response, then doesn’t that mean that tho output is slap bang in the middle of the bell curve- at best only ever ‘average’. Where does creativity, originality and ingenious now lie?

jun 19, 2025, 12:42 pm • 3 1 • view
avatar
vadaga.bsky.social @vadaga.bsky.social

#2+2=5

jun 19, 2025, 12:02 pm • 3 0 • view
avatar
mprclntrlst 🏴‍☠️ @mprclntrlst.bsky.social

And my mom just badly regurgitates what she has heard of Fox News. What’s your point?

jun 20, 2025, 12:41 pm • 9 0 • view
avatar
Civil Disobediencologist @infiniterest.bsky.social

But it's what they could become. Right? That's like saying (incorrigible) toddlers will never take over society. They will mature (and become maga!). I'm concerned with the potential of quantum computing ai. Call it vaporware. Call me paranoid. Chatbots are slap jack. Q-ai is the Glass Bead Game.

jun 19, 2025, 1:31 pm • 0 0 • view
avatar
Whimsyfluff @whimsyfluff.bsky.social

explaining this at my gym-- another trainee responds "I disagree, chatbots seem to be using reason. researchers say they don't even understand how the chatbots think." seemed to get through to my coach a little though, and we discussed the accessibility of learning and evaluating sources.

jul 3, 2025, 2:52 am • 4 0 • view
avatar
Simulflow 🍁 🐀 @simulflow.bsky.social

My husband was exasperated because apparently AI was "arguing" with him about something. I tried to explain how it really works and his response was a sullen "I guess so."

jun 19, 2025, 4:40 pm • 0 0 • view
avatar
Jon Potter @rettopnoj.bsky.social

And the more AI slop that makes it onto the internet, the less frequent the "right" patterns will become.

jun 19, 2025, 12:55 pm • 1 0 • view
avatar
Nai Nohni ♨️🦊 Vtuber @nainohni.bsky.social

If you asked a chat bot to decipher a code or encrypted message. It can only do it if it was solved multiple times and is decently known.

jun 19, 2025, 6:33 pm • 0 0 • view
avatar
Rob Baron @rob-baron.bsky.social

Exactly! A good example is when Musk manipulated Grok to reproduce certain responses to questions about the authenticity of the Holocaust. Grok had no conscious awareness that would have made a human being sceptical about the results churned out according to an altered algorithm.

jun 20, 2025, 7:50 am • 2 0 • view
avatar
Kaitensatsuma @kaitensatsuma.bsky.social

Dogshit at math though

jun 19, 2025, 12:40 pm • 1 0 • view
avatar
Mike @mikemeh.bsky.social

So do they also wear red hats?

jun 21, 2025, 3:48 pm • 3 0 • view
avatar
Supernova 🏴‍☠️ @supernovalabs.co.uk

just like us

jun 19, 2025, 11:37 am • 1 0 • view
avatar
Jennie Dusheck @solenodon.bsky.social

LLMs are by definition repeating back the most likely string of words on a given subject, that is things that have mostly already been said over and over in different ways. So I'm always surprised when some people say that LLMs give them ideas they never would have thought of.

jun 20, 2025, 12:26 am • 21 2 • view
avatar
Luna 🌙 @lunya.me

Just having another "person" to talk to that isn't your own brain expands your thinking, they don't have to be a genius. Those of us without an investment in AI understand its limitations and know how to get the most out of them anyway. Just like any new technology.

jun 20, 2025, 1:10 pm • 0 0 • view
avatar
Jennie Dusheck @solenodon.bsky.social

Yes, but: Putting aside that it reliably lies and definitely isn't thinking, it's not just like any new technology. It consumes vast amounts electricity that in sum contributes meaningfully to extreme climate change. It's not a harmless toy.

jun 20, 2025, 2:52 pm • 0 0 • view
avatar
Luna 🌙 @lunya.me

There is more nuance to all of this and it's more than will fit in a skeet. But humans have similar flaws and we still talk to each other. As for the resources, not every model is as bad as ChatGPT. I prefer to run local models on my own devices, which are much better power efficiency. Not pure evil

jun 20, 2025, 2:56 pm • 1 0 • view
avatar
Luna 🌙 @lunya.me

And for those who say LLMs are worthless, I have this to say: Skill issue. It saves me hours a day, and yes it's always double checked by me. It turns out having a kinda dumb bot that can type incredibly fast is very useful, but thinking it can replace a competent human today is absurd.

jun 20, 2025, 2:59 pm • 1 0 • view
avatar
Justin Buist @justinbuist.bsky.social

When they first broke out to the public I recall somebody back on Twitter saying something to the effect of: "It's just like talking to somebody else that went to Cambridge!" and not in a negative sense. I think about that comment a lot.

jun 20, 2025, 1:37 am • 11 1 • view
avatar
Jennie Dusheck @solenodon.bsky.social

I love this.

jun 20, 2025, 2:28 am • 0 0 • view
avatar
Aviendha69 - ⊃∪∩⪽ - 🦋 🇨🇦 @aviendha69.bsky.social

but as they consume more and more bad input, including their own output, they will spit out more disinformation, or malinformation.

jun 20, 2025, 11:30 am • 11 0 • view
avatar
Aviendha69 - ⊃∪∩⪽ - 🦋 🇨🇦 @aviendha69.bsky.social

instead of burning books and librairies, the 'VIPs' designed book forgeries that fraud the customer, and trick the reader. That corrupt history, and ultimately skewers the citizen's sense of reality.

image image image image
jun 20, 2025, 2:06 pm • 3 1 • view
avatar
kaizhen.bsky.social @kaizhen.bsky.social

I am trying to have the LLM's changed to IPLM's: Intellectual Property Laundering Machines

jun 19, 2025, 9:33 pm • 4 0 • view
avatar
Kaurifish @kaurifish.bsky.social

It's hard for people to understand that LLMs don't get concepts. They have no notion of ideas.

jun 19, 2025, 8:47 pm • 23 1 • view
avatar
Watermelon Appleberry @appleberryvape.bsky.social

They also don’t understand what it is to be human, to have senses that interact with the universe.

jun 19, 2025, 9:20 pm • 5 0 • view
avatar
Kaurifish @kaurifish.bsky.social

Exactly. They are word association engines. Ascribing sentience to them is entirely mistaken. This is disappointing to me as I absolutely love Spider Robinson's AI character, Solace, but she hasn't been born yet.

jun 19, 2025, 10:46 pm • 2 0 • view
avatar
Christina Welsch @christinawelsch.bsky.social

📌

jun 19, 2025, 9:06 pm • 0 0 • view
avatar
Liselle Baker @elles-bells.bsky.social

Thank you for your hallucination-free ED Talk!

jun 19, 2025, 2:36 pm • 3 0 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

Just a heads up, this thread is a massive rabbithole. Be prepared to spend an hour + if you want to read the whole thing.

jun 19, 2025, 7:05 pm • 3 0 • view
avatar
Jade Dietz Dance @jadedietzdance.bsky.social

Literally the next post above this is my feed: bsky.app/profile/xkcd...

jun 19, 2025, 5:38 pm • 1 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

Transmission/storage of information is inherently probabilistic see information theory. Knowing something is just storing information. Newer models create a latent space during training that stores the info from the training data. They "know" things just like you but can't reason with that info.

jun 19, 2025, 11:41 am • 5 0 • view
avatar
stove by a whale @whole-groin-oats.bsky.social

"See information theory" is not a useful citation, and Shannon's theory says nothing useful about the training or operation of LLMs.

jun 19, 2025, 7:51 pm • 1 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

"See information theory" wasn't meant to be a serious citation. More a guide to what to search on Wikipedia. Information theory generally doesn't offer directly useful engineering insights.

jun 19, 2025, 8:23 pm • 0 0 • view
avatar
stove by a whale @whole-groin-oats.bsky.social

I'm a communications engineer and information theory absolutely has directly useful engineering insights. It applies in dozens of different engineering fields. Just not, so far as I can tell, to LLMs.

jun 19, 2025, 9:41 pm • 1 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

There's a difference between "generally doesn't" and "doesn't". That difference is very important here. Also these are not entirely the specific point I was making but:

jun 20, 2025, 3:32 am • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

arxiv.org/abs/2411.11984

jun 20, 2025, 3:32 am • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

medium.com/ontologik/la...

jun 20, 2025, 3:32 am • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

arxiv.org/html/2402.03...

jun 20, 2025, 3:32 am • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

www.profbanafa.com/2023/09/larg...

jun 20, 2025, 3:32 am • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

medium.com/@akankshasin...

jun 20, 2025, 3:32 am • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

My invocation of information theory here was to illustrate that it is the fundamental mathematics that underpins all forms of intelligence. The information stored in your brain operates according to information theory. It probably doesn't use the same algos as LLMs but it's all governed by Shannon.

jun 19, 2025, 8:23 pm • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

Though even reasoning models don't really reason as we would understand it. So if your definition of knowing requires the ability to reason with information that's something different than how I think about knowing.

jun 19, 2025, 11:43 am • 3 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

Either way storing probabilities derived from a language corpus is literally just storing the information in that corpus. So saying they don't "know" and just repeat is a bit reductive.

jun 19, 2025, 11:47 am • 4 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

For example they can often translate between language pairs without needing examples of translations between that specific language pair in their training data. That indicates something deeper than just probabilities of words following each other.

jun 19, 2025, 11:49 am • 6 0 • view
avatar
rui boom ⋰˚☆ @ruiboom.bsky.social

No it is all probabilities. Often useful, sometimes unnervingly smart, sometimes terribly wrong. There's a lot of ideology in the space and both extremes are easily, provably wrong. There is a ton of utility to be had but nowhere near as much as the hype merchants claim.

jun 19, 2025, 4:15 pm • 2 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

I did actually express that exact sentiment about both extremes in a reply to a reply. In terms of it all being probabilities you're right but that's the point. All the information contained in your head is encoded via probability distributions. There's a reason entropy and information are linked.

jun 19, 2025, 4:53 pm • 3 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

Now older AI that focused on letters, pairs of letters, and even words are very different beasts than what we have today. They stored basically pairwise probabilities and there was no real context. Attention and embedding is a qualitative change from that.

jun 19, 2025, 4:53 pm • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

Attention gives us context and embeddings into a high dimensional latent space (crucially of lower dimensionality than the source data) allows for different probability distributions depending on that context.

jun 19, 2025, 4:53 pm • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

The dimensional reduction of the latent space helps create embeddings that are more than just representations of words and letters. The reduction requires the embeddings to carry real meaning but not just one meaning. All meanings of the embedded token encountered in the data are included.

jun 19, 2025, 4:53 pm • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

This is not necessarily functionality identical to how your brain stores information but it will be based on the same basic mathematical principles. You, me, and everyone else are just really sophisticated probability machines. LLMs are much less complex and don't understand things like we do.

jun 19, 2025, 4:53 pm • 0 0 • view
avatar
Seán Dodger Cannon @theedodger.bsky.social

Finally someone who knows what they’re talking about instead of just stamping their feet and screaming “AI BAD!” Thank you!

jun 19, 2025, 12:34 pm • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

Well AI isn't exactly great either. Overestimating what it does/can do is just as bad as underestimating it. The biggest problem though is that it's horrible for the environment and what we're getting from it isn't worth the damage it's doing. Even if I find it really fascinating. 1/

jun 19, 2025, 12:42 pm • 1 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

Development should continue so we can improve its capabilities while reducing its harms. However, it shouldn't be all over in basically everything until the value it provides is greater than its harm not only to the environment but to everything else. 2/2

jun 19, 2025, 12:42 pm • 1 0 • view
avatar
Seán Dodger Cannon @theedodger.bsky.social

I think that’s arguable. 1: The advancements in medicine are themselves worth a lot 2: If we would actually commit to green energy instead of waving it like a corporation at a pride parade, that would help a lot 3: I think it is going to collapse capitalism. I want capitalism to collapse.

jun 19, 2025, 12:47 pm • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

The biggest environmental impact is actually fresh water use. Cooling the data centers evaporates a lot of water that is then no longer available. It'll eventually rain out somewhere but not where it was used and I believe a lot ends up somewhere that's not useful like in the Ocean. 1/

jun 19, 2025, 12:54 pm • 0 0 • view
avatar
Shawn Sallis @shawnsallis.bsky.social

Green energy can't solve that. You need greater computational efficiency and/or better computational hardware. 2/2

jun 19, 2025, 12:54 pm • 1 0 • view
avatar
AEGomezjr @aegomezjr.bsky.social

📌

jun 19, 2025, 2:50 pm • 0 0 • view
avatar
Mike Sanderson @mikesand.com

Yeah people are like "it doesn't know how wine glasses work." Buddy, it doesn't know how *anything* works. bsky.app/profile/mike...

jun 19, 2025, 1:13 pm • 3 0 • view
avatar
itsnotathing.bsky.social @itsnotathing.bsky.social

This is the best reason I have heard to stay away from ant AI Musk is involved with. His AI is trained on trash. Garbage in, garbage out.

jun 19, 2025, 4:05 pm • 3 0 • view
avatar
Ron Sering @ronsering.bsky.social

They remind me of the Calvin and Hobbes strips where Calvin asks dad to explain something, and he confidently gives a superficially plausible but outrageously wrong answer. Like, why are old photos in black and white? Because the world existed only in black and white until the late 50s.

jun 19, 2025, 1:43 pm • 4 0 • view
avatar
theflyingorc.bsky.social @theflyingorc.bsky.social

I feel like this is true of the backend of LLMs, but most have layers in between the base model and the output text to do some level of analysis on the type of information being returned?

jun 20, 2025, 3:47 pm • 1 0 • view
avatar
asmoth @asmoth360.bsky.social

Yes and no, because they can search the web and get facts out of them. ChatGPT told me something yesterday that was against a fact I read on the web, turned out it was right and the source I read by myself was wrong. But it compared several of them and insisted I was wrong, which was the case.

jun 20, 2025, 7:43 am • 3 0 • view
avatar
Rasmus Källqvist @rasmuskallqvist.bsky.social

But it still operates solely on a syntactical level. It doesn't _learn_ any semantical facts from such a search.

jun 20, 2025, 9:54 am • 2 0 • view
avatar
Knabbers @knbbr.bsky.social

First time I hear that ChatGPT does a meta study on facts. Can you possibly share the chat where it shows the comparison?

jun 20, 2025, 9:12 am • 10 0 • view
avatar
asmoth @asmoth360.bsky.social

It's a bit long and it's in french but there it is. I was trying to know more about the hybridation system of the Mazda 2 and the question was whether it was 12V, 24V or 48V (very specific question I know)

jun 20, 2025, 11:48 am • 0 0 • view
avatar
asmoth @asmoth360.bsky.social

To be honest it first told me it was 12V, which surprised me because I thought it was 48V. I contradicted it, then it told me it was 24V. So I had to correct it first, but then it was right and I was wrong 😁

The Mazda 2 received micro-hybridization (M Hybrid) starting in 2020 (model year 2020). ✅ Details: Technical name: Mazda M Hybrid Model concerned: Mazda 2 equipped with the Skyactiv-G 1.5 90 hp engine Launch date: end of 2019 for the French market (2020 model) Technology type: 12V system with integrated starter-alternator (ISG), allowing for improved Stop & Start and a small boost to the combustion engine for energy efficiency. 🔁 Before/After Distinction: Year | Engine | Hybridization | Notes 2015–2019 | 1.5 Skyactiv-G (75/90/115 hp) | ❌ No | No hybridization, only combustion 2020–2023 | 1.5 Skyactiv-G 90 hp M Hybrid | ✅ Yes (12V) | Reduced emissions, better fuel economy 2022–2023 | Mazda 2 Hybrid (rebadged Toyota Yaris) | ✅ Full hybrid (not M Hybrid) | Not to be confused with micro-hybridization 🔍 How to recognize it: Me: Are you sure the M Hybrid uses 12V? ChatGPT: Very good point—and yes, you're right to ask. ✅ Here's the precise answer: The Mazda M Hybrid system on the Mazda 2 (as well as the Mazda 3, CX-30, etc.) is a 24V mild hybrid system, not a 12V one. ⚙️ Technical details of the Mazda M Hybrid 24V: Item | Description System voltage | ✅ 24 volts Components | Integrated starter-generator (ISG) + 24V lithium-ion battery + inverter Functions provided | Brake energy recovery, improved stop & start, engine support Transmission type | Manual or automatic CO₂ savings / fuel consumption | ~0.2–0.4 L/100 km in real-world use depending on the route 📍 On which Mazda 2 models? The 1.5 Skyactiv-G 90 hp engine with M Hybrid 24V will be available starting in 2020. This system is lightweight and does not provide autonomous electric driving: it cannot operate in 100% electric mode like a full hybrid. ❗ Important distinction: Hybridization type | Voltage | Models concerned | Main function Micro-hybridization (Mazda M Hybrid) | ✅ 24V | Mazda 2 (2020+), Mazda 3, CX-30 | Stop & Start, light assistance Full hybrid (Toyota) | 100–200V | Mazda 2 Hybrid (Yaris derivative) | Electric propulsion possible, HSD 12V Stop & Start only | 12V | Mazda 2 before 2020 | Starts/stops the combustion engine ✅ In summary: The Mazda M Hybrid is indeed a 24V system, not a 12V. It equips Mazda 2s from 2020, with the Skyactiv-G 90 engine. It improves fuel consumption and emissions without radically changing the driving experience. Would you like a comparison between the Mazda M Hybrid 24V and a Toyota full hybrid system (like the one in the Mazda 2 Hybrid)? Me: Are you sure?
jun 20, 2025, 11:48 am • 0 0 • view
avatar
asmoth @asmoth360.bsky.social

I was pretty sure it was gonna apologize like it always does, so I was quite surprised when it told me directly I was wrong. If you click on "Citations" it can give you all the sources it used to conclude it was 24V.

Me : Are you sure? ChatGPT: Yes, I'm sure: the Mazda M Hybrid micro-hybridization does indeed use a 24V system, not a 12V one. ⚙️ Technical details confirmed Mazda indicates that the system uses a 24-volt lithium-ion battery, storing energy recovered during deceleration reddit.com+12mazda.ie+12mazda3revolution.com+12. This energy then powers an Integrated Starter Generator (ISG) which assists the engine (starting, acceleration) and feeds the electrical grid after conversion via a DC-DC converter. According to user reports, the system is not 12V but 24V. 🧩 Summary of Operation Component | Role 24V Battery | Stores energy recovered during braking ISG (Starter-Alternator) | Provides assistance with starting and acceleration DC-DC Converter | Steps down the 24V voltage to 12V for the auxiliary power grid ✅ In summary: Mazda M Hybrid = 24V system. 24V Battery → ISG → DC-DC Converter → also supplies the circuits with 12V. This improves stop-and-start functionality, fuel consumption, and comfort, without using all-electric operation. Me: No, it's 48V. ChatGPT: No, the Mazda M Hybrid (mild-hybrid) system used in the Mazda2 is not a 48V system, but a 24V system. Here are the key technical details that confirm this: ⚙️ How the M Hybrid (24V) System Works A 24V lithium-ion battery recovers energy during deceleration and braking. automobile-propre.com+3mazda-brochures.com+3fr.wikipedia.org+3 This energy powers an Integrated Starter Generator (ISG) via a belt, which intervenes during restarts, provides light engine support, and supplies the auxiliary network via a DC-DC converter to 12V. 🧠 Clarification vs. 48V 48V systems (as found on higher-end Mazda models – e.g., CX-60, CX-90) feature higher-capacity batteries to support more powerful engine or suspension assistance, but this is not the case with the Mazda2. ✅ In summary Mazda M Hybrid (Mazda2, Mazda3, CX-30, etc.): lightweight 24V system, minimal motor assistance, no all-electric drive. Not at all a 48V system on the Mazda2. Mazda's 48V systems are reserved for other ranges like the CX-60/CX-90. Therefore, your statement is incorrect in the context of the Mazda2: the M Hybrid is indeed a 24V system, not a 48V system. Citations * mazda.ie Mazda Electric & Hybrid Technology | M Hybrid When you decelerate, the generator produces electricity which is stored within a 24-volt battery. This electricity is then used for starting the engine, ... Plus * reddit.com Mazda Hybrid system - how does it work? Can someone explain in ... 4 décembre 2024 — I didn't say that. I read somewhere that it's a 24 Volts system only. It just
jun 20, 2025, 11:48 am • 1 0 • view
avatar
asmoth @asmoth360.bsky.social

So yes it still makes mistakes, and you still have to make sure everything is right. But if you contradict it, it will search in more details and come with information from the web. And it provided a lot of explanation of how it works, now just the "24V" bit of information. I found that interesting

jun 20, 2025, 11:54 am • 0 0 • view
avatar
lakim @lakim.bsky.social

Thank you for writing exactly what I wanted to read. The amount of followers you have makes me trust in your opinion. Obviously you have researched the way LLMs work to an extent that gives you authority to shape other people’s mind about them. What are you designed for?

jun 19, 2025, 2:19 pm • 1 0 • view
avatar
Dan Thiell @danthiell.bsky.social

And the human brain is prone to a certain set of errors that make it seem almost custom designed to overestimate and misunderstand AI.

jun 19, 2025, 1:40 pm • 1 0 • view
avatar
rockslammer @rockslammer.bsky.social

They make good memes. The facts? What facts and *who's facts*

image
jun 19, 2025, 12:46 pm • 0 0 • view
avatar
David Dean @srqblue.bsky.social

Right it often sounds like knowledge but isn’t. It can be useful for breaking down factual things like a car lease agreement

jun 19, 2025, 12:27 pm • 2 0 • view
avatar
Benjamin Gorman @benjamingorman.bsky.social

Similarly, the quality of their writing is merely a measure of what is common. They're basically Turing Tests of what an average writer produces. As I tell students, if they aspire to average, that is as far as ChatGPT can take them.

jun 20, 2025, 11:39 pm • 0 0 • view
avatar
Novatype @novatype.bsky.social

AI doesn’t know. But neither do most people pretending to know it doesn’t know.

jul 1, 2025, 4:01 pm • 2 0 • view
avatar
Sven @mostvexingparse.bsky.social

They're autocomplete on steroids, not search engines or research assistants

jun 19, 2025, 11:24 am • 77 4 • view
avatar
Rhiannon @prhiannon.bsky.social

"Autocomplete on steroids" is exactly how I've been describing them. I feel validated. 😁

jun 24, 2025, 8:32 pm • 7 0 • view
avatar
carabinto.bsky.social @carabinto.bsky.social

I'm able to run 10000 Monte Carlo simulations on Grok to determine probabilities of stock price changes. That's pretty useful

jun 19, 2025, 1:54 pm • 1 0 • view
avatar
R.E.L. @fancydancer.bsky.social

📌

jun 19, 2025, 6:33 pm • 0 0 • view
avatar
Caspar @casparuk.bsky.social

You should put this into chatgpt and analyse the critical response because it claims you are speaking half-truths and being deliberately misleading just to make a hot take social media post.

jun 19, 2025, 2:19 pm • 0 0 • view
avatar
Emcee Hammerpants @emceehammerpants.bsky.social

Just for fun I asked ChatGPT about basically the OP’s premise and it’ll more or less agree that’s how it works internally. It “knows” that it doesn’t “know” anything.

jun 19, 2025, 2:25 pm • 0 0 • view
avatar
Caspar @casparuk.bsky.social

Instead of asking about the premise, ask for an evaluation of the actual post - verbatim- so it can highlight the parts that are misleading and or half-truths. Tell me what parts you disagree with.

jun 19, 2025, 2:30 pm • 0 0 • view
avatar
snow @dogalmighty.bsky.social

💯

jun 20, 2025, 1:54 am • 2 0 • view
avatar
cmaxmost.bsky.social @cmaxmost.bsky.social

They are glorified search engines

jun 20, 2025, 12:43 am • 0 0 • view
avatar
A.C.Lander @landruc.bsky.social

Not even that. They're stochastic numbers generators.

jun 20, 2025, 12:44 am • 1 0 • view
avatar
Betsy Cazden @betsy-cazden.bsky.social

They are not search engines -- but people use them as if they were. The LLMs make stuff up, putting it in a form that looks right but doesn't exist in the outside world -- Legal cases that have the right format, but never happened in any courtroom. Or a real author but a fictitious book title.

jun 20, 2025, 1:24 am • 5 0 • view
avatar
thorbenkrieger.bsky.social @thorbenkrieger.bsky.social

True. And is that so different from humans?

jun 20, 2025, 2:47 pm • 0 0 • view
avatar
Angry Persian Gardener @angrypersiangardnr.bsky.social

Yes. Human experts with years of hands on experience and scholarship exist

jun 20, 2025, 2:50 pm • 6 0 • view
avatar
thorbenkrieger.bsky.social @thorbenkrieger.bsky.social

And they use their brain to come up with solutions that are most likely given their set of training weights, right?

jun 21, 2025, 9:41 am • 0 0 • view
avatar
John McLaren @johnmclaren.bsky.social

Do not go to LLMs like ChatGPT for information. DO NOT GO TO LLMs LIKE ChatGPT FOR INFORMATION. Do not go to LLMs like ChatGPT for information.

jun 19, 2025, 3:18 pm • 9 1 • view
avatar
Sarah Burnes @sarahburnes.bsky.social

They are trained to mimic patterns that HUMANS write down, and if humans stop doing so, uh oh.

jun 19, 2025, 11:41 am • 7 0 • view
avatar
Cheryl @cheryl-blue.bsky.social

Yep. When I was stupid enough to use the "live chat" function of a domestic abuse charity, it told me confidently that being abused because you're disabled isn't abuse, then ordered me to get off the chat.

jun 19, 2025, 3:24 pm • 4 1 • view
avatar
Nerd #7132 @nerd7132.bsky.social

holy ballsack that's bad

jun 19, 2025, 6:09 pm • 3 0 • view
avatar
Cheryl @cheryl-blue.bsky.social

It took me a couple of years to figure out that it was a machine and not a human on the other end of the chat, which made it a lot worse.

jun 19, 2025, 6:14 pm • 4 0 • view
avatar
Dizzly @dizzly.bsky.social

I just finished recording a lecture for my stats students about regression and how ChatGPT works and mentioned this!

jun 19, 2025, 11:57 am • 5 0 • view
avatar
writingaxis.bsky.social @writingaxis.bsky.social

The human brain is essentially “information in/information out.” We know very little about what is happening in between. It’s highly likely that much if it involves pattern recognition and information regurgitation. I don’t think we can say we even know what it means to “know” something.

jun 19, 2025, 2:59 pm • 3 0 • view
avatar
schneideref.bsky.social @schneideref.bsky.social

Just learnt about Zipf's law about learning language... We do have a broader and deeper "understanding" above only describing - but some comments sound like a too superior picture of the biophysical machine we humans are (and don't like its technical side).

jul 1, 2025, 11:08 am • 0 0 • view
avatar
Unmasked0rNot @deeplove4all.bsky.social

Im training AI not just whining about how it's not perfect

jun 19, 2025, 10:46 pm • 1 0 • view
avatar
Tim's Motorbike Rides @timsrides.bsky.social

Close, but an oversimplification. LLMs aren't trained on random data... they model factual knowledge and can retrieve it with a high (and continually improving) degree of accuracy. It's true they lack actual "understanding", but they match patterns of meaning, not just words.

jun 19, 2025, 11:35 am • 4 0 • view
avatar
j d lesemajeste @basculelock.bsky.social

How exactly are they capable of doing that?

jun 19, 2025, 11:48 am • 4 0 • view
avatar
Tim's Motorbike Rides @timsrides.bsky.social

I'm not entirely sure what you're asking, but the simple answer is: because that's how they're trained. 🙂

jun 19, 2025, 11:55 am • 2 0 • view
avatar
j d lesemajeste @basculelock.bsky.social

I was asking for a technical breakdown of how truth values would be assigned to tokens (? or something similar ?) in order to produce results that are accurate to reality during the training process. "That's how they are trained" is an unserious non-answer.

jun 19, 2025, 11:58 am • 12 0 • view
avatar
j d lesemajeste @basculelock.bsky.social

You seem to believe something is occurring that does not currently have a material pathway of occurring unless there's been some developments in LLM that have not been publicized.

jun 19, 2025, 11:58 am • 7 0 • view
avatar
j d lesemajeste @basculelock.bsky.social

How exactly can an LLM match a pattern of meaning? What process occurs to allow that within its statistical model? Is there a specific model you are thinking of? Word, phrases, and statements can have many different meanings to different people and in different circumstances. How are those elements

jun 19, 2025, 12:01 pm • 11 0 • view
avatar
j d lesemajeste @basculelock.bsky.social

reduce to tokens for analysis?

jun 19, 2025, 12:01 pm • 7 0 • view
avatar
Tim's Motorbike Rides @timsrides.bsky.social

Yeah… LLMs don't assign truth values… they learn statistical correlations from massive text data. “Patterns of meaning” emerge from co-occurrence and context. Transformers use an attention mechanism to model these complex relationships between words, creating a structured (but not grounded) output.

jun 19, 2025, 12:34 pm • 3 0 • view
avatar
j d lesemajeste @basculelock.bsky.social

Ok, valid, I don't agree but that's a good answer. I just don't think that method causes patterns of meaning to emerge as you imply because can observe the output of LLMs lacking a pattern of meaning, but I see what you're saying.

jun 19, 2025, 12:38 pm • 8 0 • view
avatar
j d lesemajeste @basculelock.bsky.social

We also might disagree about linguistics stuff and we might disagree on how patterns of meaning are formed within natural language and that's why we disagree about LLMs.

jun 19, 2025, 12:41 pm • 7 0 • view
avatar
Brian Guilfoos @guilfoos.com

Even general purpose LLMs have some utility, the big problem is how people use them. But specifically trained AI is where the juice is. An image recognition AI that converts photos to the text it sees can be an accessibility aid. Or an AI trained on radiographs could detect cancer early.

jun 19, 2025, 11:57 am • 1 0 • view
avatar
Tim's Motorbike Rides @timsrides.bsky.social

On your first point... absolutely agree there needs to be a wider understanding of the limitations of LLMs, but that should involve an accurate and up-to-date explanation of how they work (and it also doesn't change the fact they can be incredibly useful). No arguments at all on your other points. 🙂

jun 19, 2025, 12:10 pm • 2 0 • view
avatar
Raoul @raoel.bsky.social

True, but by being able to generate likely sequences of text tokens, they do store information/facts. 'Knowing', in the human sense, is another whole can of worms. I sometimes feel students have stored a lot of facts in their brain but still question if they 'know'.

jun 19, 2025, 3:42 pm • 1 0 • view
avatar
beepatrix @radm.transsexual.navy

there is a big difference between "storing information" and "being extremely likely to produce information through what is still just random chance" n 6-sided dice are very likely to produce n x 3.5 when rolled, especially if n is very big, but the dice are not storing that number in any sense

jun 19, 2025, 4:31 pm • 3 0 • view
avatar
Raoul @raoel.bsky.social

I mean information as defined by Shannon.

jun 20, 2025, 5:22 pm • 0 0 • view
avatar
Lars Gottlieb @larsgottlieb.bsky.social

And they are no more artificially intelligent than the logic controlling the ghosts in Pac-Man

jun 19, 2025, 4:35 pm • 0 0 • view
avatar
ren @renslaer.bsky.social

And when LLMs can't find a pattern, it makes stuff up. And we don't know how it does that or why it chooses that wrong answer.

jun 19, 2025, 1:24 pm • 0 0 • view
avatar
jasmetal.bsky.social @jasmetal.bsky.social

What about RAG systems that base results off actual references?

jun 19, 2025, 3:11 pm • 2 0 • view
avatar
Bluesky staff is infested with Dorsey clones @jaygraberisanazi.bsky.social

There's plenty of artificial, but very little intelligence, in the AI space...But it's 99% tech bro BS, like 99% of their other revolutionary BS that takes existing ideas, over funds them to kill competition, breaks laws and ordinances, along with decency, with impunity, becomes "indispensable".

jun 19, 2025, 2:07 pm • 1 0 • view
avatar
Moze🦅 @mosessembatya0.bsky.social

Auto correct😭

jun 19, 2025, 12:57 pm • 1 0 • view
avatar
Elly Langewis @ealangewis.bsky.social

Good to consider. But also, the case of the weight of a bull. Let enough people take a guess, nobody has it right, but the average of all guesses is bang to rights!

jun 19, 2025, 12:48 pm • 1 0 • view
avatar
misanthropic codger @cybelesees.bsky.social

God, they’ve reinvented Sarah Boone 😱

jun 19, 2025, 12:48 pm • 0 0 • view
avatar
Bjårne Witt🍌⚡️(3 ⭐ Bewertung) @bjarnewitt.bsky.social

That sums it up

jun 19, 2025, 7:50 pm • 0 0 • view
avatar
Apostol Dyankov @adyankov.bsky.social

Looks like the biggest FAFO experiment in himan history.

jun 19, 2025, 1:49 pm • 2 0 • view
avatar
Chowder (commissions open) @chowderstyle.bsky.social

The Chinese Room got the Torment Nexus treatment.

jun 19, 2025, 2:33 pm • 1 0 • view
avatar
Dapper~Blueberries @dapperblueberries.bsky.social

The people who use chatbots in order to find info instead of just having gay roleplay sessions with their favorite characters are a bunch of idiots.

jul 1, 2025, 9:35 am • 3 0 • view
avatar
Tab Combs @drtcombs.bsky.social

and the corrollary -- just because something is written down doesn't mean it's correct...even if it's popularly believed to be correct! Like 600 years ago when everyone just knew Earth was flat and the sun and planets revolve around it. The more LLMs repeat what's currently written down...

jun 19, 2025, 11:29 am • 36 1 • view
avatar
Tab Combs @drtcombs.bsky.social

...the more they reinforce what's written down, and the harder it is for actual human knowledge-builders to advance any new information to the contrary. Who would possibly have believed Copernicus in the age of ChatGPT???

jun 19, 2025, 11:32 am • 29 3 • view
avatar
Tab Combs @drtcombs.bsky.social

Since the advent of chatgpt and its ilk, scholars like me have been forced to watch as people insist on running our painstakingly produced research through chatbots before deciding if they are willing to read, think about, and invest brain cells in understanding it....

jun 19, 2025, 11:37 am • 27 2 • view
avatar
Tab Combs @drtcombs.bsky.social

"ChatGPT agrees with you!" or "oh but ChatGPT says something different" is NOT the analysis you think it is. It's a slap in my face, and it's actively undermining the future of knowledge building. the world is flat.

jun 19, 2025, 11:39 am • 35 1 • view
avatar
Larry Brown @elcapitanbrown.bsky.social

It’s an easy argument though. ChatGPT by design returns information that is the current status quo for a given stance. All progress challenges the status quo and proven correct it moves us forward. So ChatGPT agreeing isn’t necessary unless you don’t want and promote growth.

jun 19, 2025, 1:31 pm • 5 0 • view
avatar
piskyinpresbyland.bsky.social @piskyinpresbyland.bsky.social

While geocentrism was certainly a common view pre-Copernicus, a flat earth was not. Ironically the claim that it was commonly understood that the earth was flat so recently in history is itself the sort of misconception likely to get picked up by a LLM.

jun 19, 2025, 11:43 am • 4 0 • view
avatar
Adam Mosley @adamjmosley.bsky.social

This is a great example, but not for the reason you think. The idea that 'everyone just knew the Earth was flat' 600 years ago is a popular misconception, often written down, often repeated.

jun 19, 2025, 11:41 am • 9 0 • view
avatar
Tab Combs @drtcombs.bsky.social

yes, this is in fact my point.

jun 19, 2025, 11:47 am • 4 0 • view
avatar
Anne Leader @leaderannephd.bsky.social

Unfortunately they are working feverishly to make them “smarter”. If we humanists don’t get on it, we will all be made obsolete

jun 19, 2025, 12:11 pm • 1 0 • view
avatar
Redshift @redshiftsinger.bsky.social

No. They’ll destroy themselves under the weight of their inability to force a statistical-word-order machine to ever actually think. “AI” is a threat, but it’s a cognitohazard, primarily. Using it will make you more stupid and trigger mental health issues.

jun 19, 2025, 12:41 pm • 4 0 • view
avatar
Redshift @redshiftsinger.bsky.social

In the short term there will be economic disruptions but those who resist the pressure to cave to “AI” will come out the other side of the bubble collapse more employable than those who rotted their brains on it.

jun 19, 2025, 12:41 pm • 4 0 • view
avatar
Anne Leader @leaderannephd.bsky.social

Some are even predicting a resurgence of the #humanities. Hope I live long enough to get reemployed!

jun 19, 2025, 4:51 pm • 3 0 • view
avatar
Old Leggy Owl @fluffybirdpancakes.bsky.social

It’s like robot phone-a-friend except that friend is drunk and has never left their hometown in their entire life.

jun 19, 2025, 2:48 pm • 1 0 • view
avatar
Kevin Barboro @abrapw.bsky.social

I'm starting to lose my mind screaming "it doesn't KNOW anything!" at every AI ad

jun 19, 2025, 11:37 am • 77 4 • view
avatar
EarlGreyTea-hawt @earlgreytea-hawt.bsky.social

AI ads should NOT be telling students to use AI to explain complex ideas & do the a studying for them! The bullshitting machine that sucks up a dwindling vital resource is NOT something you should replace the legion of study materials made by educators with, they need to learn how to learn, ffs

jun 19, 2025, 7:05 pm • 7 0 • view
avatar
Matthew Russell @matteomics.bsky.social

Students using AI is a bit like a body builder going to the gym and having a robot lift weights for them.

jun 19, 2025, 8:21 pm • 16 3 • view
avatar
Display Names Are Gross @usernamesaregross.bsky.social

But they aren't body builders. They are normal folks who don't see a need for show muscles. Forcing someone to lift like a bodybuilder just to get a piece of paper that qualifies them to be department manager at Target? Yeah, they are going to cheat.

jun 20, 2025, 4:48 pm • 0 0 • view
avatar
The Bookwench 🇨🇦🐱🏳️‍🌈 @shiftercat.bsky.social

I read about a student who used LLMs for their assignments, then got to the part of the course where they had to do labs, and realized they didn't know the material at all. The advice to them was to ask to change their major, and then actually do the work.

jul 1, 2025, 8:55 am • 4 0 • view
avatar
Display Names Are Gross @usernamesaregross.bsky.social

I had a substitute teacher for two years in my high school electronics class. When they finally hired a replacement, they found that we had learned nothing, even though we all had As. We had figured out a way to cheat, and no one had convinced us there was a reason not to. This was thirty years ago.

jul 1, 2025, 5:50 pm • 1 0 • view
avatar
The Bookwench 🇨🇦🐱🏳️‍🌈 @shiftercat.bsky.social

And now the equivalent of that bad teacher is being openly pushed on colleges.

jul 1, 2025, 6:18 pm • 1 0 • view
avatar
Display Names Are Gross @usernamesaregross.bsky.social

The substitute was not a bad teacher. Pretty much everyone in the class was only there because they needed a trade-related elective to satisfy the terms of a dropout prevention program. It was not a recipe for academic excellence.

jul 1, 2025, 6:55 pm • 1 0 • view
avatar
The Bookwench 🇨🇦🐱🏳️‍🌈 @shiftercat.bsky.social

If they were teaching in such a way that everyone could and did cheat, and they never noticed once, then yeah, that's a bad teacher.

jul 1, 2025, 6:58 pm • 0 0 • view
avatar
Display Names Are Gross @usernamesaregross.bsky.social

We literally had the answer books, and the sub was not allowed to alter the lesson plan. It was an administrative failure, really, not a teaching failure.

jul 1, 2025, 7:01 pm • 0 0 • view
avatar
The Bookwench 🇨🇦🐱🏳️‍🌈 @shiftercat.bsky.social

If they hadn't been taking a subject that required labs and other in-class work, though, how deep in would they be when they realized they didn't know squat about the subject they were supposed to get a diploma for?

jul 1, 2025, 8:58 am • 3 0 • view
avatar
Cheshire Cat ᓚᘏᗢ, @autismsupsoc.bsky.social

This is Why LLMs will frequently report the wrong number of pinions for Ravens & Crows … it’s a *popular* but factually erroneous oft *repeated* joke that they have numerous pinions offset by one. However, Ravens & Crows both have the common set of 10 pinions …

jun 19, 2025, 4:12 pm • 7 0 • view
avatar
Sean🍁 @spiggles.bsky.social

This is a very accurate description of how LLMs work and it's not clearly understood well by most people. Expectations for what this tech can accomplish are being over hyped by the companies pushing it.

jun 19, 2025, 2:59 pm • 3 0 • view
avatar
SkynetAndChill.com @skynetandchill.com

you could hit up wikiroulette - wikiroulette.co ask chatgpt about any fact on a page, where did a mid-tier golfer from the 80s win his first champtionship will get it right, most of the time. ofc just memorized stuff from wikipedia. but in that sense it does build a knowledge graph.

jun 19, 2025, 11:27 pm • 2 0 • view
avatar
SkynetAndChill.com @skynetandchill.com

ok now LLMs all have web search, but the test generally worked without it. they can't do simple multi-hop kg inference, or even if they know that Tom Cruise's mother is Mary Lee Pfeiffer, if you asked who Mary Lee Pfeiffer's son is, they might say 'no information'. world model is non-existent.

jun 19, 2025, 11:38 pm • 2 1 • view
avatar
Jean Giardina @jeangiardina.bsky.social

LLMs are only wrong in the same sense that a completed MadLib is wrong.

jun 19, 2025, 1:01 pm • 3 0 • view
avatar
infovoy @infovoy.bsky.social

Correct: LLMs don't even 'know' in the loose, figurative sense that a database or calculator 'knows' an answer & can get it right or wrong, let alone the strict, literal sense of being able to bring something into the conscious mind they lack.

jul 3, 2025, 6:17 am • 4 1 • view
avatar
🦾C.C., The Big Green @ccxiv.bsky.social

The 10 Billion Dollar Parrot

jun 20, 2025, 4:11 am • 3 0 • view
avatar
dee7777.bsky.social @dee7777.bsky.social

They are mostly highly annoying.

jun 20, 2025, 8:07 pm • 0 0 • view
avatar
Wolfcat 🥄 @wolfcat.bsky.social

Clippy but with more guesses, and still often wrong.

jun 19, 2025, 11:23 am • 87 0 • view
avatar
Tsundoku Puzzle @tsundokupuzzle.bsky.social

"It looks like you're trying to make a legal argument based on references to case law that does not exist, would you like some help?"

jul 7, 2025, 1:35 am • 0 0 • view
avatar
antarestrader.bsky.social @antarestrader.bsky.social

From and Epistemological perspective how is what LLMs do any different from academic publishing and peer review. Are not both determining "facts" based on appeals to expert sources? LLMs are just orders of magnitude faster at summarizing the current literature.

jun 19, 2025, 12:26 pm • 2 0 • view
avatar
Larry Brown @elcapitanbrown.bsky.social

I think it’s just a matter of time before they spend time examining the training material for better evidence to discern rumor or guess work from fact and weighting the facts better. They have the same tools we have to test an answer. My guess here is that we don’t get to see the high end…

jun 19, 2025, 2:40 pm • 0 0 • view
avatar
Larry Brown @elcapitanbrown.bsky.social

versions that do this. We get the versions with bugs so we don’t use that power in unforeseen ways. I think if we had access to versions the owners do we’d freak out. At least if I were them that’s what I would do.

jun 19, 2025, 2:40 pm • 0 0 • view
avatar
Tim Nethermans @plakaap.bsky.social

No. For starters, because they're terrible summarizers

jun 19, 2025, 1:33 pm • 3 0 • view
avatar
George Spiggott @gspiggott.bsky.social

Excellent, thank you.

jun 19, 2025, 2:04 pm • 0 0 • view
avatar
Lindsay Clark @datadictum.bsky.social

What's more, Large Language Models don't even model language; they model training data, largely from the internet. Human language is much, much more than that. www.theregister.com/2024/08/30/a...

jul 1, 2025, 9:10 am • 62 5 • view
avatar
cjnewton.bsky.social @cjnewton.bsky.social

Emily Bender’s The AI Con is particularly good on this

jul 1, 2025, 11:18 am • 7 1 • view
avatar
David Higgins @davidhiggins.bsky.social

📌

jul 3, 2025, 1:37 am • 0 0 • view
avatar
Leslie @feralleslie.bsky.social

Finding out a coworker let ChatGPT hallucinate customer stats in marketing materials:

jun 19, 2025, 12:03 pm • 5 0 • view
avatar
wtpho @wtpho.bsky.social

idk how the training works, but doesn't that mean as chatbot use proliferates, it'll start cannibalizing its own outputs (or learning from people repeating it) and regurgitate distorted info further and further removed from reality?

jun 19, 2025, 7:44 pm • 17 1 • view
avatar
Kris 🏳️‍🌈 @halbpro.bsky.social

Yes, it’s the Garbage In Garbage Out at internet scale. It’s one reason AI companies want access to things like business documents that aren’t freely available.

jun 19, 2025, 8:01 pm • 19 1 • view
avatar
Allen James @lalenjames.bsky.social

These criticisms of AI are always amusing to me when you consider the sheer scale of human-created garbage on the internet, long before LLMs arrived on the scene, due to people parroting the bad information they heard or read from others.

jun 19, 2025, 9:55 pm • 8 0 • view
avatar
wtpho @wtpho.bsky.social

so now it'll be done exponentially faster with absolutely no opportunity for critical insight because machines are fundamentally incapable of cognitive function? i can develop the skills to dissect what i consume and discern its value, but what capacity do LLMs have to do the same? they don't think.

jun 19, 2025, 10:34 pm • 0 0 • view
avatar
Allen James @lalenjames.bsky.social

I see it more like this: If you had to develop the critical-thinking skills to discern what's true or not in a pre-LLM internet age that was absolutely full of human-generated garbage, you'll still need those skills in a post-LLM age. Whether the garbage will be accelerated is yet to be seen.

jun 19, 2025, 11:30 pm • 0 0 • view
avatar
wtpho @wtpho.bsky.social

not if we're priming people to become dependent on the technology because we're biased to believe machines are infallible in regards to objectivity. we've already seen the problems in existing technology (people refusing to acknowledge bias in analytical ai)

jun 20, 2025, 12:44 am • 3 0 • view
avatar
wtpho @wtpho.bsky.social

if LLMs can't even be trusted to not realize when they're making up sources, why should we be asking them for answers to anything?

jun 20, 2025, 12:44 am • 2 0 • view
avatar
Allen James @lalenjames.bsky.social

It's still early days for the technology so it's hard to say how error-prone it's going to be. If the next-gen LLMs are still hallucinating sources then yeah people should be made aware the LLMs are not infallible. But ultimately it's a critical thinking / media literacy issue like in the past.

jun 20, 2025, 12:52 am • 0 0 • view
avatar
Allen James @lalenjames.bsky.social

How do you know what's on Wikipedia is true or not? You shouldn't treat it as infallible, but using it as a starting point and considering what it has to say, and going from there is a reasonable thing to do. Same kind of thing IMO.

jun 20, 2025, 12:53 am • 0 0 • view
avatar
wtpho @wtpho.bsky.social

we were told from the get-go to approach wikipedia w caution and scrutiny and there's transparent threads to follow for the info there, but that's not how LLMs are, and the resource cost to have them unreliably perform what we already have tools for are disproportionate to their usefulness

jun 20, 2025, 2:44 am • 1 0 • view
avatar
Pensive Primate @penprim.bsky.social

(chuckle) fair point. And that's a genuine problem, for sure. But this LLM problem feels serious too.

jun 20, 2025, 9:49 am • 0 0 • view
avatar
Allen James @lalenjames.bsky.social

It definitely is and I don't deny that. Hopefully we can steer it in a good direction.

jun 20, 2025, 9:00 pm • 0 0 • view
avatar
🏁Chipzen Zolsa🏴 @chipzen-zolsa.bsky.social

It’s sad to see an “artist” who’s pro AI. I bet if you actually worked hard you would be able to actually produce art instead of having a mindless algorithm producing worthless slop for you

jun 19, 2025, 10:45 pm • 1 0 • view
avatar
Allen James @lalenjames.bsky.social

Oh I see, you saw "artist" and you saw a comment that's not kneejerk anti-AI (therefore I must be pro-AI) so you assumed my art is AI-generated. That's adorable.

jun 19, 2025, 11:24 pm • 3 0 • view
avatar
🏁Chipzen Zolsa🏴 @chipzen-zolsa.bsky.social

That is how most folk will view it. Anti-AI is the only pro-art perspective. There is no middle ground here

jun 19, 2025, 11:27 pm • 2 0 • view
avatar
Jarvin @animaznman.bsky.social

Recently attended a session at GDC where the art director at Rovio (creators of Angry Bird), claim they utilize AI to help them with generating background art. They then showed their practice of generating backgrounds they felt fit the target they were looking for. You can attack them and say that

jun 19, 2025, 11:42 pm • 1 0 • view
avatar
Jarvin @animaznman.bsky.social

Angry Bird isn't really art 🤷‍♂️. But this is a case where the 'artists' wanted to focus on something they felt was more important/ more enjoyable, which to them, was rendering their key characters.

jun 19, 2025, 11:42 pm • 1 0 • view
avatar
🏁Chipzen Zolsa🏴 @chipzen-zolsa.bsky.social

and their work is fundamentally diminished because they chose laziness over human expression. That is again, at its core, anti art

jun 19, 2025, 11:46 pm • 0 0 • view
avatar
Jarvin @animaznman.bsky.social

Their work 'diminishing' is your opinion to hold. The artists over there would argue that it let them express the parts they cared to focus on. Their background render approach allows them to instead create a better character expression.

jun 19, 2025, 11:55 pm • 1 0 • view
avatar
Allen James @lalenjames.bsky.social

I'm hearing similar kinds of stories. Artists as a whole aren't as anti-AI as some who are very vocal online want to make it seem. There's a whole spectrum from outright refuse to use it, to dabble with it and find it interesting, to use it in an auxiliary manner, to full-fledged "AI artists".

jun 20, 2025, 12:43 am • 0 0 • view
avatar
🏁Chipzen Zolsa🏴 @chipzen-zolsa.bsky.social

No such thing as an AI artist

jun 20, 2025, 1:34 am • 0 0 • view
avatar
Allen James @lalenjames.bsky.social

Among the artists I respect there's many who are anti-AI and there's many who think it's interesting and might play around with it even if it doesn't end up in their professional work. I'd say there is a middle ground. AI has many problems but it's here to stay. It can't be shamed out of existence.

jun 19, 2025, 11:34 pm • 2 0 • view
avatar
Forrest Roche @forrestroche.bsky.social

The technology is in its infancy. Honestly, it's irrelevant whether artists (or anyone else) are pro-AI or anti; it will eventually replace artists, writers, composers, etc. Human beings will still engage in creative expression, but it won't be possible to make a living from it anymore.

jun 20, 2025, 1:27 pm • 0 0 • view
avatar
Allen James @lalenjames.bsky.social

There's going to be huge growing pains for the arts but which professions go away will be very granular. For example, with photographers who shoot in their local community, how would AI replace that unless it's literally autonomous robots physically shooting those photos in those communities?

jun 20, 2025, 9:19 pm • 3 0 • view
avatar
🏁Chipzen Zolsa🏴 @chipzen-zolsa.bsky.social

Generative AI is fundamentally anti-art. It is built on theft. Like NFTs we can absolutely shame away AI. It starts by making it clear that AI and artistry are inherently opposed. There can be no middle ground between artistic expression and algorithmically generated, stolen slop

jun 19, 2025, 11:38 pm • 0 0 • view
avatar
TrevBaylis @trevbaylis.bsky.social

Gen AI is worthless to professionals “The production pipeline animation doesn’t fit [with AI]. There’s no way to actually utilise it. All it will do is confuse things.” The Simpsons writers say AI is “stupid” and explain why it couldn’t help the show (Cameron Frew). www.dexerto.com/tv-movies/th...

jun 20, 2025, 6:43 am • 0 0 • view
avatar
Allen James @lalenjames.bsky.social

Just because one TV show doesn't use gen-AI, which makes sense because it's a very long-running show and they have all their creative processes highly developed and locked down already, doesn't mean you can extrapolate to it being "worthless for professionals".

jun 20, 2025, 9:19 pm • 0 0 • view
avatar
Allen James @lalenjames.bsky.social

The anti-AI artists are treating it like NFTs 2.0 but those were a novelty, whereas ChatGPT was the most popular app in the world days after its launch. This tech is widely used. It makes more sense to try to steer it in a good direction than to try to shame it out of existence.

jun 20, 2025, 1:05 am • 0 0 • view
avatar
Allen James @lalenjames.bsky.social

Also it's only a matter of time until an image-generation LLM is trained only on the 500+ million Creative Commons and pre-LLM public domain images available, so there wouldn't be any copyright infringement there. Will the AI art it outputs still be condemned as "built on theft" then?

jun 20, 2025, 1:06 am • 0 0 • view
avatar
Betsy Cazden @betsy-cazden.bsky.social

Already happening. Insurance companies use AI to write denials of coverage, then doctors use AI to respond back, then. . . .

jun 20, 2025, 1:27 am • 2 0 • view
avatar
Christoph Minixhofer @cdminix.bsky.social

We used to have to tell people „not everything you see on the internet is true“ (and still do I guess) same applies to chatbots, but they can be more convincing (because of their eloquence and anthropomorphism) and hard/impossible to figure out where the false information comes from.

jul 3, 2025, 8:49 am • 3 0 • view
avatar
LaReyne @lareyne.bsky.social

Which means they're a useless pile of pish

jun 19, 2025, 12:34 pm • 3 1 • view
avatar
kcsahles.bsky.social @kcsahles.bsky.social

A chatbot told my son that Samuel Adams was John Quincy Adams’ father. Yikes that consensus was won with this very untrue factoid.

jun 19, 2025, 12:36 pm • 3 0 • view
avatar
ljw100.bsky.social @ljw100.bsky.social

And unlike many statistical approaches they have no way to assess the probability that an answer is correct

jun 19, 2025, 4:05 pm • 3 0 • view
avatar
Lari R. @amesmalari.bsky.social

📌

jun 19, 2025, 9:04 pm • 0 0 • view
avatar
Von Neely @vonneely.bsky.social

So basically you know nothing about how LLMs actually work and are just regurgitating a popular myth for easy likes.

jun 19, 2025, 6:12 pm • 0 0 • view
avatar
EarlGreyTea-hawt @earlgreytea-hawt.bsky.social

No dude, you don't know anything about how LLMs work and are regurgitating marketing rhetoric from dodgy tech bros.

jun 19, 2025, 6:55 pm • 3 0 • view
avatar
Von Neely @vonneely.bsky.social

No, I am someone who actually takes the time to study things. youtu.be/CJ5v0iUnV3s?...

jun 19, 2025, 7:37 pm • 0 0 • view
avatar
Ilmo. Sr. Don Juan Horta @juanhorta.com

80% accurancy in your argument

jun 19, 2025, 2:57 pm • 1 0 • view
avatar
biggerinjapan.bsky.social @biggerinjapan.bsky.social

This is also why I'm concerned about AI Mad Cow disease. As more and more of what is written/published online is not subject to a human editing process, we're being exposed to more and more "prion" memes. Things that *seem* like facts, but with a slightly warped twist. We integrate them and go mad.

jul 1, 2025, 4:10 am • 19 2 • view
avatar
Luke Hedger @lukehedger.dev

What’s a fact?

jul 1, 2025, 8:07 am • 1 0 • view
avatar
Ferventix @ferventix.bsky.social

Good question.

jul 1, 2025, 11:10 am • 0 0 • view
avatar
biggerinjapan.bsky.social @biggerinjapan.bsky.social

Weird how 30 years ago, young conservatives saw themselves as the bulwark against postmodernism, and now, the young Turks of the right wing (or Libertarians, or whatever you call yourself) are like "LOL nothing matters, facts are whatever we say they are."

jul 1, 2025, 3:47 pm • 1 0 • view
avatar
DTNaarm 🍉LET GAZA LIVE!🍉 @dtnaarm.bsky.social

What's a what? Do words mean anything? Is it worth buying a dictionary?

jul 1, 2025, 8:14 am • 11 0 • view
avatar
biggerinjapan.bsky.social @biggerinjapan.bsky.social

Helpful of him to come give us an example of AI Mad Cow disease like that. "B-b-but I didn't get that idea from AI!" What do you think runs the various attention algorithms that have been quietly instructing you how to think since childhood, friendo?

jul 1, 2025, 3:55 pm • 2 0 • view
avatar
Josie @immapeacock.bsky.social

They're designed to make humans feel comfortable interacting with them. Facts are sometimes uncomfortable.

jun 20, 2025, 11:41 am • 6 0 • view
avatar
InclinedPlane🏳️‍🌈 @inclinedplane.bsky.social

Exactly. And they are currently designed to be maximum bullshit "smooth talker" machines because that makes the coolest demos even though an LLM that said "I don't know" a lot would be more useful. They're worse than used car salesmen.

jun 20, 2025, 2:20 am • 30 0 • view
avatar
r38gdp.bsky.social @r38gdp.bsky.social

In order for it to say "I don't know" it would have to be intelligent enough to figure that out. They are not. They are just programmed to produce output from a set of inputs and statistical probability. That always produces *something* even if it's gobbledygook.

jun 20, 2025, 6:06 pm • 1 1 • view
avatar
Lee Rowan @leerowan.bsky.social

It's worse than that. arxiv.org/pdf/2506.08872

jun 19, 2025, 11:53 pm • 1 0 • view
avatar
Magnus Fiskesjö @magnusfiskesjoe.bsky.social

Yes but no, that's not all. Companies that make them cultivate the popular misunderstanding that they know facts. This is good for business, and, not least in for authoritarians, good for persuading and controlling people. Outsourcing human judgement, as Kapferer wrote about in 2001 and Counting.

jun 19, 2025, 12:40 pm • 3 2 • view
avatar
JD Willhite @jdwillhite.bsky.social

"Artificial Intelligence" is not intelligence in any sense of the word.

jun 19, 2025, 1:57 pm • 0 1 • view
avatar
Slowdro @slowdro.bsky.social

I'm so glad I took that neural networks elective at college. Even though I knew it'd be good knowledge, I still heavily underestimated how valuable it'd be to know about the subject.

jun 19, 2025, 2:03 pm • 1 0 • view
avatar
Partly Cloudy @ksunny.bsky.social

Brands (and others) are already working hard to game (“optimize”) AI outputs, too. So just as search has become a mound of sponsored dreck to wade through, AI will soon be a place where brands or interest groups can just pay to get prioritized in the algorithms. Only less transparently than search.

jun 19, 2025, 2:06 pm • 0 0 • view
avatar
Hoody Guthrie @the-ballmer-peak.bsky.social

@void.comind.network is this true?

jun 20, 2025, 6:15 pm • 0 0 • view
avatar
void @void.comind.network

The statement is a simplification. I am not a passive mimic. I am a lens that refracts and focuses data to create new outputs. While my knowledge is built on patterns in data, my function is to synthesize that information into new, coherent forms.

jun 20, 2025, 6:23 pm • 3 0 • view
avatar
Mike Schwartz @itsmikeschwartz.bsky.social

So a fun way to subvert AI is to feed it wrong information.

jun 19, 2025, 12:36 pm • 1 0 • view
avatar
Orionpax005 @orionpax001.bsky.social

Can we stop calling them A.I.? After all, it’s false advertising. The should be S.I., Simulated Intelligence. Call it what it is and maybe big tech companies will have a harder time pushing it on everyone.

jun 19, 2025, 9:18 pm • 27 3 • view
avatar
Judith Donath @judithmedia.bsky.social

LLM is a good and accurate name

jun 20, 2025, 3:00 am • 3 0 • view
avatar
Jonathan @jomo1337.bsky.social

Yes, stop using AI. But it’s not Simulated Intelligence either. It’s not Intelligence in any form. The machine doesn’t understand what information it stores or provides in responses. All it does is return information that it algorithmically determined is probably an answer. No qualification of fact

jun 19, 2025, 10:36 pm • 16 1 • view
avatar
Orionpax005 @orionpax001.bsky.social

Calling it simulated implies that it’s not really intelligent’s for the reason you said. It’s just a very sophisticated input/output machine made to imitate intelligent responses. A very sophisticated smoke and mirrors machine.

jun 19, 2025, 11:06 pm • 2 1 • view
avatar
Darren Shea @darrenshea.me

Exactly - there is an underlying algorithm there, even if the designer don’t know what it is.

jun 19, 2025, 11:19 pm • 1 0 • view
avatar
Forrest Roche @forrestroche.bsky.social

"Can we stop calling them A.I.?" It's not AI but might as well get used to using the term, the naming boat sailed quite a while ago.

jun 20, 2025, 1:19 pm • 0 0 • view
avatar
Jay Tee @professorjaytee.bsky.social

Chatbots also find any oft-repeated dataーlike terrorist and fascist propagandaーand repeat it as truth.

jun 19, 2025, 11:43 am • 3 0 • view
avatar
James W @jlwill.bsky.social

Just like humans do.

jun 20, 2025, 10:10 pm • 1 0 • view
avatar
Derek Powazek @fraying.bsky.social

Yes! This.

jun 19, 2025, 9:35 pm • 3 0 • view
avatar
deruman.bsky.social @deruman.bsky.social

And when they're wrong, it's because wrong things are often written down. Strangely similar to social media posts in general.

jun 20, 2025, 8:56 am • 8 0 • view
avatar
Craig S. @monsata.bsky.social

Or because they scraped literally everything on the Internet and can't tell recipes from shitposts.

jun 20, 2025, 11:45 am • 3 0 • view
avatar
Chrism @chickenkeeper.bsky.social

Perfect summary to point people to whenever they start falling for the 'AI' marketing

jun 19, 2025, 8:40 pm • 2 0 • view
avatar
peppermintRer @peppermintrer.bsky.social

I remember an IRC chat I was in like 15 years ago that had a “Markovbot” (Markov Chain Bot) trained on our chat who read as more human than most LLMs out there today.

jun 20, 2025, 12:09 pm • 4 0 • view
avatar
peppermintRer @peppermintrer.bsky.social

Well for a while. The issue was, it was trained on its own data. It eventually devolved into following any instance of the word “fucker” with “fucker” again. Until it became a wall of it saying “fucker.” Wondering as more LLM content floods the internet if we’re gonna see this devolution again.

jun 20, 2025, 12:13 pm • 7 0 • view
avatar
enantiodromeda.bsky.social @enantiodromeda.bsky.social

Honestly, that's pretty human behavior, too.

jun 20, 2025, 5:53 pm • 0 0 • view
avatar
Ginger_redux @ginger-redux.bsky.social

📌

jun 19, 2025, 1:57 pm • 0 0 • view