avatar
Daniel José Older @djolder.bsky.social

I did not make those stats up (unlike genAI). Source 1: research.aimultiple.com/ai-hallucina... Source 2: (a nyt article but the link is to a reddit copy of it bc we don't link to the times here) www.reddit.com/r/technology...

aug 6, 2025, 2:11 am • 467 78

Replies

avatar
Daniel José Older @djolder.bsky.social

But even if something lies to you "only" 15% of the time, it's not helpful, it's not saving you time, it's not making you smarter, it's just lying to you. That's more work. If your research assistant lied to you 15% of the time you would no longer work with them.

aug 6, 2025, 2:14 am • 752 140 • view
avatar
Daniel José Older @djolder.bsky.social

To everyone trying to argue with me in this thread, you're not making any sense. allow me to refer you to this crucial point

aug 6, 2025, 4:25 pm • 153 7 • view
avatar
Daniel José Older @djolder.bsky.social

"it lies only about twice that of humans." Nothing in this argument makes sense. It's AI slop defending AI slop. Incoherent.

Response to my post about 15% lying rate being bad:
aug 6, 2025, 4:50 pm • 127 7 • view
avatar
Daniel José Older @djolder.bsky.social

This one came out today www.theguardian.com/world/2025/a... And there are numerous examples of AI companies actively collaborating with the Israeli government in the genocide and ICE in domestic repression. Just about every head of every major AI company has bent the knee to trump.

aug 7, 2025, 12:30 am • 85 19 • view
avatar
Daniel José Older @djolder.bsky.social

See, all of this. Racism is embedded in the DNA of it because racists are programming it.

aug 7, 2025, 1:17 am • 87 16 • view
avatar
Daniel José Older @djolder.bsky.social

Finally genAI is right about something

aug 8, 2025, 1:38 am • 46 5 • view
avatar
Daniel José Older @djolder.bsky.social

💀💀💀💀

aug 8, 2025, 1:48 am • 62 8 • view
avatar
Daniel José Older @djolder.bsky.social

doc of resources on genAI's destructiveness and uselessness for all those naysayers in your life and in these comments

aug 8, 2025, 2:44 pm • 34 5 • view
avatar
Daniel José Older @djolder.bsky.social

Bros: CHATGPT IS THE FUTURE IT IS INEVITABLE YAAAAA chatgpt:

Query in ChatGPT: how many B are there in Blueberry Chatgpt: the word
aug 9, 2025, 9:18 pm • 33 8 • view
avatar
Daniel José Older @djolder.bsky.social

Source: open.substack.com/pub/xriskolo...

aug 9, 2025, 9:18 pm • 18 0 • view
avatar
Jon Danziger @danziger.bsky.social

🎶 I found my thrill On Bluebberry Hill🎶 — Fats GPTomino

aug 9, 2025, 9:30 pm • 2 0 • view
avatar
LadiMuzikLuva🪻🎶✨ @ladimuzikluva.bsky.social

Beer almost came outta my nose, wtactualfuck 😂😂😂

aug 8, 2025, 1:53 am • 1 0 • view
avatar
God Emperor Paul @ovid9.bsky.social

This is fine. The fact that the US stock market is propped up on the backs of 6 companies, one of which, NVIDIA, is propped up by the other 5 burning billions on AI, is fine. No bubble. No climate crisis. No problem. (No product. No profit. No nothing except bad bad bad shit)

aug 8, 2025, 1:51 am • 5 0 • view
avatar
alanscarl.bsky.social @alanscarl.bsky.social

Aw come on, the way Dafoe whipped his light saber against Harvey Keitel was wicked pissa

aug 8, 2025, 1:56 am • 0 0 • view
avatar
Darrin Calvin @darrincalvin.bsky.social

AI:

image
aug 8, 2025, 1:51 am • 1 0 • view
avatar
Daniel José Older @djolder.bsky.social

😂😂😂

aug 8, 2025, 1:52 am • 3 0 • view
avatar
Index @indexx.dev

ty for the clarification! will read that article in a little bit

aug 7, 2025, 12:55 am • 2 0 • view
avatar
Daniel José Older @djolder.bsky.social

Thanks for the question!

aug 7, 2025, 12:57 am • 2 0 • view
avatar
marcussimmons.cc @marcussimmonscc.bsky.social

notice how often genocide supporters, ai sloppers, and other bald heads can't argue the merits of their things as they exist in reality? "the thing *could* be awesome if these 68 necessary hypothetical conditions happen first!"

aug 6, 2025, 5:00 pm • 8 1 • view
avatar
DJ @djiini.bsky.social

i'd also add that i don't ask other humans to write things for me, so it's not a particularly strong argument anyway

aug 7, 2025, 1:46 am • 2 0 • view
avatar
Mark @aguasonic.com

Wrong. Hallucination amongst large-language-models is running approximately _thirty percent_ across the board, and it: -- is getting worse with time ('model collapse') -- cannot be removed (it is in the architecture)

aug 7, 2025, 4:24 am • 0 0 • view
avatar
Mark @aguasonic.com

~30 percent means it "lies" (if it even knew what that meant!!!) at more than _four times_ the rate of the population of humans, in general. Progress!!!

aug 7, 2025, 4:25 am • 0 0 • view
avatar
lledra @lledra.bsky.social

My gooosh what an absolutely awful argument they're trying to use

aug 6, 2025, 5:07 pm • 3 0 • view
avatar
Mike Draco @digitaldraco.bsky.social

They don't have even _one_ good argument! It's all just false analogies and "well it's happening so suck it up Luddite lulz".

aug 7, 2025, 2:00 am • 3 0 • view
avatar
Redshift @redshiftsinger.bsky.social

“It only lies (slightly more than) twice as often as humans” so you’re saying that humans are twice as reliable as the best “AI”, and using that as an argument… FOR the slop robot??? 🤨 (Not you Daniel, the other guy you’re responding to)

aug 6, 2025, 6:40 pm • 4 0 • view
avatar
lesbian ASMR thirst posting 🏳️‍⚧️ @psychpunk.bsky.social

Got into it with this person and it ended with "at least AI doesn't murder, cause war, etc." Like what are we even doing

aug 7, 2025, 12:32 am • 4 0 • view
avatar
Daniel José Older @djolder.bsky.social

Lmao please

aug 7, 2025, 12:35 am • 3 0 • view
avatar
RDS (formerly shawphd) @rds773.bsky.social

Even that’s not true Insurance companies using it to deny medical coverage are absolutely using it to murder people

aug 7, 2025, 3:27 am • 6 0 • view
avatar
StuffAboutHockey @stuffabouthockey.bsky.social

I expect humans to lie. I don't expect human paralegals to make up fake court cases to cite as precedent when writing legal briefs. I don't expect humans writing research papers to cite fake non-existent studies as sources. That's the difference.

aug 6, 2025, 7:51 pm • 10 0 • view
avatar
The Barbarienne - I will wish ill where I want to @thebarbarienne.bsky.social

And if a human does do those things, they get fired.

aug 6, 2025, 10:56 pm • 3 0 • view
avatar
Ian Wilmoth @cydonprax.bsky.social

so it's twice as unreliable as any other average online search which is in fact the baseline. an incredible endorsement from them really

aug 6, 2025, 4:52 pm • 36 0 • view
avatar
StuffAboutHockey @stuffabouthockey.bsky.social

The problem isn't you looking up something in a search engine. It's lawyers submitting legal briefs AI spat out that cite non-existent cases as precedent.

aug 6, 2025, 7:53 pm • 5 0 • view
avatar
StuffAboutHockey @stuffabouthockey.bsky.social

Or the head of the CDC releasing & making decisions based on AI-generated reports that cite fake non-existent studies.

aug 6, 2025, 7:54 pm • 7 0 • view
avatar
LN @lususnaturae0.bsky.social

If Excel randomly subtracted instead of adding (without warning) even 1% of the time, no one would ever have used it.

aug 6, 2025, 4:27 pm • 13 2 • view
avatar
fsherman.bsky.social @fsherman.bsky.social

Yes, having to double-check stuff to make sure it's right kills the time-saving aspect.

aug 6, 2025, 1:50 pm • 6 2 • view
avatar
plaidpanties.bsky.social @plaidpanties.bsky.social

Heck, back when I worked retail we dreaded the sales where they'd effed up... maybe 3% of the pricing bc that was a Nightmare - we'd have to check every price or customers would be furious.

aug 6, 2025, 5:40 pm • 2 0 • view
avatar
ditchx1.bsky.social @ditchx1.bsky.social

I wish I could repost this more

aug 6, 2025, 2:35 pm • 0 0 • view
avatar
Yet Another Emily @anomily.bsky.social

I saw a great quote from a court opinion that explained that AI is just a word predictor. It’s not even that good at predicting words! I had a suggested response to an email sending me a signed contract that was just, “Yum!”

aug 9, 2025, 9:24 pm • 1 0 • view
avatar
the frass and the furious @apophis426.bsky.social

i'd say 15% is in that range where something is the most dangerous - right often enough for a person's garbage instincts to start trusting it implicitly until it's too late

aug 7, 2025, 1:19 am • 1 0 • view
avatar
Transmental Me @transmentalme.bsky.social

So it lies only about twice that of humans. Research has shown that about 7% of all human communication is a lie. Or do you believe humans never lie? Even if they didn't lie at all, how often do you think they're confidently wrong? Or cause accidents due to mistakes?

aug 6, 2025, 7:31 am • 1 0 • view
avatar
Josh @jman4747.bsky.social

People lying 7% in general doesn’t mean that research assistants lie 7% of the time! People don’t lie equally across all types of communications.

aug 6, 2025, 12:42 pm • 16 0 • view
avatar
Transmental Me @transmentalme.bsky.social

7% of total communication means in general yes.

aug 6, 2025, 4:09 pm • 0 0 • view
avatar
mjfgates @mjfgates.bsky.social

I think that if I fucked up payroll 15% of the time, I would be out on my ass *immediately.* Also true of pretty much any other business function.

aug 6, 2025, 9:45 am • 42 0 • view
avatar
Transmental Me @transmentalme.bsky.social

I guess that's why so many people get fired for being bad at their job then.

aug 6, 2025, 4:08 pm • 0 0 • view
avatar
badly-drawn bee 🐝 @soapachu.bsky.social

"only"

aug 6, 2025, 12:57 pm • 4 0 • view
avatar
Transmental Me @transmentalme.bsky.social

So then do you feel people already lie a lot? Because yes, it's only twice that of a human, which is about 7% of total communication is a lie.

aug 6, 2025, 4:55 pm • 0 0 • view
avatar
badly-drawn bee 🐝 @soapachu.bsky.social

Even assuming that 7% was consistent across all forms of human communication (it's not), embracing technology which is twice as mendacious seems really fucking stupid.

aug 6, 2025, 5:07 pm • 3 0 • view
avatar
Transmental Me @transmentalme.bsky.social

7% of total communication is 7% of total communication, that is in fact the average across all forms of communication (it is). If we break down specific areas of communication that number can be significantly higher or lower.

aug 6, 2025, 5:11 pm • 0 0 • view
avatar
Transmental Me @transmentalme.bsky.social

As for embracing the technology, you're ignoring that each major model has improved on this metric by half. If that holds for the next generation of models coming this month, that would make them as likely as, or less likely than, a human to lie. See how the argument falls apart?

aug 6, 2025, 5:49 pm • 0 0 • view
avatar
Marsanges @marsanges.bsky.social

then why do we need to compound human fallibility with AI fallibility? It would be nice if it were better than us or would correct our weaknesses. Instead it just plays into, and greatly enhances our weaknesses (e. g. psychological ones).

aug 6, 2025, 11:55 am • 8 0 • view
avatar
Transmental Me @transmentalme.bsky.social

It generally is better than most people when given the right context.

aug 6, 2025, 4:10 pm • 0 0 • view
avatar
Antiqueight @antiqueight.bsky.social

only twice on its best day is still a bit much!

aug 6, 2025, 8:11 am • 5 0 • view
avatar
Transmental Me @transmentalme.bsky.social

No, the chart shows the newer models on the left and the older models on the right. Imagine that, the person who posted the chart was being intentionally misleading. Also known as lying.

aug 6, 2025, 4:11 pm • 0 0 • view
avatar
Antiqueight @antiqueight.bsky.social

I didn't reference any numbers, so take your disagreement to whichever reference they gave and take it up with them directly. You didn't cite your 7%. I was merely suggesting that yes, twice that is bad.

aug 6, 2025, 4:40 pm • 1 0 • view
avatar
Transmental Me @transmentalme.bsky.social

No, you said on its best day; these are all different models. You could have just Googled how often people lie and found it. I'm not trying to prove anything to you. The OP cited their statement and it was still intentionally misleading because none of you read and understood the cited data.

aug 6, 2025, 4:52 pm • 0 0 • view
avatar
Antiqueight @antiqueight.bsky.social

Considering various models have fabricated data every time I use them, I have experience of it being fairly highly crap, but ok. If its worst day is 15, it's still fairly terrible. And people transferring data don't generally lie; they may make mistakes. It regularly lies to me.

aug 6, 2025, 4:57 pm • 1 0 • view
avatar
Antiqueight @antiqueight.bsky.social

Technically it's every session when I sit down to use them. Eventually I either give up or get what I needed. Earlier models were actually better than now.

aug 6, 2025, 4:59 pm • 0 0 • view
avatar
Transmental Me @transmentalme.bsky.social

I completely disagree. Early models, as shown in the chart, were statistically and quantifiably much worse. I'm glad you're experimenting with AI, but you clearly want to blame it for its failure rather than take equal ownership of your partnership in that failure. AI isn't always right.

aug 6, 2025, 5:06 pm • 0 0 • view
avatar
Antiqueight @antiqueight.bsky.social

I have worked with it a fair bit and was enthusiastic as it would simplify things extensively. But I've found llms just aren't trustworthy for handling data. AIs can be great. I haven't been impressed with llms for data.

aug 6, 2025, 5:09 pm • 0 0 • view
avatar
Transmental Me @transmentalme.bsky.social

Considering various models have allowed me to build entire applications and research topics with cited materials I can verify, I have experience of it being fairly great but ok. If its worst day is 15, it's not far off from a person and you must think people are kinda terrible instead of fairly so.

aug 6, 2025, 5:03 pm • 0 0 • view
avatar
Antiqueight @antiqueight.bsky.social

15 is a lot worse than a person if a person is 7.

aug 6, 2025, 5:10 pm • 1 0 • view
avatar
Transmental Me @transmentalme.bsky.social

It is only a lot worse depending on the tolerance to error. Twice as bad doesn't mean bad. If the tolerance for error is 30% it is still far beyond acceptable. If the tolerance of poop shoveling is 30% and AI had an error rate of 15% and can self correct, why would you force a human to shovel shit?

aug 6, 2025, 5:14 pm • 0 0 • view
avatar
Transmental Me @transmentalme.bsky.social

Another possibility is that you're just not good at using AI, it isn't generally intelligent and how you prompt it and what context it is given matters a lot. The same goes for a human, what you ask, how you ask and what data they have greatly matters. People will pretend to know things they don't.

aug 6, 2025, 5:03 pm • 0 0 • view
avatar
Antiqueight @antiqueight.bsky.social

That doesn't explain why it used to be better. I've had AI experts try to create prompts to solve it once it started this, and they failed. I can eventually get it to work on occasion, but the same data and the same prompt on another day gives a different answer.

aug 6, 2025, 5:06 pm • 0 0 • view
avatar
r @recombobulating.bsky.social

as if you have to see something as perfect and pure and deserving of absolute authority and AI is better for that job than people could be

aug 6, 2025, 11:23 am • 1 0 • view
avatar
who cares @maaaatttt.bsky.social

I think people lie predictably. They do so out of fear, or anger, or compassion. AI follows no such logic.

aug 6, 2025, 12:13 pm • 7 0 • view
avatar
Transmental Me @transmentalme.bsky.social

They don't, but in general they do believe they're white lies.

aug 6, 2025, 4:16 pm • 0 0 • view
avatar
Joe Robertson @joro.bsky.social

If a machine is built to make up lies & present them as truth 15% of the time, while billions are spent to market that machine as being your best way to access truth, the whole thing is a lie. AI cannot be lying only 15% of the time. That level of fraud is fraud, period.

aug 6, 2025, 1:17 pm • 20 1 • view
avatar
Arty-san @arty-san.bsky.social

People lie. Shall we get rid of them as useless? It's up to US to use our judgment when using tools like AI, just as it's up to US to use our judgment when listening to assertions by other people.

aug 6, 2025, 6:55 am • 2 0 • view
avatar
Courtney Milan @courtneymilan.com

Turning off humans is called murder and turning off machines is called saving power

aug 6, 2025, 3:57 pm • 23 0 • view
avatar
Soylentninja @soylentnjnja.bsky.social

Would you keep an employee around that lies twice as much as the average human?

aug 6, 2025, 9:30 am • 9 0 • view
avatar
Tim Popelier, Tankhead Text Typer @tim-popelier.com

Only in CEO positions, going by the news.

aug 6, 2025, 9:35 am • 8 0 • view
avatar
r @recombobulating.bsky.social

the whole fucking point of AI is to replace human judgement with an automated patriarch. the people at the top have given up on the species because it keeps refusing to give them what they want

aug 6, 2025, 11:21 am • 5 0 • view
avatar
Arty-san @arty-san.bsky.social

No, that's not the point of AI. That's not why it was developed. It's entrepreneurs and greedy corporate interests that want to replace human judgment. You have to make a distinction between the technology and how people use it (or want to abuse it). They are separate issues.

aug 6, 2025, 11:23 am • 0 0 • view
avatar
r @recombobulating.bsky.social

It was developed by those people for the reasons I described. I wasn't saying it created itself. The technology doesn't exist at all without humans. It's always connected to them. And I don't have to follow whatever rhetorical rules/social norms you think apply here.

aug 6, 2025, 11:29 am • 4 0 • view
avatar
Arty-san @arty-san.bsky.social

No, the early developers of AI did not do it for those reasons. It was developed to help humans with education and creativity back in the 1970s and 1980s. How people are implementing it now is different from the vision the original pioneers had.

aug 6, 2025, 11:32 am • 0 0 • view
avatar
r @recombobulating.bsky.social

I am talking about AI as a superior replacement for human judgement. If that's not what that was then it isn't relevant to this conversation.

aug 6, 2025, 11:43 am • 2 0 • view
avatar
who cares @maaaatttt.bsky.social

when people lie, they do it for a reason. an associate at a law firm is not going to make up cites while writing a brief because that makes no fucking sense. the problem with AI hallucinations is that you have no idea where the lie is, so all of the output has to be validated.

aug 6, 2025, 12:12 pm • 14 0 • view
avatar
Arty-san @arty-san.bsky.social

Yes, exactly, all of the output has to be validated. That's part of the process.

aug 6, 2025, 12:27 pm • 0 0 • view
avatar
John Rogers @johnrogers.bsky.social

No, if a person did that, *you would fire or stop working with that person*. If your ATM was wrong 15% of the time, you would not be so casual. If a brand of hammer broke 15% to 30% of the time you’d stop using it.

aug 6, 2025, 1:32 pm • 29 0 • view
avatar
John Rogers @johnrogers.bsky.social

The question is why are you making an exception for a tool which not only doesn’t function like it’s supposed to, but is being sold to us and its use mandated *as if it did not have those flaws*.

aug 6, 2025, 1:34 pm • 27 2 • view
avatar
John Rogers @johnrogers.bsky.social

You are working under the assumption that tools are neutral, which they are not, particularly the ones that are marketed with lies, used by people who don’t understand them *because* of those lies, and also have these environmental effects. The thing is what it does.

aug 6, 2025, 1:40 pm • 17 1 • view
avatar
Arty-san @arty-san.bsky.social

Online interactive video gaming and bitcoin mining (to name only two examples) take a lot of computing resources. The environmental effects do concern me but China has demonstrated (twice) that American systems are bloated and AI can be run with far fewer resources.

aug 6, 2025, 1:49 pm • 1 0 • view
avatar
who cares @maaaatttt.bsky.social

bitcoin mining is also complete garbage, and online gaming uses a tiny fraction of the resources that either of the other sectors use.

aug 6, 2025, 7:15 pm • 2 0 • view
avatar
Redshift @redshiftsinger.bsky.social

People shouldn’t spend huge amounts of energy solving sudoku puzzles for fraud “money” either.

aug 6, 2025, 6:53 pm • 5 0 • view
avatar
Arty-san @arty-san.bsky.social

If the ATM were wrong 15% of the time I wouldn't use it for banking transactions. But I'm not using AI for banking. If an AI could make lottery, stock market, or slot machine predictions with 85% accuracy would you use it?

aug 6, 2025, 1:39 pm • 1 0 • view
avatar
Stan Fresh @stan-fresh.bsky.social

the affirmation machine that can't even model the present is going to model the future LMAO bridge-sellers are having a field day with this stuff

aug 6, 2025, 1:49 pm • 2 0 • view
avatar
John Rogers @johnrogers.bsky.social

But people aren’t using it to make predictions, they are using it to find, organize and present information, *which is the thing it is bad at*. And, also, knock-on effects.

aug 6, 2025, 1:47 pm • 11 0 • view
avatar
John Rogers @johnrogers.bsky.social

The phrasing of your own response carries its refutation.

aug 6, 2025, 1:48 pm • 10 0 • view
avatar
Arty-san @arty-san.bsky.social

LLMs are only the tip of the iceberg. Not even the tip, just a little chunk off to one side. AI is not an app, it's a coding approach. I don't know why so many people use LLMs as the exemplar for AI because it isn't. It just happens to be one that's easily available and thus frequently used.

aug 6, 2025, 1:57 pm • 0 0 • view
avatar
Julian @jl8e.bsky.social

It is literally what is being sold as “AI”, the magic technology. If you’re arguing for machine learning, which is actually useful in some problem domains, don’t chime in when people are talking about LLMs, which everyone talking about “AI” right now is.

aug 6, 2025, 2:07 pm • 4 0 • view
avatar
Arty-san @arty-san.bsky.social

If they are talking about LLMs then they should be specific about it because LLMs in no way exemplify the whole field of AI. It makes no sense to "define" AI as LLMs, that's like making assertions about the universe by talking only about the moon.

aug 6, 2025, 2:12 pm • 0 0 • view
avatar
Will "scifantasy" Frank @scifantasy.bsky.social

But you're not using AI for making lottery, stock market, or slot machine predictions.

aug 6, 2025, 1:48 pm • 5 0 • view
avatar
Arty-san @arty-san.bsky.social

The point I was making is that some tools need to be accurate to be useful (rulers, ATMs, clocks) but not all tools need to be 100% accurate to be useful. A tool that could predict the stock market or winning lottery numbers, or even some aspects of weather, with 85% accuracy would be useful.

aug 6, 2025, 1:54 pm • 0 0 • view
avatar
Will "scifantasy" Frank @scifantasy.bsky.social

AI is not one of those tools. In fact it is the opposite. If it is not accurate, much like a ruler, it is actively harmful.

aug 6, 2025, 1:56 pm • 4 0 • view
avatar
Arty-san @arty-san.bsky.social

Once again, that's a people problem. Unrealistic expectation and lack of education.

aug 6, 2025, 1:59 pm • 0 0 • view
avatar
Redshift @redshiftsinger.bsky.social

“AI” cannot and will never be able to do that.

aug 6, 2025, 6:55 pm • 2 0 • view
avatar
Redshift @redshiftsinger.bsky.social

“If unicorns lived in your shoes and could grant wishes, would you buy more shoes?” Is how you sound rn.

aug 6, 2025, 6:54 pm • 13 1 • view
avatar
who cares @maaaatttt.bsky.social

that’s fucking stupid. if an AI produces something and I have to check everything it did, that saves me no time.

aug 6, 2025, 12:40 pm • 6 0 • view
avatar
Arty-san @arty-san.bsky.social

Checking takes less time than gathering, organizing, and checking (since everything needs checking anyway).

aug 6, 2025, 12:42 pm • 0 0 • view
avatar
who cares @maaaatttt.bsky.social

no, not everything needs checking. a calculator that is wrong 15% of the time would be useless. If I am preparing a document and 15% of the cites are completely made up, that most likely changes the entire fabric of what I’m writing. And to be wrong 15% of the time it lights forests on fire.

aug 6, 2025, 12:44 pm • 3 0 • view
avatar
Arty-san @arty-san.bsky.social

That doesn't mean everything that's wrong 15% of the time is useless. I don't know why you think something has to be perfect to be useful. A narrow windy gravel road full of potholes isn't much good for regular cars or wheelchairs, but it's still useful for getting places you couldn't go otherwise.

aug 6, 2025, 12:53 pm • 0 0 • view
avatar
who cares @maaaatttt.bsky.social

to be clear, though, I don’t think LLMs are useless. they are fancy autocomplete. I do think they are not worth feeding human creative products into an IP woodchipper for, nor are they worth the amounts of money, silicon, and water they are consuming at present.

aug 6, 2025, 1:05 pm • 0 0 • view
avatar
who cares @maaaatttt.bsky.social

Get back to me when people are spending billions of dollars to build narrow windy gravel roads and touting them as the future of computing.

aug 6, 2025, 12:58 pm • 0 0 • view
avatar
Arty-san @arty-san.bsky.social

When the settlers first came here, a narrow windy gravel road was a big step forward. AI will improve. Some will benefit, some will never learn to use it effectively, but just as roads improved, AI will too. It's not going away so we'd better learn to steer it in positive directions.

aug 6, 2025, 1:06 pm • 0 0 • view
avatar
Redshift @redshiftsinger.bsky.social

It’s twice as bad as the average of all human communication. Things do not have to be perfect to be useful. They DO have to be better than doing without them to be useful. “AI” is worse at everything than humans are, ergo, it is not useful.

aug 6, 2025, 6:57 pm • 2 0 • view
avatar
Kat Vernon @katbird.bsky.social

You have to gather the information to validate the AI output, so you’re doing double work! If I’ve gathered and organized the information, then I don’t have to check it because I already know it’s correct!!

aug 6, 2025, 1:08 pm • 5 0 • view
avatar
Arty-san @arty-san.bsky.social

Double work isn't always a bad thing. We find things in one way and the AI finds it in another. Put them together and the whole may be better because of the different parts. I always check my work anyway, even if I gathered it myself. I tell people if you don't think AI is useful, don't use it.

aug 6, 2025, 1:14 pm • 0 0 • view
avatar
who cares @maaaatttt.bsky.social

…this is pointless. If you are gonna sit here and say “well actually it’s a good thing if it causes double the work” then I’m truly at a loss for words. ChatGPT, please draw me “Robert Downey jr rolling his eyes in the studio ghibli style”

aug 6, 2025, 1:18 pm • 3 0 • view
avatar
Arty-san @arty-san.bsky.social

Yes, sometimes it's a good thing to have input from another source. That doesn't automatically mean twice the work. There is overlap between what the human gathers and what the AI gathers. [ChatGPT doesn't know how to draw. It passes the request to Dall-E which is a different AI that draws it.]

aug 6, 2025, 1:23 pm • 0 0 • view
avatar
A Jelly Squishes Slowly By @ubiquisquish.bsky.social

I mean, in my experience the people using LLMs aren’t validating. *I* am. They’re “saving time” by wasting mine. The close editing required is more time consuming and harder than writing something on your own. Which, of course, is why they don’t do it.

aug 8, 2025, 2:28 am • 1 0 • view
avatar
Arty-san @arty-san.bsky.social

I wouldn't be surprised if those using LLMs without validating are the same ones who don't know how to write or do research in the first place. I've met far too many people who think AI apps are "magic" and believe not only what they tell them but assume they are omniscient and infallible.

aug 8, 2025, 2:41 am • 0 0 • view
avatar
A Jelly Squishes Slowly By @ubiquisquish.bsky.social

Sure, that’s probably true. However, it is the LLMs making references to non-existent papers and jumbling numbers. These are mistakes different from human mistakes that cause wider problems.

aug 8, 2025, 5:25 pm • 0 0 • view
avatar
Matt (new nym here) L @matt12l.bsky.social

image
aug 6, 2025, 1:28 pm • 6 0 • view
avatar
Arty-san @arty-san.bsky.social

I don't "play" with AI. I don't even use LLMs that much. I'm a software developer. I see this from a historical perspective. The software is primitive compared to what it will be in five years. It's also not going away. People need to understand it better so it doesn't end up running them.

aug 6, 2025, 1:34 pm • 0 0 • view
avatar
Redshift @redshiftsinger.bsky.social

“AI” is a toy. It is not useful nor time-saving; it’s just shiny and entertaining enough that foolish people mistakenly think they’re “saving time”, even though they are, on average, 20% slower at tasks when using it.

aug 6, 2025, 6:51 pm • 2 0 • view
avatar
Matt (new nym here) L @matt12l.bsky.social

I’m a historian with a PhD and twenty years of experience in teaching and research as a professor. You are making some big assumptions about what software can do in the future based on a flawed premise about how technology develops over time. Sure there might be more sophisticated software in

aug 6, 2025, 1:50 pm • 6 0 • view
avatar
Matt (new nym here) L @matt12l.bsky.social

Five years. But there also might not be enough difference to justify that kind of software in economic terms. Another perspective is that the productivity gains made first by hardware and then software over the past thirty years have reached a Pareto limit. All the easy gains have been achieved; more

aug 6, 2025, 1:50 pm • 4 0 • view
avatar
Matt (new nym here) L @matt12l.bsky.social

incremental gains in productivity have become more expensive. At some point the value is not going to be there and the next technological revolution is going to happen in some completely different field. For now the TechBros have unlimited stacks of money they can set on fire to make billions.

aug 6, 2025, 1:50 pm • 5 0 • view
avatar
Arty-san @arty-san.bsky.social

I also have degrees up the wazoo (too many) and years of experience in the software field. The AI we have now is nascent. I'll bet you lunch on it and I'm not talking about LLMs specifically, that's only one small specific application of AI, I'm talking about the field in general.

aug 6, 2025, 2:03 pm • 0 0 • view
avatar
Matt (new nym here) L @matt12l.bsky.social

Nothing grows forever. Everything regresses to the mean sooner or later. It’s the law of thermodynamics.

aug 6, 2025, 3:42 pm • 3 0 • view
avatar
Arty-san @arty-san.bsky.social

Yes, eventually, but I don't think we're there yet. There are so many AI applications in the sciences that are in progress or waiting to be coded.

aug 6, 2025, 3:47 pm • 0 0 • view
avatar
lesbian ASMR thirst posting 🏳️‍⚧️ @psychpunk.bsky.social

Jfc this is a stupid argument lmao

aug 6, 2025, 3:57 pm • 2 0 • view
avatar
Arty-san @arty-san.bsky.social

How would you say it?

aug 6, 2025, 4:04 pm • 0 0 • view
avatar
lesbian ASMR thirst posting 🏳️‍⚧️ @psychpunk.bsky.social

Seems apt to have to explain to an AI defender the difference between sentient beings and a language model

aug 6, 2025, 4:07 pm • 2 0 • view
avatar
Arty-san @arty-san.bsky.social

I don't see bots and humans as equivalent at all (not even close). I was challenging the previous post's concept of what is useful and what is not useful. The point was that a tool doesn't have to be 100% correct to be useful as long as the user is aware of its limitations.

aug 6, 2025, 4:13 pm • 0 0 • view
avatar
lesbian ASMR thirst posting 🏳️‍⚧️ @psychpunk.bsky.social

AI isn’t useful enough to justify all the havoc it’s causing is the fucking point

aug 6, 2025, 4:18 pm • 2 0 • view
avatar
Arty-san @arty-san.bsky.social

It's helping in the science fields. It's being used for image analysis, simulations and modeling, and for filtering data.

aug 6, 2025, 4:21 pm • 0 0 • view
avatar
lesbian ASMR thirst posting 🏳️‍⚧️ @psychpunk.bsky.social

That’s certainly the PR spin. In the meantime it’s eroding truth, history, art, the environment, human intelligence, etc. Sorry, but we’re not in a context where its few maybe-beneficial use cases matter, and I honestly see it as bad faith to bring them up in the face of all the harm it does

aug 6, 2025, 4:30 pm • 1 0 • view
avatar
Arty-san @arty-san.bsky.social

I don't mean to make light of what you said but when I read this I thought for a moment you were talking about our current government. "In the meantime it’s eroding truth, history, art, the environment, human intelligence, etc."

aug 6, 2025, 4:38 pm • 0 0 • view
avatar
Phil McDuff @mcduff.bsky.social

Lmao lol hahhahahahaha

aug 6, 2025, 8:09 am • 21 0 • view
avatar
Phil McDuff @mcduff.bsky.social

Eat dirt

aug 6, 2025, 8:09 am • 17 0 • view
avatar
The Daniel @alternativehistorian.com

Sure Bob lies about his reports but only 1/6th of the time, and he's stealing from the company and he puts fish in the microwave but don't you think firing him is drastic?

aug 6, 2025, 11:09 am • 12 0 • view
avatar
Arty-san @arty-san.bsky.social

My point was that AI is a tool. It's up to the human to choose the right tool and to understand what it's good for and what it's not good for. Early automobiles and early airplanes had many limitations, but they improved (just as AI is improving) and even then they were useful in the right hands.

aug 6, 2025, 11:21 am • 0 0 • view
avatar
b_b_OK @bbokofficial.bsky.social

and you thought you could convey the point that "AI is a tool" by comparing AI to people? if I assume you know what you're talking about, I would say you're dehumanizing people and treating them as tools.

Arty-san @arty-san.bsky.social People lie. Shall we get rid of them as useless?
aug 6, 2025, 7:54 pm • 2 0 • view
avatar
Arty-san @arty-san.bsky.social

I can tell you from having been in the workplace a long time that people lie in reports a lot more than 1/16th of the time and even if they're not lying a lot of them are incompetent (hired because they're a relative of the boss or... whatever). Early airplanes were not perfect, but they improved.

aug 6, 2025, 11:15 am • 0 0 • view
avatar
The Daniel @alternativehistorian.com

Who cares if he lies? Everyone lies I'm lying right now The important thing is Bob can get better How do I know? Because 100 yrs ago Nancy sucked but she got better

aug 6, 2025, 12:06 pm • 3 0 • view
avatar
Bryan Culbertson 🥄 @bryanculbertson.com

If any one person consistently lied to you while insisting they were correct as much as AI then you would stop listening to them

aug 6, 2025, 3:14 pm • 1 0 • view
avatar
Arty-san @arty-san.bsky.social

An LLM isn't a person. It's a software app coded by people. All software has bugs and limitations. I wouldn't have the same expectations of a software app as I do of people. What we need is education so people have reasonable expectations instead of thinking it's magic (which some of them do).

aug 6, 2025, 3:18 pm • 1 0 • view
avatar
Bryan Culbertson 🥄 @bryanculbertson.com

You are the one who compared AI to a person. I hope you realize how ridiculous of a comparison that was now.

aug 6, 2025, 3:26 pm • 4 0 • view
avatar
Arty-san @arty-san.bsky.social

The operative concept was usefulness versus what was not useful. It was an analogy. I don't consider AI apps and people to be equivalent.

aug 6, 2025, 3:33 pm • 1 0 • view
avatar
John Robie @jrobie.bsky.social

People are people. Computers are tools. If a tool is unreliable you don't use it because it makes your job harder.

A hammer
aug 6, 2025, 12:33 pm • 5 0 • view
avatar
Arty-san @arty-san.bsky.social

Whether it's unreliable depends how you use it. A hammer is great for inserting nails, not so good at inserting screws. If people assume they can use AI without understanding its strengths and weaknesses, it's not much different from assuming they can fly a plane without knowing how.

aug 6, 2025, 12:51 pm • 0 0 • view
avatar
John Robie @jrobie.bsky.social

Silly. It's just a hammer that misses the nail a high percentage of the time. If your point is "it's not useless, it's just useless for all the things people are/are being encouraged to use it for," then sure, that's a distinction without a difference.

aug 6, 2025, 1:02 pm • 8 0 • view
avatar
robjrii.bsky.social @robjrii.bsky.social

Jfc. Beyond the fact that if someone I knew lied even 15% of the time I would probably cease to associate with them, my understanding is that the whole point of AI is to surrender your judgement for the sake of convenience

aug 6, 2025, 10:26 am • 6 0 • view
avatar
Arty-san @arty-san.bsky.social

I have no idea why people want to do that, but they do. I hate to say this, but people are lazy and are very quick to abdicate thinking and sit back and push buttons. I know this because I tutor computer literacy in my free time. The problem isn't AI; it's a useful tool. The problem is humans.

aug 6, 2025, 11:10 am • 0 0 • view
avatar
Angry Leshy @americanleshy.bsky.social

The tech billionaires have given in to the sunk cost fallacy. They put so much into this crap that they are trying to force it on us, even if it means killing us, to keep themselves from going broke and no longer being able to afford their extravagant lifestyle.

aug 6, 2025, 4:14 am • 156 13 • view
avatar
fsherman.bsky.social @fsherman.bsky.social

Along with the money, it's also ego: they love to think they're visionaries and their critics are the horse-drawn carriage makers scoffing cars will never catch on. Being massively wrong? They can't accept that.

aug 6, 2025, 1:51 pm • 3 0 • view
avatar
Braeburn Gesoran @applesandoranges1.bsky.social

Let this be their undoing.

aug 6, 2025, 4:59 am • 49 1 • view
avatar
yinlock.bsky.social @yinlock.bsky.social

they also made some very big and very stupid promises to shareholders which is why they have to double down forever on it

aug 6, 2025, 12:50 pm • 2 0 • view
avatar
Brooke Harrington @ebharrington.bsky.social

Yes: this is why they bought the US government. It's a bargain, relative to their sunk costs & it gives them the means to force onto us their 💩 product no one wants. Same deal with the crypto bros who financed Trump. Social Security will likely be a bunch of AIs & your "check" will be DOGECoin.

aug 6, 2025, 1:33 pm • 2 0 • view
avatar
Looking for Something @looking4evah.bsky.social

This is all why I’m unconvinced the “AI revolution” is actually happening. It’s very hard to create a mass change in behavior and it’s not going to happen when the tools don’t do what they say they can do.

aug 6, 2025, 4:47 pm • 1 0 • view
avatar
propensityforprose @propensityforprose.bsky.social

📌

aug 7, 2025, 2:00 pm • 0 0 • view
avatar
Mark @aguasonic.com

For references, just do some 'text select' on --

image
aug 7, 2025, 4:22 am • 0 0 • view
avatar
Emillie Parrish @emillieparrish.bsky.social

Even calling it a hallucination is spin. It’s wrong, inaccurate, crap.

aug 7, 2025, 2:52 pm • 1 0 • view
avatar
Matthew Donald 📖🦖🤣 @matthewdonald64.bsky.social

When Reddit is your main source, you know you're on the losing side, lol.

image
aug 6, 2025, 7:04 am • 0 0 • view
avatar
rabagast @rabagast.bsky.social

if you really think linking to a reddit copy of a nyt article = using reddit as a main source i can see why you're so enthusiastic about AI.

aug 6, 2025, 9:36 am • 14 0 • view
avatar
Matthew Donald 📖🦖🤣 @matthewdonald64.bsky.social

Oh my bad, not a Reddit post, but an opinion article. Big difference, lol.

aug 6, 2025, 9:22 pm • 0 0 • view
avatar
Doremus Schafer @doremus-schafer.bsky.social

Is this an attempt on your part to prove that good ol' human brains can hallucinate factually wrong information about sources (in this case misattributing a news article by two Times reporters as being an opinion article) just as well as artificial intelligence can?

aug 7, 2025, 6:20 am • 1 0 • view
avatar
Matthew Donald 📖🦖🤣 @matthewdonald64.bsky.social

Yeah, you caught me, this has all been an elaborate art exhibition, lol. But sure, let's humor you and say that these non-opinion journalists are correct and AI is hallucinating more. What makes you think that'll stick? AI is currently the worst it's ever going to be again.

aug 7, 2025, 6:29 am • 0 0 • view
avatar
Doremus Schafer @doremus-schafer.bsky.social

" AI is currently the worst it's ever going to be again." [citation needed]

aug 7, 2025, 6:59 am • 2 0 • view
avatar
Matthew Donald 📖🦖🤣 @matthewdonald64.bsky.social

Are you under the impression that AI is not going to keep improving? What's the alternative?

aug 7, 2025, 9:25 am • 0 0 • view
avatar
rabagast @rabagast.bsky.social

are you fucking serious?

aug 7, 2025, 9:43 am • 2 0 • view
avatar
Doremus Schafer @doremus-schafer.bsky.social

Are you under the impression that every single aspect of human society is inevitably destined to improve, and that no alternative to this is conceivable? Because *just kind of gestures around at all of it*

aug 7, 2025, 9:29 am • 2 0 • view
avatar
Matthew Donald 📖🦖🤣 @matthewdonald64.bsky.social

When it comes to technology? Hell yeah.

aug 7, 2025, 11:04 am • 0 0 • view
avatar
Norzola @norzola.bsky.social

Uhm? So far it's become more powerful and better at stealing and emulating styles etc, but that's not = improvement, is it? Or do you consider the avalanche of shite* it produces and the gross environmental hazard a step in the right direction? *= ugly, uninspired, sloppy copies, made up 'facts'...

aug 7, 2025, 10:01 am • 1 0 • view
avatar
Matthew Donald 📖🦖🤣 @matthewdonald64.bsky.social

The fact that you call it "stealing" proves you don't actually know how it works.

image
aug 7, 2025, 11:07 am • 0 0 • view
avatar
Norzola @norzola.bsky.social

If you believe that AI doesn't steal, copy and emulate creative human beings' work then you and I cannot have a discussion, because that means we live in alternate realities.

aug 7, 2025, 11:14 am • 1 0 • view
avatar
rabagast @rabagast.bsky.social

it only keeps improving (this was last sunday, but at least, thankfully, this is "the worst it's ever going to be again")

aug 7, 2025, 10:46 am • 0 0 • view
avatar
Matthew Donald 📖🦖🤣 @matthewdonald64.bsky.social

Oh no, it made a mistake! That must mean it will always make mistakes and the technology behind it will never improve! Lol

aug 7, 2025, 11:08 am • 0 0 • view
avatar
rabagast @rabagast.bsky.social

eh he can just post a meme about it.

aug 7, 2025, 7:11 am • 0 0 • view
avatar
Matt @acadianrunner.bsky.social

Did AI make that cartoon?

aug 6, 2025, 11:42 am • 3 0 • view
avatar
Matthew Donald 📖🦖🤣 @matthewdonald64.bsky.social

Did it? I dunno. I thought you could "always tell," lol.

aug 6, 2025, 9:20 pm • 0 0 • view
avatar
Matt @acadianrunner.bsky.social

I mean, it's AI bad? So it's either an insult to whoever made it or it's AI? Humans do make bad things, I'll easily admit that

aug 6, 2025, 9:37 pm • 3 0 • view
avatar
r @recombobulating.bsky.social

it's great for perfect rational actors in a perfect hypothetical universe and if you disagree then you're the one stopping that from being the universe we live in 😭

aug 6, 2025, 11:25 am • 5 1 • view
avatar
Arty-san @arty-san.bsky.social

AIs don't have to be trained on copyrighted materials. Some commercial ventures choose to do it. Greed and competitiveness are two reasons why this happens. I don't agree with using other people's work, but it's wrong to assume the technology inherently requires it. It doesn't.

aug 6, 2025, 6:54 am • 0 0 • view
avatar
Arty-san @arty-san.bsky.social

AI is a tool. It doesn't matter if it hallucinates, just as it doesn't matter if the tires, or electrical system, or oil in a car needs maintenance. Cars, as imperfect as they are, are still useful. The problem is PEOPLE having unrealistic expectations and not doing their due diligence.

aug 6, 2025, 6:53 am • 2 0 • view
avatar
ROUSes? I don't think they exist @rabblerous.bsky.social

Hallucinations aren't the same as cars needing new brake pads from time to time. They're like if you step on the brake pedal and the car accelerates instead. I've seen AI tell people that poisonous plants are safe to eat multiple times.

aug 6, 2025, 2:27 pm • 8 1 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

What this tells me is that you don't know what these models are. If you do, then you're cherry-picking the data. Yes, AI will hallucinate. But this is not how you measure that. This is like saying “Shoes are bad because 95% of them either don't fit or don't last,” without accounting for size or use case.

aug 6, 2025, 2:18 am • 2 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

I'm not trying to defend AI. I have plenty of issues with it. But I do think it's important to be accurate in an assessment. For example, AI models are often specialized. The general AI models are the worst (like ChatGPT). But there are non-general models which have extremely high accuracy.

aug 6, 2025, 2:18 am • 4 0 • view
avatar
Daniel José Older @djolder.bsky.social

I get that you're not defending it, but if there are models that hallucinate 15% of the time and that's the low end and there are models that hallucinate 80% of the time then that's the spread, I dunno how else you want to cut it.

aug 6, 2025, 2:23 am • 18 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

Because it depends on what you're asking it. That's the reason it's about knowing how to use it. For example, some models don't make up anything and simply do consolidated searches for you. It's still on you to validate the results. I'm forced to use it often for research so I learned how.

aug 6, 2025, 2:26 am • 2 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

Another way to look at it is to ask it information that's common. All of them should have no hallucinations for basic things like, "is the planet flat?" But there are generalized searches people make that some AI has either been isolated from the truth or trained (purposefully) to be wrong about.

aug 6, 2025, 2:30 am • 2 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

That's called "guardrails". And all the models you can use without your own hardware are using lots and lots of guardrails. They do this to avoid getting sued by the government. Political narratives are the biggest reason AI can get so much wrong.

aug 6, 2025, 2:30 am • 0 0 • view
avatar
Daniel José Older @djolder.bsky.social

You're really not explaining this in a way that makes it any better but since you're not defending ai I guess that doesn't matter (not being sarcastic)

aug 6, 2025, 2:37 am • 12 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

What I'm pointing out is that it's not AI that is the problem. It's humans either corrupting the info in the models or humans not using AI correctly. Nobody is complaining about hammers, but if I give a classroom full of 1st graders hammers, we call would agree that's going to end badly, right?

aug 6, 2025, 2:55 am • 2 0 • view
avatar
Daniel José Older @djolder.bsky.social

That's where we disagree.

aug 6, 2025, 3:00 am • 6 0 • view
avatar
Daniel José Older @djolder.bsky.social

Not that we shouldn't give hammers to kids lol but that humans are the problem not AI.

aug 6, 2025, 3:01 am • 4 0 • view
avatar
agnes bookbinder @agnesbookbinder.bsky.social

Guns are also tools. This is why we need gun control in the US & why other countries have already figured out it’s not worth it. You can use it to feed yourself, but it’s being loaded with hollow point bullets & it’s unregulated. As long as we’re using metaphors. Tools are what they are used to do.

aug 6, 2025, 9:37 am • 0 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

Gun control opinions aside, yes, this is the point. Not everyone needs every tool. But if you're going to use a tool, you should know how to use it or go in knowing you're still learning it and not blame the tool if you can't use it right.

aug 6, 2025, 7:47 pm • 0 0 • view
avatar
Stone Cold Jane Austen 🍉 @cofactorstrudel.bsky.social

Are you trying to say AI only hallucinates when people "use it wrong" or am I misunderstanding?

aug 6, 2025, 5:47 am • 6 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

No. I'm trying to say that the likelihood of a hallucination greatly increases when a user doesn't know how to use the AI. Thinking like a software engineer has, in my experience, brought better results. This makes sense when you consider that software engineers are very specific in wording things.

aug 6, 2025, 7:36 pm • 0 0 • view
avatar
Darth Pagliacci the Sad 🏳️‍🌈🏳️‍⚧️🍉(1312)(3D) @mangaturtle.bsky.social

So you were just lying earlier when you said you weren't defending AI. Thanks for wasting our time here.

aug 6, 2025, 9:34 am • 2 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

It's still not a defense of AI any more than a defense of hammers. You're conflating "AI hallucinates because users don't know what they're doing" with "AI doesn't hallucinate". My point isn't a defense but keeping the record straight.

aug 6, 2025, 7:45 pm • 0 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

We *could all agree...

aug 6, 2025, 2:56 am • 0 0 • view
avatar
Comrade Jingle Pants ☭🔻 | ✊ Time For #Revolution @cptjinglepants.bsky.social

And that's my point. Right tool for the job or you're going to botch the job.

aug 6, 2025, 2:18 am • 0 0 • view