Arty-san @arty-san.bsky.social

Yes, exactly, all of the output has to be validated. That's part of the process.

aug 6, 2025, 12:27 pm • 0 0

Replies

John Rogers @johnrogers.bsky.social

No, if a person did that, *you would fire or stop working with that person*. If your ATM was wrong 15% of the time, you would not be so casual. If a brand of hammer broke 15% to 30% of the time you’d stop using it.

aug 6, 2025, 1:32 pm • 29 0 • view
John Rogers @johnrogers.bsky.social

The question is why you are making an exception for a tool which not only doesn’t function like it’s supposed to, but is being sold to us and its use mandated *as if it did not have those flaws*.

aug 6, 2025, 1:34 pm • 27 2 • view
John Rogers @johnrogers.bsky.social

You are working under the assumption that tools are neutral, which they are not, particularly the ones that are marketed with lies, used by people who don’t understand them *because* of those lies, and also have these environmental effects. The thing is what it does.

aug 6, 2025, 1:40 pm • 17 1 • view
Arty-san @arty-san.bsky.social

Online interactive video gaming and bitcoin mining (to name only two examples) take a lot of computing resources. The environmental effects do concern me but China has demonstrated (twice) that American systems are bloated and AI can be run with far fewer resources.

aug 6, 2025, 1:49 pm • 1 0 • view
who cares @maaaatttt.bsky.social

bitcoin mining is also complete garbage, and online gaming uses a tiny fraction of the resources that either of the other sectors use.

aug 6, 2025, 7:15 pm • 2 0 • view
Redshift @redshiftsinger.bsky.social

People shouldn’t spend huge amounts of energy solving sudoku puzzles for fraud “money” either.

aug 6, 2025, 6:53 pm • 5 0 • view
Arty-san @arty-san.bsky.social

If the ATM were wrong 15% of the time I wouldn't use it for banking transactions. But I'm not using AI for banking. If an AI could make lottery, stock market, or slot machine predictions with 85% accuracy would you use it?

aug 6, 2025, 1:39 pm • 1 0 • view
Stan Fresh @stan-fresh.bsky.social

the affirmation machine that can't even model the present is going to model the future LMAO bridge-sellers are having a field day with this stuff

aug 6, 2025, 1:49 pm • 2 0 • view
John Rogers @johnrogers.bsky.social

But people aren’t using it to make predictions, they are using it to find, organize and present information, *which is the thing it is bad at*. And, also, knock-on effects.

aug 6, 2025, 1:47 pm • 11 0 • view
John Rogers @johnrogers.bsky.social

The phrasing of your own response carries its refutation.

aug 6, 2025, 1:48 pm • 10 0 • view
Arty-san @arty-san.bsky.social

LLMs are only the tip of the iceberg. Not even the tip, just a little chunk off to one side. AI is not an app, it's a coding approach. I don't know why so many people use LLMs as the exemplar for AI, because they aren't. They just happen to be the kind that's easily available and thus frequently used.

aug 6, 2025, 1:57 pm • 0 0 • view
Julian @jl8e.bsky.social

It is literally what is being sold as “AI”, the magic technology. If you’re arguing for machine learning, which is actually useful in some problem domains, don’t chime in when people are talking about LLMs, which everyone talking about “AI” right now is.

aug 6, 2025, 2:07 pm • 4 0 • view
Arty-san @arty-san.bsky.social

If they are talking about LLMs then they should be specific about it because LLMs in no way exemplify the whole field of AI. It makes no sense to "define" AI as LLMs, that's like making assertions about the universe by talking only about the moon.

aug 6, 2025, 2:12 pm • 0 0 • view
Danny Lore @weredawgz.bsky.social

That’s explicitly a problem with people pushing LLMs, who literally advertise them as AI. If everyone else in the conversation knows what they are talking about except you, the convo isn’t the problem.

aug 6, 2025, 2:16 pm • 5 0 • view
Julian @jl8e.bsky.social

You wanna defend the honor of machine learning against confusion with the really fancy Markov chains, take it up with the LLM boosters who are selling it as AI. Doing it here just makes it look like you’re trying to defend LLMs by conflating them with shit that actually does things.

aug 6, 2025, 2:19 pm • 1 0 • view
Arty-san @arty-san.bsky.social

Saying that an imperfect tool can still be useful is not the same as defending LLMs. I'm trying to point out that AI is a broad field (much much more than LLMs), AI is nascent and still developing, and AI is not well understood by the general public.

aug 6, 2025, 2:29 pm • 0 0 • view
Richard Moon, Sandwich Enjoyer @rmoonhill.bsky.social

At some point you should probably try to realize that “the broad field” is Not, and was Never, what was being criticized here. Because the longer you continue to fail to do that, the less it helps your cause.

aug 6, 2025, 4:14 pm • 1 0 • view
Arty-san @arty-san.bsky.social

When people use the term "AI" they need to be aware that it is a broad field. It's not a machine, it's not a specific app, it's not specifically LLMs. If the subject of discussion is specifically LLMs then that needs to be clear too, otherwise misunderstandings are inevitable.

aug 6, 2025, 4:17 pm • 0 0 • view
Will "scifantasy" Frank @scifantasy.bsky.social

But you're not using AI for making lottery, stock market, or slot machine predictions.

aug 6, 2025, 1:48 pm • 5 0 • view
Arty-san @arty-san.bsky.social

The point I was making is that some tools need to be accurate to be useful (rulers, ATMs, clocks) but not all tools need to be 100% accurate to be useful. A tool that could predict the stock market or winning lottery numbers, or even some aspects of weather, with 85% accuracy would be useful.

aug 6, 2025, 1:54 pm • 0 0 • view
Will "scifantasy" Frank @scifantasy.bsky.social

AI is not one of those tools. In fact it is the opposite. If it is not accurate, much like a ruler, it is actively harmful.

aug 6, 2025, 1:56 pm • 4 0 • view
Arty-san @arty-san.bsky.social

Once again, that's a people problem. Unrealistic expectations and a lack of education.

aug 6, 2025, 1:59 pm • 0 0 • view
Will "scifantasy" Frank @scifantasy.bsky.social

Everything is a people problem. But the people pushing AI are pushing false narratives about usefulness and accuracy. If I bought a ruler that turned out to only work 75% of the time and when I complained to the manufacturer they said "well, you had false expectations," would that be okay?

aug 6, 2025, 2:01 pm • 4 0 • view
Will "scifantasy" Frank @scifantasy.bsky.social

(especially when the manufacturer, and every manufacturer, was pushing me to use one of the new rulers arguing it would save me time?)

aug 6, 2025, 2:05 pm • 3 0 • view
Arty-san @arty-san.bsky.social

I haven't been hearing a lot of claims about accuracy. Who is making them? Can you give some examples (credible ones)?

aug 6, 2025, 2:05 pm • 0 0 • view
Will "scifantasy" Frank @scifantasy.bsky.social

I could ask ChatGPT...

aug 6, 2025, 2:06 pm • 0 0 • view
Will "scifantasy" Frank @scifantasy.bsky.social

So wait, now you're saying it isn't a problem that AI isn't accurate because no one is CLAIMING AI is accurate?

aug 6, 2025, 2:06 pm • 3 0 • view
Arty-san @arty-san.bsky.social

If you read my post again, you'll see that I was asking a question not making an assertion.

aug 6, 2025, 2:07 pm • 0 0 • view
Redshift @redshiftsinger.bsky.social

“AI” cannot and will never be able to do that.

aug 6, 2025, 6:55 pm • 2 0 • view
Redshift @redshiftsinger.bsky.social

“If unicorns lived in your shoes and could grant wishes, would you buy more shoes?” Is how you sound rn.

aug 6, 2025, 6:54 pm • 13 1 • view
who cares @maaaatttt.bsky.social

that’s fucking stupid. if an AI produces something and I have to check everything it did, that saves me no time.

aug 6, 2025, 12:40 pm • 6 0 • view
Arty-san @arty-san.bsky.social

Checking takes less time than gathering, organizing, and checking (since everything needs checking anyway).

aug 6, 2025, 12:42 pm • 0 0 • view
who cares @maaaatttt.bsky.social

no, not everything needs checking. a calculator that is wrong 15% of the time would be useless. If I am preparing a document and 15% of the cites are completely made up, that most likely changes the entire fabric of what I’m writing. And to be wrong 15% of the time, it lights forests on fire.

aug 6, 2025, 12:44 pm • 3 0 • view
Arty-san @arty-san.bsky.social

That doesn't mean everything that's wrong 15% of the time is useless. I don't know why you think something has to be perfect to be useful. A narrow, winding gravel road full of potholes isn't much good for regular cars or wheelchairs, but it's still useful for getting places you couldn't go otherwise.

aug 6, 2025, 12:53 pm • 0 0 • view
who cares @maaaatttt.bsky.social

to be clear, though, I don’t think LLMs are useless. they are fancy autocomplete. I do think they are not worth feeding human creative products into an IP woodchipper for, nor are they worth the amounts of money, silicon, and water they are consuming at present.

aug 6, 2025, 1:05 pm • 0 0 • view
who cares @maaaatttt.bsky.social

Get back to me when people are spending billions of dollars to build narrow, winding gravel roads and touting them as the future of computing.

aug 6, 2025, 12:58 pm • 0 0 • view
Arty-san @arty-san.bsky.social

When the settlers first came here, a narrow, winding gravel road was a big step forward. AI will improve. Some will benefit, some will never learn to use it effectively, but just as roads improved, AI will too. It's not going away, so we'd better learn to steer it in positive directions.

aug 6, 2025, 1:06 pm • 0 0 • view
who cares @maaaatttt.bsky.social

when the settlers first came here they were shitting in pots. now imagine if someone in London said “hey guys I’ve got the future of plumbing here, we’re gonna trade our sewer system for pots that we dump in the woods, money please”

aug 6, 2025, 1:09 pm • 1 0 • view
Redshift @redshiftsinger.bsky.social

It’s twice as bad as the average of all human communication. Things do not have to be perfect to be useful. They DO have to be better than doing without them to be useful. “AI” is worse at everything than humans are, ergo, it is not useful.

aug 6, 2025, 6:57 pm • 2 0 • view
Kat Vernon @katbird.bsky.social

You have to gather the information to validate the AI output, so you’re doing double work! If I’ve gathered and organized the information, then I don’t have to check it because I already know it’s correct!!

aug 6, 2025, 1:08 pm • 5 0 • view
Arty-san @arty-san.bsky.social

Double work isn't always a bad thing. We find things in one way and the AI finds it in another. Put them together and the whole may be better because of the different parts. I always check my work anyway, even if I gathered it myself. I tell people if you don't think AI is useful, don't use it.

aug 6, 2025, 1:14 pm • 0 0 • view
who cares @maaaatttt.bsky.social

…this is pointless. If you are gonna sit here and say “well actually it’s a good thing if it causes double the work” then I’m truly at a loss for words. ChatGPT, please draw me “Robert Downey jr rolling his eyes in the studio ghibli style”

aug 6, 2025, 1:18 pm • 3 0 • view
Arty-san @arty-san.bsky.social

Yes, sometimes it's a good thing to have input from another source. That doesn't automatically mean twice the work. There is overlap between what the human gathers and what the AI gathers. [ChatGPT doesn't know how to draw. It passes the request to DALL-E, which is a different AI that draws it.]

aug 6, 2025, 1:23 pm • 0 0 • view
who cares @maaaatttt.bsky.social

ChatGPT, draw the dismissive wanking gesture in ASCII

aug 6, 2025, 1:25 pm • 5 0 • view
A Jelly Squishes Slowly By @ubiquisquish.bsky.social

I mean, in my experience the people using LLMs aren’t validating. *I* am. They’re “saving time” by wasting mine. The close editing required is more time consuming and harder than writing something on your own. Which, of course, is why they don’t do it.

aug 8, 2025, 2:28 am • 1 0 • view
Arty-san @arty-san.bsky.social

I wouldn't be surprised if those using LLMs without validating are the same ones who don't know how to write or do research in the first place. I've met far too many people who think AI apps are "magic" and not only believe what they tell them but assume they are omniscient and infallible.

aug 8, 2025, 2:41 am • 0 0 • view
A Jelly Squishes Slowly By @ubiquisquish.bsky.social

Sure, that’s probably true. However, it is the LLMs making references to non-existent papers and jumbling numbers. These mistakes are different from human mistakes, and they cause wider problems.

aug 8, 2025, 5:25 pm • 0 0 • view