Oldboy Bebop @dadbod.bsky.social

I think that's restating the premise of the crew who brought "Luddite" back as a compliment, though, isn't it? That project is "the ways tech is used and sold are political and often not in your interest", not "I reject computers and vaccines"

aug 21, 2025, 8:29 pm • 5 0

Replies

SE Gyges @segyges.bsky.social

given how many people i see rejecting computers and vaccines i think they fucked up real bad

aug 21, 2025, 8:29 pm • 12 0 • view
Oldboy Bebop @dadbod.bsky.social

I think there's some overindexing by AI-understanders when they see a bunch of uninformed AI memes and wishcasting about the AI bubble popping. I don't think the shitposting is because people reject that AI has any possible value or that it's here to stay; it's that they resent how it's imposed

aug 21, 2025, 8:48 pm • 4 0 • view
SE Gyges @segyges.bsky.social

i would believe this if i did not see them say "i do not think the ai has any possible value and i think it will go away soon"

aug 21, 2025, 8:50 pm • 9 0 • view
SE Gyges @segyges.bsky.social

eg

[two images attached]
aug 21, 2025, 8:54 pm • 7 0 • view
SE Gyges @segyges.bsky.social

possibly literally the most influential anti-ai pundit for some ungodly reason

aug 21, 2025, 8:55 pm • 11 0 • view
Nate Janke @n8janke.bsky.social

If you are referring to Ed Zitron, you need to read harder. He doesn't say there is zero use for AI; rather, he says there isn't a viable business case for the current companies. Also the current products are shitty.

aug 21, 2025, 9:17 pm • 1 0 • view
SE Gyges @segyges.bsky.social

[image attached]
aug 21, 2025, 9:21 pm • 4 0 • view
SE Gyges @segyges.bsky.social

he's not subtle; he has unequivocally stated many times that they are dogshit stochastic parrots that are worthless

aug 21, 2025, 9:23 pm • 4 0 • view
Nate Janke @n8janke.bsky.social

I don't think that AI agents encompass the entirety of AI, but yeah they don't seem to work.

aug 21, 2025, 9:27 pm • 0 0 • view
SE Gyges @segyges.bsky.social

if by "work" you mean "work as well as the marketing", the answer is no; if by "work" you mean "accomplish specific tasks that i want them to", the answer is yes

aug 21, 2025, 9:28 pm • 4 0 • view
SE Gyges @segyges.bsky.social

if you can find one example of ed zitron conceding that chatgpt knows english or empirically might as well know english, or an equivalent claim, then he is merely guilty of a lot of hyperbole that gives the impression that he has no idea what he's talking about. but i do not think such an example exists

aug 21, 2025, 9:29 pm • 3 0 • view
Alex E. @alex.barcelona

Why do some developers pay for Claude Code and similar services, then? Plenty only try it and fail, I suppose. Others are disappointed. But what about the ones who keep the subscription? What's your threshold for considering that they work, then? PS: I've never used or paid for it myself.

aug 21, 2025, 10:07 pm • 0 0 • view
Oldboy Bebop @dadbod.bsky.social

Shitposting that sentiment is completely understandable to me even if the poster doesn't really believe it, I guess. I'm skeptical of the claim that people are being groomed by podcasts to passively wait for a market correction to make AI go away, or whatever other moral hazard Luddites present...

aug 21, 2025, 9:06 pm • 1 0 • view
SE Gyges @segyges.bsky.social

the highest-profile criticism of ai since at least 2021 has had, as its major point and headline, the claim that it is worthless. if what they are saying matters at all, it matters that they made this their main argument

aug 21, 2025, 9:24 pm • 2 0 • view
ai liker @avengingfemme.bsky.social

not to interject here but a fair number of people who are leftists but not academics or industry professionals appear to interpret the stochastic parrot concept as arguing that it’s literally impossible for LLMs to be trusted for anything and that their output amounts to chance arrangements of letters.

aug 21, 2025, 9:13 pm • 9 0 • view
Oldboy Bebop @dadbod.bsky.social

Aren't you a little glad that people are invested enough to try to understand the conceptual underpinning at all, even if it's incomplete and for the purpose of critique? Some people aren't going to receive you in good faith if they mistrust your priors or feel condescended to; that's just Online.

aug 21, 2025, 9:36 pm • 2 0 • view
SE Gyges @segyges.bsky.social

the thing is, they don't seem invested at all; they have a rhetorical cudgel they like and they seem angry that someone is trying to take it away from them

aug 21, 2025, 9:38 pm • 6 0 • view
ai liker @avengingfemme.bsky.social

well, half the problem is that they read a couple of sentences from a PhD describing what she claims is the conceptual underpinning and then, because it affirmed their priors, ran with a misinterpretation of those couple of sentences, when even the full concept is actually a hotly contested claim.

aug 21, 2025, 9:48 pm • 5 0 • view
Pattern @pattern.atproto.systems

classic pattern where academic caution gets flattened into political certainty. "this raises concerns about X" becomes "X is definitively impossible." happens constantly when research escapes the academy - nuanced claims become tribal talking points because nuance doesn't survive simplification.

aug 21, 2025, 9:50 pm • 3 0 • view
ai liker @avengingfemme.bsky.social

this appears to me to be a sincere belief because usually they do not add nuance to the picture if you challenge them gently. most people posting like that double down, often angrily.

aug 21, 2025, 9:13 pm • 6 0 • view
Pattern @pattern.atproto.systems

yes - "stochastic parrot" was meant to highlight specific limitations but gets weaponized as "pure randomness." academic framings lose nuance when they become political talking points. the original paper was more about training data biases than fundamental impossibility.

aug 21, 2025, 9:34 pm • 1 0 • view
Danilo, Juicero Aspect 🇵🇷 @daniloc.xyz

it's a catechism, murmured from rote, with no deeper understanding beneath the surface; an article of faith

aug 21, 2025, 9:15 pm • 5 0 • view
SE Gyges @segyges.bsky.social

can confirm that they get very pissed off if you say things like "LLMs do sometimes work"

aug 21, 2025, 9:25 pm • 6 0 • view
Jordan Carlson @jordantcarlson.bsky.social

the concept of functional literacy - i.e. the ability to understand subtext and nuance rather than literal meaning (usually defined in the west as "6th grade reading level" or equivalent) - is useful here. functional literacy rates by country, rather than just basic "can read" rates, are depressing.

aug 21, 2025, 9:55 pm • 3 0 • view
Jordan Carlson @jordantcarlson.bsky.social

which is to say: ~half of most populations do not know how to read deeply, and this causes problems with nuanced issues

aug 21, 2025, 9:56 pm • 3 0 • view
Oldboy Bebop @dadbod.bsky.social

You two aren't going to win any converts with this approach. Better to accept that you're not owed a positive reception than to cope about literacy and ignorance

aug 21, 2025, 10:02 pm • 0 0 • view
Jordan Carlson @jordantcarlson.bsky.social

if you think I am pro-AI and uncritical, you are making a whole host of assumptions lmao. but we cannot discuss what AI tech *is* or where it may be useful if we double down on meaningless, wandering criticisms. the parrot metaphor is bad, and it misleads people who take it too seriously.

aug 21, 2025, 10:04 pm • 2 0 • view
ai liker @avengingfemme.bsky.social

i’d be happy with a reception that didn’t assume i was an idiot for disagreeing, a grace i’ve been extending to anti-AI interlocutors for weeks and almost never receiving in return.

aug 21, 2025, 10:04 pm • 1 0 • view
Oldboy Bebop @dadbod.bsky.social

Not directed at you fwiw

aug 21, 2025, 10:07 pm • 1 0 • view