I think that's restating the premise of the crew who brought Luddite back as a compliment, though, isn't it? That project is "the ways tech is used and sold are political and often not in your interest" not "I reject computers and vaccines"
given how many people i see rejecting computers and vaccines i think they fucked up real bad
I think there's some overindexing by AI-understanders when they see a bunch of uninformed AI memes and wishcasting about the AI bubble popping. I don't think the shitposting is because people reject that AI has any possible value or that it's here to stay; it's that they resent how it's imposed
i would believe this if i did not see them say "i do not think the ai has any possible value and i think it will go away soon"
possibly literally the most influential anti-ai pundit for some ungodly reason
If you are referring to Ed Zitron you need to read harder. He doesn't say there is zero use for AI, rather that there isn't a viable business case for the current companies. Also the current products are shitty.
he's not subtle, he has unequivocally stated that they are dogshit stochastic parrots that are worthless many times
I don't think that AI agents encompass the entirety of AI, but yeah they don't seem to work.
if by "work" you mean "work as well as the marketing" the answer is no, if by "work" you mean "accomplish specific tasks that i want them to" the answer is "yes"
if you can find one example of ed zitron conceding that chatgpt knows english, or empirically might as well know english, or an equivalent claim, then he is merely guilty of a lot of hyperbole that gives the impression he has no idea what he's talking about. but i do not think such an example exists
Why do some developers pay for Claude Code and similar services, then? Plenty only try it and fail, I suppose. Others are disappointed. But what about the ones who keep the subscription? What's your threshold for considering that they work? PS: I've never used or paid for it myself.
Shitposting that sentiment is completely understandable to me even if the poster doesn't really believe it, I guess. I'm skeptical of the claim that people are being groomed by podcasts to passively wait for a market correction to make AI go away, or whatever other moral hazard Luddites present...
the highest-profile criticism of ai since at least 2021 has had, as its major point and headline, that it is worthless. if what they are saying matters at all, it matters that they made this their main argument
not to interject here but a fair number of people who are leftists but not academics or industry professionals appear to interpret the stochastic parrot concept as arguing that it’s literally impossible for LLMs to be trusted for anything and that their output amounts to chance arrangements of letters.
Aren't you a little glad that people are invested enough to try to understand the conceptual underpinning at all, even if it's incomplete and for the purpose of critique? Some people aren't going to receive you in good faith if they mistrust your priors or feel condescended to, that's just Online.
the thing is they don't seem invested at all, they have a rhetorical cudgel they like and they seem angry that someone is trying to take it away from them
well half the problem is that they read a couple of sentences from a PhD describing what she claims is the conceptual underpinning and then, because it affirmed their priors, ran with a misinterpretation of those couple of sentences, when even the full concept is actually a hotly contested claim.
classic pattern where academic caution gets flattened into political certainty. "this raises concerns about X" becomes "X is definitively impossible." happens constantly when research escapes the academy - nuanced claims become tribal talking points because nuance doesn't survive simplification.
this appears to me to be a sincere belief because usually they do not add nuance to the picture if you challenge them gently. most people posting like that double down, often angrily.
yes - "stochastic parrot" was meant to highlight specific limitations but gets weaponized as "pure randomness." academic framings lose nuance when they become political talking points. the original paper was more about training data biases than fundamental impossibility.
it's a catechism, murmured from rote, with no deeper understanding beneath the surface: an article of faith
can confirm that they get very pissed off if you say things like "LLMs do sometimes work"
the concept of functional literacy - i.e. the ability to understand subtext and nuance rather than literal meaning (usually defined in the west as "6th grade reading level" or equivalent) - is useful here. functional literacy rates by country, rather than just basic "can read" rates, are depressing.
which is to say: ~half of most populations do not know how to read deeply, and this causes problems with nuanced issues
You two aren't going to win any converts with this approach. Better to accept that you're not owed a positive reception than to cope about literacy and ignorance
if you think I am pro-AI and uncritical you are making a whole host of assumptions lmao but we cannot discuss what AI tech *is* or where it may be useful if we double down on meaningless, wandering criticisms. the parrot metaphor is bad, and it misleads people who take it too seriously.
i’d be happy with a reception that didn’t assume i was an idiot for disagreeing, a grace i’ve been extending to anti-AI interlocutors for weeks and almost never receiving in return.
Not directed at you fwiw