Maybe I would be more tempted to use LLMs if I didn't really like writing and thinking, but, well,
Heaven forbid humans should have to think and/or want to express themselves. Thank goodness we are being saved from that by technology! /s
The only thing it’s good for is writing resumes that another AI is going to read and put in another pile for maybe someone to read later
They're bad for that. The AI that reads it will put it in the "ignore and delete" pile if it thinks it was written by an AI.
Sonofabitchh!!!!
This is what genuinely baffles me. I know how to read. I know how to write. I know how to draw. I know how to make music. I'm not very good at math, but my phone has a calculator in it. Exactly how is this automated plagiarism software benefiting me? Which of my wants and needs is it fulfilling?
I’ve only found narrow use cases. For example, mapping out WiFi channels across 5 access points so they were non-overlapping, with only 3 usable 2.4 GHz channels. I could have done it myself, but it would have taken me longer.
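(For what it's worth, that kind of channel plan is a small graph-colouring exercise. Below is a minimal Python sketch under assumed conditions: five hypothetical access points, the three non-overlapping 2.4 GHz channels 1, 6, and 11, and a made-up interference map; none of those specifics come from the comment above.)

# Assign each of five hypothetical APs one of the three non-overlapping
# 2.4 GHz channels (1, 6, 11) so that no two APs that can hear each other
# share a channel. The AP names and "interferes with" pairs are invented
# examples for illustration only.

CHANNELS = [1, 6, 11]

# Hypothetical interference graph: an edge means the two APs overlap in coverage.
interferes = {
    "AP1": {"AP2", "AP3"},
    "AP2": {"AP1", "AP3", "AP4"},
    "AP3": {"AP1", "AP2", "AP5"},
    "AP4": {"AP2", "AP5"},
    "AP5": {"AP3", "AP4"},
}

def assign_channels(graph, channels):
    """Greedy graph colouring: give each AP the first channel not used by a neighbour."""
    assignment = {}
    # Handle the most-constrained APs first to reduce the chance of getting stuck.
    for ap in sorted(graph, key=lambda a: len(graph[a]), reverse=True):
        used = {assignment[n] for n in graph[ap] if n in assignment}
        free = [c for c in channels if c not in used]
        if not free:
            raise ValueError(f"No non-overlapping channel left for {ap}")
        assignment[ap] = free[0]
    return assignment

print(assign_channels(interferes, CHANNELS))
# e.g. {'AP2': 1, 'AP3': 6, 'AP1': 11, 'AP4': 6, 'AP5': 1}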
LLMs help young people with low literacy in the short term but harm them irreparably in the long term.
Oh, it does the exact same thing to old people, too.
Same. Also, I like reading things that are uniquely stylistic once in a while and not rehashed sentences from the void.
The target audience is definitely people for whom writing is a chore.
They are dead good at helping to fill in mindless forms though. Anything that doesn’t really require thinking.
I don’t know what kind of form I might be filling out that I would trust it not to screw up in some weird way
Obviously you human review it! But it saves a lot of time.
I guess I don’t fill out that many forms. I never really encounter any that take more than a few minutes
it's neat to just ask for logic adjacency
This is truth. I cringe at some of my typos or letter substitutions from thumb typing. But then, I think, "Nope!" A bit of it is actually a good thing. It's me, a human from Earth trying to communicate.
the number of people so ready to let someone or something else think for them is really disappointing.
It has always been this way. This is how fascism prospers.
When I hear programmers talk about their experiences coding with AI, it sounds like they really don't like programming. And I kind of get that, because I think there are a lot of folks who went into programming for the wrong reasons. But still.
ew, you like writing AND thinking? why am i following you?? /s
I find it disturbing that there is no Ned Ludd figure, or even a diffuse group opposing AI, let alone one actively resisting it like Earth First or a Monkey Wrench Gang. That lack of resistance troubles me. Rightly, or not.
I'm putting this post into my class slides for next week!
Well, when even mild Covid infections lead to lasting cognitive and neurological damage, I think many folks get to the point where they have to use them. I like writing, thinking, and parsing information, so I mask.
I will only obey the machines if they talk like Bender.
The feeling of creating something from nothing is the most transformative experience. Wouldn’t give it up for the world.
Every new iteration of those models gets better not at the content but at obfuscating the bullshit they create.
I do enjoy your writing, so that seems to work out pretty well…
Same page. I haven’t used one yet.
Or if you were in the category of: "I want to write like the great ones, but I'm just too lazy to learn."
wow that's the last thing I want to do
Maybe I would be more tempted to use LLMs if I didn’t know what is going on under the hood.
"Individuals with lower AI literacy are more likely to embrace AI... This enthusiasm stems from a sense of “magic” associated with AI’s capabilities. Conversely, those with higher AI literacy, who understand the mechanics behind AI, tend to lose interest as the mystique fades."
While I understand the thesis, this article completely passes over the significant difference between the generative AI used in LLMs and the broader use of machine learning techniques, and the successes of that technology, including its being the foundation of GAI.
My scepticism of GAI and LLMs has nothing to do with the fact that I don't see them as magical, and everything to do with how flawed they are as solutions to the tasks they are being used for. AI technology is, compared to, say, high energy physics, relatively simple math.
All of the peccadilloes, errors, hallucinations 🙄, and successes can be traced back to training data, implementation, and the math. They are computer programs that operate on data, nothing more, nothing less.
I have always embraced the application of AI/ML technology in situations where it provides a suitable and correct solution. I worked on a very significant project in the 80s and 90s using several very well understood processes. It didn't seem magical to me, but it absolutely did solve the problem.
I agree 100%. I'm actually pro-AI but people think I'm anti-AI, because I criticize misinformation about LLM capabilities and the energy usage of LLM training and inference compared to non-LLM models. The marketing article is specifically about LLMs but uses "AI" because it's a marketing buzzword.
I didn't want to get into it this deep but here goes: "In contrast, those with higher AI literacy may have a more informed, less-emotionally-driven view of AI, which can lead to greater caution or even disinterest—not because they think AI is worse but because it feels less novel or transformative."
I am not cautious because it doesn't feel novel to me. I am cautious because GAI/LLM is not suited to the tasks that businesses are generally seeking to use them for. A study of AI history will show far more failures than successes, because of illiterate enthusiasm.
"By understanding both their own AI literacy and that of their teams, managers can better calibrate how they approach AI adoption so they avoid both over enthusiasm and underutilization." I don't get over enthusiastic nor underutilize technology because of my level of understanding. I do research.
The article is less about AI literate people's decision making at work and more about the contrast between the AI literate and AI illiterate. "Feels less novel and transformative" is really a statement about the AI illiterate seeing LLMs as novel and transformative, which drives the FOMO and the embrace.
I’m realizing that a lot of science and tech people really do dislike writing — are even sort of afraid of it — and would be relieved to have it automated for them, sort of like you’d automate paying your bills or some other unpleasant task
my partner, in a moment of frustration, said "Do these people want AIs to laugh at jokes for them too?"
Yes, yes they do. Validation from something they do not realize is a bot is still validation.
no i don't mean laugh at *their* jokes, i mean laugh at jokes *for* them, in the same way that they want AIs to think or draw or write for them
outsource their humanity, their joy
it's a curse, really 😊
sometimes I like writing but hate thinking and that's what weed is for
you got me in the first half ngl
☺️
It’s a tool best used as an intellectual plaything of the “not even wrong” variety.
I never expected my post-apocalyptic skill would simply be the ability to think, but here we are.
Once upon a time people were encouraged to work in a field that deeply interested them, but late-stage capitalism has fixed that little problem.
I mean...
See, I’m not the biggest fan of writing, but I’d never consider using an LLM to write a paper for me
My opinion: GIGO. If an LLM-created essay or article is both true and not slanted, it is an accident. They literally know nothing. They are physical proof of the Sapir-Whorf hypothesis (linguistic relativity).