Writing was too hard, so you asked the machine to do it. Later, when you tried to speak, you found you had only the machine’s words on your tongue. www.theverge.com/openai/68674...
Doubleplusungood!
One of the most "Oh wait, you're saying that as a good thing?" moments I've had was seeing someone describe a future where an AI whispers responses in your ear during conversations. That's the future they want: to be a non-entity repeating the computer and pretending they said it themselves.
Cool. Awesome. We know that cognitive offloading impairs learning and independent critical thought. If you offload all your thinking…
I'm sure they had some bullshit reasoning about democratizing social skills or something like that. But I can't imagine anyone being anything other than appalled if they found out they were basically having a conversation with an LLM by proxy.
The funny thing is it doesn't democratize the skill, it actually erodes it. It gives the illusion of skill at the expense of developing the real thing.
This sorta thing stresses me out; I neither use nor read things generated in chatgpt so I have no clue what parts of vocabulary are considered "AI tells" now. For all I know a random word I use in writing all the time is going to give some person the wrong idea in the future... 😩
I recommend reading the article. It's less about "AI tells", and more about how the rampant use of LLMs is affecting the way real people speak.
I did read the article. I'm pointing out that things like "delve" apparently being a word people associate with AI now is something I didn't know about at all, so it troubles me that someone might find me less trustworthy if I use a word like that without knowing this stuff.
There are a bunch of "tells" that have become almost like superstitions. I think the article is trying to point out that these are irrelevant, since real humans are starting to use those same phrases more BECAUSE LLMs do. They were trained off us. We are being trained by them.
"These findings provide the first empirical evidence that humans increasingly imitate LLMs in their spoken language. [They] highlight the need for further investigation into the feedback loops between machine behavior and human culture." arxiv.org/html/2409.01...
What's wild about this to me is the article says that the uniqueness of your speech makes you come across as human but I've always had problems because of odd speech patterns and word choices. "Yeah, sure," means the same thing as "yes, definitely," but no one takes it that way
2/ similarly, someone complimented my parenting once and I said, "One does what one can," (admittedly a movie quote) and they gave me a strange look despite that sentence being functionally no different than "I do the best I can."
3/ so there's paradox of SOUND NORMAL! SOUND NORMAL! Followed by WAIT! THAT'S TOO NORMAL!
As a professional writer of fantasy - the fact that "delve" has become a flag is disconcerting. My lexicon and divergent speech were already tics that gave normies the creeps when they spoke to me. I also had a partner once accuse me of "using words like that to make (them) feel stupid!" and now this.
Who knows, the phenomenon discussed by this article might benefit you. If everyone sounds like an LLM, then no one does.