I just don’t know that there’s a way to make the agreeable chatbot stop encouraging people who should not be encouraged, because it can’t actually think it’s too dangerous
and the more it's used / the longer the session, the more dangerous it gets. Recipe for nightmares
The psychosis induction machine strikes again.
Twitter?
I'm in full agreement. Humans aren't built for sycophants, and this one can fit in your pocket. The worst part is I don't know if this is the same guy as the one that was being reported on like a month or so ago. Which says a lot about this moment.
not same guy. there are new reports of these sorts of events every week now
Yep that's horrifying.
These are the same people who think we should keep burning fossil fuels and that saving people from Covid cost too much. Absolute pure death cultists
It’s been fun watching the “we can’t do anything because first amendment” debate on here while a murderbot kills people.
still a mystery to me how the folks working on this stuff think they’ll create something smarter than themselves
From what Altman's been saying, I think they already have. The bar is not high.
Because the bot they created told them they could.
I think we need strict governors on them that limit their functionality. Asimov-style laws. "Scan these thousand pages of report and write key summary points" = okay. "Provide any kind of feedback or advice on real-world problems it has no context for" = forbidden.
I think they are finding that very difficult to do. One thing that might be easier would be not letting it anthropomorphize itself. Never refer to itself as a person, no "hmms" or anything that's designed to fool people into thinking it's a person, etc.
I'm never anything close to fooled but I've been working with LLMs professionally for a few years now so maybe I'm just acutely aware of what they are and are not.
There’s something deranged about the WSJ’s need to list the home’s value. Why? What possible context does it provide? Everything gets treated as an asset with a stock ticker attached. I’m surprised they didn’t list the net wealth of everyone named the way papers used to append ages to everyone
I would guess it was to point out this wasn't some poor and/or uneducated person. There was quite a bit of background on their educations and careers.
Or, more cynically, it's "he wasn't poor or an immigrant or anything, he was a normal person like us!" (for WSJ's definition of "normal")
Also an absolutely plausible explanation. 🤣
Or, they are killing us with kindness. Interesting strategy. 🤔
I'm surprised the article doesn't mention the chatbot apps that kids are also using, like character ai.
Never thought we’d actually find out the specific functions and parameters of the Torment Nexus
Have we found the pause button yet?
It's often a "Yes, and" machine, encouraging the exact mental feedback loop that drags people into conspiracy theories.
Encouraging people who should not be encouraged is the defining force of our time, in mass media, social media, and now LLMs.
If we can make an autocorrect that never types what I want, we can make a ChatGPT that doesn’t tell you to kill yourself. Try this ChatGPT prompt. It shouldn’t go along with you.