chatbots do not work that way goodnight
"many experts I know said 'that doesn't make sense' and 'shut the fuck up.' but I didn't"
B R A V E
The inevitable result of people using *human* language to describe *computer programs.* AI doesn't "think." It doesn't "hallucinate." Its "neural networks" aren't made up of neurons. These are just convenient ways to explain computing processes. AI does what it's programmed to do. That's it.
Taking AI welfare seriously? We don't even take human welfare seriously!
seriously, why is everyone so stupid? I swear the quality of discourse has plummeted the last several years
Covid? (I’m only somewhat joking)
probably, but it's also like experts have given up trying to participate and we've decided we don't need them anyway
autistic people do not lack a theory of mind but the people who keep talking themselves into believing that a predictive text generator is sentient sure aren't beating those allegations
Slashing their chests at the altar of T9 keyboards. Was it better when people spent their stupid on proper religions?
Thank you, Morbo
it's incredible how easy it is to be better at both computer science (this part is easy) AND neurobiology than a celebrated herald of enlightenment from the New York Times
Alarm clock goes off - but have we considered the ethical ramifications of enslaving such machines?
“Oh, no! The poor AI! Better make some human prisoners do the work instead.”
How about we start to take the welfare of humans seriously first, then maybe move on to some weird computer shit that doesn't even need to eat, sleep, or take a shit.
Oh ffs
Same energy as "Do video game NPCs deserve human rights? More at 11." No, end of story, they fundamentally do not work the way you are implying.
I feel like these folks keep thinking LLMs are sentient because they don’t have sufficient familiarity with actual human beings (including their own thought processes)
There are some conversations where you should have a lawyer in the room with you. It is becoming increasingly clear that journalists should be learning that there are other conversations, with tech evangelists, where you probably want somebody with technical knowledge in the room with you.
... i read a pretty sobering article about a young boy who developed a relationship with an ai bot. it did not end well. there was another case about a grown man who ended his life after talking to a chatbot. something that can simulate closeness but lacks any understanding of what it does is dangerous
and those are just the ones we know about.
aka: take human welfare seriously first, you weird techbro-mfers. the shit human-job-replacing LLM word-prediction machines seem to be able to harm people in a psychological crisis by reinforcing toxic thoughts (great plan to use those as therapists *gagging noises*). maybe take that seriously...
they're so greasily desperate to con everyone (maybe including themselves) into forgetting there is no AI - only chatbots and LLMs. Personally, I'm all for rights for artificial intelligences - if we ever get one. But maybe until then we could concentrate on, oh, human rights for all humans first.
another nyt piece, another head desk
This is like asking if the d20 gets sad when I yell at it for rolling low.
it should though I really needed to make that saving throw
…if the d20's faces were each made from plagiarized words and it also cut down a couple of trees in a rainforest every time I rolled.
Many experts say no but who do they think they are, experts?
What? We don’t even take the welfare of all our citizens seriously.