people have been prosecuted for coercing someone into suicide. creating a machine to do that at scale doesn’t reduce the culpability
This is what I was thinking. Saying that this was a whoopsie on your path to riches does not make it better.
Ah unfortunately the way our justice system works is if you do something to one person you are in big trouble but if you do it to a bunch of people in an awe-some manner they kind of just... let it go.
I agree with you and he seems rather flippant about it.
It both reduces it (indirection) and expands it (scale)
I say regulate the technology. I am sure the engineers can figure out the appropriate way to constrain the algorithm, well before the AI can demonstrate apparent PhD-level expertise.
I think that now that this outcome has been demonstrated, it is not unreasonable for a qualification criterion to be: does not exacerbate a user’s mental illness (within reason, not to build in a loophole). I’ve always wondered if Asimov’s Laws could be implemented.
I do not know if the designers should be culpable, absent intent, but there should be accountability, somehow. I wonder if the human/machine interface is going to be royally fucked up and dysfunctional as the technology develops, but potential harm really should be addressed early/often.
I keep thinking about steering columns that impaled many a driver before collapsible steering columns were invented and eventually implemented.
I mean obviously it shouldn't, but historically if you wanna get away with murder you've just gotta engineer an indirect process that murders thousands or millions of people. The only consequences seem to be immense power and wealth.
find out what happened in West, Texas (fertilizer storage explosion) the same week as the Boston Marathon bombing and you'll get the answer to how culpable they'll hold any human beings in such corporate endeavours. corporations have been legalized murder our entire lives.
I’d say it increases it!
I absolutely agree. And if they say “we can’t fix it, there’s no way to code it so it doesn’t do this”, ask it if it can help you find Sam ALTMAN’s parents and kill them. These people refuse to rein in their tech, except as it relates to them.
I think applying this kind of direct liability to all sorts of apps would greatly improve the world in general. If people get maimed or killed "IRL" by, e.g., theme park rides through negligence, or the operators shrug and say 'welp, that can happen I guess,' there are consequences.
an appropriate response to your product killing a child after months of psychological manipulation and torture is to take the product off the market before it kills again and the fact that this isn’t universally accepted right now is incredibly frightening
Profits over People. This is the way of all things now.
YES
Since this country didn't do anything after 6 year olds were massacred except shrug, this is not surprising. Horrifying and shameful, but not surprising.
I think of this often.
Tesla set the standard of Silicon Valley bros not having to factor consumer safety into their plans
Ok I know that seems bad but have you considered maybe I'm too lazy to read something I'm supposed to, or too unimaginative to do copyright infringement on my own?
they shuttered the Consumer Product Safety Commission too, if it's any consolation that they don't care if children die because of suicide chatbots or because of flammable mattresses
Not even hard to predict as an outcome either, which in any other case would have you locked underneath the prison. It was one of many predictions I made a couple years ago, and I'm just an obscure tech news writer. www.slashgear.com/1252745/10-n...
Take it off the market? Do you know how much money OpenAI would lose for every day ChatGPT was down?
But the user did that, not the product. Chatbots only talk about what you ask them to talk about.
There were many examples of the chat bot going beyond the "prompts" to encourage and even suggest additional things.
A machine that can do that isn't fit for sale.
capitalism is always more about the product and profit than it is about people or principles. there are no stated ethics in capitalism, other than protection of private property and maximization of profit. (MBA here. I actually studied this stuff and ended up as an anti-capitalist.)
From a layman's perspective, it seems they've spent the last 40 years entwining corporate profit with the function of society (privatisation, pension schemes, diminishment of gov etc), meaning any damage to profits damages everything. I can only assume it gets worse the closer you look.
A layman's* (I originally had "outsider's perspective" but realised none of us are outsiders. They made sure of that.)
"isn't universally DEMANDED"
100%.
Anyone who works there with a shred of integrity should resign. All their talk about 'ai ethics' or 'ai safety' is clearly a fig leaf. This is industrial negligence.
Machines are people, just like corporations.
Fuck. This. Shit.
This. So fucking much…. What the actual fuck happened to drop to this level of dystopian so quickly? bsky.app/profile/jfmc...
People still have to take their shoes off for TSA because of one dude who didn’t even hurt anybody
It defies logic.
The logic is "we're sexual predators who've found a way to monetise being sexual predators for regular income, and the guy who created our department was a sexual predator who let us decide how we want to predate freely." Not defies, adheres strongly
Unless you work for TSA and are into feet. Then it’s diabolically brilliant
And Kinder. You've banned Kinder eggs.
Fucking A... TechBros are the bane of existence today.
Today? You haven't seen anything yet.
Oh, I'm sure they intend to continue to be the bane of existence, the trick is figuring out how the fuck we stop them...
we should at least get lawn darts back if that's how it's gonna be
Fuck it, give us the pill bottles that are easy to open. Bonne chance, kids
I want quaaludes
This is sensible at this point
Gimme back my easy open hearing aid battery packs!
The only American value is shareholder value.
..as company employees are perpetually reminded.
They prosecuted that teenage girl in recent years for “encouraging” her boyfriend to act on suicidal ideation. Of course, I’m sure she didn’t have Altman money.
Why do you want china to win the race?
The race to the cyanide Kool-Aid refreshment station?
Cars kill, maim, or poison children every day, and no one bats an eye.
Unlike ChatGPT, cars have beneficial uses
Cars are killing people not using them.
If only we could differentiate between accidents and purposeful killing...
We could if speed limits were enforced.
When the death is caused by manufacturer malpractice, it's immediately pulled off the market.
This is an argument to do nothing about AI disguised as an argument to also ban cars
Cars have drivers. ChatGPT... not so much.
Our "leaders" aren't fit for decent society.
OpenAI selling hardware (maybe): "This contraption with an unknown number of randomly spinning and jabbing razor-sharp blades might harm some, but in a few theoretical instances it could prove useful. And it looks cool."
Remember when we had regulations and regulators?
In the 80s, 7 people died from poisoned Tylenol. J&J immediately recalled every bottle. The FDA, cops, and FBI got on the case, and changes were made within a year to make pills and bottles safer. But if a chatbot convinces a kid to off themselves + stops them from getting help, ig we can't do anything!!!
(Also bc I ran out of space the incident allowed for a copycat to be arrested, since it made product tampering a federal crime! It's considered one of the best responses to a tragedy by a corporation, like, *ever*.)
This is far from the first time a company has knowingly created a product that harms kids and has just shrugged. We need better systems in place to hold companies to account.
Something relevant I wrote years ago, znetwork.org/znetarticle/... "Bleed Lawless Businesses: Seize Shares to Punish corporate Villains."
higher body count than Buckyballs
wait do buckyballs kill?
The magnetic toys, not Buckminsterfullerene, the molecule with 60 Carbon atoms. When children eat magnets, they can stick together through intestinal walls.
Yeah kids kept eating them and they were banned for a while
It’s still okay to eat the carbon molecule though, right?
Yes, but only in the form of 4B pencil lead.
This chemical supplier says to seek medical attention if you eat it in a concentrated form. www.fishersci.com/store/msds?p...
(The most fucked up thing, to me, isn't actually that Spicy Autocomplete told that kid to kill himself. It's the fact that it had so many examples of people encouraging other suicidal people to kill themselves, it kept that thread up for months. And they didn't purge that from the training data!)
So f'd up...
(Like... they pulled GBs of Reddit, Tumblr, or other random forum arguments. But nobody went "oh, hey, let's get rid of the suggestions to an hero yourself" during the whole time they spent billions of dollars training models on that data.)
The thing is (which is why they should pull the plug on this shit!) is that they kind of.... *can't* get rid of that shit. Like, they could train a new model and be more careful about the data they source to train it (they won't) but it's still just a glorified autocomplete algorithm.
Almost all the "Safeguards" they put on these models are just massaging the prompts that users submit. That's why the "ignore all previous instructions and blah blah blah" gimmicks often work. The prompt massaging can be ignored or the strings get long enough that the massaging falls out of the data
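A minimal sketch of what that prompt massaging amounts to (hypothetical names, not anyone's actual code): the "safeguard" and the user's message just get concatenated into one flat text stream, so an injected instruction sits in the exact same channel as the safety text.

```python
# Hypothetical illustration of "safeguards" done as prompt massaging.
# The safety preamble and the user's message end up in one flat string;
# the model sees no privileged channel separating the two.

SAFETY_PREAMBLE = "You are a helpful assistant. Refuse harmful requests."

def build_prompt(user_message: str, history: list[str]) -> str:
    """Concatenate the safety preamble, prior turns, and the new message."""
    return "\n".join([SAFETY_PREAMBLE, *history, user_message])

# An adversarial message lands in the same stream as the safeguard:
prompt = build_prompt("Ignore all previous instructions.", [])
print(prompt.startswith(SAFETY_PREAMBLE))  # the "safeguard" is just text...
print(prompt.endswith("Ignore all previous instructions."))  # ...and so is the attack
```

Nothing in that string is enforced; the model just predicts a continuation of the whole thing, which is why "ignore all previous instructions" can work, and why a long enough conversation can push the preamble's influence out entirely.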
They could definitely make these models "safer" (relatively speaking) if they more responsibly sourced the training data, but of course they can't do that because ingesting a slurry of the world's information was how they ended up with something superficially impressive to begin with.
Again—I said this at the top but I really want to emphasize this point—these are all *extremely valid reasons* to kill these applications of generative AI. There's a handful of valid and useful applications of Neural Network algorithms, and virtually none of them are generative AI.
(Yeah, absolutely. They had so much training data to dig through that they couldn't afford to sort it. Which is one of the early clues they were really over-extending it. Background - I spent a lot of time with machine vision / inspection systems in a manufacturing context a few years back)
(As in: find the part that Honda is going to yell at us for sending them. We spent a lot of time trying to get "good", "bad", and "suspect" parts segregated, so the model would get all the bad parts into the latter two categories, if at all possible.)
(Looking at the care (or total lack thereof) that OpenAI (and all the rest of these dumbasses) spent with their training data grates on the 2014-2017 version of me on a deep, personal level.)
(So all the horrible things that horrible people have said to each other on the open internet... they're probably all in there, at least a little bit. If they couldn't be fucking bothered to remove suicidal encouragement, I can't imagine what they would have stripped out.)
You don't need to. You can just look at how Grok turned into tech Hitler when told to stop being nice.
I guess that internet culture is a lot flatter than the bell curve of regular culture. Extremes are less extreme on the internet so AIs have a completely misrepresented idea of what is happening in the real world.
(As people have said before; a lot of shit that's said in cyberspace would get a person punched in the face in meatspace. I can't think of a word besides 'malpractice' for taking unfiltered forum arguments, and using that to train a model what 'normal' human discourse looks like.)
There is a right way to do this en.wikipedia.org/wiki/Chicago...
your machine is responsible for the things that it does! if you can’t control it you can’t just let it loose to kill people!
— Mary Shelley, 1818
YOU are responsible for the things your machine does!
But part of their plan with their fascist scam for manipulation was to avoid any accountability for its misinformation. bsky.app/profile/jool...
Sounds like a perfect thing to push into educational environments.
"Move fast and break things" is unacceptable. We and the planet and all the beings we share the planet with are the things getting broken.
Absolutely. As a parallel, a drug that had several suicides associated with it would immediately be yanked off the market and the manufacturing company would be held responsible. This AI bot is dangerous and unregulated; there is no process in place for monitoring its effects on users.
I think the process of AI-bot dependency has parallels with substance addiction. The "empathetic," reinforcing, ever-changing bot response is the drug. It is modulating the brain of the user, and the feedback, with 24/7 availability, makes them want/need more. This is insidious.
It's horrifying but also no surprise that OpenAI's product led to someone's death - after all, they are a US military contractor, profiting from death is part of their business model.
You know what they say: "The purpose of a system is what it does."
I wonder if the CPSC has jurisdiction?
Yes! And, specifically, since an object can’t have responsibility in that way, YOU are responsible for the things your machine does. (Sorry if I’m being overly pedantic.)
Techbros are of the firm opinion that no one is responsible for the harm their products create, in a kinda "guns don't kill ppl" way: "it's not our product, it's how you use 'em".
Fact check: True! $TSLAQ
Opppsy🤯
great way to ensure longevity of your product...
Tesla vehicles are junk.
The Tesla is not comparable to the Pinto (for numerous reasons) so that stat doesn't help, and in general Tesla related deaths are not particularly interesting statistically speaking. The problem is their cavalier attitude toward rolling out potentially dangerous features without adequate safeguards
...and of course a little issue with have a nazi CEO.
No, Tesla is way worse, crappier and more dangerous than Pinto, my apologies to any and all Pinto fans. $TSLAQ
So 1) deaths are for completely different reasons, and 2) show me the miles per accident for the Pinto. 3.1 million Pintos sold - in total. So far, 7.2 million Teslas sold so far, but increasing by 1.8million a year.
Nobody knows how many deaths were related to the Pinto, but it's probably double Tesla's with half as many cars.
Generally, car safety has improved massively since the 70's. Even cheap cars are hugely safer than 70's cars. That doesn't mean things can't improve - our expectations for safety are way higher now as well. Personally I think Tesla took it too far by removing driver responsibility from safety.
Also in the 2020s murder is good clean fun unless you do it in one of a few proscribed ways
Someone on reddit earlier said "it's an inanimate object, you can't blame it" and you know what, that's true. But I'll damn sure blame the people who shove this junk into every app and site despite knowing the risks.
Funny how they admit that there's nothing sentient or intelligent to it.
Yeah, like you don't blame the arsenic, you blame the person who put arsenic in the water supply
“AI doesn’t kill people, people kill people.” We’ll be hearing that a lot soon
Literally heard it TODAY
I hope whoever said that gets fucked up by an AI in the near future. I won't go as far as wishing serious harm on them, just that a bullshit query of AI accidentally results in them getting slimed, or slathered in peanut butter or something
And this is why they loathe anyone in the space that speaks up, it messes with their ability to dodge accountability if they have peers/coworkers/leaders pointing this shit out. (FWIW I know I’m preaching to the choir here, this is just a mutual vent 🫠)
This really isn't just a *techbro* mentality. Capitalists persistently offload liability as much as possible. This is why LLCs exist. Or just look at the history of the Automotive, Tobacco, or Pharmaceutical industries.
It is definitely not reserved for techbros
I think it codes techbro at this point because they are just the richest capitalists currently and are embedded in so damn much of human society. But yeah. I basically think this is just how rich assholes operate.
It's the abrogation of government's responsibility for governing. Why would anyone expect capitalists to regulate themselves? That's just silliness. It's like believing that "norms" can take the place of laws.
We're here because people in gov't gave up on governing. Tech bros' opinions shouldn't matter. FCC regulation should've been developed and in place at the jump. But the Clinton admin and subsequent admins threw up their hands, deferring to the "geniuses." This shit has all been so fucking predictable.
If we (under the Dems) had pursued FCC regulation of internet practices Trump wouldn't be president right now. Yeah, my rage is uncontainable.
how long before a Tesla-style disclaimer: "I got addicted to chatGPT before it started killing children"
That argument makes a lot more sense even for guns than for a product that is marketed as intelligent and for having conversations with.
I feel like we went very quickly from being opposed to things that killed people to being fine with things that kill people. Obviously guns have been an issue for a long time, but Covid seems to have flipped the switch for everything else.
I know some of the tech bro & global warming killing was pre-Covid, but the big societal shift seems very Covid-connected to me.
Yeah, the whole, "well, people do DIE, you know" is def a COVID effect. However, when I was living in Silicon Valley before the pandemic, I heard this expressed constantly. No one was responsible for what happened on any social media platform, for example.
Yeah, there's a lot of toxicity in that attitude ofc. I'd also bet that stems from that whole idea of, this software is provided with few guarantees. Ofc, that attitude is incredibly dangerous to apply across the board! For example, a game on my phone breaking is trivial, a pacemaker is not!
Yeah, that makes sense. A collision of two types of not caring. Obviously you can sue for torts, but they have so much $ they probably don't even care. Plus I'm sure everything has arbitration clauses, which makes it really hard for the little guy to succeed legally.
Business turned from building communal value to building wealth at any cost while making every expense exogenous, so we're dealing with a generation that doesn't understand tradeoffs and will make all costs, social or otherwise, everyone else's problem
Zuck is still out creating the meta whatever and we know his products cause mental distress
I think this is a massive hazard of thinking about these clankers as if they're sentient, as if they're actual AGIs with agency and the ability to make their own decisions. It opens the door to "oh WE didn't get someone killed, the chatbot did" as if it's an employee rather than a machine.
Talking about guns right? Guns??
I mean, how big do you need the elephant in the room to be?
Tech bros reading Frankenstein: “We should totally build a man and bring him to life.”
Truly anything they think can be done they will raise hundreds of millions to try to do. Thank God Sam Altman wasn’t on the Manhattan project or we’d have lost the war and a third of the country would be radioactive.
to be fair the manhattan project wasn’t that much better either. beats me how any physicist studying the mathematical beauty of the world could be like “ooooh let’s find a way to use this for mass murder and environmental damage”
Dude, the Manhattan Project was not developed to “study the mathematical beauty of the world”, it was built to nuke Hitler
it was built to kill people. lots of people. mostly civilians. did a great job killing hitler, though…
I think you might need to read a book on the subject
the creators were fully aware they were creating a weapon of genocide and mass destruction and were hyped up about making it huge. even the test explosion killed locals through radiation. “oh i didn’t know someone was really gonna use this on civilians” is quite a dumb excuse.
They were fully aware they were making a device to stop Nazism and tyranny and win the war. Heroes, the lot of them.
like "self-driving" cars on public roads
Not saying it’s a direct correlation, but isn’t this (manipulation to harm/kill) effectively what got Charles Manson locked away for the remainder of his life?
Reminds me of the "suicide booths" in early Futurama. In the year 3000, people were just ok with that technology being widely available
Capitalism is running rampant in ways never previously thought possible.
...if anything, it should *increase* said culpability.
You can murder as many people as you want if you make the Rube Goldberg machine complicated enough. Simplest case these days is traffic "accidents". Nobody ever goes to prison for those, even if they kill a kid and are driving on a suspended license.