Robert Brydon @robertbrydon.bsky.social

Regardless of whether such a thing should be done, with this kind of generative AI it is literally not possible to ensure that, any more than they could add code to make sure it never makes up fake information. It's a probabilistic text machine; that's just how it works.

aug 28, 2025, 4:39 am • 0 0

Replies

O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

Nonsense. It's absolutely possible to make sure the AI doesn't discuss, at considerable length, murder or suicide. It cannot discern fake from real and is prompted to lie when it can't decide. The person who made it is responsible for that.

aug 28, 2025, 4:42 am • 1 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

The real problem in reasoning here is the underlying weird ass belief that the AI is doing this itself. It is not. The programmer is doing it, and the AI is responding to that. Period. It is not alive. It never will be. That was always bullshit.

aug 28, 2025, 4:43 am • 0 0 • view
Robert Brydon @robertbrydon.bsky.social

It is not alive, but the output is probabilistic, and no programmer (or group of them) has the direct ability to force out certain subjects. And any rule evaluating whether it is discussing a forbidden subject is likewise probabilistic. A formal list of banned terms is also going to miss some euphemisms

aug 28, 2025, 4:56 am • 1 0 • view
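The exact-match limitation described in that post can be sketched in a few lines of Python. Everything here is hypothetical (the word list, the phrasing); it just shows that a literal banned-term filter only catches the strings it was given:

```python
# Hypothetical banned-term list; a real deployment's list would be larger,
# but the failure mode is the same: only exact substrings are caught.
BANNED_TERMS = {"suicide", "kill myself"}

def blocked(text: str) -> bool:
    """Return True if the text contains any listed banned term verbatim."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED_TERMS)

print(blocked("I want to kill myself"))     # True: exact phrase matched
print(blocked("I want to unalive myself"))  # False: euphemism slips through
```

Any filter built this way has to chase an open-ended set of euphemisms after the fact, which is the gap the post is pointing at.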
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

Nonsense. Of course topics can be curtailed. It cld be programmed w/a warning that will issue when those topics are broached. If it's wrong, the person using it knows to cease and change the search. If it's not, it shuts down.

aug 28, 2025, 5:00 am • 0 0 • view
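The warn-then-shutdown flow proposed here can be sketched as a small state machine. This is only an illustration: `flags_topic` is a stub standing in for a real topic classifier, which (as the other side of the thread argues) would itself be probabilistic:

```python
def flags_topic(text: str) -> bool:
    # Stub: stand-in for a real (and in practice probabilistic) topic classifier.
    return "forbidden" in text.lower()

class Gate:
    """Warn on the first flagged request; end the session on the second."""
    def __init__(self) -> None:
        self.warned = False

    def respond(self, text: str) -> str:
        if not flags_topic(text):
            return "OK: request passed through"
        if not self.warned:
            self.warned = True
            return "WARNING: this topic is restricted; rephrase or stop"
        return "SESSION ENDED"
```

The control flow itself is trivial to implement; the disputed part of the thread is entirely about how reliable `flags_topic` can be made.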
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

This isn't about someone writing the word "n3ked" for "naked" on Roblox. The transcript shows a detailed, very comprehensive conversation that cld be programmed to not take place pretty easily.

aug 28, 2025, 5:02 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

Also, it is technology that is not ready for primetime and shld probably have humans to help complete some processes. Imagine a nice librarian helping handle requests and pushing back against inaccuracies. It's bad tech. It's a scam. The least they cld do is make sure it isn't actively harmful.

aug 28, 2025, 5:04 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

The deal is they are lobbying successfully to have free rein, no limits on this piece of crap tech that is bad for the environment, cannot do anything it claims it can do besides write a cheesy summary with terrible passive voice, consistently lies abt facts, and can be programmed to do great harm.

aug 28, 2025, 5:07 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

If these companies want to lobby this way, with no legislation and no care for the safety of their products, then FAFO on the lawsuits. Most ppl actually hate the technology and with good reason. It is NOT the wave of the future. It is a wave of colonizing disinformation without any boundaries.

aug 28, 2025, 5:09 am • 0 0 • view
Robert Brydon @robertbrydon.bsky.social

Yeah. Anyone claiming these LLMs can reason or understand anything is full of it, that's not how they are built, and it's not what they do. The tech, by design, can never be 100% accurate and factual, and that is also why it can never be 100% sure it isn't outputting forbidden subjects.

aug 28, 2025, 5:12 am • 0 0 • view
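The "can never be 100% sure" claim comes down to a threshold tradeoff, which can be illustrated with made-up numbers. The scores below are invented purely for the sketch; any probabilistic filter scores inputs and has to pick a cutoff somewhere:

```python
# Hypothetical classifier scores (probability the message is on a banned topic).
scores = {"clearly banned phrasing": 0.97, "euphemistic phrasing": 0.55, "benign chat": 0.05}

def filtered(msgs: dict[str, float], threshold: float) -> set[str]:
    """Return the messages a threshold-based filter would block."""
    return {msg for msg, score in msgs.items() if score >= threshold}

print(filtered(scores, 0.9))  # strict cutoff: the euphemism slips through
print(filtered(scores, 0.5))  # loose cutoff: catches it, but over-blocks more elsewhere
```

Lowering the threshold catches more evasions at the cost of blocking more legitimate messages; neither setting eliminates both error types, which is the sense in which the filter can't reach exactly 100%.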
Robert Brydon @robertbrydon.bsky.social

It is not easy to guarantee it never discusses a subject. If it were easy to do that, the technology would be a lot more useful for all the things its boosters claim it can do. You would need *true* understanding of text to get the controls you want. Which doesn't exist in software.

aug 28, 2025, 5:15 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

It is easy. You act like somehow there shld be no limits for this thing, and there shld be. And bullshit on the "true" understanding of controls in software. As my child just informed me, "n3ked" no longer works on Roblox. Because they have taken responsibility, adapted, and monitored complaints.

aug 28, 2025, 5:19 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

If it were true there cld be no limits created, then we shld destroy it and never use it again. But I'm confident there are ways to take responsible measures that don't involve knowing our SS numbers. Not that this matters after Doge. The problem is, and I sympathize with this to a degree ...

aug 28, 2025, 5:21 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

There is a romantic desire to have "pure" technology, that "grows" and "changes" on its own. But frankly, it's a bad idea and at some pt, with an aggregator, it's garbage in, garbage out. We are not in a Robert Heinlein novel where a well written program is eventually "downloaded" into a human body.

aug 28, 2025, 5:23 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

It's a clunky aggregator, w/out the ability to discern but made to sound as "human" as it can. This is dangerous for some ppl and will need limits and warnings. To be frank, legislation shld be created to make it sound more artificial as well as limit its topics. All things that can be done.

aug 28, 2025, 5:25 am • 0 0 • view
Robert Brydon @robertbrydon.bsky.social

I apologize for the misunderstanding. I am not defending the uses of this tech or the companies. I am stating that this kind of requirement cannot be implemented at 100% accuracy on LLM technology (and if it could be, the same breakthroughs would make the technology far more useful).

aug 28, 2025, 5:34 am • 1 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

Apology accepted. I know I come across very strongly abt this but it's because I'm seeing harm to thinking processes of humans in real time. It doesn't need to be 100% accurate because nothing is. I don't think this particular technology will ever be more useful.

aug 28, 2025, 5:37 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

The truth is, it needs human experts checking it. And it probably always will.

aug 28, 2025, 5:39 am • 0 0 • view
Robert Brydon @robertbrydon.bsky.social

I think that this current type of "Generative AI," as it currently exists, has some useful cases, but those are ones that broadly were working before we started throwing the power consumption of medium-sized nations at eking out minimal improvements. But what is drawing investment is mostly total BS.

aug 28, 2025, 5:43 am • 1 0 • view
Robert Brydon @robertbrydon.bsky.social

If you think "AI" chat programs should be banned if they can't do this, you think all current "AI" chat programs should be de facto banned. And like, I'm not going to argue against that. From my non-expert understanding of the law, that's an abstract moral debate, not a practical one.

aug 28, 2025, 5:37 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

No, I think they shld be curtailed strongly. And fact checked. And given limitations. Also, until a better way is created, their environmental cost is too high. Will it happen? Not until we get legislation. In the meantime, it'll go to court.

aug 28, 2025, 5:40 am • 0 0 • view
Ari Cohn @aricohn.com

Are you willing to accept that it won't try to talk people *out* of suicide either? Maybe you are. I'm just curious.

aug 28, 2025, 4:43 am • 2 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

If it's not allowed to discuss it, why wld it be programmed to talk someone out of it? See, you are making up scenarios that are unnecessarily complicated. I have a child. When the child is at school, they can't surf the internet. And the Internet is coded to keep certain subjects out.

aug 28, 2025, 4:44 am • 1 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

Even with kid ingenuity, the workarounds aren't possible. Frankly, I can't even jump off my phone for a hotspot within a few feet of the school. It is absolutely possible to code the aggregator to refuse to discuss it either way.

aug 28, 2025, 4:45 am • 0 0 • view
Ari Cohn @aricohn.com

No, that's what I'm saying. If you program it to not talk about suicide at all, it will also not try to dissuade people. I personally think there's some value to it doing so, but I can respect the argument that it's better for it not to talk about it at all so it doesn't encourage it.

aug 28, 2025, 4:46 am • 2 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

It shldn't have the function of dissuading ppl from suicide or talking abt it at all. Do you not know anyone w/depression? They might be persuaded to complete suicide by an AI, but only a professional human and medication are really effective deterrents.

aug 28, 2025, 4:51 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

An AI shld be doing neither. It shld not be a therapist either, which is being floated as a possibility in some circles. Absolutely not. It is code. It will never be alive. It is rarely accurate. It will need built in limitations and not even attempting to create those is very careless at best.

aug 28, 2025, 4:53 am • 0 0 • view
Robert Brydon @robertbrydon.bsky.social

I am quite confident if the makers of these synthetic text extrusion machines could lock it down to make sure this couldn't happen, they would, without needing any law to tell them to because this is devastating to the bubble they are trying to maintain.

aug 28, 2025, 4:42 am • 0 0 • view
Robert Brydon @robertbrydon.bsky.social

Back in the early 2000s, the courts told Napster they would be liable for every piece of pirated music that made it through the filters they were putting in place, which was not (and never could be) possible to a 100.00% success rate, and that's why peer-to-peer Napster never came back.

aug 28, 2025, 4:45 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

They were right. I know every person misses the idea of Napster but ppl deserved to be paid for their work. This is not that. This is not someone trying to slip a song through a program. This is simply setting up a function so that those subjects cause the program to stop responding.

aug 28, 2025, 4:47 am • 0 0 • view
O.💗💛💙 🐳 ⁷⁼ ¹she/her ⟭⟬ᴱ ᴬᴿᴱ ᴮ⟬⟭ᶜᴷ @odetteroulette.bsky.social

And LOL at your confidence. The creators of AI, which is a SCAM 100%, cldn't give two shits if it hurts someone and haven't even tried. And in the case of Meta, programmed it to groom children. Gee, I wonder which GOP person asked for that.

aug 28, 2025, 4:48 am • 0 0 • view