JustAnotherRando @justarandoontheweb.bsky.social

First of all, I think any responsible person, definitely a journalist, should put 'AI' in quotes. There's no intelligence at work here. It's an LLM chatbot, just a random text generator. Once you realize what it is, the notion of 'safeguards' etc. becomes apparent for what it is: PR to hide behind.

aug 26, 2025, 2:26 pm • 54 0

Replies

Doug Linse @douglinse.bsky.social

The uncritical parroting of disingenuous marketing language is infuriating. Like, it doesn't "become delusional" in long chats; its functioning degrades.

aug 26, 2025, 4:35 pm • 27 0
JustAnotherRando @justarandoontheweb.bsky.social

I would even put 'functioning' in quotes. Since the LLM bot is spewing out text based on its inputs, I'm not sure exactly at what point your own inputs start to cause it to generate text that apparently 'veers off' from the 'safeguards'. The way these things are built, no one knows exactly what ... 1/

aug 26, 2025, 4:56 pm • 12 0
JustAnotherRando @justarandoontheweb.bsky.social

... will be output in a particular scenario. The very idea that these chatbots can be 'safeguarded' is folly. The only safe thing to do is to not use them.

aug 26, 2025, 4:56 pm • 15 0
Allison🇨🇦 & The Blowfish @allistronomy.bsky.social

I think it’s also really dangerous how they described “Looksmaxxing” in the article, and in the linked piece that describes the practice. It’s not about improving wellness or health; it’s a serious red flag that someone is into incel and other malicious internet communities.

aug 26, 2025, 6:12 pm • 7 0
solodeveloping.bsky.social @solodeveloping.bsky.social

People need to understand that the AI output could be overridden by an actual human dev. It's not magic, people; there is a software architecture behind it.

aug 26, 2025, 4:56 pm • 3 0