avatar
Pwnallthethings @pwnallthethings.bsky.social

Then retrain on the 99.85 billion documents you have left over, and you've created an LLM that is unaware of those labelled 150 million documents.

jun 21, 2025, 10:14 am • 75 3
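The filtering step described in the post above is mechanically simple. A minimal sketch in Python, assuming documents carry topic labels from some upstream classifier (the `Doc` type, `UNDESIRED_TOPICS` set, and label names here are all hypothetical, not from any real training pipeline):

```python
# Sketch of corpus filtering before retraining: drop any document whose
# topic labels intersect an "undesired" set; the survivors become the
# new training corpus. All names below are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass
class Doc:
    text: str
    topics: set[str] = field(default_factory=set)  # labels from an upstream classifier


# Hypothetical labels standing in for whatever topics get targeted.
UNDESIRED_TOPICS = {"topic_a", "topic_b"}


def filter_corpus(corpus: list[Doc], undesired: set[str]) -> list[Doc]:
    """Keep only documents with no undesired topic label."""
    return [d for d in corpus if not (d.topics & undesired)]


corpus = [
    Doc("neutral article", {"sports"}),
    Doc("flagged article", {"topic_a"}),
]
kept = filter_corpus(corpus, UNDESIRED_TOPICS)
# A model retrained only on `kept` never sees the flagged document,
# which is the point the thread is making.
```

Nothing here requires a sophisticated model; the only hard part is labeling at scale, which is exactly what an existing LLM can be paid to do.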

Replies

avatar
Aging Nobody @komisarnym.bsky.social

What about using an LLM not to skip undesired documents, but to rewrite them, so the new model thinks they say something else?

jun 21, 2025, 11:12 am • 2 0 • view
avatar
YetAnotherSteve @yaseppochi.bsky.social

You think they’re likely to do better than canceling research on “equity” and discovering they’ve bankrupted all business school finance departments?

jun 21, 2025, 10:21 am • 0 0 • view
avatar
Martha 🥄 near Seattle @marthawt.bsky.social

A+ post

jun 21, 2025, 7:25 pm • 0 0 • view
avatar
Pwnallthethings @pwnallthethings.bsky.social

You've created an LLM there that is /worse/ because it is (a) smaller, and (b) intentionally skewed, but it's also not so much smaller that it will be visibly useless, and it has knowledge of other topics, even if it is much dumber about things adjacent to the filtered topics.

jun 21, 2025, 10:17 am • 73 4 • view
avatar
Nigel Tufnel @misterrora.bsky.social

And then Elon will brand it as more “intelligent” and lots of people will believe it.

jun 21, 2025, 10:20 am • 1 0 • view
avatar
Pwnallthethings @pwnallthethings.bsky.social

It doesn't require any particularly sophisticated steps to do it, or need an LLM that is particularly intelligent to help you get there, which is also partly what makes this dangerous: if you have enough money and an AI machine, you can absolutely construct propaganda models. And he's saying he will.

jun 21, 2025, 10:19 am • 135 23 • view
avatar
Mike Fulk @mikefulk.bsky.social

I think you’re right that making propaganda models can be done. It already works that way if you know how to put the right things surrounding the LLM, though. All of the bad stuff is in there; you can get it out with the right input prompts. This also shows Musk doesn’t know how they work.

jun 21, 2025, 10:41 am • 27 1 • view
avatar
Pwnallthethings @pwnallthethings.bsky.social

Right, we've already seen several cases of people trying to create propaganda models in a few different ways. Musk did it already with system-prompt hacking (the famous "Grok is obsessed with white genocide" fiasco). DeepSeek did it too, but in post-training, objecting to Tiananmen Square references.

jun 21, 2025, 10:44 am • 49 2 • view
avatar
Pwnallthethings @pwnallthethings.bsky.social

You have some filtering which is more neutral: prior to the 2024 election, several online models tried to detect and bail when talking about politics, both to avoid brand risks and "election disinformation", and image generators try hard to detect attempts to create abuse material.

jun 21, 2025, 10:47 am • 31 1 • view
avatar
gadzuba.bsky.social @gadzuba.bsky.social

Which all gets back to the hostility Musk et al have for Wikipedia -- for all its faults, Wikipedia is a largely reliable source of verified facts and data that resists being used as a propaganda machine.

jun 21, 2025, 10:57 am • 5 0 • view
avatar
Martha 🥄 near Seattle @marthawt.bsky.social

A good reason to send them donations. I do.

jun 21, 2025, 7:24 pm • 0 0 • view
avatar
Zac Spitzer @zackster.bsky.social

quite an expensive fool's errand too, training ain't cheap

jun 21, 2025, 10:50 am • 1 0 • view
avatar
Pwnallthethings @pwnallthethings.bsky.social

but this is a much more direct attempt to create propaganda: the output of the model should seek to achieve political persuasion outcomes and be deployed to a site used by hundreds of millions of people, including political elites, journalists, and citizens

jun 21, 2025, 10:50 am • 56 6 • view
avatar
Mike Fulk @mikefulk.bsky.social

I agree with you that that’s what he is describing. What a lot of disinfo people fear most is that GenAI's only good trait is that it’s a convincing liar. And using it to wage propaganda war at Internet scale is its bread and butter.

jun 21, 2025, 10:56 am • 12 0 • view
avatar
Cornelia Kristiansen @cokristiansen.bsky.social

📌

jun 22, 2025, 4:18 pm • 0 0 • view
avatar
John M Danskin @johnmdanskin.bsky.social

Elon Musk wants to create an artificial Fox Viewer by filtering out training material that does not agree with the narrative. This creation of artificial stupidity may be successful, but it's not a contribution.

jun 21, 2025, 12:51 pm • 3 0 • view
avatar
Wiarton Willy @wiartonwilly.bsky.social

I am certain Grok was already a propaganda machine before the election.

jun 21, 2025, 1:04 pm • 3 0 • view
avatar
Cub Green @cubtrader.bsky.social

Musk should take a ride in one of his rockets.

jun 21, 2025, 10:23 am • 3 0 • view
avatar
Uffe @uffedimon.bsky.social

An LLM is actually a perfect propaganda machine: its real power is not the veracity of its output, but the convincing way it presents it. The training data is still the costly, time-consuming point, I would argue: consensus on what it should spit out, and internal integrity of the data presented.

jun 21, 2025, 10:39 am • 8 0 • view
avatar
Nigel Tufnel @misterrora.bsky.social

The real power is that most users believe that it possesses intelligence. Calling it AI is a misnomer that needs to stop. It’s a really good mimicry machine.

jun 21, 2025, 1:52 pm • 4 0 • view
avatar
Uffe @uffedimon.bsky.social

They believe it possesses intelligence exactly because its output sounds so convincing. Its output sounds like human speech, and with authority.

jun 21, 2025, 2:53 pm • 0 0 • view