Maxim Raginsky @mraginsky.bsky.social

Also, it is worth noting that, for the AI safety people, the ends (preventing the emergence of Skynet and bioweapons) justify the means (among other things, immiserating huge numbers of people by normalizing runaway gambling in the guise of "predictive rationality").

aug 29, 2025, 4:10 pm • 10 2

Replies

Maxim Raginsky @mraginsky.bsky.social

An example of this Grand Guignol: an ML researcher who pivoted to AI safety and fully embraced all of the x-risk doomsday narratives was demanding quantitative evidence for the claims that gambling at scale is harmful. What is that saying by Scott Alexander? "Beware isolated demands for rigor."

aug 29, 2025, 4:15 pm • 8 0
Ben Recht @beenwrekt.bsky.social

What I love about your example is that it could be any number of people and is basically the same attitude Nate Silver has. They are all bravely converging on exactly the same worldview, by pure deduction alone, apparently.

aug 29, 2025, 4:36 pm • 3 0
Scott Ashworth @soashworth.bsky.social

Imagining the utopia where takes from pundits like Silver and Shor were based on data.

aug 29, 2025, 4:54 pm • 1 0
Maxim Raginsky @mraginsky.bsky.social

Yep.

aug 29, 2025, 4:38 pm • 1 0
svateboje.bsky.social @svateboje.bsky.social

that’s a good line.

aug 29, 2025, 4:43 pm • 0 0
Maxim Raginsky @mraginsky.bsky.social

And deeply ironic as well, because Scott is a true believer in the AI safety cause and an adept of vulgar rationality à la Nate Silver.

aug 29, 2025, 4:47 pm • 1 0
svateboje.bsky.social @svateboje.bsky.social

ah. i am not too familiar w the case, but that kind of thing does happen. and one can go myopic on the "whole" also. lord, give me balance... when i need it. ha!

aug 29, 2025, 4:52 pm • 1 0