Steve Farrugia @fasterandworse.com

Because a calculator is a device made for a specific purpose, our trust in it would be immediately corrupted if it were to give an incorrect answer. It would be useless. If you take away the specific purpose, you also take away the criteria to determine whether something works or not.

jun 12, 2025, 8:11 pm • 124 33

Replies

Steve Farrugia @fasterandworse.com

🔗 bsky.app/profile/swil...

sep 16, 2025, 8:04 pm • 2 0 • view
Kim Crawley (she/her) 😷🍉 @stopgenai.com

I'm gonna add your blog to the list of resources on our website soon. stopgenai.com

sep 16, 2025, 8:08 pm • 1 0 • view
Steve Farrugia @fasterandworse.com

oh thanks, Kim!

sep 16, 2025, 8:08 pm • 1 0 • view
Steve Farrugia @fasterandworse.com

⬆️ 🔗 ⬇️ bsky.app/profile/prof...

sep 6, 2025, 3:03 pm • 2 0 • view
Steve Farrugia @fasterandworse.com

👆🔗👇 My god! They did it! bsky.app/profile/davi...

aug 20, 2025, 7:41 pm • 0 1 • view
Lillard's Maximum Perception Control @junipersbird.bsky.social

(forgive the "P" typo) bsky.app/profile/juni...

aug 20, 2025, 8:17 pm • 0 0 • view
Steve Farrugia @fasterandworse.com

I speak about it in this thread/video/audio thingy bsky.app/profile/keta...

aug 19, 2025, 6:25 pm • 0 0 • view
Steve Farrugia @fasterandworse.com

🔗 bsky.app/profile/pook...

jun 24, 2025, 9:12 am • 0 0 • view
Steve Farrugia @fasterandworse.com

filing this here 👆🔗👇 bsky.app/profile/astr...

jun 20, 2025, 7:33 am • 0 0 • view
Steve Farrugia @fasterandworse.com

Creators of LLMs benefit from the general-purposeness of their products because we have no constraints on what it means for them to *work*. They can always escape accountability because a "use case" is an abstract concept that the products will never be optimised for.

jun 12, 2025, 8:14 pm • 121 28 • view
Steve Farrugia @fasterandworse.com

bsky.app/profile/alfi...

jul 18, 2025, 11:29 am • 0 0 • view
Steve Farrugia @fasterandworse.com

It's important to recognise when they don't assert any purpose for *us*. It doesn't mean there isn't one for *them*.

jun 15, 2025, 10:33 am • 8 1 • view
Steve Farrugia @fasterandworse.com

👆🔗👇 bsky.app/profile/tant...

jun 19, 2025, 8:16 am • 3 0 • view
PJ Coffey - They/Them @homebrewandhacking@mastodon.ie @homebrewandhacking.bsky.social

I see as a person of taste and distinction that you might be interested in some real world examples illustrating your most excellent point. 😀 bsky.app/profile/home...

jun 21, 2025, 11:34 am • 2 1 • view
Steve Farrugia @fasterandworse.com

I see you are also a person of taste and distinction!

jun 21, 2025, 12:08 pm • 1 0 • view
PJ Coffey - They/Them @homebrewandhacking@mastodon.ie @homebrewandhacking.bsky.social

Oh you are too kind! 😀 I'm merely a common-or-garden despiser of AI but I do my best. 😀

jun 21, 2025, 12:22 pm • 1 0 • view
Steve Farrugia @fasterandworse.com

This should be filed here: bsky.app/profile/step...

jun 14, 2025, 5:25 pm • 1 0 • view
Steve Farrugia @fasterandworse.com

In short: It's a cult

jun 12, 2025, 8:24 pm • 23 0 • view
Steve Farrugia @fasterandworse.com

also I'm wearing some of @davidgerard.co.uk's merch in this vid

jun 12, 2025, 8:41 pm • 5 1 • view
Lillard's Maximum Perception Control @junipersbird.bsky.social

General purposelessness

jun 13, 2025, 5:19 am • 3 0 • view
aetherclaw.bsky.social @aetherclaw.bsky.social

the very act of training an LLM violates copyright law, operates under the exact same ethics as mass-scale serial rape, and violates countless human rights to an unthinkable degree; it must be treated internationally as a crime against humanity.

jun 13, 2025, 7:14 pm • 1 0 • view
B. Prendergast @renderg.host

OpenAI pulled this same trick with their “we’re not aiming to make a profit” line. (Altman literally said “We don’t know how to monetise this. We’re going to build an AGI and ask it”). Take away the success metric and now you can play with the success narrative.

jun 13, 2025, 5:58 am • 4 0 • view
B. Prendergast @renderg.host

Simon Wardley said the charlatans in tech (referring I think to consultants) always say things like “it’s going to get worse before it gets better” so that they can hedge reality. When it gets better, you thank them. When it gets worse, they say “well I did tell you this would happen”.

jun 13, 2025, 5:58 am • 4 0 • view
B. Prendergast @renderg.host

“Well we did *say* we weren’t going to make money”

jun 13, 2025, 5:58 am • 5 0 • view