lyndsay @magpie.tips

Is this just the line between LLM and RAG?

sep 2, 2025, 3:17 pm

Replies

lyndsay @magpie.tips

Like from a simple "these are probability-based word machines" standpoint, an LLM is still pretty likely to give similar responses to a question, responses that are the same *in substance*. But if the LLM is trained to inject randomness that fluctuates MoM...

sep 2, 2025, 3:19 pm

lyndsay @magpie.tips

... that seems to me like it's going to lead searchers to the absolute rock bottom of the internet by mid-2026. This thread is half SEO primal scream, half "how do we live in a society with no shared reality" primal scream

sep 2, 2025, 3:20 pm

lyndsay @magpie.tips

As an SEO, if I am told that LLM search is going to be disincentivized from showing my domain repetitively to searchers... then I simply don't care. Subject me to the winds of fate at that point. I'm sticking to what I can do that will bring reliable, measurable volumes of traffic to my pages.

sep 2, 2025, 3:22 pm

lyndsay @magpie.tips

Why would I bust my ass trying to get perplexity senpai to notice me in April of 2026 when I could be working on long tail pages that already bring in 4x the traffic?

sep 2, 2025, 3:23 pm

lyndsay @magpie.tips

I *think* the function they are talking about in this post is something like "Temperature" as described here. A commenter explains it in this thread as "If the temperature is set appropriately, this results in more varied writing without going off the rails."

sep 2, 2025, 3:27 pm
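
(For anyone unfamiliar with the term, here's a minimal sketch of what temperature actually does at the sampling step. Everything in it is illustrative: the logits are made-up scores over a four-token vocabulary, whereas a real LLM emits one such score vector per generated token.)

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample one token index from temperature-scaled logits."""
    rng = rng or np.random.default_rng()
    # Dividing logits by the temperature before softmax sharpens the
    # distribution (T < 1, closer to greedy argmax) or flattens it
    # (T > 1, more random picks).
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                    # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [4.0, 3.5, 1.0, 0.2]                 # invented scores for 4 tokens
for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=4) / 1000)
```

At T = 0.2 the top token dominates the samples; at T = 2.0 the picks spread across the whole vocabulary. That spread is the "more varied writing" knob.
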
lyndsay @magpie.tips

The problem with this, of course, is when people are using that function to understand a condition within reality, like "Are vaccines safe?": a topic with an enormous quantity of writing saying "no they aren't" that has zero medical authority but dominates the topic on sheer volume.

sep 2, 2025, 3:29 pm

lyndsay @magpie.tips

But if we're incentivizing LLMs to be quirky girls, like, this is so unhelpful from a basic utility standpoint. It's very funny how often posts about LLMs include statements like this that make the technology fundamentally unworkable, and just breeze past that lol

sep 2, 2025, 3:30 pm

Dana Fried @leftoblique.bsky.social

Yeah, it relates back to simulated annealing, which has been a technique in AI and ML models for decades; the idea is that if an optimizing model uses strict and deterministic descent to come to a result, it can get stuck in local minima.

sep 2, 2025, 3:43 pm
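
(A toy version of the annealing idea; the objective and cooling schedule here are invented for illustration, not anything a real system uses. Strict descent from x = 8 settles into the local minimum near x ≈ 3.8, while the occasional accepted uphill move usually lets the search reach the global minimum near x ≈ -1.3.)

```python
import math, random

def f(x):
    return x**2 + 10 * math.sin(x)   # wavy bowl with several local minima

def anneal(x, t=5.0, cooling=0.995, steps=5000):
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)
        delta = f(candidate) - f(x)
        # Always accept improvements; accept uphill moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if f(x) < f(best):
            best = x
        t *= cooling
    return best

print(anneal(x=8.0))   # usually lands near -1.3, the global minimum
```

The LLM analogy is loose, but it's the same intuition: a controlled amount of accepted randomness keeps the process from always collapsing onto the single nearest answer.
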
Dana Fried @leftoblique.bsky.social

But it's an art, and specific to both the model and the problem. Not enough heat and you get a bad answer; too much and you get nonsense. I don't know that you can pick a single value for all LLM queries, if for no other reason than you have no idea what the shape of the surface is.

sep 2, 2025, 3:43 pm
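
(One way to see the "shape of the surface" point concretely, with invented logit vectors: the same temperature barely changes a confidently peaked next-token distribution but flattens an already-uncertain one toward uniform, so a single global setting behaves very differently from query to query.)

```python
import numpy as np

def softmax(logits, t):
    z = np.asarray(logits, dtype=float) / t
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

peaked = [9.0, 2.0, 1.0, 0.5]   # model is confident (one clear answer)
flat   = [2.0, 1.8, 1.7, 1.5]   # model is unsure (many plausible answers)

for t in (0.7, 1.5):
    print(f"T={t}: peaked -> {softmax(peaked, t).round(2)}, "
          f"flat -> {softmax(flat, t).round(2)}")
```
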
lyndsay @magpie.tips

And I think this goes back to using an LLM as a sum-of-general-knowledge search engine. With those volumes and the training you're able to do on it, an incentive or temp function that results in 70-90% domain drift over 6 months is... wild

sep 2, 2025, 3:47 pm

Dana Fried @leftoblique.bsky.social

I don't make the rules. (But I don't disagree.) I think the thing is... with deterministic or semi-deterministic ranking algorithms, even if you don't know the secret sauce, you can still understand them. A stochastic system is stochastic.

sep 2, 2025, 3:53 pm

lyndsay @magpie.tips

Yeah! You've just summarised my thread so well with this, haha

sep 2, 2025, 3:55 pm