cara ara~ @hyperfekt.net

Perplexity doesn't hallucinate with state-of-the-art models on regular searches (in my test, 0/10 papers were hallucinated). Their deep research model is not SOTA; it's a mediocre model that keeps it cheap for them to run. I don't think any deep research mode but OpenAI's is really very good right now.

aug 29, 2025, 4:58 pm • 0 0

Replies

Dr. Casey Fiesler @cfiesler.bsky.social

I’m glad to hear that my case is an unusual inaccuracy, but there isn’t a single LLM that never hallucinates, including Perplexity.

aug 29, 2025, 5:45 pm • 6 0
cara ara~ @hyperfekt.net

I mean, obviously that's an approximation... Nobody should seriously argue against that, but the real question is whether it remains a significant concern for practical purposes, and in this case I would argue it is not.

aug 29, 2025, 6:03 pm • 0 0
Dr. Casey Fiesler @cfiesler.bsky.social

I guess that depends on what your practical purpose is!

aug 29, 2025, 6:10 pm • 4 0
cara ara~ @hyperfekt.net

One obviously shouldn't be using it to treat patients without verifying all the information. But I don't think a general recommendation to always verify all information for all systems is appropriate anymore.

aug 29, 2025, 6:19 pm • 0 0
Dr. Casey Fiesler @cfiesler.bsky.social

It is for any context in which it matters whether the information is correct. For example, I probably should not assign my students papers to read that do not exist, which was the example here.

aug 29, 2025, 7:16 pm • 5 0