Allen James @lalenjames.bsky.social

It's still early days for the technology, so it's hard to say how error-prone it's going to be. If next-gen LLMs are still hallucinating sources, then yeah, people should be made aware that LLMs are not infallible. But ultimately it's a critical thinking / media literacy issue, just like in the past.

Jun 20, 2025, 12:52 am

Replies

Allen James @lalenjames.bsky.social

How do you know whether what's on Wikipedia is true? You shouldn't treat it as infallible, but using it as a starting point, considering what it has to say, and going from there is a reasonable thing to do. Same kind of thing IMO.

Jun 20, 2025, 12:53 am
wtpho @wtpho.bsky.social

we were told from the get-go to approach wikipedia w caution and scrutiny, and there are transparent threads to follow for the info there, but that's not how LLMs are, and the resource cost of having them unreliably perform what we already have tools for is disproportionate to their usefulness

Jun 20, 2025, 2:44 am
wtpho @wtpho.bsky.social

also wikipedia not only allows for legitimacy through sourcing, but for credit as well

Jun 20, 2025, 2:48 am
Allen James @lalenjames.bsky.social

My point was that, even considering what you said, you can't treat Wikipedia as infallible, and there have been many controversies in the past over unreliable/biased information published there. You have to develop your own critical thinking skills to interpret the information. Same with AI.

Jun 20, 2025, 9:12 pm
Allen James @lalenjames.bsky.social

Many Wikipedia editors have pretty blatant biases, so they'll go find a credible-seeming, cherry-picked source for what they want to have published, but if you really scrutinize their contributions to an article, they don't hold up to Wikipedia's stated ideal of neutrality.

Jun 20, 2025, 9:18 pm