Boris Lenhard @borislenhard.bsky.social

Who will recognise those issues and their consequences when they occur? Historically, it's the people with the domain knowledge, not philosophers. Uncritical use of AI is not unlike uncritical use of statistical methods by those who don't understand them well - there's peer review/feedback for that.

Aug 20, 2025, 12:39 pm

Replies

Branden McEuen @bmceuen.bsky.social

Sure, but to use a medical analogy, peer review is like curative medicine while this paper is more like preventative medicine. There’s space for both here. There isn’t One True Way to do science or to be mindful of AI use in research.

Aug 20, 2025, 12:52 pm
Boris Lenhard @borislenhard.bsky.social

In the long run, peer review is also preventative: you learn from it for the future, and you learn when it is necessary to ask knowledgeable colleagues for help before planning, let alone submitting, your next piece of work.

Aug 20, 2025, 1:00 pm
✨110% Unbiased ✨ @scipie.bsky.social

Yeah, I’m thinking back to Nate Silver’s tweet yesterday about how journalists would be better scientific peer reviewers than scientists, which, no, but this paper doesn’t seem like it’s trying to do that. It’s just starting to think through guardrails.

Aug 20, 2025, 12:56 pm
Boris Lenhard @borislenhard.bsky.social

A basic requirement for sensible peer review is that the reviewer 1) knows enough to understand the basic idea; 2) is aware of what they do not understand (they do not have to understand everything); and 3) is able to follow the arguments in the response to reviewers. Most journalists would fail all three.

Aug 20, 2025, 2:03 pm