Boris Lenhard @borislenhard.bsky.social

I am not asking for any timeless ideal, but for things most people can agree upon and build on. Historically, scientists have done quite well in that regard. Yes, ideas have been disputed, but many were subsequently resolved – because of a shared idea of what it means to resolve them.

aug 20, 2025, 12:09 pm • 0 0

Replies

Branden McEuen @bmceuen.bsky.social

Ok, but there will be instances where people don't agree on what steps to take going forward, such as with the introduction of a brand-new technology. At that point, it can fall to those who study how science works to offer one possible path for integrating it.

aug 20, 2025, 12:22 pm • 1 0
Boris Lenhard @borislenhard.bsky.social

Scientists will judge any new technology on what it enables them to do, or to do better, in their work, and how - because, in the end, that is how science works. If somebody suggests to me how to do things better, or how to avoid pitfalls, it must be better in a way that is relevant to me and my colleagues.

aug 20, 2025, 12:34 pm • 0 0
Branden McEuen @bmceuen.bsky.social

I don’t think anyone is saying scientists *must* adopt the position of this paper. Philosophers aren’t coming down from atop their proverbial perches to tell the lowly pleb scientists how to do their work. These are just their suggestions. Do with them what you will.

aug 20, 2025, 12:47 pm • 0 0
Boris Lenhard @borislenhard.bsky.social

I agree. I am just disagreeing, based on historical data, with the claim that philosophers of science have special insight or skills that will make scientists do science better.

aug 20, 2025, 1:06 pm • 0 0
Boris Lenhard @borislenhard.bsky.social

I am more interested in philosophy of science, and have read more of it, than (by a conservative estimate) 95% of my colleagues, and I see no evidence that this puts them at any disadvantage as scientists relative to me. Many are thoughtful and brilliant scientists who excel at spotting problems.

aug 20, 2025, 1:10 pm • 0 0
✨110% Unbiased ✨ @scipie.bsky.social

So what would have needed to change about the convenience-AI paper? That seems like a fairly specific category they’re defining.

aug 20, 2025, 12:15 pm • 1 0
Boris Lenhard @borislenhard.bsky.social

Nothing. I tend to agree with most of it. But it is in essence an opinion piece - well written and referenced, but still. It implies that there are intrinsically better and worse ways of doing science, and that AI will encourage worse ways. Other philosophers might argue the opposite just as well.

aug 20, 2025, 12:22 pm • 0 0
Branden McEuen @bmceuen.bsky.social

I think the argument is much more specific, in that they’re saying the selling point of AI (its ability to make certain tasks convenient) fosters an uncritical use of AI over time that can lead to significant issues in conducting research.

aug 20, 2025, 12:33 pm • 2 0
Boris Lenhard @borislenhard.bsky.social

Who will recognise those issues and their consequences when they occur? Historically, it's the people with domain knowledge, not philosophers. Uncritical use of AI is not unlike uncritical use of statistical methods by those who don't understand them well - there's peer review/feedback for that.

aug 20, 2025, 12:39 pm • 0 0
Branden McEuen @bmceuen.bsky.social

Sure, but to use a medical analogy: peer review is like curative medicine, while this paper is more like preventative medicine. There’s space for both here. There isn’t One True Way to do science or to be mindful of AI use in research.

aug 20, 2025, 12:52 pm • 1 0
Boris Lenhard @borislenhard.bsky.social

In the long run, peer review is also preventative. You learn things from it for the future, and you learn when it is necessary to ask knowledgeable colleagues for help before planning, let alone submitting, your next work.

aug 20, 2025, 1:00 pm • 0 0
✨110% Unbiased ✨ @scipie.bsky.social

Yeah, I’m thinking back to Nate Silver’s tweet yesterday about how journalists would make better scientific peer reviewers than scientists - which, no. But this paper doesn’t seem to be trying to do that. It’s just starting to think through guardrails.

aug 20, 2025, 12:56 pm • 2 0
Boris Lenhard @borislenhard.bsky.social

A basic requirement for sensible peer review is that the reviewer 1) knows enough to understand the basic idea; 2) is aware of what they do not understand (they do not have to understand everything); 3) is able to follow the arguments in the response to reviewers. Most journalists would fail all three.

aug 20, 2025, 2:03 pm • 1 0
✨110% Unbiased ✨ @scipie.bsky.social

I agree there could probably be a little more analysis, but at the same time there’s not really a problem with a critical account. I had an old-school PI who had us make our reagents (which were easily purchasable) from scratch, because people no longer understood how they were made and why (and it was cheaper).

aug 20, 2025, 12:41 pm • 1 0
✨110% Unbiased ✨ @scipie.bsky.social

It 100% made the science clearer, but it was also a massive waste of time after the first few rounds, and it caused delays, headaches, and possible inaccuracies and inconsistencies, with students running around trying to make things last minute and squeeze the last drop out of some reagent that

aug 20, 2025, 12:42 pm • 1 0
✨110% Unbiased ✨ @scipie.bsky.social

hadn’t been reordered. So I get that it’s sometimes hard for scientists to see themselves in the description of “just doing stuff for convenience,” but at the same time I don’t see why there’s not a place for some critical analysis.

aug 20, 2025, 12:44 pm • 1 0
Branden McEuen @bmceuen.bsky.social

Yeah, I don’t mean to suggest that technologies of convenience are inherently bad or anything like that, and I don’t think this paper is doing that either. But it is already established that uncritically using LLMs and the like can have negative consequences for users’ critical-thinking skills.

aug 20, 2025, 12:50 pm • 1 0
Branden McEuen @bmceuen.bsky.social

And sometimes it may be hard to see that taking place while you’re in the middle of doing your research and trying to get published, writing a grant proposal, managing a lab, and all those other things that make convenient tech like AI all the more attractive.

aug 20, 2025, 12:50 pm • 1 0
Boris Lenhard @borislenhard.bsky.social

In the end, scientists themselves will decide which future research papers are interesting, based on what they can build on, and will spot problems caused by overreliance on AI, just as they have historically spotted problems with overreliance on other approaches.

aug 20, 2025, 12:25 pm • 0 0
Boris Lenhard @borislenhard.bsky.social

A big issue is the training of future scientists. AI can be used to make it easier for them to learn (which is good) or to make it easier for them to avoid learning (which is bad). With today's students, the latter is becoming a MASSIVE problem, calling for big changes in how learning is assessed.

aug 20, 2025, 12:28 pm • 0 0