Carl T. Bergstrom @carlbergstrom.com

2. AI is shifting what scientists do and what they value far faster than individual scientists—let alone scientific institutions—are able to adjust and adapt their value systems. Even when they work well, AI tools disrupt the delicate balance of costs and rewards that structure scientific activity.

aug 19, 2025, 5:01 am • 213 15

Replies

Carl T. Bergstrom @carlbergstrom.com

3. To get through the AI revolution intact (not to mention other crises such as the GOP's all-out war on science, knowledge, and expertise) we need to be able to view science from outside and reflect on how it is changing and how we might wish to engineer rather than passively accept those changes.

aug 19, 2025, 5:04 am • 215 20 • view
Kevin Barron @kevinbarron.bsky.social

AI Devolution? In any case, this seems like a forthcoming book, or at least an essay.

aug 19, 2025, 5:45 pm • 0 0 • view
Kevin Barron @kevinbarron.bsky.social

Importantly, I think we should avoid furthering the misrepresentation of this phenomenon as AI. As your colleague points out in “The AI Con” (Bender & Hanna), LLMs are text-extrusion machines. The only intelligence involved is the decimation of our own, via an eco-disastrous parlor trick.

aug 19, 2025, 6:23 pm • 0 0 • view
Carl T. Bergstrom @carlbergstrom.com

4. Science of Science, with its data-intensive, descriptive approach, is useful but insufficient. Metascience, with its laser focus on the replication crisis in psychology, likewise. We need broader humanistic thinking about the social processes involved in the construction and valuation of knowledge.

aug 19, 2025, 5:07 am • 250 27 • view
Paul N. Edwards @avastmachine.bsky.social

STS. You’re talking about STS. Small but mighty. Also, a lot of modern media studies.

aug 19, 2025, 1:18 pm • 5 1 • view
mj @malthusjohn.bsky.social

📌

aug 20, 2025, 3:49 am • 0 0 • view
Carl T. Bergstrom @carlbergstrom.com

5. Today I read a paper by @sabinaleonelli.bsky.social and Alexander Mussgnug that I think illustrates this point perfectly. philsci-archive.pitt.edu/24891/1/Phil...

Convenience AI
Sabina Leonelli & Alexander Martin Mussgnug

Abstract: This paper considers the mundane ways in which AI is being incorporated into scientific practice today, and particularly the extent to which AI is used to automate tasks perceived to be boring, “mere routine” and inconvenient to researchers. We label such uses as instances of “Convenience AI” — that is situations where AI is applied with the primary intention to increase speed and minimize human effort. We outline how attributions of convenience to AI applications involve three key characteristics: (i) an emphasis on speed and ease of action, (ii) a comparative element, as well as (iii) a subject-dependent and subjective quality. Using examples from medical science and development economics, we highlight epistemic benefits, complications, and drawbacks of Convenience AI along these three dimensions. While the pursuit of convenience through AI can save precious time and resources as well as give rise to novel forms of inquiry, our analysis underscores how the uncritical adoption of Convenience AI for the sake of shortcutting human labour may also weaken the evidential foundations of science and generate inertia in how research is planned, set-up and conducted, with potentially damaging implications for the knowledge being produced. Critically, we argue that the consistent association of Convenience AI with the goals of productivity, efficiency, and ease, as often promoted also by companies targeting the research market for AI applications, can lower critical scrutiny of research processes and shift focus away from appreciating their broader epistemic and social implications.
aug 19, 2025, 5:11 am • 265 49 • view
Nicola Low @nicolamlow.bsky.social

📌

aug 19, 2025, 5:35 am • 0 0 • view
opk @opk.bsky.social

📌

aug 19, 2025, 5:16 am • 2 0 • view
Carl T. Bergstrom @carlbergstrom.com

6. These are the conversations that we need to be having if we want to be part of a science that continues to further our understanding of the physical world, rather than turning into a high-stakes parlor game wherein human jockeys ride AIs to fame and (localized, limited) glory.

aug 19, 2025, 5:14 am • 171 17 • view
Vanadis Saves @vanadissaves.bsky.social

Thank you for this and I hope you are on one of our university's AI committees.

aug 19, 2025, 5:18 am • 2 0 • view
Carl T. Bergstrom @carlbergstrom.com

7. The paper looks at how researchers might hand off boring "drudge work" such as data collection and cleaning to AI agents, and what could (and IMO will) go wrong as they do so. Here's a key passage.

Critically, we argue that the very framing of certain AI applications around themes of productivity, efficiency, speed, convenience, and ease can conflict with their thoughtful and rigorous situated consideration — itself a somewhat laborious exercise. Precisely because convenience is the key appeal of some AI tools, researchers who adopt these methods have little incentive to question them and investigate in detail the epistemic implications of adopting them – or privileging them over other approaches. Convenience AI can be dangerous because, motivated by speed and ease, it can instil complacency into the use of given tools and lower the depth and frequency of critical scrutiny.
aug 19, 2025, 5:18 am • 213 46 • view
cometsncats.bsky.social @cometsncats.bsky.social

I assumed scientists would see the instances of lawyers and judges including fabricated legal citations, case details, etc. within legal documents as glaring red flags against implementing AI in research, especially for the "boring" yet fundamental aspects. Why risk your data integrity and results?

aug 19, 2025, 5:38 am • 5 1 • view
Greg Gloor @ggloor.bsky.social

Just finishing up a lecture on how choices made during the drudge work are among the most impactful for downstream analysis. Handing this off to AI would be catastrophic.

aug 19, 2025, 12:06 pm • 7 0 • view
Carl T. Bergstrom @carlbergstrom.com

8. As I see it, because scientists themselves determine which outputs of science are valued (through what gets published, what gets funded, what drives promotion, etc.), when a new technology offers convenience we are vulnerable to rewarding its use uncritically, for many of the reasons brought up in the paper.

aug 19, 2025, 5:20 am • 128 7 • view
Carl T. Bergstrom @carlbergstrom.com

9. As a result, science can swing, pendulum-like, far past its equilibrium point. Researchers gravitate to the convenience technology, it gets rewarded out of proportion, and science stalls even as most practitioners think they are working in a time of unprecedented productivity.

aug 19, 2025, 5:21 am • 145 11 • view
Carl T. Bergstrom @carlbergstrom.com

10. I could go on and on. But instead, read the paper. And record this as my vote for having more philosophers, and fewer computer scientists, on the Provost's Expert Committee For AI Futures and Institutional Destiny.

aug 19, 2025, 5:24 am • 233 20 • view
Alvaro Falcone @pjwm.bsky.social

Ok, I've read it. I deleted seven versions of comments. Think all I want to actually say is that I love you, crow man. And I'd rather not argue about the paper.

aug 20, 2025, 3:56 am • 2 0 • view
Alvaro Falcone @pjwm.bsky.social

Aw man, I hate when I have to read the paper. I guess I have to read it sober too? Oppressive.

aug 20, 2025, 1:45 am • 1 0 • view
Sarah Rajec @sarahrajec.bsky.social

Looks interesting--I clicked through and was immediately offered an AI summary of the paper, because of course I was.

image
aug 19, 2025, 1:40 pm • 3 0 • view
type1civilian.bsky.social @type1civilian.bsky.social

I am graduating as a biology major and philosophy minor this December in part because I think science and philosophy need each other. Also, they are both really cool subjects, especially when they overlap.

aug 20, 2025, 2:15 pm • 2 0 • view
Jeffrey Shallit 🇺🇦 @shallit.bsky.social

I can't think of a worse idea than this. My experience is that philosophers have little understanding of basic knowledge such as the theory of computation, and hence come to wildly wrong conclusions about AI. 1/2

aug 21, 2025, 9:01 am • 1 0 • view
Jeffrey Shallit 🇺🇦 @shallit.bsky.social

As Francis Crick wrote, "Philosophers have had such a poor record over the last two thousand years that they would do better to show a certain modesty rather than the lofty superiority that they usually display."

aug 21, 2025, 9:01 am • 2 0 • view
Steve Huntsman @stevehuntsman.bsky.social

Well, Paul Thagard is a huge exception. OTOH, I’m still waiting to read David Chalmers’ “Could a Large Language Model be Conscious?” because I’m waiting for a time when I need a good laugh.

aug 21, 2025, 1:19 pm • 1 0 • view
Paul Konstant @progpk.bsky.social

📌 Philosophy of science addressing changes (and destruction) of the scientific enterprise by bad tools (LLMs), horrible policies, and disinformation. Will return after reading the linked paper

aug 19, 2025, 7:33 am • 2 0 • view
John Mashey @johnmashey.bsky.social

And for AI & ethics, I'd strongly recommend @shannonvallor.bsky.social (en.wikipedia.org/wiki/Shannon...). Years ago, when she was still at Santa Clara U, she gave a great talk for the Board at @computerhistory.bsky.social.

aug 19, 2025, 6:02 am • 9 1 • view
Stephen Curry @scurry.bsky.social

Interesting thread. Will read the paper, though I’d take issue with your characterisation of metascience as fixated on replication. While that may be true of some corners, there are broader, historically-rooted perspectives (eg this from RoRI assets.publishing.service.gov.uk/media/685bcd...)

aug 19, 2025, 11:57 am • 1 0 • view
Peter @pboothe.bsky.social

PECFAFID around and find out.

aug 19, 2025, 8:35 am • 0 0 • view
Unicorns Happen @silvy777.bsky.social

📌

aug 19, 2025, 1:04 pm • 1 0 • view
Alex Merz 🇺🇸🇨🇦🇺🇦 @merz.bsky.social

Well, scientists may not be determining what gets funded any more. It's quite clear that Vought/Bhattacharya et al. are determined to destroy peer review.

aug 20, 2025, 1:21 am • 1 0 • view
Carsten Timmermann @ctimmermann.bsky.social

But AI is not the first technology that comes to mind in this context. Isn’t that what happened in genomics, during the Human Genome Project, with the mechanisation and computerisation of sequencing? I remember sequencing ‘by hand’. Man, that was tedious.

aug 19, 2025, 6:28 am • 1 0 • view
Andrew Richards @andrewwr235.bsky.social

Human beings had done the hard work of transcription, error correcting, gap filling, etc. They then encoded that specific knowledge into algorithms to successfully speed the process up. AI offers the promise of convenience so that human beings don't have to do the hard graft first.

aug 19, 2025, 9:49 am • 3 0 • view
Ian Sudbery @iansudbery.bsky.social

I don't think that anyone would argue that things that increase productivity are necessarily damaging, only that our incentives mean that we risk letting things that are damaging through if they are seen as improving productivity.

aug 19, 2025, 9:28 am • 2 0 • view
Kenji @elementaleucology.com

If people aren't interrogating why and how it is that "drudge work," which is integral, becomes incidental as soon as it is seen as "feminine" (i.e., when the labor is feminized), can there be a meaningful, affirmative shift in science's "inquisitional culture" that obviates the need for Convenience AI?

aug 19, 2025, 5:26 am • 2 0 • view
Yohan J John @dryohanjohn.bsky.social

Nice! Just published a closely related essay. yohanjohn.substack.com/p/to-model-b...

image
aug 20, 2025, 2:22 pm • 1 0 • view
pakemon.bsky.social @pakemon.bsky.social

There is an old saying: "Everybody believes the data, except the engineer who collected the data."* I wouldn't be comfortable with pure AI-agent data collection. (*The second part is "Nobody believes the theory, except the engineer who formulated the theory.")

aug 19, 2025, 5:58 am • 2 0 • view
Atul Haria (immigrant) @atulharia.bsky.social

Christ, it's so easy (as a human) to make mistakes in quality control and cleaning of data. And it's certainly not as easy as setting parameters and treating everything outside them as 'false' - that's how you miss black swan events. You only realise this after you've done the work.

aug 19, 2025, 8:44 am • 1 0 • view
Jeffrey49 @jeffrey49.bsky.social

A tough one for sure. How can we leverage AI to assist, not to take over?

aug 19, 2025, 5:18 am • 3 0 • view
Hemlock Autumn @hemlockautumn.bsky.social

AI is not taking over, because AI does not exist. The issue is not AI. It's the aggressive push of "faster, cheaper, go, go, go!!", which would use and abuse any tool. LLMs are a tool, and I do think that they are not a tool that needs to be used much outside of a few niche areas...

aug 19, 2025, 6:22 am • 2 0 • view
Jennie Dusheck @solenodon.bsky.social

I receive notices from organizations of science writers. It's normal for these to include universities, national labs, etc. that advertise short meetings and educational opportunities, usually about molecular biology, neuroscience, and similar. Lately, they are all about using and reporting on AI.

aug 19, 2025, 2:26 pm • 2 0 • view
Hemlock Autumn @hemlockautumn.bsky.social

Which is being aggressively pushed by people and corporations for the buck.

aug 19, 2025, 5:32 pm • 2 0 • view
Jennie Dusheck @solenodon.bsky.social

REALLY aggressively!

aug 22, 2025, 10:12 pm • 1 0 • view
Corinth Fallwind @fallwind.bsky.social

Also important to consider that it's not just ease of use and convenience that's driving people to adopt these tools prematurely. Most of what I've seen among my colleagues is a fear of being left behind. Our department meetings are basically LLM show-and-tell for bragging rights at this point.

aug 19, 2025, 5:25 am • 6 0 • view
randomanybody.bsky.social @randomanybody.bsky.social

📌

aug 19, 2025, 6:56 am • 0 0 • view
Kyle Jones @ckylejones.bsky.social

📌

aug 19, 2025, 9:08 am • 0 0 • view
Phil Swatton @philswatton.bsky.social

📌

aug 19, 2025, 6:11 am • 0 0 • view
Justin M @justinmm2.bsky.social

Something I wonder about is whether we're going to let AI be our self-imposed sophon (à la Three-Body Problem). I worry there are too many people (especially those with a commercial motive) willing to trade off verifiability and reproducibility for speed, and that not enough people understand just how bad that is.

aug 19, 2025, 5:16 am • 5 0 • view
Alex Mussgnug @mussgnug.bsky.social

Glad you liked our work!

aug 20, 2025, 9:31 pm • 1 0 • view
Krista Koeller @kristalerista.bsky.social

The act of data collection is so important. My current research direction was sparked by something I noticed while collecting data for another project. It’s also how new scientists get trained!

aug 20, 2025, 1:30 pm • 6 0 • view
Dileep Rao @leepers.bsky.social

If AI is used to help sort data, organize, and even stress-test ideas, it's possibly useful. If it is instead used to flatter notions or to do the rigorous work of true intellectual examination in our place, the laziness of it will out.

aug 19, 2025, 5:12 am • 2 0 • view
Eduardo Schenberg, PhD @eeschenberg.bsky.social

academic.oup.com/book/43074

image
aug 20, 2025, 12:34 am • 1 0 • view
lucindacatchlove.bsky.social @lucindacatchlove.bsky.social

I suspect Latour's work is useful here because these issues are also about culture.

aug 19, 2025, 5:23 am • 6 0 • view
Carl T. Bergstrom @carlbergstrom.com

100%

aug 19, 2025, 5:24 am • 2 0 • view
Dan Davis @bindlestiff.bsky.social

How many universities are left that emphasize broad-based education in the humanities? How many public schools even address education beyond “basic skills?” There’s more than one reason we find ourselves here.

aug 24, 2025, 9:45 pm • 0 0 • view
Marco Palombi @ocrampal.bsky.social

We must transcend the limitations of the scientific method: ocrampal.com/chasing-our-...

aug 25, 2025, 10:47 am • 0 0 • view
WIRoadTripper @jsgjames.bsky.social

Thank you for these threads. Posts like these are one of the reasons I use social media.

aug 19, 2025, 6:16 pm • 1 0 • view