Chris Dabbs, Ph.D. @dabbspsych.bsky.social

Amazing! Then you are exactly the person positioned to help me with my questions in this thread. Do you have anything to offer besides "these suck and are useless" when there is near-insurmountable pressure to incorporate these tools into workflow? Why do they suck? What is the alternative?

Sep 2, 2025, 7:43 PM

Replies

Eleanor Schille-Hudson @erbsh.bsky.social

Sorry to jump in, but I am genuinely curious: what are the sources of the "near-insurmountable pressure to incorporate these tools into workflow" in your life? Is your university or department putting some kind of pressure on you?

Sep 2, 2025, 7:50 PM

Eleanor Schille-Hudson @erbsh.bsky.social

(For context, I feel the ambient pressure of online hype, but I am a remote postdoc and so might be shielded from other kinds of pressure!)

Sep 2, 2025, 7:50 PM

Chris Dabbs, Ph.D. @dabbspsych.bsky.social

Oh definitely. The conversation has turned from "should you use this?" to "you will be using this, here's how to do so effectively, and if you don't you will be left behind." Faculty meetings, workshops at conferences, you name it. I'm not some AI stan; this is just the reality.

Sep 2, 2025, 7:55 PM

Eleanor Schille-Hudson @erbsh.bsky.social

Interesting. Is there any vocal pushback? (If so, what is the response to vocal pushback or even just voiced concerns?)

Sep 2, 2025, 8:04 PM

Chris Dabbs, Ph.D. @dabbspsych.bsky.social

I think a lot of the pushback I've seen falls into three camps: moral (e.g., environmental costs, intellectual property), cognitive (e.g., we will lose our own faculties), and fear born of ignorance (e.g., this is too complex and scary). Often a combination. The feedback is usually something like "it's happening, soooo 🤷"

Sep 2, 2025, 8:49 PM

Mark Madsen @markmadsen.bsky.social

I have different questions: why do you feel you will be left behind? What sort of tasks make you worried that you will fall behind people using such tools?

Sep 2, 2025, 10:01 PM

Spaceweft @spaceweft.bsky.social

Dr. Dabbs, perhaps it would help to think of A.I. (as it is now available to the public/for sale) as a clinical trial with no IRB, no informed consent, no confidentiality measures, and no peer review. Why in the world would you participate in such a trial?

Sep 2, 2025, 8:07 PM

Dr Katie Twomey @k2mey.bsky.social

Scite.ai, per its homepage, uses LLM architectures, which are probabilistic and therefore make errors by design.

Sep 2, 2025, 8:11 PM
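
To make "probabilistic, so errors by design" concrete, here is a minimal Python sketch with an invented toy distribution (the citation strings and probabilities are hypothetical, not any real model's numbers). A language model places nonzero probability on plausible-but-wrong continuations, so sampling emits them at some rate no matter how well training went:

```python
import random

# Hypothetical distribution over completions of
# "The citation for this claim is ..." (invented numbers for illustration).
next_token_probs = {
    "Smith et al. (2019)": 0.62,   # the real source
    "Smith et al. (2020)": 0.21,   # plausible, wrong year
    "Jones & Smith (2019)": 0.17,  # plausible, wrong authors
}

def sample(probs: dict[str, float]) -> str:
    """Draw one continuation in proportion to its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Over many generations roughly 38% of outputs are confidently wrong,
# not because of a bug, but because sampling IS the generation mechanism.
outputs = [sample(next_token_probs) for _ in range(10_000)]
errors = sum(o != "Smith et al. (2019)" for o in outputs)
print(f"fabricated citations: {errors / len(outputs):.1%}")
```

This is also the shape of the answer to the "behind-the-scenes safeguarding" question below: decoding tricks such as lower temperature or retrieval grounding can shrink the probability mass on wrong continuations, but cannot drive it to zero.
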
Chris Dabbs, Ph.D. @dabbspsych.bsky.social

By design or as a byproduct? Is there "behind-the-scenes" safeguarding that can take place to reduce these errors in output?

Sep 2, 2025, 8:15 PM

Dr Katie Twomey @k2mey.bsky.social

It's inherent to the design. To fix it you'd have to correct each individual error, or massively curtail what it does. And to get rid of bias in all of these models you'd have to comb through the dataset to remove biased input data, which would be prohibitively time-consuming, or restrict the...

Sep 2, 2025, 8:22 PM

Chris Dabbs, Ph.D. @dabbspsych.bsky.social

I've read about the bias in LLMs, particularly the trans erasure and anti-Black sentiment. I didn't know that tools built on them (like scite, elicit), which are positioned to do a specific *task* rather than generate knowledge, were subject to the same biases.

Sep 2, 2025, 8:32 PM

Dr Katie Twomey @k2mey.bsky.social

This is where my understanding runs out (I don't know if they're trained on a particular dataset), but if the input is biased, so is the output. And as an autistic person with knowledge of the autism literature, I wouldn't touch an LLM-based tool with a barge pole

Sep 2, 2025, 8:38 PM
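
A minimal sketch of "if the input is biased, so is the output", using an invented four-sentence corpus in place of real training data. The toy bigram model below has no bias of its own; like an LLM at vastly larger scale, it can only replay the statistics of the text it was given:

```python
from collections import Counter

# Invented corpus for illustration: the skew is in the data, not the code.
corpus = (
    "autistic people struggle . autistic people struggle . "
    "autistic people struggle . autistic people thrive ."
).split()

# Count which word follows the phrase "autistic people" in training.
continuations = Counter(
    corpus[i + 2]
    for i in range(len(corpus) - 2)
    if corpus[i:i + 2] == ["autistic", "people"]
)

print(continuations)
# Counter({'struggle': 3, 'thrive': 1})

# Greedy decoding reproduces the skew verbatim: nothing in the pipeline
# knows, or removes, what the training data over-represents.
print("autistic people", continuations.most_common(1)[0][0])
# autistic people struggle
```

Scaled up to a web-sized corpus, the same mechanism applies, and hand-filtering the skew out of the data is exactly the "prohibitively time-consuming" step described above.
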
Chris Dabbs, Ph.D. @dabbspsych.bsky.social

My experience with ChatGPT in particular shows a lot of these issues with the autism lit. I'm going to go read the AI pre-print that was linked to me and reflect on how I can more critically engage with the institutional pressure to use these tools.

Sep 2, 2025, 8:43 PM

Dr Katie Twomey @k2mey.bsky.social

...size of the dataset/artificially construct one, to the extent that there wouldn't be sufficient data to train and/or their behaviour would be unrecognisably different. Anyway, honestly, please do read Olivia's papers.

Sep 2, 2025, 8:22 PM

Dr Katie Twomey @k2mey.bsky.social

I know a fair bit about these things, but she and Iris are much more expert than me

Sep 2, 2025, 8:24 PM

Dr Katie Twomey @k2mey.bsky.social

Although there are many more issues, which Olivia details in her work

Sep 2, 2025, 8:11 PM

Iris van Rooij 🟥 @irisvanrooij.bsky.social

She has kindly made all her research openly available on her website. This may be a useful place to start 👇 bsky.app/profile/iris...

Sep 2, 2025, 7:46 PM

Simone Sprenger @simonesprenger.bsky.social

This is almost too kind. Literature search seems to have become a tool from the age of typewriters as well…

Sep 2, 2025, 8:42 PM