(2/2) ... and I'm sorry to say that anyone using LLMs for peer review should resign, get out of the game, and let someone take their job who's going to do it seriously.
Yes. I also happen to think anyone using an LLM to provide written feedback to students is, unbeknownst to them perhaps, risking their entire job by doing so. At some point sentiment turns here, particularly among students themselves, who will want human feedback even if they used an AI.
Absolutely. If you can't even be bothered to read your colleagues' work, what does that say about your attitude to the whole intellectual exercise?
Per posts of @dbellingradt.bsky.social and @ayates.bsky.social (that you might have seen earlier): we're already way past that point, I'm afraid.
That's also a fairly logical outcome, actually, given how universities have a) increasingly favoured quantity over quality and b) squeezed their labour force to the brink.
Vision of the glorious future: Students use AI to write papers to be graded by AI, while professors write papers using AI to be reviewed by AI. Hiring, tenure, and promotion decisions are then made by AI. Those of us who actually think, read, and write are deemed unproductive.
Unproductive except in the sense that they will remain legally accountable when the AI tool screws up, of course. It naturally can't be the sellers of the tools or the university leadership that takes the fall for those problems, can it?
I think the glory to the admins is that AI promises a future university with no professors.
And to further complicate matters, apparently so much money is sunk into AI it may well lead to a bubble: futurism.com/ai-agents-fa...
This is a very interesting study, thanks. According to the summary (one click through), the quality of the tools is not a reason for failure, but a) expecting too much of them and b) difficult internal tool development are. The first conclusion goes against big tech marketing efforts; the second supports them.
AI is not paying off: moneyweek.com/investments/...
I always like the comparison with the dotcom bubble, which we are of course discussing via a technology, and through the very types of platforms, that absolutely failed back then, but that we now can't operate without and that support the largest companies in the world 😉
Absolutely failed, meaning in a financial overinvestment sense. The dotcom bubble was not about the technology not working, but about hype building up and advertisement revenues going bonkers for individual websites. Now the large companies make money through basically creating digital gated communities and "owning the market" in which people do transactions, like Amazon, or being the middleman between people carrying advertising (content creators) and people wanting to advertise (people wanting to sell a product), like Google/YouTube/Meta, or a mix of both, like Twitch (/also Amazon). 🤔
For me, all of this is more reminiscent of the (Gartner?) hype cycle of technologies as applied to GenAI, and we're definitely close to the "peak of inflated expectations", hopefully moving to the "trough of disillusionment" soon. But this time with the added moral/theft question of the data sets.
The moral question of extreme power use was also present, even more so, for NFTs and blockchain.