Ted Underwood @tedunderwood.me

So I have two questions: 1) How recently and how often have you tried using a model for real work? 2) How do you interpret evidence that a quarter of Americans use AI for work weekly? Do we imagine that they haven't noticed it can make errors? That the errors don't matter much?

aug 23, 2025, 1:53 pm • 8 0

Replies

Kevin Bonham @kevinbonham.com

But has anyone demonstrated a sustainable business model? $20/month is fine to help me fix CSS on my lab website and help me write documentation, but when I give Claude code novel coding problems that I really need, it still struggles. And that $20/month is the heavily subsidized price, isn't it?

aug 23, 2025, 2:41 pm • 1 0 • view
Ted Underwood @tedunderwood.me

I'm sure prices and business models will change. Streaming channels cost more than they used to, I've noticed. ;) What you'd have to believe, to buy the Zitronite bubble doctrine, is "no sustainable business model exists or can exist." Otherwise what's being said is just "growth may slow."

aug 23, 2025, 3:09 pm • 9 0 • view
Kevin Bonham @kevinbonham.com

It seems to me that the possibility space encompasses authoritarian AGI overlords, abundance utopia, and complete collapse of the industry, with the median outcome some kind of muddling along. Anyone claiming high certainty about any outcome seems misguided.

aug 23, 2025, 5:16 pm • 0 0 • view
John Q Public @conjurial.bsky.social

It’s annoying that “bubble” means both a) it’s all fake and will disappear, and b) there’s some irrational exuberance, 1999-style. (a) is clearly false and (b) might be right

aug 23, 2025, 4:32 pm • 4 0 • view
Ted Underwood @tedunderwood.me

The appeal of the language *is* the conflation. We're only talking about this because people really really want to suggest that (b) probably means (a). If the claim was just "asset values will decline," I don't think nearly as many people would care.

aug 23, 2025, 4:34 pm • 6 0 • view
John Q Public @conjurial.bsky.social

Fair! It’s a good rhetorical move when you put it that way

aug 23, 2025, 4:35 pm • 3 0 • view
Ted Underwood @tedunderwood.me

And then it becomes a motte-and-bailey thing, as I appreciate more fully after reading comments on this thread. People clearly hope the technology itself will have to be scaled back, but those hopes are expressed as a kind of tacit penumbra around a motte that is just OpenAI valuations.

aug 23, 2025, 5:22 pm • 5 0 • view
John Q Public @conjurial.bsky.social

I think people also genuinely don’t understand you can have a bubble by being right about what’ll happen and expecting it too early as well as by expecting more than is actually going to happen

aug 23, 2025, 5:43 pm • 3 0 • view
Maximilian Hoffmann @mh123.bsky.social

Why? There were clearly sustainable business models during the dotcom boom, and the bubble still popped. If you want to call this "growth slows," fine, but it seems euphemistic.

aug 23, 2025, 3:27 pm • 1 0 • view
Ted Underwood @tedunderwood.me

If what people mean by "bubble" is just "asset values will decline and companies will go bankrupt," then I fully agree with that.

aug 23, 2025, 3:31 pm • 2 0 • view
Maximilian Hoffmann @mh123.bsky.social

The question, then, is whether you think the consequence of this suddenly happening to companies with a central position in the current economy will be really bad or just creative destruction as usual. I think the only thing Ed Zitron is saying is that it will be potentially quite bad...

aug 23, 2025, 3:43 pm • 1 0 • view
Ted Underwood @tedunderwood.me

I went back and looked carefully at Zitron's rhetoric. It's slippery, but I think he is clearly claiming that the technology as it now exists is unsustainable. It's not just a claim that certain businesses will go bankrupt.

aug 23, 2025, 3:58 pm • 3 0 • view
🐜 🍯 @anthonybecker.bsky.social

Yo @edzitron.com we are trynna figure out if you are a millenarian prophet or not

aug 23, 2025, 4:01 pm • 0 0 • view
Ted Underwood @tedunderwood.me

I've got him blocked, so that is not really going to help.

aug 23, 2025, 4:02 pm • 1 0 • view
🐜 🍯 @anthonybecker.bsky.social

LOL I see. Shame. I was hoping the three of us could do some illicit substances together on my non-existent podcast

aug 23, 2025, 4:04 pm • 2 0 • view
Ted Underwood @tedunderwood.me

I don't particularly care about authorial intent, but I do think this tells us something important about the appeal of the bubble theory to its wider audience. Sam Altman losing money may matter to some people, but for most people the appeal is the idea that the tech will be scaled back.

aug 23, 2025, 4:07 pm • 1 0 • view
Ted Underwood @tedunderwood.me

And this is really not a situation where no sustainable business model exists. Investment in training might go down; the market might commoditize; but we're not in a situation where inference itself is unsustainable.

aug 23, 2025, 3:13 pm • 8 0 • view
Chris J. Karr @cjkarr.bsky.social

The problem isn't that inference wouldn't be profitable - that's the easiest part to make profitable - but rather how long it takes for inference's profits to pay for the MUCH more expensive training process and R&D that makes the model work in the first place...

aug 23, 2025, 3:41 pm • 2 0 • view
Chris J. Karr @cjkarr.bsky.social

... If investment stopped tomorrow due to the bubble popping, would there be a market in 2030 for AIs trained on data that ends in 2025?

aug 23, 2025, 3:42 pm • 3 0 • view
SE Gyges @segyges.bsky.social

it would be pretty cheap to keep models up to date on events without starting from scratch, if r&d budgets get real scarce we will have gradual cheap iterations of current models in 2030

aug 23, 2025, 3:54 pm • 6 0 • view
SE Gyges @segyges.bsky.social

the primary cause of the current cash burn is basically a race for higher quality and lower inference cost, which everyone is concerned is or might be winner-take-all

aug 23, 2025, 3:55 pm • 7 0 • view
SE Gyges @segyges.bsky.social

like. the first round of this already happened and openai won decisively. they got a 6+ month market lead and became synonymous with ai. the cash burn is everyone trying to win or at least be fast to follow if that happens again

aug 23, 2025, 3:56 pm • 6 0 • view
SE Gyges @segyges.bsky.social

so, either a much better/cheaper llm (every company trying to do this) or something that is as paradigm shifting as the llm itself was (a lot less of them trying to do this)

aug 23, 2025, 3:59 pm • 6 0 • view
Ted Underwood @tedunderwood.me

See "the market might commoditize," in my post above. You're asking a question about how profitable the business will be, and I'm very open to the possibility that the answer is "only modestly, and as the market matures the pace of new training will decline."

aug 23, 2025, 3:44 pm • 4 0 • view
Chris J. Karr @cjkarr.bsky.social

Are you open to the possibility that this general line of business - as currently constituted - might not even be viable on its own? My overall question isn't how profitable the companies are, but rather whether they exist at all...

aug 23, 2025, 3:49 pm • 2 0 • view
Chris J. Karr @cjkarr.bsky.social

... Look at the recent spate of failed hardware AI companies as a smaller example of this phenomenon.

aug 23, 2025, 3:49 pm • 2 0 • view
Ted Underwood @tedunderwood.me

I don't care much about business models. Maybe OpenAI goes bankrupt and bigger players like Microsoft and Google take back over. Maybe China plays a bigger role. What I care about is the social role of the technology, and I also suspect that's what "bubble" believers care about, at bottom.

aug 23, 2025, 3:55 pm • 3 0 • view
Chris J. Karr @cjkarr.bsky.social

As one of the "bubble" guys, I believe that LLM technology will be with us for the foreseeable future. That's a genie that doesn't get put back in the bottle, and has sufficient niche uses to stick around in some form. The business models are important - esp. for a social scientist - because...

aug 23, 2025, 4:10 pm • 2 0 • view
Ted Underwood @tedunderwood.me

This is an even better summary of the situation:

aug 23, 2025, 3:15 pm • 9 0 • view
Jon Becker @jonbecker.bsky.social

Isn't what we are seeing more like some loose version of the Gartner Hype Cycle?

aug 23, 2025, 3:20 pm • 2 0 • view
Julia Boechat @juliaboechat.bsky.social

I would say data about the number of people who use AI seems very unreliable, because tech companies are jamming AI everywhere. I would definitely not call it "evidence" of AI usage.

aug 23, 2025, 3:36 pm • 0 0 • view
Ted Underwood @tedunderwood.me

If you dig into that survey, at the link, they told respondents what should be counted in their answer — and it wasn't incidental use like "did you do a Google search and get an AI Overview?" It was deliberately using ChatGPT, Claude, Midjourney, etc.

aug 23, 2025, 3:39 pm • 4 0 • view
Dan Punday @dan-punday.bsky.social

I'm a luddite on this point (I know, there's a whole history of the term), but I believe that people are for thinking. That using AI to summarize email or draft reports is a sign of failure in work culture. I write my emails, I write my articles and rec letters, I read essays and take my own notes.

aug 23, 2025, 2:03 pm • 0 0 • view
Ted Underwood @tedunderwood.me

One of the sources of miscommunication on this, I think, is that bad advertisements + notorious student use have given people who don't use AI the impression that it is mostly used for writing. I use AI 10-30 times a day and almost never ask it to write something for me.

aug 23, 2025, 2:05 pm • 5 0 • view
Dan Punday @dan-punday.bsky.social

Fair enough. And I'm convinced by AI use in things like programming (which smart people seem to really believe works) and things like possible drug research. Great! But the valuation of these companies rests (I think) on the belief in AI application to all parts of our work life.

aug 23, 2025, 2:16 pm • 0 0 • view
Ted Underwood @tedunderwood.me

I think the explanation is that most sources of information are fallible — coworkers are fallible, Google's ten blue links were always fallible — and we survive because we have pretty decent verification methods. A term paper is a terrible use because there is no verification. But if you're using +

aug 23, 2025, 1:58 pm • 7 1 • view
Ted Underwood @tedunderwood.me

a model to answer a how-to question, you get instant feedback about whether the proposed solution works. Another common situation: questions where the user doesn't know enough yet to write a search query, but will be able to search and verify once they know what terms to use.

aug 23, 2025, 2:01 pm • 7 1 • view
Andy Famiglietti @afamiglietti.bsky.social

The how-to question is indeed verifiable, but the AI value for how-to questions is dubious to me, because I have never encountered one that the web (and especially YouTube) did not readily answer. The value-add seems to be the *ethos* of AI, which people take as trustworthy, which is... troubling.

aug 23, 2025, 2:08 pm • 1 0 • view
Ted Underwood @tedunderwood.me

I've gotten suggestions where — not only had *I* already failed to find that answer on the web, but I see recent published papers that also overlooked it, and it's likely the better solution, although obscure. (Specifically an issue with OCR correction.)

aug 23, 2025, 2:25 pm • 5 0 • view
Andy Famiglietti @afamiglietti.bsky.social

Do you think that's a common outcome? It seems kinda marginal to me? The other issue is economic: AI doesn't make those solutions, it scrapes them, and if it doesn't send traffic back out to the folks who do make them, we've got the aggregator crisis of the web on steroids (this may be happening already)

aug 23, 2025, 2:43 pm • 0 0 • view
Andy Famiglietti @afamiglietti.bsky.social

I know what you're going to say: "I've caught you Andy, how can AI be sapping traffic if it isn't useful." And I think people do *like* AI as search, because it's "friendly" and chatty, but then people *like* getting in a giant SUV and driving everywhere, and look where that got us

aug 23, 2025, 2:49 pm • 0 0 • view
Ted Underwood @tedunderwood.me

Re "marginal," I'm going to be honest: it puzzles me that anti-AI discourse takes "hundreds of millions of people are probably just self-deceived about utility" to be the natural Occam's razor solution to tension between a critic's own experience and social evidence. +

aug 23, 2025, 2:53 pm • 5 0 • view
Ted Underwood @tedunderwood.me

Can hundreds of millions of people fool themselves about a tool they use weekly? Yes. It *can* happen. Is that the obvious Occam's razor inference? Not to my eyes. I would only be driven to that conclusion as a last resort, after looking for other forms of real utility they might be getting.

aug 23, 2025, 2:53 pm • 3 0 • view
Andy Famiglietti @afamiglietti.bsky.social

I'm not sure *at all* about the use-pattern stats, and I don't think it's just me. The data are all over the place. I'm not a denier, but I'm also not sold on the value of the consumer-facing LLM chatbot product.

aug 23, 2025, 2:56 pm • 0 0 • view
Ted Underwood @tedunderwood.me

Re: aggregator crisis: that's potentially a real and troubling issue with journalism. In this case we're talking about open-access papers on arXiv, whose authors get "paid" via citation, not via clicks, and o3 gave me the citations — so it actually increases the benefit to those authors.

aug 23, 2025, 2:56 pm • 0 0 • view
Andy Famiglietti @afamiglietti.bsky.social

Sorry, after a bit of a step back its clear to me we're arguing over a very small difference in how we're responding to a set of anticipated future outcomes we mostly agree about. I also don't think LLMs collapse under their own weight (the current investment bubble chasing AGI *does* but that's...

aug 23, 2025, 3:20 pm • 0 0 • view
Andy Famiglietti @afamiglietti.bsky.social

... another matter. I think you're right that people find LLMs *attractive.* This annoys me, I'm not sure they *should* find them attractive, but now we're re-enacting various kinds of "false consciousness" discourse, and about as likely to find a satisfying resolution as any of them were...

aug 23, 2025, 3:20 pm • 0 0 • view
Andy Famiglietti @afamiglietti.bsky.social

Journalism, photography, blogging, etc... a lot of the content those end users want distillations of isn't academic

aug 23, 2025, 2:59 pm • 1 0 • view