Articles in financial media outlets have been reporting that companies successfully implementing AI projects haven't seen any increase in profits. In other words: spend a bunch of money and waste resources to make the same amount of money.
Yeah, I know what's happening in programming. People compared the code output of companies using and not using genAI. There were gains, but quite slim. By design these algorithms tend to return output that resembles the solution, with quite nasty errors hidden in it.
"They" talk about demand for cybersecurity experts is going to increase in the next couple of years. I half-wonder if it's because they are planning for the increase in vulnerabilities the automated AI coders are going to introduce.
I just hope the companies that care about this aren't stupid enough to use genAI where it counts. There were immediately many papers and reports from experts saying genAI is just terrible at code security.
I suspect these gains are only assessed by one criterion: "time to apparently finish the task successfully." Now I'd like the same assessment, but taking into account testing, security, maintainability, evolvability...
Huh. Sounds familiar, like nfts, and blockchain, and AR...
Follow it back and there's been lots of stuff like this, it's just hard to remember. Remember "Push Content?" Everyone was going to have a client that companies could directly send things to. Remember "Active Desktop?" Your wallpaper was going to be a web page.
Well, there's a big surprise!
The Klarna business model, but they’re blaming their failure on their customers. Classy!
Yeah, I think the WSJ reported on the exact numbers and it wasn’t pretty.
But, but ... Grok Porn !!
MIT had it at a 95% failure rate to produce positive ROI.
I know there are going to be a ton of terrible consequences because we all always pay for it when Wall Street does this stupid shit but a *95% failure rate* is extremely funny lol
Artificial Investment.
For those interested.
This article fails to note the external cost of everyone's electricity rates increasing to meet the needs of AI data centers.
Because the article is focused on what COMPANIES are losing, which is what drives investors to change their behavior. They don't care if the cost of electricity overall goes up; that just means their energy tech investments will see a spike in valuation.
The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.
This line made me laugh cuz like, this is exactly how I think AI should be, targeting one single task for backend processes and mundane office work shit. Artists and writers and customer service reps etc can't really be replaced in the long run, it's all stuff that takes a human touch
So what is it outsourcing, and which processes? Lmao, the paperwork bots that pretend to talk should be doing paperwork. Might even be more profitable?
Yeah, and that's how this all started. Before AI became a buzzword (again, this has happened like 10 times over the last 60 years) it was being used to great effect as a tool to eliminate mid-level managers. And that is where it will continue to be used.
Fr! I was talking about AI and its limitations as a tool with some younger artists, and I dropped the fact that generative AI isn't new: Google released DeepDream back in 2015, and image-generation experiments go back even further, so even the newest iteration of the tech is roughly a decade old. AI is all hype
Wild how the article still insists "oh, but it's not the AI, it's the implementation," while also insisting the 5% are all "rags to riches" stories in an era where that myth is even more obviously propaganda.
A lot of the people in NANDA work for the very small AI startups the report touts as being so successful. 🙃
@stephruhle.bsky.social
Thank you, I was looking for that article this morning after seeing someone mention it on TikTok, which is what inspired my post this morning: I didn't find it, but saw all these other articles.
"Startups led by 19- or 20-year-olds, for example, “have seen revenues jump from zero to $20 million in a year,”" - Ok so how does that compare to comparable startups (founders have same background, same VC funding, etc) that aren't using it?
I wonder, if and when the AI bubble pops, would we see all those dissolved jobs come right back in demand? (I don't actually know one way or another)
Just thinking of how much of it has been driven by "OK, we dumped a whole lot of money into this, so now you're going to buy it," trying to use it as a value-add to raise prices, and it isn't working.
Entire economy propped up by the sunk cost fallacy.
All of it
I've seen a ton of hype and people looking at it, initially seeing the risk of NOT getting into it, doing a little research and pilot projects and realizing that people HATE agentic workflows and results aren't great unless you have a massive data set to train it off of.
Companies are realizing that the commercial AIs don't adapt and are stuck using them as-is, since they're not able to adjust to their processes. Both the big AI companies and AI adopters are in the sunk-cost phase: "We've dumped X billion already, if we just dump X billion more it'll all turn out OK".
there are some things it's decent at, but it's a fuckin' roulette wheel every time. even if some of the output is decent, you need skilled humans to verify and sift through the mountain of shit the machine spits out
And as long as you have to have the skilled humans you might as well let them choose how they complete their tasks, because in many cases they find that the prompt/review/prompt refinement/review process takes longer than it would to just do the thing.
i'm a creative professional so i have a searing hatred for these tools in the abstract, but even i admit they have utility. it's pretty fuckin' impressive what some of them can do. that said, they have to be used in the right way, not jammed into every angle of production because management says so
Unfortunately, "letting a person direct a photorealistic movie of a media figure" and "drawing something well-understood in a well-understood style" are two of the things it does surprisingly well.
i'd say "reasonably well, within certain parameters" but yeah, point taken it certainly does it well enough to fire a bunch of people, so far as the C-suite's concerned
That’s true—I was mainly thinking that the deepfake stuff works so well it’s scary and it’s being used a lot for social engineering scams.
i don't think we've even seen the full power of that disinfo pipeline unleashed yet. you can run a local model these days on consumer-grade hardware that can create video clips good enough to fool trained folks, let alone your average Facebook boomer. next few election cycles gonna be NASTY
Yeah, and a lot of the conversations when it's pointed out go like: "It's fake. You see how this person has 8 fingers on his hand on an arm that's sticking out of his hip?" "Well, it sure sounds like something he WOULD have said" ...Instead of being outraged that someone tried to trick them.
I'm also imagining that it's hard for people to separate how good it is at generating derivative creative work from a prompt (definitely theft, but the results are impressive) from something that largely annoys people and enshittifies business processes.
Or less, because lots of users will be turning away from you.