I find this shit (OP, not Ed's post) so frustrating - of *course* LLMs can't sort through a chat, can't do calculations, can't refer to a PDF, because *that's not what they're for*; they're random text generators.
If you're asking yourself "is this a job for an LLM?" try rephrasing it to "is this a job that a ouija board could do?" and if the answer is "no," then don't get an LLM to do it. ffs, guys
I'm not surprised LLMs can identify cancer cells with higher accuracy than humans. We trained pigeons to do that. So another test question is, "could you credibly train a pigeon to do that?"
I don't think an LLM can identify cancer cells, can it? Machine learning might be able to, but surely not an LLM?
Fair point, although we're arguing against people who say they believe LLMs think and will invent a Mars colony by 2030. But yeah, we should be accurate.
LLMs can't; the other kind of "AI" can. One of the problems we have right now is that we've got two kinds of giant machines simulating intelligence in different ways. One of them is kind of useful and the other isn't, but the not-useful one is the one that fakes being a person, and we keep conflating them.