There’s this disconnect between thinking something represents AGI and something being useful. An LLM can be *incredibly* useful even if it isn’t “intelligent”.
But because it mimics a person so well, we can't help but imagine it's thinking like a person does.
There’s a terrible semantic gulf between skeptics who are overly dismissive and vendors who are overzealous. LLMs help with my day-to-day work and with brainstorming (I’m a PhD scientist), even if they aren’t intelligent per se. I hope the strident beliefs on both sides can be moderated.