It’s really fascinating to me that in quite a few countries, getting people to accept that “cameras” are in fact more reliable and less biased than human beings at spotting crime, is so much harder than “get people to treat the LLM like a human”.
Completely changed what I think about the long-term viability of Waymo; I used to be quite fatalistic that people would choose “a much more dangerous human” over “an automated vehicle”.
And it turns out (from personal experience) that the biggest challenge for Waymo and co isn’t the trolley problem, it’s the random traffic cone problem.
Coming soon: pedestrians must carry a transponder on them at all times in case a self-driving car crashes into them and gets upset.
Already so when you walk the floor in an amazon distribution center
Cameras don’t talk to you. I don’t think it’s more than that. At the end of the day LLMs reliably beat the Turing test - which highlights the flaws of that particular assessment, but it is the main difference that explains most of this nonsense.
But the primary aim of an LLM is to sound human, so it isn't surprising really. Even down to tweaks like not choosing the most probable next word in a sequence, but a random one from amongst a list of candidates, just so it doesn't repeat the same answer.
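That tweak is usually called top-k (or temperature) sampling. A minimal sketch of the idea, assuming a hypothetical `sample_next_token` helper and a toy probability table rather than a real model:

```python
import random

def sample_next_token(probs, k=3, temperature=1.0):
    """Pick the next token from the k most likely candidates
    instead of always taking the single most probable one,
    so repeated prompts can yield different completions."""
    # Sort tokens by probability and keep only the top k.
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    tokens, weights = zip(*top)
    # Temperature reshapes the distribution before sampling:
    # values < 1 sharpen it toward the most likely token,
    # values > 1 flatten it toward a uniform choice.
    weights = [w ** (1.0 / temperature) for w in weights]
    return random.choices(tokens, weights=weights, k=1)[0]

# Toy next-token distribution (illustrative, not from a real model).
probs = {"the": 0.5, "a": 0.3, "an": 0.15, "cat": 0.05}
print(sample_next_token(probs, k=3))
```

Run it twice and you may get different words back, which is exactly why the same question to a chatbot rarely produces the same answer verbatim.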
Ok hear me out: we mark Speed Cameras and ULEZ cameras as "AI Powered Safety Devices" but don't change a damn thing about the OS or how they work.
The Power of Words