Yes, and it’s in large part because chatbots sell, whereas training transformer architectures on tokenized natural signs for applications in science would take much longer but wouldn’t enrich Sam Altman.
So instead of taking the time to understand transformers' capabilities as powerful architectures for data-driven approximation of control policies, we're consigned to this.
We're social animals, and while we *can* think about architectures for control policies ... we are much, much more likely to pay attention to things that look or sound like conspecific primates.
Which takes us straight back to the intentional stance.