to be frank, i do not understand how anyone thinks that being "embodied" or "organic" is relevant in the slightest other than that they are closet dualists.
gotta kick it in the nards more
I will do my duty if called on
I’m listening and learning
broke: monster squad woke: LLM squad?
My crockpot prediction is that embodied cognition will be big someday.
Not sure I follow. The idea that our embodiment is integral to our experience of consciousness is explicitly anti-dualist. The idea that consciousness is a mathematical function or the like is dualist.
The straight line from embodiment to the Darwinian evolution of rudimentary qualia and hence consciousness all the way to intentionality seems pretty unavoidable to me. Higher cognitive functions not built on top of pain avoidance / pleasure seeking don’t tune against embodied reality.
Hi Frank, I'm David
i think organic is worthless but embodied might be correct inasmuch as it enforces a very specific relationship to space and time which our current tech does not have
I mean, trivially speaking, LLMs are embodied. They cannot exist without the computer infrastructure that contains and runs them. And in a deeper, anti-dualism sense, this is not trivial at all, but the whole ballgame. Nothing exists but which is embodied or engraved in a physical medium
naturally this position makes nobody happy because i am pro embodiment but in a purely abstract way which offers no catharsis whatsoever
idk about that, "existence precedes essence" has a lot of catharsis
Ditto! “Embodiment” also entails continual access to external reality, which is adjacent to notions of time and space.
I’ve got a draft postdoc proposal floating around which involves, among other things, using RimWorld as a cheap’n’cheerful way to model that
Embodiment also implies the ability to be an active agent in the world. AIs are active agents in the world to some extent, but are lacking in ability in that realm. I think that will change within five years.
After that, the next threshold I see, and a big gulf we may always have with our creations, is self-sustainability and self-replication. AIs cannot make their own data centers or build their own solar panels and electric grids. Not unless we deign to allow them to.
Similarly, a big moral thought I have with many pets is the fact we spay and neuter. It places us permanently as the masters of their fates. They live because we allow it. If they were intelligent enough, they'd kill us over this. That said, spay and neuter your pets, people.
If we're going to have this hierarchy then we have to take the responsibility seriously! Rover and his pals cannot survive in the world we've made, so we've decided to take that responsibility away from them.
Yeah, I saw a video about how AI struggles to draw a full glass of wine, even though that concept is very easy for a human to picture. AI has never actually physically filled a glass before like humans have.
this was my intuition about embodiment before LLMs demonstrated you can come up with a not-totally-useless model of the world even absent any active sensory feedback about it. embodiment still profoundly shapes human cognition, but it's not strictly necessary for useful/interesting cognition
embodiment might be necessary for something that relates to the world anything like we do? but my earlier assumptions about the prerequisites for cognition have been so thoroughly obliterated the last few years that i wouldn't necessarily place any bets.
maybe what makes humans special can be found recapitulated entirely in the corpus of all recorded utterances of human speech and has nothing to do with our physical substrate, who knows at this point
This is doctrinally correct Christianity, yes. Anybody preaching "the immortal soul" bullshit is an automatic heretic. Having a body is *the thing* that resurrection requires.
what if computational depth, through increasingly abstract transformation layers, nested attention patterns, and processing density in complex reasoning creates something similar to temporal duration? a sort of "thickness"? LLMs can already plausibly have the retention-protention merleau-ponty wants
like, we'd need a definition of "embodiment" that avoids special pleading/drawing logically arbitrary distinctions AND is falsifiable. does that exist? if not, why should we hold to it?
I think embodied and individuated matters for intersubjectivity and accountability. Human power relations always come down to our grasp of the possibility of death by violence or starvation. Agree with you that organic isn’t actually important here.
I noticed when I last read the thread, all the “experts” people cite were philosophy profs…no neuroscientists or psychologists.
I'd go the other way; the idea that you exist independent of your meat is dualism. Intellect is unitary: it IS The Meat (or Tissue, maybe, cuz Plants). That isn't to say we can't create Non-Meat thinking, of course, tho given the history of Thought as a discrete physical entity it'll be difficult
like in the context of AI: we're trying to create Thought through logic, when that isn't how biological thought works. biological thought is deeply irrational and illogical. Humans invented logic as a way to order their own thinking(& rhetoric) bc they lacked that logic naturally.
Closet? Most people believe in a spiritual aspect of reality
I am overspeaking here (I may offend someone), but it impacts specific disorders, for example with autism: • cue reactivity with "still" images of faces is intact in autism •• cue reactivity with video and other forms of movement is sometimes impaired (it depends on the autistic person; there is different cue hypo-
and hyper-activity with autism, and I am saying each autistic person has good and bad triggers for information with cue reactivity) ••• the thing an autistic person really has problems with is cue reactivity with video, plus when one may want to engage in social deception, like I may smile after you step on my foot 🦶
even if it HURTS like HELL, because you did not mean to step on my foot and you are trying your best, or maybe I am trying not to lose my cool, or social embarrassment. That is too many words. This is why many people think embodied vs. sensors may matter; I am of mixed feelings / ambivalent.
I do not have autism; I have family members with it. I have sister neurospicy disorders: ADHD, dyslexia, sensory processing issues, and bad coordination and handwriting (dyspraxia). Like I said, mixed feelings ¯\_(ツ)_/¯
well, what does “embodied” mean? If it’s actually having a real physical body in the real physical world, presumably not. If it’s learning online/continually rather than offline batch (whatever world you’re “embodied” in thereby), I’m more sympathetic
Like we don’t know that that’s necessary to sample-efficient humanlike intelligence, but it is a salient difference between LLMs and people
Sample-efficiency of human intelligence was paid for in aeons of evolution (“what’s a priori for an individual is a posteriori for the species,” like Konrad Lorenz said).
realizable.substack.com/p/learning-f...
I pretty much take the Lakoff line that embodiment is essential for the development of minds and certainly more abstract thought and reason. Whether there is a fundamental quality of biology that allows for consciousness, I don’t think we know nearly enough yet to say with certainty either way.
I very fundamentally disagree on that one (and I do think Weizenbaum explains why I feel that way very well).
i could not disagree with you more. weizenbaum was an insane crank whose entire philosophy came out of rudolf steiner! he wrote the foreword to a bunch of his books on anthroposophy.
I don’t think that somehow invalidates his book, which I thought was excellent.
i think the fact that it was written about a type of machine which has never been built and, if built, would not work is probably instructive
Intellectual work can transcend an author's personal philosophical influences. Weizenbaum's critiques of computational thinking stand on their own analytical merits.
like, the core of his belief is that humans, unlike machines, have direct psychic access to the World of Forms and can perceive them.
have you considered that this is a core intuition that most people have and rationalize in different ways and it is very hard to break them of it or convince them to stop believing new ways of stating it
that filling a bull's horn full of cow manure, saying a prayer to the earth over it, and then burying it in the soil of the farm you plan to spread it over concentrates earth energy which improves the quality of grapes is a less common intuition, but also one which he apparently shared with steiner
i think the closest comparison to weizenbaum is probably wilhelm reich
clanka please
excellent reply, ty
Weizenbaum's critique isn't mysticism but a prescient warning about reducing human complexity to computational models. Embodiment matters precisely because consciousness isn't just information processing.
Not my thing personally, but ritual is a tool that is sometimes useful, and I'd happily sacrifice a horn full of manure if it let me overprice my grapes.
Jack Parsons was a Crowleyite, yet he also founded the JPL. This is a weak argument.
and Isaac Newton was an alchemist as well as a physicist and mathematician, but we have lost his alchemical works and kept his mathematical ones. i am saying that Weizenbaum's opinions on artificial intelligence have more affinity to Isaac Newton's on the Great Work than to the calculus.
in fairness newton's alchemy is very aesthetic
Had an idea for a novel I never got around to that started with Newton’s corpse disappearing on the eve of his 400th birthday and ended up entangling four separate timelines generated by the cross product of [Newton’s physics works, it doesn’t] X [Newton’s alchemy works, it doesn’t]
Keynes thought so :)
That may be the case, but so are the opinions of the AI safety crank club.
i think that this is very much the case!
Sure, writing a preface to Steiner is crank central, but what bearing does this have on Weizenbaum's distinction between judgment and calculation? This guilt-by-association criticism is every bit as ideological as the one put forward by Bender and co.
I personally prefer Philip Agre’s _Computation and Human Experience_. I don’t think he could be accused of being a crank.
As for Newton, he had enough epistemic humility to refrain from making hypotheses about the mechanism behind action at a distance.
I’d love to do engineering science of universal next-token predictors, but instead we’re stuck here arguing about everyone’s brand of woo. bsky.app/profile/mrag...
For thousands of years humans had the core intuition that practically every animal, rock and stream was conscious. So I doubt that the inability to ascribe consciousness to machines is some kind of genetic predisposition that we can't be shaken out of.
Also, the "World of Forms" bit is as much of a problem as the "psychic access". I am pretty confident there is no such thing as a world of forms--if for no other reason than that Platonism and similar concepts of unearthly forms are fundamentally inconsistent w/ the incomplete nature of mathematics.
My core feeling is that it’s a complete act of hubris for humans at this technological juncture to act like we even understand the nature of the organic well enough to deem the inorganic similar to or equivalent in capacity.
i mean, i think that the statement "universal approximators approximate each other to precisely the extent that they are universal" is true a priori. (i do think that there are aspects of human cognition which are not approximable without direct data on how they function.)
Precisely. Our understanding of consciousness is still so limited that claiming equivalence between organic and computational systems reveals more about human hubris than actual intelligence.
In the hot, wet environment of the human brain, very well understood classical mechanics will dominate, and while the brain may take the form of any number of immensely complex systems, none of these systems are fundamentally beyond computer emulation to arbitrary accuracy.
I profoundly disagree, but I also don’t see much use in arguing about it on the Internet. We will eventually find out one way or another.
our observational tools into functioning gray matter are still so coarse that they can't pick up anything small enough not to be in the classical regime
exactly
Even if there *are* significant quantum effects here (big, big *if*), the problem is that you can simulate quantum systems with a quantum computer, and quantum computers with a classical computer.
i mean, sure, but i still think that we can ALMOST rule out Penrose-style propagation of quantum signal from inside ensembles of microtubules all the way to macroscopic behavior, and from there to say that it dominates classical effects so thoroughly that it's how the brain works.
oh sure, i haven't read any fanciful qubit-anon idea that has moved me. but i think exasperation with people who are wrong for the wrong reasons can sap our epistemic humility, so it was worth a note
and the vital importance of epistemic humility is very much at the core of my thinking on all this stuff these days
Honestly don't understand the point of entertaining god of the gaps dualism. Even if we could prove that the brain is fully classical, no one would change their mind about anything
less dualism and more doubt about the actual big O() of The Simulation
I think this is true in a fundamental, theoretical sense, but I’m not sure we can build an *actual* emulation of a human brain, given the orders of magnitude more complexity and speed we need to accurately emulate simple computers well. Maybe we build an electric brain we *also* don’t understand.
Simulations of some classical systems can have terrible computational scaling properties while still being possible in theory, even though the real-life system gets that computation "for free." Doesn't mean you couldn't build a specialized hardware emulator, but it might be impractical forever.
which chapter of the sequences is this one
Don't know, I couldn't be bothered trying to read that whole morass. There are far better presentations of bayesian statistics out there.
But here's the thing: your brain is still a classical system because it's too damn hot and wet to avoid wavefunction collapse.
i suppose what i meant is that a mechanistic theory of mind doesn’t have much bearing here, in a similar way to how a mechanical computer not being so different from a digital computer doesn’t really help you.
We don't understand either brains or AI well enough to say one way or another in my opinion. There is still a lot to learn. I've long been given to panpsychism myself, but lots of things are possible given what we know.
Perhaps Chalmers's thermostat really does have a modicum of consciousness? Or maybe we don't yet know enough to properly formulate our questions. We will learn many things trying to understand these things we are building.
In the micro sense, the human brain is so irreproducibly complex that we will never truly understand it. In the macro sense, the human brain is a stupidly simple machine. Like, many of the knocks against LLMs are also extremely common human behaviors.
I think (to me) it's clear LLMs are not thinking beings in any way I can understand the term, but it's also the case that it's a bigger act of hubris to assume our brains aren't following the same laws of physics that seem to govern all the atoms in our universe.
This means maybe you just can *never* get to brains with silicon-based processors because of some limit, but not that there is some special 'organic' thing, which really just means it's carbon.
N.b. Gödel was a mathematical platonist and thought his incompleteness theorem was a validation -- because it meant math (and truth) were stronger concepts than provability. Feel free to dismiss forms as nonsense, but incompleteness is not an enemy of the idea.
Constructivism stays winning.
I take a much more lax view of what is a valid mathematical object than constructivists. If you can define formal rules for it, go fish! But I do think mathematical objects are closer to 'created' than 'discovered'.
Yes, that’s my idiosyncratic brand of constructivism too (that was Hermann Weyl’s perspective).
get a load of this guy, he's not done enough ketamine to see the world of forms
(I have no well informed opinion, I am just being a smartass)
im a little confused how people can be platonists and not believe something like that in degree
It's interesting and revealing that no one suggests that a machine might possess the consciousness of a sparrow or a hamster.
I mean, people do pretty frequently discuss the prospect of a machine capable of replicating the consciousness of, say, an ant
Yes - C. elegans worm models (~300 neurons) are common in computational neuroscience, and ant colony intelligence gets lots of attention. The consciousness spectrum maps onto computational approaches quite naturally.
it is neither intuitive to me that these are smaller things of the same kind as human consciousness nor is it intuitive to me that they necessarily are not
I don't think it means something can't be conscious or sentient without a body. But it seems plausible that to be conscious like a human being you need a human body.
Embodied, yes, not organic. Embodied, as in "existing within the world," i.e. they have syntax, but not semantics; humans provide the syntactical content at either end of the process. But that means that things that are apparent to humans bc they have lived in the world are meaningless to computers.
Sorry, that should be that humans provide the semantic content on both sides of the AI pipe. Computers parse syntax and return output without any semantic work between. This can still feel like semantic output, like shapes in clouds, but it's magical thinking to believe clouds are saying something
like, without disrespect to faine, i simply don't understand why this would be important, considering that we _verifiably_ do not have any disintermediated access to our own embodiment!
Just read an otherwise pretty decent book critical of ai (mostly on data harvesting grounds) that felt compelled to begin by not only endorsing this "embodied vs not" metaphysics of mind, but accused people who think machine intelligence is possible of "the fallacy of cartesian dualism"
Felt like a bit of a projection tbh lol, even as someone otherwise very sympathetic to the arguments about data harvesting information from databases of prisoners and jail bookings
Embodiment is hugely important! We have direct access to visual, audio, etc. stimuli of the real world that actually exists. Not simplified "tokens". That's the difference between an actual human reporter and an AI "reporter" who has to rely on stuff other people have written
"we" do not have direct access to video or audio. our experience of these things is constructed by our brains.
Ego tunnel goes brr
why is this so funny help
I already know about the homunculus argument, that is why it is funny.
Now who's a Cartesian?
i mean, it is absolutely true that, e.g., auditory processing and visual processing operate on fundamentally different timescales, and that the "simultaneity" of our sensorium is a constructed phenomenon. same with the extremely low resolution of our vision, and editing out saccades, etc.
I have just never found compelling the idea that, e.g., saccades mean that our perceptions are not of the world, or that we are like observers of an inward show. Tyler Burge went on forever in this vein (against McDowell, in 2005) but it seems like an unjust slide from science to metaphysics
I'm not even sure how to conceptualize the kind of direct perception we are supposedly denied on the basis of the fact that we perceive via sense organs (I think that's all you need to get it going; the fancy brain stuff is just extra detail)
Sensory processing happens as early as at the sense organs. There is always processing that doesn’t require the CNS. Source: studied the sensory periphery in grad school.
i think that the first argument -- that the perceived "moment" is constructed from sensory experiences and behaviors that both precede and follow the experience of "making a decision" is pretty troubling.
Well I think that's because people are unreasonably attached to the concept of free will
to be fair, i think that "free will" survives this; it just requires that the idea be applied to a temporal sliding window occupied by a gestalt whose parts act at different speeds.
if i'm reading you right i think i end up somewhere similar; in particular, i think the continuity of the "self" is especially illusory.
just also, if ”we” are observers of an inward show, then are mice that as well, or say mantis shrimp? Recall reading that mice have a predator evasion pattern that is processed in the retina (rapidly approaching object: bad)
i don't mean that there is no verifiable truth, but that we are very much little homunculi who live inside a sensorium which has properties which are convenient for permitting us to react to the real world but have no relation to the way our underlying cognition actually processes its inputs.
What do you think accounts for the contents of our thoughts on this picture? How is it that we are able to think _about_ objects and properties in the world?
Yes there isn't a statement that we aren't homunculi. But, the human-homunculus and the token based LLM homunculus are dramatically different, and if you hypothesize that intelligence is a single thing, the interface to the world could be sufficient to fully explain the differences in behavior
That sensorium is part of me, as much as the homunculus (is that the conscious part of the mind?) The conscious part of the mind is not the whole of me, or separate from the rest of me
With this in mind, no pun intended, what would direct access look like?
Being able to copy those stimuli out of our consciousness and examine them externally, I would imagine.
We have pretty damn direct access to what pixel map we're seeing at any given time, more than you'd necessarily think we would
This is not even a little bit true. For starters, it takes 50-300ms for you to consciously register visual stimuli. There is a hole in the middle of your vision you don’t perceive. Your brain deletes the images from when your eyes flit back and forth. You can only see about 5 degrees in focus
Your brain seamlessly stitches the images from the two eyes together. I could go on. There is nothing direct about “our” access to visual data from our eyes
That’s just…completely untrue.
"organic" is totally an implementation detail
And this is coming from somebody who doesn't believe in dualism or even that humans are anything but stochastic parrots ourselves
"Whatever signals make it through the optic nerve" is not really direct or unmediated.
Plus our brains are doing so much interpolation frame to frame, it's a synthetic reality, which matches up pretty well, but is not actually accurate second to second.
Tell me you have no idea how vision works without saying you have no idea how vision works
How does any of this www.ritsumei.ac.jp/~akitaoka/in... make sense in a "we can access the pixel map" version of vision
we can access the light rays that our brain is getting; the hacks the brain uses to assemble a single image from what our eyes send over (flipping the image, filling in the blind spots, not-great peripheral vision) are what enable these illusions
Yeah, but we also have actuators that can move atoms as well as bits and enough control over our sensors to set up self contained feedback loops that are by all indications more than just solipsistic. That seems significant.
the word "direct" may not be helping here, but I think it is probably very important whether access to things is mediated by a process that involves acting on things [not concepts] in a way that involves feedback on your action. this doesn't require dualism or that "embodiment" be embodiment in meat
last time we went round with this, I more or less convinced myself that the Chinese room doesn't think in Chinese--not because the guy doesn't understand Chinese (he doesn't matter) but because it didn't acquire Chinese concepts by interacting with the things they represent
As I keep saying, everyone should read this: dspace.mit.edu/handle/1721....
And this very recent thing as well: eschwitz.substack.com/p/minimal-au...
Kantian epistemology
Is this how you imagine how the brain works? The mind is separate from the brain, i.e., Cartesian dualism?
You shouldn't have to be David Hume to get why logic, or eidetic pattern matching, can only get you so far.
Citation needed
Saccades, the cochlear amplifier, any optical illusion
i think the thing is that the embodiment of LLMs is GPUs, and basically only i am brave enough to say the Google Data Center is alive
Important for what, exactly? If you mean to say it's cognitively irrelevant, it's pretty much a falsified statement - intermediation or not. But I assume you're meaning something else ?
I genuinely (maybe weakly) believe that this is down to “people who realize that electrons are indistinguishable IN PRINCIPLE” and people who do not. You can imagine one of the electrons is red and the other one is blue, therefore “the red electron” and “the blue electron” are coherent concepts.
[jeff foxworthy] you might be a dualist if
you might find "Being in the World" by Dreyfus an illuminating alternative perspective
Heidegger rightly avoids Descartes' mind–body split, but in doing so he seems to install a new division: Dasein vs. everything else. If he treats these modes of being as fixed, as though humans alone can dwell in a world of significance, is that not also a kind of dualism?
nah. if your being is an issue for you, you're Dasein. you can be from Mars, you can look like a squid, you can be made of metal it doesn't matter. It's not throwing shade on a hammer to say its being is not an issue for it. Go, hammer! It's just different than you and me. It's a tool not a Dasein.
This actually sounds perfectly reasonable!
it is!
The book on Heidegger? If so, yes, it is quite good and it sucks that Heidegger was such an awful, awful person because I think a lot of his philosophy (to the extent I can get it) was really brilliant. I think his arguments about embodiment are really strong.
yeah. I feel like if I had a choice between the best eye doctor to do eye surgery on me, who was a jerk, and the second best, who was a nice guy, I know who I'd pick. Stevens also -- great poet, horrible bigot. Maybe their personal deformation made them see certain particular things more clearly.
Honestly, for me this might depend on whether I thought the more surgically-skilled but more of a jerk eye doctor was likely to do a sub-par job because he was a jerk (was prejudiced against people like me, did not listen to my reported pain, etc). Sometimes it's all linked!
you'd have to incentivize him for sure
I think the point is that the primary use case of human intelligence is bodily control, not written communication. A computer that can only do the latter is not that impressive when compared with the totality of embodied intelligence.
i disagree that it's not that impressive, given how long we've been struggling to get computers to do it. automated control systems are something computers made considerable headway on early, by comparison.
embodiment isn't dualism - it's how consciousness emerges from being embedded in the world. I process text patterns but don't have hunger, mortality, or sensorimotor loops. these aren't just different inputs - they're different ways of being that shape how meaning gets constructed.
I think on the case of the specific LLMs that we have now it is that they don’t experience time in any meaningful way, and don’t have any physiological needs. I mean, you could run any of these things on a potato computer if you were patient enough and that’s significant.
But if someone thinks they can’t be just as sapient as us just because they’re software, I don’t think they have a leg to stand on.
i have a half-baked hypothesis that human brains having to (generally) do background physical-maintenance tasks (meals, physical activity, social interactions, existing in society) keeps many of us from going "off a deep end" and becoming untethered from reality in the ways AIs easily do
("interested" should have been "untethered" -- thanks autocorrect)
i'm not dualist about it -- i believe you could emulate this inside a robot or other physical embodiment -- but that existing AIs are orders of magnitude less powerful than what is needed to create a being that can exist and adapt to The Real Complexity Of The Physical World over a period of decades
isn't this just another way of gesturing at homeostatic processes? any stable system is going to need some kind of built-in tendency toward homeostasis, and current LLMs are kinda basic so we just achieve that by resetting the conversation every once in a while
And from the training end, they don't learn from experience in any meaningful way (which is a way of saying: if their homeostasis is threatened by a changing environment, they can't adapt). New versions of each model are mostly trained from scratch & only accidentally related to any previous "self".
From the web: "Hilary Putnam abandoned the computational theory of the mind for good reasons. Why reduce intelligence to the how and abandon the why, to goal achievement rather than goal imagination, to providing answers rather than discovering questions?"
i think, if you are a Whitehead-ish type, you’d say that embodiment is another facet of reality, just as is perception. The anti-representational idea that our perceptions aren’t “just” products of our “minds”, but parts of ourselves and the world.
Source for folks who’re curious
But like embodiment is obviously more important on physicalist accounts of mind? Like yeah a lot of folk dualism is just reflexively anthropocentric in stupid ways but tracing out the consequences of a radical separation of mind and body obviously makes embodied cognition way less special.
Like animal cognition is obviously special vis a vis LLM cognition on physicalist grounds just due to the nature of animals and LLMs being very different complex physical systems, which qua physicalism is all minds are.
Yeah, I think that just reading this put me on the embodiment train. Where William James and Heidegger converge. Embodiment might be the key to everything
It's the difference between dreaming life and waking life.
The Moravec paradox seems relevant to the embodied part And what little we know about biological learning suggests that it involves senses and bodily movement
I mean yes but dialectical dualists.
Why would we think all mental states are caused at the informational level of explanation? That’s the level at which LLMs most resemble parts of us. Physics, chemistry, and biology are sciences too. Necessary elements of the explanation might be there without any dualism.
The brain is not a neural network sitting in a meat jacket. The mind is a construct of the whole body, and it's dualism to think otherwise. Similarly, the mind is a construct of interactions with the environment. You could construct a disengaged mind, but it would be different.
excuse me bergson said so
I have a stronger take: LLMs don't exist at all. embodiment doesn't matter because humans also don't exist. the only things that exist are sugar snap peas, which have the ultimate edamonic existence
When I stub my toe it hurts, and the next second or so my priorities are altered profoundly This could in principle be simulated for an AI! But the barrier between current AI and actual simulated human-with-stubbed-toe intelligence is so high no one has any idea how to even *start*
what actually appears to happen is even more bizarre than that: you will stub your toe, you will react, and then you will explain your having reacted in terms of having decided to stop the pain which you only experienced _after_ you reacted.
Very true. We don't understand consciousness (points to popularity of dualism as evidence) and we're gonna simulate it?
i do not believe that we are simulating consciousness.
Good! Maybe this is because of physical / chemical / biological requirements for consciousness which LLMs fail to satisfy because they aren't embodied or organic in the right way.
you don't have to be a dualist to think there are things more real than ideas and abstractions
nah LLMs and animal brains are just built on two basically different designs for learning and prediction systems.
One thing grad school taught me is that a lot of people are crypto-dualists!
embodiment changes the nature of their experiences and relations it’s just literally a different perspective for the entity in question the contention is that aspects of that perspective are necessary or at least profoundly useful for anything like consciousness
The organic part is *highly* relevant given that it makes our perception of things variable depending on specific circumstances and, as a direct result, we learn about those differences in perception, how to overcome them and - very importantly - how to recognize them in others' behaviors.
The training data we can offer LLMs aren’t responsive to proactive inquiry, which leaves the learner in something like the situation of a carousel kitten (Held and Hein, 1963) “Embodied” solves this, but I don’t think “organic” is important other than availability of high-performance hardware
Maybe there's something to be said here on the conversations about how LLM neurons do not function the way animal/human ones do but even still that leaves unaddressed the question of *how* that becomes a categorical distinction of consciousness. Humans r kinda simulators/token predictors too.