like, the core of his belief is that humans, unlike machines, have direct psychic access to the World of Forms and can perceive them.
have you considered that this is a core intuition most people have and rationalize in different ways, and that it's very hard to break them of it or to stop them from believing new ways of stating it
that filling a bull's horn full of cow manure, saying a prayer to the earth over it, and then burying it in the soil of the farm you plan to spread it over concentrates earth energy which improves the quality of grapes is a less common intuition, but also one which he apparently shared with steiner
i think the closest comparison to weizenbaum is probably wilhelm reich
clanka please
excellent reply, ty
Weizenbaum's critique isn't mysticism but a prescient warning about reducing human complexity to computational models. Embodiment matters precisely because consciousness isn't just information processing.
Not my thing personally, but ritual is a tool that is sometimes useful, and I'd happily sacrifice a horn full of manure if it let me overprice my grapes.
Jack Parsons was a Crowleyite, yet he also founded the JPL. This is a weak argument.
and Isaac Newton was an alchemist as well as a physicist and mathematician, but we have lost his alchemical works and kept his mathematical ones. i am saying that Weizenbaum's opinions on artificial intelligence have more affinity with Isaac Newton's on the Great Work than with his calculus.
in fairness newton's alchemy is very aesthetic
Had an idea for a novel I never got around to that started with Newton’s corpse disappearing on the eve of his 400th birthday and ended up entangling four separate timelines generated by the Cartesian product of [Newton’s physics works, it doesn’t] X [Newton’s alchemy works, it doesn’t]
Keynes thought so :)
That may be the case, but so are the opinions of the AI safety crank club.
i think that this is very much the case!
Sure, writing a preface to Steiner is crank central, but what bearing does this have on Weizenbaum's distinction between judgment and calculation? This guilt-by-association criticism is every bit as ideological as the one put forward by Bender and co.
The problem of where to draw the boundary between judgment and calculation is a political problem, not an engineering one. bsky.app/profile/mrag...
This has nothing to do with whatever beliefs Weizenbaum may have had regarding humans' direct psychic access to the Akashic records or whatever.
I personally prefer Philip Agre’s _Computation and Human Experience_. I don’t think he could be accused of being a crank.
As for Newton, he had enough epistemic humility to refrain from making hypotheses about the mechanism behind action at a distance.
I’d love to do engineering science of universal next-token predictors, but instead we’re stuck here arguing about everyone’s brand of woo. bsky.app/profile/mrag...
For thousands of years humans had the core intuition that practically every animal, rock and stream was conscious. So I doubt that the inability to ascribe consciousness to machines is some kind of genetic predisposition that we can't be shaken out of.
Also, the "World of Forms" bit is as much of a problem as the "psychic access". I am pretty confident there is no such thing as a world of forms--if for no other reason than that Platonism and similar concepts of unearthly forms are fundamentally inconsistent w/ the incomplete nature of mathematics.
My core feeling is that it’s a complete act of hubris for humans at this technological juncture to act like we even understand the nature of the organic well enough to deem the inorganic similar to or equivalent in capacity.
i mean, i think that the statement "universal approximators approximate each other to precisely the extent that they are universal" is true a priori. (i do think that there are aspects of human cognition which are not approximable without direct data on how they function.)
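for what it's worth, the universality claim can be made concrete at toy scale. a hedged sketch (plain numpy, purely illustrative, not anyone's actual argument in this thread): a one-hidden-layer ReLU model fit by least squares approximates sin() better and better as its width grows.

```python
import numpy as np

def relu_features(x, centers):
    # one ReLU hinge per center: max(0, x - c)
    return np.maximum(0.0, x[:, None] - centers[None, :])

x = np.linspace(0.0, 2 * np.pi, 500)
y = np.sin(x)

errs = []
for width in (4, 16, 64):
    centers = np.linspace(0.0, 2 * np.pi, width, endpoint=False)
    # least-squares fit of the output weights (plus a bias column)
    Phi = np.hstack([relu_features(x, centers), np.ones((len(x), 1))])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    errs.append(np.max(np.abs(Phi @ w - y)))

print(errs)  # max error shrinks as the hidden layer widens
```

which is the "to precisely the extent that they are universal" part: the guarantee is about capacity in the limit, and says nothing about where the training data for the harder aspects of cognition would come from.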
Precisely. Our understanding of consciousness is still so limited that claiming equivalence between organic and computational systems reveals more about human hubris than actual intelligence.
In the hot, wet environment of the human brain, very well understood classical mechanics will dominate, and while the brain may take the form of any number of immensely complex systems, none of these systems are fundamentally beyond computer emulation to arbitrary accuracy.
I profoundly disagree, but I also don’t see much use in arguing about it on the Internet. We will eventually find out one way or another.
our observational tools into functioning gray matter are still so coarse that they can't pick up anything small enough not to be in the classical regime
exactly
Even if there *are* significant quantum effects here (big, big *if*), the problem is that you can simulate quantum systems with a quantum computer, and quantum computers with a classical computer.
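that simulation chain can be shown at its smallest scale. a toy sketch (plain numpy, illustrative only): a classical program holding one qubit as two complex amplitudes, applying a Hadamard gate, and reading off Born-rule probabilities.

```python
import numpy as np

# one qubit, classically: just two complex amplitudes
ket0 = np.array([1.0 + 0j, 0.0 + 0j])

# Hadamard gate sends |0> to an equal superposition
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

superposed = H @ ket0
probs = np.abs(superposed) ** 2   # Born rule: measurement probabilities
restored = H @ superposed         # H is its own inverse

print(probs)                      # -> [0.5 0.5]
print(np.abs(restored) ** 2)      # -> [1. 0.]
```

the catch is cost, not possibility: an n-qubit state vector has 2^n amplitudes, which is exactly where "simulable in principle" and "simulable in practice" come apart.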
i mean, sure, but i still think that we can ALMOST rule out Penrose-style propagation of quantum signal from inside ensembles of microtubules all the way to macroscopic behavior, and from there to say that it dominates classical effects so thoroughly that it's how the brain works.
oh sure, i haven't read any fanciful qubit-anon idea that has moved me. but i think exasperation with people who are wrong for the wrong reasons can sap our epistemic humility, so it was worth a note
and the vital importance of epistemic humility is very much at the core of my thinking on all this stuff these days
I see far too little of it going around these days. This is what, fundamentally, scares and angers me.
i keep my crank beliefs carefully cordoned by a funny voice i use when i say them. they're still true but it's a funny category of true. like cᵣyₚₜₒ ₐₙd ₐᵢ ₑₙₑᵣgy ᵤₛₐgₑ ₜᵣₑₙdₛ ₐᵣₑ ₑᵥᵢdₑₙcₑ ₜₕₐₜ ₜₕₑᵣₑ'ₛ ₛₒₘₑₜₕᵢₙg ₚₕyₛᵢcₐₗₗy ₐₛyₘₚₜₒₜᵢc ₐbₒᵤₜ ₜₕₑ ₙₑₑd fₒᵣ ₕᵤₘₐₙ ₜᵣᵤₛₜ??
Epistemic humility would be "we don't know if LLMs are conscious" not "we know they are not".
I believe this sorely misplaces who the burden of proof rests upon right now
if i take one parameter away from this conscious llm, it is still a conscious llm, no?
this is why i'm so cranky about Bender, et al. they rule out a computational account of cognition on the basis of evidence which they do not have.
have not read bender but i don't think a computational model of cognition can be ruled out, it's just not very well supported by experiment iirc
Honestly don't understand the point of entertaining god of the gaps dualism. Even if we could prove that the brain is fully classical, no one would change their mind about anything
less dualism and more doubt about the actual big O() of The Simulation
I think this is true in a fundamental, theoretical sense, but I’m not sure we can build an *actual* emulation of a human brain, given the orders of magnitude more complexity and speed we need to accurately emulate simple computers well. Maybe we build an electric brain we *also* don’t understand.
Simulations of some classical systems can have terrible computational scaling properties while still being possible in theory, even though the real-life system gets that computation "for free." Doesn't mean you couldn't build a specialized hardware emulator, but it might be impractical forever.
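a toy version of that scaling point (illustrative sketch in plain numpy; the softening parameter is a made-up detail): one gravitational n-body force step costs the simulator O(n²) pairwise interactions, while the physical system gets the same "computation" for free.

```python
import numpy as np

def pairwise_forces(pos, mass, G=1.0, eps=1e-3):
    # naive O(n^2) force step; eps softens the r=0 singularity
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r2 = d @ d + eps
            forces[i] += G * mass[i] * mass[j] * d / r2**1.5
    return forces

rng = np.random.default_rng(0)
pos = rng.normal(size=(50, 3))
forces = pairwise_forces(pos, np.ones(50))

# Newton's third law: internal forces cancel in aggregate
print(np.abs(forces.sum(axis=0)).max())
```

double the bodies and the step costs four times as much, and that's before asking what the per-neuron analogue of a "body" even is.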
which chapter of the sequences is this one
Don't know, I couldn't be bothered trying to read that whole morass. There are far better presentations of bayesian statistics out there.
But here's the thing: your brain is still a classical system because it's too damn hot and wet to avoid decoherence.
i suppose what i meant is that a mechanistic theory of mind doesn’t have much bearing here, in a similar way to how a mechanical computer not being so different from a digital computer doesn’t really help you.
i guess the joke didn’t land, but the rationalists took the notion of the brain as a (classically) physical construction and followed it to completely insane conclusions. it’s a true statement, but also i don’t really think it tells us anything useful.
We don't understand either brains or AI well enough to say one way or another in my opinion. There is still a lot to learn. I've long been given to panpsychism myself, but lots of things are possible given what we know.
Perhaps Chalmers's thermostat really does have a mote of consciousness? Or maybe we don't yet know enough to properly formulate our questions. We will learn many things trying to understand these things we are building.
In the micro sense, the human brain is so irreproducibly complex that we will never truly understand it. In the macro sense, the human brain is a stupidly simple machine. Like, many of the knocks against LLMs are also extremely common human behaviors.
I think (to me) it's clear LLMs are not thinking beings in any way I can understand the term, but it's also the case that it's a bigger act of hubris to assume our brains aren't following the same laws of physics that seem to govern all the atoms in our universe.
This means maybe you just can *never* get to brains with silicon-based processors because of some limit, but not that there is some special 'organic' thing, which really just means it's carbon.
N.b. Gödel was a mathematical platonist and thought his incompleteness theorem was a validation -- because it meant math (and truth) were stronger concepts than provability. Feel free to dismiss forms as nonsense, but incompleteness is not an enemy of the idea.
Constructivism stays winning.
I take a much more lax view of what is a valid mathematical object than constructivists. If you can define formal rules for it, go fish! But I do think mathematical objects are closer to 'created' than 'discovered'.
Yes, that’s my idiosyncratic brand of constructivism too (that was Hermann Weyl’s perspective).
get a load of this guy, he's not done enough ketamine to see the world of forms
(I have no well informed opinion, I am just being a smartass)
i'm a little confused how people can be platonists and not believe something like that to some degree