Zettelkasten Forum


The Reductionist Position on AI by Cal Newport

edited November 26 in Random

EDIT: This is what Cal Newport wrote: https://calnewport.com/when-it-comes-to-ai-think-inside-the-box/

I sent this mail to Cal Newport. I thought it might be interesting to discuss:

Hi Cal,

tl;dr: You are positioning yourself for failure by taking a reductionist position on AI. (That doesn’t mean you are wrong; it just means your thinking is set up for failure here.)

I enjoyed your recent newsletters about AI, as you take a more down-to-earth approach than the typical speculative takes.

However, as AI advances, you run into the same problem that reductionists run into when they try to reduce consciousness to a network of neurons: you end up explaining the mechanistic/deterministic substance of an emergent phenomenon, not the phenomenon itself. (My old professor consequently rejected the notion of emergent phenomena, as he was a reductionist.)

For example:

The most obvious is that once trained, language models are static; they describe a fixed sequence of transformers and feed-forward neural networks.

Bret Weinstein is talking in general terms, not about the current reality. The statement of yours quoted above will be outdated once AI is allowed to be more malleable in its substance and to self-modify. (One could argue that, since we don’t build new neurons outside of a few brain areas, any modification of the contents of the AI substance is similar enough to what the brain does that it doesn’t negate consciousness.)
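
To make concrete what “static” means here, a minimal sketch of my own in PyTorch (my illustration, not an example from your post): once trained, the weights are frozen and every forward pass computes the same fixed function, whereas a self-modifying system would keep updating its weights after deployment.

    import torch
    import torch.nn as nn

    # A toy stand-in for a trained language model: a small transformer encoder.
    # (Illustrative only; the sizes are arbitrary.)
    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    model = nn.TransformerEncoder(layer, num_layers=2)

    # "Static" in the quoted sense: after training, the parameters never change.
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # frozen: each forward pass is the same fixed mapping

    x = torch.randn(1, 10, 64)   # a dummy input sequence (batch, tokens, features)
    y = model(x)                 # inference reads the weights but never writes them

    # A malleable system, by contrast, would keep taking gradient steps on new
    # experience after deployment (online learning), so the mapping itself
    # would change over time.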

Weinstein’s approach, by contrast, is fundamentally pre-modern in the sense that he never attempts to open the box and ask how the model actually works.

If we open our own boxes, we just see goo. If we look a bit closer, we see just a bunch of chemicals, neurons, axons, etc. The brain, its complexity aside, works pretty boringly. It is the same trap neuroscientists fell into when they stated that you can open the skull, find no consciousness, and therefore conclude it isn’t there.

There is even a note in Luhmann’s Zettelkasten that rhymes with that notion:

Ghost in the box? Spectators visit. They get to see everything, and nothing but that - like in a porn movie. And the disappointment is correspondingly high. https://zettelkasten.de/posts/luhmanns-zettel-translated/#9_8,3

What you are proposing has been tried often and has failed accordingly. Even if you take a reductionist position, it won’t get you anywhere, in the same way that no reductionist, not even Daniel Dennett himself, is practically consistent with his reductionist belief (or rather with the rationalisation that leads to such a belief). He will say that he is a reductionist, yet he will still say “I love you” to his wife and raise his children to have high agency.

Live long and prosper
Sascha

PS: Please have me on your podcast to set the record straight regarding the Zettelkasten Method.


I am a Zettler

Comments

  • Forgot to add the link to the text I responded to:

    https://calnewport.com/when-it-comes-to-ai-think-inside-the-box/

    I am a Zettler

  • @Sascha said:

    Weinstein’s approach, by contrast, is fundamentally pre-modern in the sense that he never attempts to open the box and ask how the model actually works.

    If we open our own boxes, we just see goo. If we look a bit closer, we see just a bunch of chemicals, neurons, axons, etc. The brain, its complexity aside, works pretty boringly. It is the same trap neuroscientists fell into when they stated that you can open the skull, find no consciousness, and therefore conclude it isn’t there.

    There is even a note in Luhmann’s Zettelkasten that rhymes with that notion:

    Ghost in the box? Spectators visit. They get to see everything, and nothing but that - like in a porn movie. And the disappointment is correspondingly high. https://zettelkasten.de/posts/luhmanns-zettel-translated/#9_8,3

    What you are proposing has been tried often and has failed accordingly. Even if you take a reductionist position, it won’t get you anywhere, in the same way that no reductionist, not even Daniel Dennett himself, is practically consistent with his reductionist belief (or rather with the rationalisation that leads to such a belief). He will say that he is a reductionist, yet he will still say “I love you” to his wife and raise his children to have high agency.

    This is interesting, and it is OK if you are elaborating beyond what Cal wrote, but I don't think that what Cal wrote implies reductionism. As I read Cal, he is only advocating for better analysis of how LLMs work. This is reasonable enough. By analogy, it is also reasonable to try to understand consciousness by analyzing how it works, and such analysis does not necessarily imply reducing consciousness to the components of the brain.

  • @Andy said:

    This is interesting, and it is OK if you are elaborating beyond what Cal wrote, but I don't think that what Cal wrote implies reductionism. As I read Cal, he is only advocating for better analysis of how LLMs work. This is reasonable enough. By analogy, it is also reasonable to try to understand consciousness by analyzing how it works, and such analysis does not necessarily imply reducing consciousness to the components of the brain.

    The reductionism that I see is that Cal Newport is arguing from the insufficiency of the underlying substance of consciousness. He states: LLMs are not conscious because they are feedforward and static (not self-modifying, I guess). But you’d have to establish that these characteristics are necessary to give rise to consciousness.

    Putnam's multiple realizability is the problem at hand. In practice, the reductionist position means that you break down the emergent phenomenon and claim that it is actually an epiphenomenon. That means that you have a direct and inflexible relationship between the parts and the whole. What follows is that changes in the underlying substance lead to a rejection of the notion that the emergent phenomenon exists. Here: The underlying substance of AI doesn't share specific characteristics with the underlying substance of consciousness, therefore it isn't conscious.

    Reductionism and the violation of multiple realizability are typical cousins (I haven’t thought suuuuper carefully about this here, so I may have fallen prey to a bias), and they walk together here in Cal Newport’s thinking.

    So, Cal Newport’s arguing from specific characteristics of the underlying substance of one conscious system in order to reject the emergent characteristics of another is fallacious, because the emergent phenomenon can be realized by a different underlying substance. It is a reductionist position because, to make this rejection, you have to assume reducibility to some extent. (I’d be perfectly willing to settle for some term other than reductionism if that is too hard a concept; it should be something along the pattern of hard vs. soft determinism, with reductionism being the equivalent of hard determinism, while what we need here is the equivalent of soft determinism.)
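
    A minimal sketch of my own to illustrate multiple realizability (XOR is a standard textbook illustration, not Putnam’s own example): the same behaviour can be realized by entirely different substrates, so pointing at characteristics of one substrate cannot settle whether another substrate realizes the same behaviour.

        def xor_lookup(a: int, b: int) -> int:
            """Realization 1: a lookup table -- stored answers, no computation."""
            return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

        def xor_arithmetic(a: int, b: int) -> int:
            """Realization 2: arithmetic -- a completely different mechanism."""
            return (a + b) % 2

        # The realized behaviour (XOR) is identical, even though the two
        # underlying "substances" share no specific characteristics:
        assert all(xor_lookup(a, b) == xor_arithmetic(a, b)
                   for a in (0, 1) for b in (0, 1))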

    I am a Zettler
