
Dual Process Theories Are Broke: Part 1.2

By hazard
    2021-02-25 14:34:36.766Z

    (written for a diff target audience than this forum, but with content that's relevant. Mostly me doing a sort of ritualized-public-renunciation of my old ways)

    (sequel to Dual Process Theories Are Broke: Part 1.1)

    Bundling Drives and Cognitive Ability

    The last section looked at the “steelman”: the dual process theories that are actually proposed by researchers in academic settings who read the literature and do experiments and are handsome and praiseworthy and charming. Given that the popular version always looks different from the source, and the source was already on pretty shaky ground, you can guess that I’ve got even more beef with popular dual process theories of the mind.

    The cluster of books I’ve been pointing to all paint a very similar picture. Tim Urban at WaitButWhy gives the clearest illustration of such a model, one that I think is fairly exemplary of this space, and 100% exemplary of how I used to think. He invents two characters: the Primitive Mind and the Higher Mind.

    They’re each in control of one side of a dashboard of dials (pictured in Urban’s post).

    As you might expect, the Primitive Mind isn’t that smart, can’t really think ahead, and is focused on just staying alive. It thinks in quick zigs and zags of intuition. The Higher Mind is where all of human intelligence happens. It thinks with thought-out reason. Along with the functional split, there’s a historical split. The Primitive Mind is the older part of the brain, ancient software written by the blind hand of evolution. It’s shared by many other animals, and it is the base, survival-oriented part of us. The Higher Mind is a more recent development.

    To hear a very similar story, though with fewer fun drawings, watch this interview with Elon Musk up to the 1:20 mark:

    “A monkey brain with a computer on top.”

    “Most of our impulses come from the monkey brain.”

    “Surely it seems like the really smart thing should control the dumb thing, but normally the dumb thing controls the smart thing.”

    Urban makes no reference to brain anatomy, while Musk specifically connects his frame to regions of the brain (limbic system, neocortex), but I think they’re more or less working with the same mental model. In this Urban-Musk model (which yes, that is in fact a perfume), an even more dramatic bundling occurs. Not only are intentionality, controllability, consciousness, and effort all bundled together, but so are the dimensions of:

    • The capacity to get correct answers
    • Drives, from impulses to values
    • Identity and who you are

    We’ll briefly look at each of these, and I’ll state the points I want to make without really backing them up.

    Capacity to get correct answers:

    This is pretty straightforward, and also present in academic dual process theories. It’s the idea that one system is biased and the other is rational. The pop version dials it up, where biased becomes generally “stupid” and rational becomes “intelligent”. I use quotes for each because I get the sense that a lot of people don’t distinguish between “producing outcomes they like” and “accurately modeling and steering the world”. They’ll blend the notion of stupid into “doing things I don’t like” and smart into “doing things I like”. When people do piece them apart, there’s a general sentiment that your Primitive Mind (often framed as your intuition or your gut) will get a question wrong more often than your Higher Mind (often framed as your reason).

    Eventually I’m going to argue that your intuition is as good as the data it’s trained on, and your reasoning is as good as the systems of reasoning that you practice. The normative correctness of intuition vs. reason is not a function of the innate capabilities of certain mental subsystems, but a result of the quality of your learning.

    Drives, from impulses to values:

    I expect most people, when they hear “impulses”, to think of all those dials on the Primitive Mind’s side of the control panel: “I’m hungry,” “I’m horny,” “I’m jealous.” They might use language like, “your Primitive Mind governs your impulses and your Higher Mind governs your values”. I’m going to use drives as a more general term to value-neutrally talk about things that… well, drive you. Musk and Urban explicitly claim that one system is responsible for what might commonly be called your base or vulgar drives, and that the other is responsible for the drives that are often called good or prosocial. Additionally, these two clusters are frequently in conflict.

    While I do think it’s the case that many people have constant conflict between competing drives, I don’t think those drives cluster into groups that align with any of the other properties the Higher Mind and Primitive Mind bundle together. You can scheme intelligently to get food, and you can stupidly follow your curiosity off a cliff. You can be unconscious of your noble desire to contribute to a shared good, and you can be acutely aware of your desire to have sex. Furthermore, drives within the Primitive Mind can and do war with each other all the time, as do drives within the Higher Mind. Any drive can war with any other drive depending on the circumstances. I find it significantly more clarifying to think about the general dynamics of conflict between drives, instead of the narrow-minded conflicts between the Primitive and Higher Minds.

    Identity and who you are:

    This last one is probably the sneakiest and most complicated. It’s the most complicated because there’s so little shared language to talk about the self, the ego, self-concept, conscious awareness, the process of identification. As Urban and Musk tell it, the Higher Mind/neocortex is what makes us human. Not only is it what makes us human, it’s literally you. When you do the thing we call “consciously thinking about something”, that’s supposed to be the Higher Mind. The Primitive Mind is other, a thing that gets in the way of your plans. You have to play nice to the degree that it can’t be surgically removed, but other than that it’s a direct conflict, with you trying to control it. Kahneman even alludes to this in his first description of System 2, saying “The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.”

    This is the point I’ll be least able to make sense of in a short amount of space, so I’m just going to give this possibly cryptic paragraph and hope it lands: I’m working towards showing that S1/S2 isn’t a coherent category split. Neither is Higher Mind/Primitive Mind. Both declare this division and assume each side has a monopoly on all these other qualities. Given that they don’t, and there’s no coherent truth to the Primitive Mind or the Higher Mind, when I identify with the Higher Mind, what am I actually identifying with? I think what’s happening is that I have a self-concept that utilizes the qualities described in the Higher Mind, and this self-concept is exerting authoritarian control over me-in-my-entirety. This hurts me in the short run and the long run, and is really important to address.

    What You Miss When You Bundle

    All this bundling gets in the way of thinking because it blurs important distinctions and pushes you to infer connections that might not exist. Let’s take a single aspect of this bundling: that the biased parts of you line up with the unconscious parts of your mind. Back to The Mythical Number Two for an example:

    “For instance, the first research on implicit bias (i.e., the unintentional activation of racially biased attitudes) occurred in 1995, and by 1999 researchers started referring to this phenomenon as unconscious bias. Soon enough, people around the world learned that implicit biases are unavailable to introspection. Yet conscious awareness of implicit bias was not assessed until 2014, when it was found that people are aware of their implicit biases after all.”

    That’s wild to me, because I see the claim that people aren’t aware of their biases as a load-bearing aspect of the narrative around implicit bias that has become mainstream in the past two decades. This narrative got big because it seemed like a way to push people to change without making them feel like they were being called Bad People (and then resisting change). This seemed plausible because a common moral sentiment is that you aren’t blameworthy for something that you aren’t aware you’re doing. It seemed plausible that people could stomach “okay, I’m reinforcing bad shit in the world because I’m biased, but the bias is unconscious, and also everyone else is biased, so I’m not personally super blameworthy and going to be attacked.” This whole narrative falls apart if bias and the unconscious aren’t bundled together.

    I eventually want to be able to talk about how our systems of punishment work, at the group and the state level. I want to talk about people’s mental models of justice, blame, and punishment. A lot of our moral reasoning makes heavy use of concepts like intentionality and conscious awareness, and I don’t want to try to investigate this topic from an intellectual framework (dual process theories) that obscures how intentionality and consciousness work and interrelate.

    Unbundling can help us understand other things as well. Remember those Hidden Brain books, the ones telling us about how we’ve got all these hidden motives, how no one can introspect, and how everyone’s reasons are fake? None of them give any insight into piecing apart the confabulations from The Real McCoy. It’s obvious that I know why I’m doing things sometimes. And I certainly don’t know why I’m doing things plenty of the time. And I’ve observed my own ability to spin stories and provide false reasons for my behavior. What’s the difference that makes the difference? To have any hope of answering this, we’ve got to undo the bundling that comes with dual process theories and try to understand how introspection, self-awareness, and intentionality actually work.


    I’ve spent a lot of time talking about how I think the mind doesn’t work and not much time talking about how I think it does work. A crucial part of writing for me is communicating the motivations that make me care about a topic in the first place. Hopefully you have a sense of where I’m coming from, and why dual process theories of the mind feel so unsatisfying and unhelpful to me.

    Things to talk about next:

    1. Build up some vocab and clarify terms to talk more precisely about the mind.
    2. Go through a ton of examples where dual process bundling fails.
    3. If S1/S2 isn't a real split in the mind, why did I spend several years feeling like it was so right and like it explained so much? What other features of the mind was I implicitly tracking when I was using S1/S2?
    • 4 replies
    1. crispy
        2021-03-03 07:43:00.665Z

        This is some really great stuff, and I'll have to write a meatier response at some point, especially as I have a lot of my own takes about 1, 2, and 3.

        In the meantime, I wanted to mention a few things:

        1. I think one of the fundamental reasons we split things up into two is that we're viscerally aware of certain thought processes and seemingly only aware of others after they happen. This is a bifurcation, and we wish to describe the properties of both sides, though they may not actually be discrete except in our perception.

        2. I think a lot of the problem here has to do with "grounding". I always make this argument to people who tell me that as long as we base our thinking on axioms, we can be sure our thinking holds "logically", to which I argue: "Okay, but how do you know when one thing implies another?" Often this comes down to the doubtfully cashable check of "it all comes down to ZFC axioms if you look really hard" (a statement I doubt even in mathematics and shiver at the thought of applying to anything not purely synthetic), but again I ask: Why ZFC? There are plenty of good reasons to accept ZFC or some such because of the math we already have that we think is "right" or at least desirable, but ultimately your axioms come from somewhere. When you go past purely synthetic statements and start talking about the robustness of applied theories, this becomes much more apparent: why do Machine Learning people make certain assumptions? Because they think a reviewer won't poke them for it, and the people who read the paper for research will be more interested in the resulting statements. I call this "emotional inference": we choose our axioms because we feel we can get away with them.

        3. I don't know what you would call the act by which you created this post (writing? psychologizing? metapsychologizing?), but I see this post as basically the ultimate sneaky argument for the need for more philosophy in the way we think about Science, which is perhaps my biggest crusade. The mythos of Science as the self-correcting island of purity and wonder is painful and already beginning to crumble in the eyes of many intellectuals, but seems to be the engine for Faith in Institutions™ at a public scale. Yet, I think the professionalization of Science has resulted in a much stronger stubbornness about words that are already written down, partially as a result of more politicking due to more people, and partially as a result of the fact that specialization and fast progress on certain metrics have disincentivized rumination about the meaning of results. Fast litmus tests for immediate usability in the local space of ideas are encouraged, causing connections to any shared foundations to become untethered. I think we need a real philosophy of science, and I think this is the beginning of it.

        Really good poast, looking forward to the follow-up.

        1. hazard
            2021-03-03 15:24:49.379Z

            On the ZFC thing... Woah. You know people who think that all thinking, in general, not just for math, is done by / should be done by logical deduction grounded in axioms, specifically in ZFC? I'd love for you to paint a more detailed picture of this person, because I'm tempted to ignore it because of how crazy it sounds. I feel like if I tried to reference this to someone else, I'd start to doubt if it was real.

            On 3, totally onboard! I want to write more about some of the ways I see certain theories not actually pulling the weight they say they are. Like, VNM utility theory is a great example of garbage in, garbage out, but people pretend there is no input, it's just a magical source of truth.

            1. crispy
                2021-03-03 19:47:09.852Z

                Re: the ZFC strawman/steelman

                I don't think that anyone would emit that belief unprobed, but the conversation goes like this:

                Crispy: <makes some objection to the way probability/statistics/specific->general reasoning is being used>
                I-Fucking-Love-Science (IFLS): ...but fundamentally that's all just statistics. We've proven the theorems we need to prove to use the tools this way.
                Crispy: Okay, but don't you think that our framing of the problem is ultimately what gives it applicability to the real world? Like, if we don't taxonomize two different phenomena that might be causing each other over time, but just take a high-level view of cause-and-effect that doesn't allow for such temporal processes, we might end up misunderstanding the entire system? These theorems don't provide for these possibilities; they expect to be applied in a place where certain assumptions hold that we can never really verify.
                IFLS: Look, all we have access to in reality is correlations, so we're never going to really "know" anything, but if all we have access to is correlations we can just use the very general tools we've proven correct on these correlations till everything makes sense. Theorems boil down to ZFC, and surely you don't think ZFC is "wrong" in any meaningful sense, right? So then we should apply these theorems as broadly as possible and see what they give us. We can never really confirm the boundary conditions of the event spaces we encode in the world, all the more reason not to try and just see where our synthetic and provably-correct tools lead us.
                Crispy: But what if such tools provide higher certainty because they more readily apply to certain cases, cases which are actually deceptive about the underlying causal systems? For instance, sometimes it's bright outside but still raining; with coarse enough data you might conclude that rain causes clouds.
                IFLS: Those problems exist anyway, and we have no real acquaintance with underlying causal systems, or really proof that they exist at all. I believe they exist, but that the best way of finding them is just to create a synthetic model from foundations and see what it gets us when we apply it.
                Crispy: hmm...

                I think the general argument, that our intuition isn't grounded in anything we can really prove, is true. However, I believe in trusting our basic intuitions about the basic structure of the world, because I assume (1) objective reality exists to at least a good approximation and (2) our intuitive world models have been honed by evolution to at least be instrumentally useful in situations we believe people have faced before, e.g. physical manipulation. These are actual assumptions, and can't be "proven" from any basic observations, but I'm willing to take them on. The trap that IFLS falls into, in my opinion, is being able to convince themselves of things that basic common sense shows clearly aren't true, but that a misapplication of a statistical tool "shows", because the initial conditions about, say, independence of variables are hard or impossible to prove in real life.

                1. hazard
                    2021-03-05 01:28:55.815Z

                    Thanks for the expansion!

                    I think this is a good example of equivocating on "proof". Like, the reason we're supposed to care about proof is that it's what lets you know that you're right. And supposedly you care about whether you're right because you or someone else is doing something in the world, and need to behave differently if you're wrong.

                    And the more you make your notion of proof rigorous, the more you have to move the domain away from anything you care about, and into the "abstract". So to keep having Rigorous Proof matter, you have to put more and more attention and care into demonstrating that there's a sensible mapping from your abstract theory to the world. And you have to do that in a less rigorous/abstract way! And I guess IFLS and their ilk have [reasons] that make them not care about this task.