
Dual Process Theories Are Broke: Part 1.1

By hazard
    2021-02-25 14:32:22.912Z

    (written for a diff target audience than this forum, but with content that's relevant. Mostly me doing a sort of ritualized-public-renunciation of my old ways)

    The Hidden and Biased Brain

    As a teenager I went on a multi-year pop-psychology reading binge. My best estimate puts it somewhere between 12 and 30 books that all seemed to be talking about the same thing. To describe by pointing, here’s a sample of titles and their taglines:

    • Compelling People: The hidden qualities that make us influential
    • Future Babble: Why pundits are hedgehogs and foxes know best
    • Blink: The power of thinking without thinking
    • Predictably Irrational: The hidden forces that shape our decisions
    • What Every Body Is Saying: An Ex-FBI Agent’s guide to speed-reading people
    • Freakonomics: A rogue economist explores the hidden side of everything
    • Fooled By Randomness: The hidden role of chance in life
    • Subliminal: How your unconscious mind rules your behavior
    • Coercion: Why we listen to what “they” say

    And perhaps a bit on the nose:

    • You Are Not So Smart: Why you have too many friends on Facebook, why your memory is mostly fiction, and 46 other ways you’re deluding yourself.

    I’ve got mixed feelings about this cluster of books. On the one hand, it fed and nourished my early curiosity about the mind. On the other hand… a huge amount of it is wrong. If not wrong as in “the studies simply do not replicate,” then wrong in the sense that some interesting studies are used to prop up generalizations about human behavior that are far grander than the evidence warrants. I want to explore a very specific and very important thing that this cluster gets wrong, and to do that, we need to explore more exactly what this cluster is.

    The unifying pillars of this cluster are The Hidden Brain and The Biased Brain. They form a one-two punch of a) people don’t actually know what’s going on in their minds and b) what’s going on is largely biased ad-hoc heuristics. The Elephant in the Brain by Robin Hanson and Kevin Simler is the most refined version of the hidden brain thesis. Most books in the cluster fail to address why we have “hidden parts of our minds,” or why self-deception would even be a thing in the first place; The Elephant in the Brain largely pulls from Trivers on self-deception, “deceiving ourselves so we can better deceive others.”

    Thinking, Fast and Slow by Daniel Kahneman is the most refined version of the biased brain thesis. That has something to do with the fact that Kahneman co-founded the entire academic field that gave rise to all of these pop-psychology books in the first place. Back in the ’70s, he and Amos Tversky kicked off the Heuristics and Biases research program. Since I came of age in a world that already had Kahneman and Tversky deep in the water supply, it was useful to get a picture of what the academic scene was like before them. From Kahneman’s book:

    “Social scientists in the 1970s broadly accepted two ideas about human nature. First, people are generally rational, and their thinking is normally sound. Second, emotions such as fear, affection, and hatred explain most of the occasions on which people depart from rationality. Our article challenged both assumptions without discussing them directly. We documented systematic errors in the thinking of normal people, and we traced these errors to the design of the machinery of cognition rather than to the corruption of thought by emotion.”

    Apparently, there really was a time when academics treated everyone as “rational agents”, where “rational” means an ad-hoc mashup of a mathematical abstraction (the VNM axioms of rationality) and a poorly examined set of assumptions about what people care about. Well, I mean, people still do that. The real difference now is that if you disagree with them you have a prestigious camp of academics you can point to as on your side, instead of being a lone voice in the wilderness.

    (As an aside, I’m not going to put much effort into appreciating the intellectual progress that the Biased Brain and Hidden Brain thesis represent before going into the critique. This is mostly because I grew up with them and the world that came before them is only partially real to me. In my emotional landscape, they are the establishment, so that’s what I’ve gotta speak to.)

    So what’s my beef? Where do things go wrong?

    As I previously mentioned, there's a shit ton of psychology that simply doesn't replicate. Kahneman makes heavy reference to priming research and ego depletion research, neither of which has anything going for it.

    You've also got the problem of defending your usage of the word “bias”. A bias is a deviation from something, supposedly something that you intended and wanted. In order to talk about bias, you need a reference point for what unbiased would mean. I’m not the sort of relativist that thinks this is impossible. It's not. There are things that are good, and there are things that are true. Sometimes it’s easy to find them, sometimes it’s hard. What I do think is that no one I’ve read on the topic of biases has put in much, if any, effort into outlining a specific stance and defending it. It’s often just assumed that we’re all on the same page about what counts as bias.

    If you want an extensive academic critique of Kahneman-style heuristics and biases research, check out Gigerenzer.

    To understand my beef, we need to look more closely at the ways that some of the biased brain books split the mind into parts.

    Dual Process Theories

    Kahneman's book introduces a framework of two systems to understand human behavior. Everyone more or less takes away that System 1 is fast, biased, and unconscious, while System 2 is slow, rational, and conscious. From the man himself:

    “System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.”

    Splitting the mind into two such parts is by no means unique to Kahneman. Kahneman’s is just a specific instantiation of a Dual Process theory, which has been getting more and more popular over the past half-century. As Kaj Sotala writes:

    “The terms System 1 and System 2 were originally coined by the psychologist Keith Stanovich and then popularized by Daniel Kahneman in his book Thinking, Fast and Slow. Stanovich noted that a number of fields within psychology had been developing various kinds of theories distinguishing between fast/intuitive on the one hand and slow/deliberative thinking on the other. Often these fields were not aware of each other. The S1/S2 model was offered as a general version of these specific theories, highlighting features of the two modes of thought that tended to appear in all the theories.”

    What these models have in common is that they carve the mind into two processes, one that is intentional, controllable, conscious, and effortful, and another that is unintentional, uncontrollable, unconscious, and efficient. This “bundling” together of properties is the defining feature of a dual process theory, and it’s where everything goes horribly wrong.

    The argument is pretty straightforward. Suppose you have four properties that a mental process could have. To simplify, we’ll imagine each property as binary: either a process is intentional, or it’s not (this isn’t true, and the gradations of intentionality, consciousness, and effort are incredibly interesting and could be a whole other post). There are 16 different combinations these properties could come in. If you don’t see where the 16 came from (each of the four properties can independently be present or absent, so 2^4 = 16), the enumeration below should make it clear:
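
    For the programmers in the audience, here’s a quick sketch that just enumerates them (a toy illustration of the counting, nothing more; the property names are the four from above):

        # Enumerate every yes/no assignment of the four properties a mental
        # process could have. This is just the counting argument made explicit.
        from itertools import product

        properties = ["intentional", "controllable", "conscious", "effortful"]

        for combo in product([True, False], repeat=len(properties)):
            print(", ".join(name if present else f"not {name}"
                            for name, present in zip(properties, combo)))

        # Prints 2**4 = 16 lines. A dual process theory keeps only the first
        # ("System 2": all four present) and the last ("System 1": all four
        # absent) and treats the other 14 combinations as negligible.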

    Dual Process theories start with the idea that the only possible combinations are the very first and very last ones: that you aren’t going to have any, say, intentional, uncontrollable, unconscious, effortful thought. Or, if not impossible, the idea of a dual process theory is that these two combinations are the most likely to occur, and all the others are rare or insignificant enough to not be worth mentioning. If this is not the case, if these properties don’t all come bundled together, then the dual process split fundamentally doesn’t make sense as two categories. The Mythical Number Two, a paper that inspired this post, expands on this point:

    “Consider this analogy: we say that there are two types of cars, convertibles and hard-tops. No debate there. But now we say: there are two types of cars, automatic and manual transmission. Yes, those are certainly two different types of cars. And still further: there are two types of cars, gasoline and electric motors. Or: foreign and domestic. The point is that all of these are different types of cars. But we all know that there are not just two types of cars overall: convertibles that all have manual transmission, gasoline engines, and are manufactured overseas; and hard-tops that all have automatic transmission, electric engines, and are made in our own country. All around us we see counterexamples, automobiles that are some other combination of these basic features.”

    Unless you see all of the relevant dimensions align together into the clusters you want, there’s not much of a basis for saying “these are the two types of thinking, System 1 and System 2”. You’ve just “arbitrarily” picked two subsets and made them your entire world.

    To seal the deal, we need a bunch of concrete counterexamples to the typical dual process bundling. I’m going to save that for a follow-up post, though I’m sure you could come up with some yourself if you brainstormed for 5 minutes. Additionally, you could read the paper I just quoted.


    (breaking apart the post here cuz talkyard censors length)
    (second half is here)

    • 8 replies
    1. suspendedreason
        2021-02-25 21:38:03.380Z

        (As an aside, I’m not going to put much effort into appreciating the intellectual progress that the Biased Brain and Hidden Brain thesis represent before going into the critique. This is mostly because I grew up with them and the world that came before them is only partially real to me. In my emotional landscape, they are the establishment, so that’s what I’ve gotta speak to.)

        I feel like we need shorthand or boilerplate for this. It's part of a larger idea that people's utterances are always in response to their immediate context.

        Sometimes you'll see people make a claim about reverse racism, or affirmative action, and on Twitter people'll pop in going, "What about the previous 400 years of genocide?" In other words, changing the frame from the immediate present or recent past to a larger historical view. Some of these frame expansions are perfectly legitimate! I have no interest in litigating the object-level issues here (racism, reverse racism, etc). This is a culture-war-free zone! But it's also an example of how people have to fight against some natural human communication mode, which is the signal and correctives mode.

        Some possible frames to chew on: Nerst's signal/correctives frame, the general idea of "dialogic grounding," and the metaphor of a stack, where all that's available is the top of the stack, and all you can do is access arr[length - 1] and use it for the next element you push. It feels like there've got to be instances, adaptable as metaphor, of systems where this is the only way you can interact with the data: you can see the most recent version, and you can edit it, but the rest is inaccessible. ("Version history"?) Just spitballing around this idea tho, people should chime in.
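
        Something like this toy sketch, maybe (entirely made up, none of the names matter): the only read is the top of the history, and the only write is a new entry built from that top:

            # Toy sketch: a "version history" where you can only see the latest
            # entry, and every new entry is built from the one currently on top.
            class TopOnlyHistory:
                def __init__(self, initial):
                    self._items = [initial]

                def top(self):
                    # the only read available: arr[length - 1]
                    return self._items[-1]

                def push_revision(self, revise):
                    # the only write available: transform the visible top
                    self._items.append(revise(self.top()))

            history = TopOnlyHistory("original claim")
            history.push_revision(lambda prev: prev + ", plus a corrective")
            print(history.top())  # "original claim, plus a corrective"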

        1. crispy
            2021-03-03 05:01:23.740Z

            Strong agree! I totally like this stack idea, but I think there's use in a much simpler, more basic version of this which is basically "I'm talking about what I'm talking about; if you want to look at what caused it or what it created, that's fine but really doesn't change this analysis insofar as I'm describing the underlying object." or "I'm talking about what I'm talking about." for short. The norm in this forum should be (is?) that basically nothing is a value judgement so much as an attempt at description unless explicitly made to be a clear value judgement.

            1. In reply to suspendedreason:
              hazard
                2021-03-03 15:37:20.869Z

                The stack metaphor doesn't connect for me, mostly because I'm already anchored on a convo stack being "topics we brought up and want to get back to eventually". Whereas, in the context of my post "admiring the intellectual progress of bias research" wasn't a thing I ever wanted to do, but my simulated critics brought it up so I dealt with it.

                Though I do support coming up with a handle for something like this. I think for convos there could be a process that's invoked, and in writing there can be a phrase that's invoked. Like, a while ago you and @snav were going at something, and eventually it came out that you two just wanted completely different things. Snav was all on "this is an example of no one having values" and you were like "how does prediction/intelligence work?" I feel like if at that moment one of you used the "I'm talking about what I'm talking about" phrase, at best it just ends the conversation and we don't waste time talking past each other. But I think having a process that gets triggered would be the thing needed to draw the convo forward usefully.

                1. crispy
                    2021-03-03 20:28:22.143Z

                    What do we want out of said process? Triage of different goals or...?

                    1. hazard
                        2021-03-06 00:42:28.196Z

                        One key aspect: the discovery that we care about different things often comes after a good bit of lively/heated argument where both people felt like the other person wasn't getting it and possibly not even listening. At that point, morale is low, and doubt about whether it's even useful to keep collaborating readily comes to mind. It can also feel like the two carings are at odds, and that one must come at the cost of the other.

                        I'm thinking of some process to engage in at this point that helps reaffirm collaboration and makes both parties feel like talking to each other is a good and useful thing. Some concrete desirable end states:

                        • Both parties feel that there is space to care about multiple things.
                        • Both parties feel that the other has a sense of what they care about and why they care about it.
                        • Both parties have expressed the common ground in their carings.

                        Like, we'll defs have differences in where we want to put the bulk of our effort. But I think this sort of common ground finding and mutual understanding can go a huge way to keep collaboration fruitful and alive.

                  • In reply to hazard:
                    suspendedreason
                      2021-02-25 21:41:04.962Z

                      Intentionality seems like a really problematic concept, generally. I've seen just how tricky it gets with signal vs cue stuff, but as you point out, depending on how you define it (and define your "self"), unconscious behavior can be intentional or unintentional. Hell, there's behavior that's arguably intentional and controllable, like literal knee-jerk reactions. Anyway, I have yet to read 1.2, but I'm all for this project of breaking down the complex dimensions & distinctions that get ignored/bundled up in the S1/S2 frame.

                      1. hazard
                          2021-02-25 21:47:24.678Z

                          I'm v excited to explore intentionality more. Currently 2.0 doesn't plan to get into it, but I do explore some related things.

                          I'm reminded of the book Dynamics in Action, which I was reading a bit ago and which seemed to have some cool ideas on intention.

                          1. In reply to suspendedreason:
                            suspendedreason
                              2021-03-03 23:29:11.366Z

                              @hazard Yeah, intentionality's super fraught. I come from a literary theory background and the discourse there over intentionality is a complete disaster; I assume it's similar in most fields