
Probability as a Mental Model is Bullshit

By crispy
    2021-02-11 07:46:30.865Z

    This should eventually be a longer, more eloquent post so I can properly hide behind eloquence when making Bold Statements, but @suspendedreason is always telling me that I need to just write.

    Epistemic Status: distilled and filtered to the purest of essences
    ———

    The idea is simple: probabilities don't make any sense as tools for explaining the brain; they make sense as tools for describing what is literally possible. Describing what is possible and what is likely in the physical universe is awesome, I'm all for it. It's cool that we can understand particle physics, which may or may not bottom out at pure (weighted) randomness, or that we can get good estimates for how late trains are going to be before they even head out, just from our estimation of the situation.

    However, people do not think in terms of events this way. People tend to consider a few narratively likely events and then make distinctions between them. Here "narratively likely" means that these events may or may not be realistically likely given the current information, but the agent feels they are possibilities that require explaining, either for internal coherency or for social cohesion. Consider a woman who is about to confess her love for someone: there seem to be two options to weigh, does her crush like her back or not? Of course, that's not true. There's a lot of spectrum between "yes" and "no", and there are dimensions that are entirely orthogonal: it turns out Mr. Crush has such low self-esteem that he simply doesn't take the confession seriously, the moment is gone, and Ms. Lover doesn't push further.

    In order to make probabilities work for explaining mental models, we've made mental models very funny. We've made them highly definitive where the brain is fuzzy. We've made them "instantaneous" instead of giving them the natural temporal thickness with which humans perceive events: the 2020 election went on and on, did it not?

    At its heart, probability is an attempt to make different tradeoffs speak a common language, and I respect that. Probability is often used to describe "likelihood", but it is really just a calculus of the possible and of how the possible interacts with observation; it has nothing to do with the dimension of "time" that likelihood or causation tend to entail. The problem, though, is that the way the brain models tradeoffs is erratic and contextual, and this makes describing things in probability messy.
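
    To be concrete about the "interacts with observation" part: the whole calculus reduces to Bayes' rule (my gloss; nothing beyond the standard formula),

    $$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

    and notice that no time index appears anywhere in it: $E$ is an observed event, $H$ a possibility, and the rule is pure bookkeeping over what's possible.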

    Will I be able to finish writing this post before midnight? It's a clear event that should be well-modeled by probability, but my brain doesn't view it that way. My brain views things in terms of stakes. What am I going to lose if I don't finish, and therefore how much energy is my brain willing to put into making me manic enough to feebly push out my thoughts into words? Stakes always make sense, because resource management is something your brain does by definition. This, in fact, is the heart of Friston's "Free Energy Minimization" thesis: something that can minimize free energy is self-organizing, and something that persists in a complex steady state over time must be minimizing free energy.

    In order for your brain to maintain its state, it needs to manage resources so that it doesn't fall apart; that's why it eventually forces you to go to sleep even if you try your hardest not to Satisfy the Storks. To play this game the brain does not need to know about probability, but it does need to make tradeoffs. Hardcore Bayesians, like any true theory addicts, will say that the probabilities are there implicitly, and how can I argue with that? Any system of tradeoffs can be encoded with the basic machinery of probability, but you want the model that's closest to the representation you're given. The representation we're given is behavior, which is hard to extract probabilities from. Instead, when we want probabilities, we make prediction markets, let people use stakes (which is how they actually operate), and then intuit probabilities from those stakes.

    Indeed, prediction markets are the perfect example, because they can always be converted to probability: it is assumed that the decision of "who won" will be discrete, atomic, and objective. But what about all the other things we bet on that we couldn't possibly make a market for? Are there probabilities there? Who cares? It's time we made a calculus of stakes, or we'll forever be renormalizing probabilities as mental models inevitably shift with saliency and value inflation.
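
    To show how mechanical the stakes-to-probability conversion is, here's a minimal sketch (in Python, with made-up numbers and a simplified parimutuel-style binary market) of probabilities being intuited from stakes:

    ```python
    # Minimal sketch: reading "probabilities" off a binary prediction market.
    # The stakes below are made up for illustration.

    def implied_probabilities(stakes_yes: float, stakes_no: float) -> tuple[float, float]:
        """Treat the share of total money staked on each side as the
        market's implied probability of that outcome."""
        total = stakes_yes + stakes_no
        return stakes_yes / total, stakes_no / total

    # Bettors never report probabilities; they put up stakes.
    p_yes, p_no = implied_probabilities(stakes_yes=7_000, stakes_no=3_000)
    print(p_yes, p_no)  # 0.7 0.3 -- the probabilities are derived; the stakes are primary
    ```

    The conversion only works because "who won" is discrete and objective; the moment outcomes blur, the normalization has nothing to grab onto.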

    • 8 replies
    1. hazard
        2021-02-15 23:37:22.483Z

        I wonder if it would help if we all got better versed in being able to precisely talk about different abstraction levels in "the mind". Like, a lot of predictive processing stuff seems like v compelling evidence for there being at least one level that can be precisely described in probabilistic terms. This level is very far from the mind that exists at the experiential level. And I agree that lots of cognition does not at all fit into a probabilistic framework. I'd actually love to do some research on what exists for this. I remember one book that broke things down into the "computational level", the "algorithmic level", and the "implementation level" (this was all inside the framework of computational models of the mind). I'd love something that was a better version of that.

        1. In reply to crispy:
          crispy
            2021-02-16 07:04:34.792Z

            agreed, and I think one of the main "magicians in the brain" is the thing that creates events for which probability can be estimated—that is the essence to understanding a domain, in my view

            1. In reply to crispy:
              beiser
                2021-02-16 23:41:14.586Z

                I've been struggling with this—I think there's a kind of straw-probabilism that's getting knocked down here, and I'm not really convinced anyone professes to believe it. In fact, I'm not even sure the description is consistent. What exactly is bullshit? Tell me! Tell me!

                I'm also lukewarm on the "narratively likely events" hypothesis—I think it's more often that people tie themselves to a few "likely realities", each of which implies a different possibility-space for successive events. I can say that when I'm navigating a crush, I usually think along two axes:

                1. Between "feels strongly" and "isn't interested". This is a continuous value, computed in real time by anyone navigating a crush, and figuring out when to push towards vs step back from "lukewarm but somewhat interested" is a challenge that people successfully navigate every day.
                2. Between "worthwhile match" and "not actually compelling". For many people, a key thing that would push someone into the latter category is having weird hangups or incapacities around expressing the romantic interests that they do have—some kind of "Executile Dysfunction".

                In the example given, someone who seems reticent when given a perfectly reasonable opening is going to pop up a range of possibilities—but the operative question is: are any of them close enough to "worthwhile match who feels strongly" to be worth pursuing?

                It's true that the conscious mind rarely deals in probabilism, but the moment where parts of that matrix start to light up and appear as discrete possibilities—maybe he's not interested, maybe he is but he's got issues of some sort, etc—is surely happening through a modeling of underlying probabilities that's somewhat opaque to the conscious mind. But they're probabilities in terms of a learned matrix between different worlds—and it's those worlds that get traded off between, not the outcomes.

                1. In reply to crispy:
                  suspendedreason
                    2021-02-17 00:34:46.471Z

                    The part of the post that jumps out and intrigues me most is

                    At its heart, probability is an attempt to make different tradeoffs speak a common language, and I respect that.

                    But I'm also having trouble squaring this, which I agree with but wanna hear more about, with some of the ideas that open the post about probabilistic mental models. I guess naively I'm inclined to agree with Beiser: I feel like people agree probabilities are a shorthand? I do understand that there are Bayesian brain advocates, and I'm one (at least, an advocate for the softer brain-as-inference-machine line; how approximately Bayesian or not we are is a different story). But the level of conscious probability calculation, and the level of deep free-energy-minimizing models, seem very far removed from each other—opposite extremes of the consciousness spectrum, arguably.

                    Phenomenologically I agree that stakes are more prominent than probabilities. But I'm also worried about talking past each other b/c we're anchored to different cognitive levels of organization.

                    It seems like you're almost suggesting that a Bayesian brain hypothesis is in opposition to the free energy principle, rather than a complementary or alternate factoring.

                    1. In reply to crispy:
                      crispy
                        2021-02-18 23:56:19.665Z

                        I don't think this is straw-probabilism. Consider the Rational Speech Acts model (e.g. https://web.stanford.edu/~ngoodman/papers/FrankGoodman-Science2012.pdf , https://wmonroeiv.github.io/pubs/yuan2018understanding.pdf), which I actually like and view as a step forward. Its basic tenet is that you say things that you think will make the other person understand what you said, under some model of what that person is thinking. I definitely agree with this, but the way in which it's defined (in order to be experimentally viable) is through experiments that model the selection of various objects probabilistically. I think this kind of thinking is very common, and it's often accompanied by a little "but of course internal representations are more complex than that", which never really makes it into the reader's mind, because this is the kind of way in which we describe everything.
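
                        For concreteness, here's roughly the shape of the machinery I mean—a minimal sketch of an RSA-style reference game. The lexicon, prior, and rationality parameter below are toy values of mine, not the paper's actual stimuli:

                        ```python
                        import numpy as np

                        # Minimal sketch of a Rational Speech Acts reference game, in the
                        # spirit of Frank & Goodman (2012). All values here are toy choices.

                        objects = ["blue square", "blue circle", "green square"]
                        utterances = ["blue", "green", "square", "circle"]

                        # Literal semantics: lexicon[u, o] = 1 if utterance u is true of object o.
                        lexicon = np.array([
                            [1, 1, 0],  # "blue"
                            [0, 0, 1],  # "green"
                            [1, 0, 1],  # "square"
                            [0, 1, 0],  # "circle"
                        ], dtype=float)

                        prior = np.ones(len(objects)) / len(objects)  # uniform prior over referents
                        alpha = 1.0                                   # speaker rationality

                        def normalize(m, axis):
                            return m / m.sum(axis=axis, keepdims=True)

                        L0 = normalize(lexicon * prior, axis=1)  # literal listener: P(o | u)
                        S1 = normalize(L0 ** alpha, axis=0)      # pragmatic speaker: P(u | o)
                        L1 = normalize(S1 * prior, axis=1)       # pragmatic listener: P(o | u)

                        print(L1[utterances.index("blue")])  # [0.6 0.4 0. ] -- "blue" pragmatically favors the blue square
                        ```

                        The entire pipeline is "selection of various objects, modeled probabilistically"—useful experimentally, but easy to mistake for a claim about internal representation.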

                        I really like your description of navigating a crush, and I 100% agree with the way you flesh out the decision process—most importantly, that you describe how people group things by how paths will converge into similar outcome spaces that need to be dealt with similarly. This is probably just a matter of evolving strategies for dealing with cognitive load, and it makes perfect sense. That said, I still think people tend to think about narrative events, for a few reasons, and I'll mention two:

                        1. I think there are things that, whether likely or not, we are "expected to have a plan for", e.g. someone having a heart attack at work. Planning for the heart-attack scenario is actually enforced by law, but I think there are plenty of other things, not directly enforced, that we feel the need to plan for in order to feel and appear responsible. I see this a lot in research, where people will feel the need to reject increasingly ridiculous hypotheses because those hypotheses would "make sense" in the way people like to talk about things, even when people agree there's little evidence that the framing is correct.
                        2. People are seduced by heavy upside. People plan to win the lottery, despite it being glaringly unlikely, because it's imaginable.

                        Re: unconscious probabilism, I don't think this is falsifiable, and I can neither really agree nor disagree. As I said in the OP, any system of tradeoffs can be modeled using the basic axioms of probability, which is why it's such a strong framing. My only real disagreement with you, then, is that I believe that in whatever access we have to that modeling, probability and outcome are melded together inextricably, and using probability to disentangle things as a latent variable isn't very useful. The primary reason I believe "probability as a latent variable" isn't useful is that I think we collapse things into "how would I end up dealing with this outcome anyway", and my instinct tells me that when our brain "marginalizes" over possible worlds, it tends to ignore ones that don't require much future choice. This would imply something deep about the way the brain deals with dilemmas, because we would tend to use heuristics that focus cognitive energy on dilemmas that will require more nested processing.

                        Still, I would like to say I vastly agree with all of the observations in your reply, but really I'm just betting that a calculus of stakes, desired resources, and prerequisites for achieving goals would give us a much more natural way of thinking about behavior, especially outside the laboratory where choices aren't so discrete. The only evidence I can really leverage there is fleshing it out and showing you, so I shall endeavor to do just that in future posts.

                        1. In reply to crispy:
                          hazard
                            2021-02-19 14:41:11.513Z (edited 2021-02-19 18:11:39.612Z)

                            @crispy

                            My only real disagreement with you, then, is that I believe that in whatever access we have to that modeling, probability and outcome are melded together inextricably, and using probability to disentangle things as a latent variable isn't very useful. The primary reason I believe "probability as a latent variable" isn't useful is that I think we collapse things into "how would I end up dealing with this outcome anyway", and my instinct tells me that when our brain "marginalizes" over possible worlds, it tends to ignore ones that don't require much future choice.

                            This really jumped out at me. It reminds me of how you can apply a transform to a VNM agent's probabilities and utility function such that both get wildly changed, but all the actions stay the same. Put another way, you're gonna have a real hard time trying to infer probabilities unless you know everything about how someone values things. This also makes me think of how belief and action are sorta the same thing, which Friston takes to the extreme with "action is literally believing your body into movement."
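
                            A quick numerical sketch of that invariance (toy numbers, just to make the transform concrete):

                            ```python
                            import numpy as np

                            # Toy demo: two wildly different (belief, utility) pairs that yield
                            # identical expected utilities for every action, so choices alone
                            # can't distinguish them. All numbers are made up.

                            rng = np.random.default_rng(0)
                            n_states, n_actions = 4, 3

                            p = np.array([0.1, 0.2, 0.3, 0.4])          # original beliefs over states
                            U = rng.normal(size=(n_states, n_actions))  # original utilities U(s, a)

                            q = np.array([0.4, 0.3, 0.2, 0.1])          # very different beliefs
                            U2 = (p / q)[:, None] * U                   # compensating utility rescale

                            # Expected utility of each action comes out identical under both accounts.
                            print(np.allclose(p @ U, q @ U2))  # True: every action ranked the same
                            ```

                            Any strictly positive q works, so behavior alone pins down only the product of belief and utility, never either factor on its own.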

                            I think I'm more onboard with you than I was last week.

                            1. In reply to crispy:
                              beiser
                                2021-02-22 22:44:31.208Z

                                I think you’ve totally misread Goodman. You say there’s an “of course, internal representations are more complex than that” that’s elided, or doesn’t make it into the reader’s mind. Either that’s wrong, or the readers are wrong. The caveats provided are structurally important.

                                Goodman’s paper shows that their model has a better fit to the empirical data, which suggests that there is an information-theoretic process somewhere involved—i.e., that some computation which performs a set of broadly isomorphic operations is involved. It is defined using probabilistic methods because those are the ones that happen to match how this phenomenon functions. But they’re careful not to claim that it accurately represents the state inside the speaker’s head! The most they say is that it “suggests that using information-theoretic tools to predict pragmatic reasoning may lead to more effective formal models of communication.” It is, as they say, a formal model.

                                I don’t mean to sound pedantic, but I remain unable to find a single person who has professed to believe what you’ve labelled bullshit.

                                1. In reply to crispy:
                                  crispy
                                    2021-02-23 03:06:16.253Z

                                    hmm, I think I'm not explaining myself very well, but I think arguing it here at this point is a bit useless. instead, I think I need to flesh out an alternative enough for it to be clear. but I did want to reply and say I take this critique to heart, and am now marinating on the problem of what to model in my "resource tradeoffs" framework that would make a nice explanatory example. let's return to this when I have a more substantial object to compare alternative methods to.