
The actual assumptions underlying certain "reductionism" or "holism"...

By hazard
    2021-03-12 03:00:08.072Z

    I have sometimes gotten annoyed at people arguing over whether "the whole is greater than the sum of its parts."

    I think there genuinely is a difference between thinking in a "reductionist mindset" and a "holist mindset", but I haven't seen either one actually get a good description.

    First, I hate the phrase "the whole is greater than the sum of its parts" because it works on a linguistic trick.

    What does "sum" mean? Or better yet, what operation are you calling a "sum"? If the operation that you are calling a "sum" doesn't reproduce the whole, why did you call it "sum" in the first place?

    For the good of humanity, here is a list of specific properties that a system may or may not have, which I think people are often implicitly trying to gesture to.

    Approximation-ism

    Sure, we don't know quantities exactly, and we can't solve for an exact solution, but we can get arbitrarily close approximations, or if not arbitrarily close, then close enough for all of our actual use cases.
    Examples:

    • Ignore air resistance
    • Many NP-hard problems can't be solved exactly in poly time, but you can get good approximations in poly time (see the sketch after this list)
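
    A concrete instance of that second bullet (my own sketch, not from the post): minimum vertex cover is NP-hard, but taking both endpoints of a maximal matching runs in linear time and is guaranteed to land within 2x of optimal.

    ```python
    # Greedy 2-approximation for minimum vertex cover: scan the edges, and
    # whenever an edge is uncovered, take BOTH endpoints. The chosen edges
    # form a maximal matching, so the cover is at most 2x the optimum.

    def vertex_cover_2approx(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # {1, 2, 3, 4}; optimal is {2, 3}
    ```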

    Arch Nemesis: Chaotic Systems
    Chaos is "when the present determines the future, but the approximate present does not approximately determine the future."
    Double pendulums are chaotic. This gif starts three pendulums in almost the same state. They start correlated, but quickly become independent. The point: your epsilon of measurement error might actually matter a ton.
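
    A minimal sketch of that sensitivity, using the logistic map (a one-line chaotic system, standing in here for the pendulum):

    ```python
    # Two trajectories of the chaotic logistic map x -> 4x(1 - x), started an
    # epsilon apart. The gap roughly doubles each step until it saturates.

    def logistic(x: float) -> float:
        return 4.0 * x * (1.0 - x)

    a, b = 0.2, 0.2 + 1e-10              # "the approximate present"
    for step in range(51):
        if step % 10 == 0:
            print(f"step {step:2d}: |a - b| = {abs(a - b):.1e}")
        a, b = logistic(a), logistic(b)
    ```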

    Locality

    The only thing that matters for predicting or explaining what is happening at X is what is near X. As things get farther from X, their effects rapidly become negligible.
    Examples:

    • Gravity falls off with 1/r^2.

    Arch Nemesis: Spooky Action at a Distance

    The stuff that matters may be nowhere near X.

    • People's moods (cuz internet)
    • Quantum entanglement (no idea what's actually up with this)
    • Using global variables in code (see the sketch after this list)
    • Shared memory between processes in code
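
    To make the global-variable bullet concrete, a minimal sketch (the names are invented for illustration): the function reads fine locally, but what it does is decided somewhere far away.

    ```python
    # Non-locality via global state: to predict what describe() returns, you
    # must track EVERY piece of code that might have touched VERBOSITY first.

    VERBOSITY = "terse"          # global, reachable from anywhere

    def describe(x: int) -> str:
        if VERBOSITY == "verbose":
            return f"The value is {x}, an integer."
        return str(x)

    def distant_setup():         # lives "far from X", changes X's behavior
        global VERBOSITY
        VERBOSITY = "verbose"

    print(describe(7))           # "7"
    distant_setup()
    print(describe(7))           # "The value is 7, an integer."
    ```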

    Monotonicity

    As you add more terms / get more information / make more observations, you strictly get closer to the correct answer. Each new set of candidate answers is a strict subset of the previous one. I might also mix this in with "80/20ism" or "marginal returns-ism": the first several terms do most of the work.
    Examples:

    • Taylor series approximations are monotonic: you strictly approach a perfect fit, and it's clear "where things are heading"
    • 20 questions
    • Binary search (see the sketch after this list)
    • Statistics: keep sampling the population and you approach the "true" value
    • At the point where your approximation is centered, Taylor series also exemplify marginal returns: the first few terms get you the bulk of the accuracy, and the rest are small precision boosts.
    • Math: you only build, you never lose info (for the most part).
    • Top level chess: not that many "upsets", it becomes a game of accumulating small advantages.
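
    A small sketch of the binary search bullet: every comparison strictly shrinks the candidate set, and nothing learned later can re-expand it.

    ```python
    # Monotone narrowing: each new candidate interval is a strict subset of
    # the previous one, so you are always "heading toward" the answer.

    def binary_search(sorted_xs, target):
        lo, hi = 0, len(sorted_xs) - 1
        while lo <= hi:
            print(f"candidates: indices {lo}..{hi}")
            mid = (lo + hi) // 2
            if sorted_xs[mid] == target:
                return mid
            elif sorted_xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None

    print(binary_search([2, 3, 5, 7, 11, 13, 17, 19], 13))  # prints 0..7, then 4..7, returns 5
    ```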

    Arch Nemesis: "It's not over till the fat lady sings"
    As you get more info, what the answer looks like could radically change. An "upset victory" can always happen at the 11th hour.

    • Kuhn-like paradigm shifts (it's the paradigm that is non-monotonic, not your "total explanatory power")

    Modularity

    As long as parts meet the requirements of the minimal interface, they are interchangeable and can be swapped out for each other. The whole system is made of modules that interlock nicely at clear interfaces, allowing separation of concerns.
    Examples:

    • Literal interfaces in code
    • Dependency injection (for swapping in mocks and fakes when testing code; see the sketch after this list)
    • I can change the tires on my car.
    • Strict contract-bound business partnership
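
    A minimal sketch of the first two bullets (the names are invented; typing.Protocol stands in for the "literal interface"):

    ```python
    import time
    from typing import Protocol

    class Clock(Protocol):                # the minimal interface
        def now(self) -> float: ...

    class SystemClock:                    # the "real" part
        def now(self) -> float:
            return time.time()

    class FakeClock:                      # a fake, swapped in for tests
        def __init__(self, t: float) -> None:
            self.t = t
        def now(self) -> float:
            return self.t

    def greeting(clock: Clock) -> str:
        # depends only on the interface, not on which module fulfills it
        return "midnight!" if clock.now() == 0.0 else "hello"

    print(greeting(FakeClock(0.0)))       # deterministic test: "midnight!"
    print(greeting(SystemClock()))        # production behavior: "hello"
    ```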

    Arch Nemesis: Organic intermingled boundaries
    There aren't clean edges between parts. Things are deeply interconnected. Parts can't be worked on in isolation.

    • When code starts to depend on implementation details not specified in the API and you can't change anything without breaking people's shit.
    • The body rejecting prosthetic organs (sometimes), even though they "fulfill the role".
    • Romantic relationship with cohabitation

    Composability

    When composing systems with operation Y, property X is preserved (fav post on composition).
    Examples:

    • Proof tree (as long as the child nodes of your top-level statements have valid proofs, you're fine; see the sketch below)
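
    And a toy sketch of the definition itself (my example, not from the post): the property "is nondecreasing" is preserved by the operation "compose two functions", which is what lets you trust the whole after checking only the parts.

    ```python
    # If f and g are each nondecreasing, then g(f(x)) is nondecreasing too:
    # property X ("monotone") survives operation Y ("compose").

    def compose(g, f):
        return lambda x: g(f(x))

    f = lambda x: 2 * x + 1    # nondecreasing
    g = lambda x: x ** 3       # nondecreasing
    h = compose(g, f)

    xs = list(range(-5, 6))
    print(all(h(a) <= h(b) for a, b in zip(xs, xs[1:])))  # True
    ```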

    Arch Nemesis: Emergence

    • Just because all of your dependencies have security guarantee X doesn't mean that using these dependencies together will still guarantee X.
    • The property "getting along well together" is not preserved when composing groups of friends with the "smoosh them into one big group" operation.
    • Understanding the words of a sentence doesn't have to imply that I understand the whole sentence.

    Honorable mentions

    • Memorylessness vs Memoryfullness
    • A whole host of "nice" properties
      • Transitivity
      • Commutativity
      • Associativity

    A thing that's really interesting about several of these is the way that each is the dual of its arch nemesis. You can make any system where memory is relevant "memoryless" by encoding all of the history into the current state (see the sketch below). As such, a critique should never be "You aren't taking history into account!" but instead "you think you need thiiiiiis much historical state to predict the future, but really you need thiiis much". Likewise for modularity: if you make your interface include all the information about a part, then boom, things are always modular... except you've lost the actual utility of having a small, easy-to-reason-about interface.
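
    A toy version of that memoryless move (my construction): a rule that needs the last three observations becomes a pure function of the current state once exactly that much history is folded into the state.

    ```python
    # "Predict the average of the last 3 observations" sounds history-dependent,
    # but folding that window into the state makes it memoryless: the update
    # and the prediction read only (current state, newest observation).

    def step(state: tuple, obs: float) -> tuple:
        return (state + (obs,))[-3:]      # the state carries all needed history

    def predict(state: tuple) -> float:
        return sum(state) / len(state)    # reads only the current state

    state: tuple = ()
    for obs in [1.0, 2.0, 3.0, 4.0, 10.0]:
        state = step(state, obs)
    print(predict(state))                 # (3 + 4 + 10) / 3, about 5.67
    ```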

    I think software engineering is a great domain to study this. Basically every desirable property you could want of a system you're trying to science has a corresponding design principle for "making understandable code that we can reuse". You can find code bases that do a good job of enforcing locality, and others that fail miserably. You can see the practical side of all this: "Oh, this is how much locality is necessary for me to easily solve the problem."

    For any given system in any given domain, there is a factual question of "does locality apply?", "can we reason monotonically about it?", or "is this chaotic?". I hope not to have conversations where we accuse each other of being "too reductionist" or "too holist", and instead to use some of the language here to say "you're assuming locality, which doesn't hold cuz ABC" or "you're failing because you aren't taking advantage of modularity, so try XYZ".


    1. suspendedreason
        2021-03-14 23:50:39.504Z

        Do you think it's fair to call Internet stuff, or global variables in code, non-local influence/"spooky influence at a distance"? I guess to me it seems like these are mediums/technologies for bringing information that is distant in one sense (e.g. geographic) into the current "runtime context". I don't know too much about how compiling/stack/runtime stuff works, so I'm sorta speculating, but I say something because I've been thinking, as well, about Goffman's idea of an "ecological huddle," a kind of "joint territoriality of copresence" that defines the "natural situation." (See Knorr Cetina 2009: "The Synthetic Situation: Interactionism for a Global World") Telegraph, radio, telephones etc. undermine the normal idea of what it means to be co-present, or for one individual's actions to ripple directly and immediately into another's. Maybe I'm just trying to transcend the local/distant antagonism and figure out what it means in both cases to have influential proximity?

        I like "it's not over til the fat lady sings" as an archnemesis to monotonicity.

        Would be curious @snav's thoughts on this post, since he knows a lot more about the deep internals of software engineering

        1. hazard
            2021-03-15 00:09:50.372Z

            Calling something non-local defs always has to be referring back to a certain frame of what counts as local. I like using the software examples because I've got a gut sense for how easy things are to reason about at different levels of "localness".

            The thing with global variables is that, depending on the language, if any of your dependencies happen to declare a global variable with the same name, one will override the other. So if you declare and depend on a global variable in your code, you are opening yourself up to "non-local" effects, i.e. some codebase that isn't yours fucking with your code. Compared to a variable with function scope, where you only need the context of the function to understand how the variable changed, with a global variable you possibly need to look out for ALL of the code you depend on.

            (note this is kind of language-specific. JavaScript has this issue, but Clojure does some automatic namespacing stuff that makes this less of an issue; see the sketch below)
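
            A made-up sketch of the clobbering (Python modules give you namespacing for free, closer to the Clojure situation, so treat the shared dict as a stand-in for JavaScript's single global object):

            ```python
            # Two "libraries" that both write to one shared global namespace.
            # Whichever runs last wins, and neither can tell by local inspection.

            shared = {}                  # stand-in for the global object

            def lib_a_init():
                shared["timeout"] = 30   # library A's choice

            def lib_b_init():
                shared["timeout"] = 5    # silently overrides A, from far away

            lib_a_init()
            lib_b_init()
            print(shared["timeout"])     # 5 -- A's setting is gone, no error raised
            ```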

            1. In reply to suspendedreason:
              snav
                2021-03-15 01:10:13.619Z

                Hazard's post was good. Here's some random other thoughts:

                True global variables "belong" to the process. However, we think about code execution in terms of (execution) threads, and in particular we think about memory ownership in terms of a thread's function stack. This stack in particular preserves locality at the level of the function: each function "owns" its own variables (functional programming is definitionally the guarantee that the global context is equivalent to a particular function's context). Global variables break this locality because they must be thought of as belonging to the context of ALL threads in a process simultaneously. This is necessary for certain tasks, though, specifically for creating primitives to manage multithreaded code, such as mutexes and condition variables, since their state must be shared across all threads (see the sketch below).
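
                A small sketch of that last point (my example, in Python rather than anything lower-level): the counter belongs to every thread at once, and the mutex that tames it must itself be shared.

                ```python
                import threading

                counter = 0              # process-global: every thread sees the same one
                lock = threading.Lock()  # the mutex must also be shared across threads

                def worker(n: int) -> None:
                    global counter
                    for _ in range(n):
                        with lock:       # without this, read-modify-write can interleave
                            counter += 1

                threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()
                print(counter)           # 400000 with the lock; typically less without it
                ```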

                Harder problem: all functions are technically global symbols (variables referring to functions). So we introduce the idea of modules (classes, declarations) to ensure locality in function access (i.e. a thread can access its own stack variables and also any functions "adjacent" to it contextually). Similar situation.

                This is all pure abstraction. At the level of the machine (assembly), you have a finite set of global variables (registers), and the ability to declare arbitrary symbols to manage control flow (functions). The role of the compiler, then, is to "unwrap" the abstractions of locality, to ensure that the guarantees it promises to a program are kept when translated into machine code. Basically the compiler itself is what provides the space such that locality can emerge, at least without the programmer having to consciously adhere to strict stylistic practices when working. I recommend learning some assembly to really understand what I mean by this: you start coming up with "implicit" ways of bounding the usage of global state such that you can get things done without worrying about which register has what value.

              • In reply to hazard:
                crispy
                  2021-03-15 07:02:57.333Z

                  just gonna note I'm the only one who "liked" this post, y'all are heartless (literally)

                  1. snav
                      2021-03-15 15:02:45.857Z

                      There's a reason one of Scott's feature requests on substack was disabling the "like" feature LOL

                    • In reply to hazard:
                      crispy
                        2021-03-16 05:17:41.646Z

                        What about the communication bottleneck?

                        I like this poast and agree with it. I think it's time to think about where the drive for reduction comes from. Sure, it's easy to say "people want to simplify things" and leave it at that, but I think there's a way of decomposing this problem that explains most of the behavior we see in it: the communication bottleneck. We cannot specify things exactly, as @hazard himself says in a different venue:

                        I want to force being super explicit about what things you consider equal/the same
                        vanilla correspondence theory "my proposition is true if it corresponds to reality" cool. "corresponds" hides a lot.
                        Peak Materialist bitch would demand all propositions to take the form of complete specifications of the position and velocity of all particles in the universe
                        cuz "particles is what's most real"

                        For better or for worse, we cannot present particles as propositions, and doing so boggles the mind: what robustness would a given slice of the universe have as a statement? I suppose we could do existence proofs, but anything higher abstraction would be ruled out. In the words of a recent president: Sad!

                        In the above post @hazard poasts:

                        What does "sum" mean? Or better yet, what operation are you calling a "sum"? If the operation that you are calling a "sum" doesn't reproduce the whole, why did you call it "sum" in the first place?

                        This is the crux of the issue, and @hazard rightfully calls it a linguistic trick: the idea that the whole is more than the sum of its parts is a result of summing over an assumed default. Why are we making this assumption? Because the way we were talking about an object wasn't conducive to describing its subtle, non-linear composition.

                        If you ask me to describe a friend of mine, I believe I can go pretty deep into giving you a strong model for what to expect from any interaction you might have with them, and even give you a little bit of their soul, but there's lots of stuff I couldn't get across.

                        The biggest constraining factor is time. I generally assume that I can't talk for more than a minute before it's a lecture. This has secondary effects: I have designed my entire strategy of communication around the idea of a non-stationary conversation, in which I should give the most important bits I have to say about a topic and only expand upon what my conversation partner asks about. This has made me effective but limited in my scope, like most strategies in any game.

                        The second biggest constraining factor is the coordination problem. I don't actually know that much about how you, the reader or listener, conceive of reality. I have to fall back on well-established principles that I feel comfortable assuming. For instance, if I talk about what a "government" is, I can assume it has a bureaucracy, that it manages and likely taxes land, that it has access to guns. A government is a very vague thing, so just to be able to introduce some aspect of it like "state capacity" I would need to touch on a lot of things to make the grounding make sense. Instead, we tend to talk about the things we know we can talk about with relative ease; for the most part it's political scientists who talk about state capacity, because they don't need to force their friends to read a textbook to talk about it: their friends have already read the same textbooks.

                        What I see in @hazard's post is a list of conceptual technologies developed because they make it easier to point certain things out, either in formal systems or in natural language. Often both.

                        I promised myself I would talk in examples from now on, as @hazard has been pointing out how silly so many disagreements are without grounding. I think I've been doing an OK job so far in this comment, but let's really get into it. Let's make an example of why a given conceptual technology is useful for communication and where it breaks down. @hazard provides examples of where these occur, but we will attempt to show where the pressure for their existence and dissolution is.

                        Approximationism

                        If I say "There are 27 chickens, one of which is missing a leg, how many legs does the group of chickens have in total?" you will respond "Fuck you, Crispy, why are you giving me homework? And why is it so easy?" Therefore, I will bring a gun to your head and ask you to answer. As you stutter, never having realized until this very moment that the pursuit of answers meant so much to me, you say "53". "Wrong," I say, putting down the gun, "one of the chickens has three legs, it's 54." We must make assumptions about which things need specification and which things don't.

                        It is tempting to claim that, in reality, we would specify these things when they mattered. This assumes that we know, a priori, what matters and to what degree. The double pendulum visualization in OP shows this is not the case. Another case is to imagine what you should say to a friend whom you have set up with another friend on a blind date. What will matter? You may have a vague idea, but you just don't know. You could have 2 years and still miss something vital. We approximate, because there is no alternative.

                        Furthermore, other people get used to the ways in which reasonable assumptions give people wrong predictions. Every little kid who doesn't like ice cream gets used to the three minute dialogue when an adult finds this out. Every bilingual white American has a canned spiel about why they know the languages they do. Every member of a profession has their little prepped responses to correct a misunderstanding or stereotype that others would apply to them and that they wish to dissuade. Communication is an emergent property, so these corrections are necessary because the way people have decided on words and concepts makes them necessary.

                        Locality

                        When I describe something I remember my mother doing back to her, she often tells me "I wouldn't do something like that." This drives me insane, but it makes sense. She does not remember everything. No one does. We use a compressed representation to remember things, and that compression is lossy because we won't know what matters later. One way we do this is by reasoning via nearest neighbor. If I tell my mother something about a time close to another important memory she has, say when she switched jobs, then she not only has a clearer memory of it generally, she has a more specific impression of how she would have acted. Often this specific impression is wrong, for instance because I am referring to an event that happened before a specific change, e.g. when I stopped being a loser and had friends to hang out with on the weekends, something that totally changed our relationship.

                        Locality is useful, because we need compression, just like approximationism. In fact, in the lives of human beings, locality is just a kind of approximationism. It is also almost always right. Most people in San Francisco don't care about Wisconsin politics. Except suddenly it's an election year and a lot of people do. You also can't walk into a bar in San Francisco and be sure that no one there cares about Wisconsin politics. There are some Wisconsinites in San Francisco, but if we wanted to shit on them we wouldn't be half as careful as we would be walking down the streets of Madison, and I might say "No one here is from Wisconsin." That's the kind of shorthand thinking that makes locality selected-for.

                        Monotonicity

                        People generally talk as if more of a good thing will always be good, e.g. "I'll take as many free beers as you have." This is because they are unlikely to encounter someone giving away so many that it will actually become a logistical problem in its own right. So they can assume a monotone curve of utility and speak as such.

                        If I ask you how much you'll know in ten years about X subject you know a lot about now, you're likely to say you'll know more. This may not be true; you may have moved on in your interests. But people generally use locality to put things into two buckets: monotone and complex. If it's locally monotone, that's usually good enough. And if it's something that has to do with your identity/knowledge, then we usually consider our future selves supersets of our current selves. This is useful for planning, because generally we can operate to make any selected properties improved in the future, but not all of them, so acting as if all of them will be improved in the future allows us to coordinate with others about which axes they would want us to improve. For instance, my girlfriend wants me to work out more because I've gained fat in lockdown. When we talk about it, I talk about how much weight I'll have lost in 2025, because it's good to plan for monotonicity if that's what I want to happen. This plan is often confused with reality.

                        Modularity

                        People often assume that you'll do well in a community where there are other people who they think are like you. Unfortunately, a community often has more complex dependencies than these isolated characteristics, and so you are not a very modular unit. But a lot of things are "modular enough" and so we'll treat them as modular. You need fruits and veggies in your diet. My girlfriend is in charge of ordering things on Amazon Fresh, and I might ask her to order some vegetables. Any of them will do, but I hate broccolini, and when it arrives I refuse to cook it or eat it, so the interface I presented was not good enough. This is often the case with "I'm up for anything.", "Anytime works.", etc. The point is that there's pressure to simplify, but it's very easy to overdo. Modularity is actually the most common over-expressed reductionism in language in my experience, because we tend to talk about things in terms of categories, since language has plenty of those lying around. It is not clear if the proliferation of categories is a cause or an effect, so it's probably both.

                        Compositionality

                        Cooking is not compositional, but certain parts of it are. Real recipes leave some space for experimentation on the part of the cook, but it requires a good cook to see where these spaces are, as often they are not explicitly encoded in your grandma's back-of-the-envelope cookie recipe. Think of an "Everything Bagel". It does not contain everything. I am not in that bagel. It is assumed only certain things will compose well on a bagel. Everything bagels do not contain everything that even could go on a bagel: they are usually salty and do not contain possible sweetening ingredients, as salty/sweet are considered non-compositional. That is why "sweet & salty" is a thing, because it's special to make them work together. Yet, this is all understood and we have "everything bagels" and "kitchen sink cookies" and "anything goes orgies".


                        I definitely lost steam at the end of this, but as @beiser says:

                        [we] need [sic] messages that are trash to serve as bait for beautifully sculpted rebuttals

                        1. hazard
                            2021-03-16 15:40:27.503Z

                            I agree that communication bottlenecks rule lots of conversational dynamics.

                            They defs rule twitter. Character limit + uncertainty about who actually cares enough to talk about something in depth (subset of coordination I guess).

                            Unless we expand communication bottlenecks a lot more, I don't think they rule a lot of the stuff I'm thinking of. Peak Materialist Bitch (PMB) is not Dave the Plumber. PMB wields all of these "reductionist principles" in a very different way from Dave.

                            PMB "has time" to get in an 8h argument with someone online about this. That's not the sign of someone who is flying by the seat of their pants, a smooth operator deftly acting on the world, ruthlessly and pragmatically abstracting as necessary to make the bottom line.

                            I mean, there's a sense in which that is all of us, all the time. But I think it's all of us all the time for "the stuff that is most salient and real to our lives". I think most of stats isn't "real and salient" to many statisticians, and I would be surprised to meet a bunch that did live this way.

                            I can imagine a conversation between Pragmatic Simplifier (Prag Simp) and It's-Not-That-Easy Guy (INTE Guy). INTE Guy could be the literalist who gets an itch every time someone says something that they can interpret as not maximally (from their perspective) accurate, or the grumpy old timer who's seen it all, or the Debater who just found a hot new take they want to try out. Prag Simp could be the "I don't give a shit / ma'am, this is a Wendy's" type, or the grumpy field worker pissed at uppity theory kids, or the Ex-INTE Guy literalist.

                            None of those are quite the characters I had in mind who were arguing, for whom this post outlined aids. And none of those characters (except maybe the two grumpy ones) are Dave. Btw, Dave is who I have in mind when you mention examples like "Most people in SF don't care about Wisconsin politics."

                            I think there's another useful nugget to respond to here:

                            Locality is useful, because we need compression, just like approximationism. In fact, in the lives of human beings, locality is just a kind of approximationism. It is also almost always right.
                            I want to draw out the difference between "locality applies to a system in the way I expect, and that is useful" and "people think using locality". Another way: there are systems where locality simply doesn't hold, in the sense that if you think about them locally, you'll reliably be wrong and lose bets to someone who uses more horsepower and attacks the problem with non-locality in mind.

                            Ah, perhaps this is a useful point. Right now, I'm not thinking about "how does everyday cognition work?" or even "how would I like everyday cognition to work?". Introducing these "reductionist principles" and their counters feels like it's part of a frame like: I have found a particular X I'm trying to understand. I am either explicitly thinking about how to improve my understanding of it, or I previously have no understanding of it and I want some. There is time to ponder this. I'm pondering it right now. From this place, I then ask the question "how local/monotonic/modular is this system?" Sometimes when I do this, I hear people give different answers than I do, in ways that seem like they aren't Prag Simp, but might just be wrong (or I'm wrong).

                            Having said aaaaaaaaaall of that, let me ponder what you seem to see as interesting here. I read your comment as expressing interest in the cognition of Dave. How does Dave make sense of the world and act on it? Part of your comment on Dave seems to be that he's "not playing the same game" as INTE Guy. INTE Guy sees Dave as just wrong about everything he says, but it's probably more often correct to say that Dave and INTE Guy are rarely talking about the same things, and that Dave also doesn't have the time or desire to fully explicate himself always. I jive with that (if that is a thing you're putting out).

                            ^fullstop^

                            1. crispy
                                2021-03-17 03:08:00.722Z

                                what a cathedral of characters you have given us! I am quite fond of them.

                                this is a perfect example of (a) how I explain myself badly by going too far into one part of my point and (b) why I should just poast more and resolve it discursively.

                                First, I agree that I misunderstood something key about your original post—that you were trying to think about how to address a situation in which you have identified X and now want to cut away at it, reducing it to parts you can understand. Indeed, in these cases it is easy to feel somewhat paralyzed because any assumption will be wrong at some level. Pretty much nothing is modular, except by construction, and in the physical world construction isn't actually good enough to ensure full modularity.

                                But that's the thing, isn't it? We must accept error. You use a lot of examples from programming, for which the error is in some cases non-existent and in most cases would only exist under adversarial conditions, e.g. I instantiate an object to be modular but something else in the program alters it using sketchy metaprogramming hacks in a way I was trying to disallow. However, I would like to point out that when we observe most phenomena, all of these things are a kind of approximationism, because we have no way to be sure when they hold or whether they are persistent properties except upon observation, and even then there is the problem of induction.

                                I am not trying to say these are useless, far from it. Rather, I would like to say that the reason we choose the reductions we do is because of a kind of communication bottleneck we engage with ourselves and with whomever we are studying alongside. If I examine X and it has a reasonably low number of unique states, I could write them all down and try to study its dynamics. But coming up with a good model is actually performing the above reduction. In the best case, my reduction is completely clean, and X really does decompose into underlying variables that are simpler than a state table. But most of the time states aren't finite (or even discrete), data isn't completely trustworthy, and/or the situation is evolving, so I impose a reduction in order to keep a basic model in mind.

                                I fundamentally believe the general way humans analyze objects is in a coarse-to-fine pattern. The first things we try to explain are first and foremost those things we already have good conceptual handles for, then edging into things we don't, but which seem to explain most of the variance in a phenomenon. This process is essentially dialogic: we go back and forth with ourselves and with our friends, trying to scrape away the edges of uncertainty and error in description. But because it is dialogic, it is driven by salience. Theories about people's communication don't explain their farting behavior (which, frankly, is probably complex and worth studying) because you can't make people look at it.

                                So I guess what I'm saying is that I think it's difficult to get yourself to stop being Dave. We're all Dave, maybe having replaced "plumbing" with "X", but at the edges where X gets weird we start applying the "normal plumbing rules of thumb" to X and we get ourselves all confused. So we break down the system, but we're Prag Simp about it, and we do so with the tools at hand, trying to pin down the most variance we can with each step, greedily locking ourselves into a theory that doesn't make sense. And then we forget we were doing that, because we never really realized it in the first place, and wonder why our theory was wrong when we only took correct steps along the way. HOW?

                                Dave and INTE Guy are playing different games, but the truth is that INTE Guy is either losing the game he's playing or already has enough status that by making a fuss he only lowers everybody else's. The examples and counter-examples you give basically show how much error these approximations give for a specific set of phenomena, and that's obviously real and useful. But I think the question of why we reach for them is what I'm trying to explain. And I think we can't help but reach for them, because reduction is basically the only way we explain things at all. I suppose that's my main question: what kind of explanation doesn't yield a "more than the sum of its parts"-ness, other than stamp collecting?

                              • In reply to crispy:
                                hazard
                                  2021-03-17 01:11:03.957Z

                                  Also congrats on us being the same person

                                  Think of an "Everything Bagel". It does not contain everything. I am not in that bagel. It is assumed only certain things will compose well on a bagel. Everything bagels do not contain everything that even could go on a bagel: they are usually salty and do not contain possible sweetening ingredients, as salty/sweet are considered non-compositional. That is why "sweet & salty" is a thing, because it's special to make them work together.

                                  A joke from a standup set I wrote last year:

                                  You get a classic everything bagel, and what's on it? Sesame, salt, poppy, onion and garlic. That's five things!
                                  "You promised me __every__thing."
                                  "I don't see the small island nation of Madagascar on this bagel. I don't see the abstract concept of hope, on this bagel. I don't even see the common household condiment, mayonnaise, on this bagel."
                                  "This bagel lacks most things"

                                  1. crispy
                                      2021-03-17 03:22:18.128Z

                                      interesting... I've been developing my first stand-up set, actually. I have this belief that stand-up comedians are the profession most representative of my soul, even though I wouldn't want to be one. I want to be a comedian to a targeted audience, with targeted intention. I want to bring comedy to science. a poast for another time...

                                      "This bagel lacks most things." would be a good T-shirt.