Goffman 1969: Strategic Interaction
Gonna keep notes, thoughts, and raised questions here as I work through the book. Strategic Interaction is divided into two essays, "Expression Games" and "Strategic Interaction." The opening sections of "Expression Games" are incredible: succinct, cutting, insightful.
We'll start out with what this book's about: "strategic interaction"—the "calculative, gamelike aspects of mutual dealings."
In this paper, I want to explore one general human capacity [...] to acquire, reveal, and conceal information.
We're in the field of optics, opticratics. Negotiating the gap between being and appearing. Goffman looks to intelligence communities, espionage theory, interrogation tactics for his answers, though we can imagine similar studies drawing on poker, warfare, business, and bargaining (that's the side of the road Schelling walked in Strategy of Conflict; the approaches are closely linked).
Goffman starts with a couple premises/carvings, and builds on them:
- Individuals "exude expressions."
- These expressions contain information.
- This information is "discursive" in the sense that it "pertains to the general relationship of that individual to what is transpiring" around him, i.e., to use Garfinkel's term, it's indexical: meaning is "bound to context."
- All interactions with individuals carry some information, but "face-to-face interaction has a special place because whenever an individual can be observed directly, a multitude of good sources of expressed information become available."
Much like ethology distinguishes between signals and cues—one strategic & intentional, the other unintentional—Goffman distinguishes between communicated information and expressed information. If a signal is a songbird proclaiming his presence, a cue is the rustle of branches as he takes flight. To the observer, both events are sources of information, but to the emitter, only the former was desired to be communicated. (The emitter would rather hide his cues, and not rustle the branches, but the branch-rustling is a side-effect of movement. At best, emitted cues tend to be harmless; at worst, they're fatal. There's a conflation here of "benefit" with "intent," since the two naturally couple in systems of natural selection, but they aren't necessarily bound together in human domains—teasing that apart is a project for another post.) Anyway, so it is with Goffman. In expressed information, like a cue, "the generating of expression and hence making its information available is not an official end of the action but (at least ostensibly) only a side effect." Whereas communicated information involves "the intentional transmission of information."
Anxiety is expressed; concerns are communicated. Dandruff is expressed (cue), while a haircut is communicated (signal). Things get complicated because the same haircut might be a signal (e.g. intended symbol of belonging) to one person but a cue (e.g. unintended give-away of being foreign) to someone else. Goffman sorta skips around this issue with, "as a source of information, the individual exudes expressions and transmits communications," where I worry "exude" and "transmit" are doing a lot of unacknowledged work.
Somehow we've got to get a clear picture of information sending & receiving to understand this distinction... If we're wargaming, and I surreptitiously wave an orange flag to tell @Hazard to cut left, is it equally a cue to @Snav on the opposing team, or just an intercepted signal? (Yes, this is a words problem, but words have to be carved clearly enough to operationalize.) What if a medical inspector, tasked by my employer to decide whether my sick day is legitimate, is observing the game, and sees me vigorously waving the flag, taking it as a sign of good health? How do we effectively carve up these spaces of intentionality, especially given the hard conscious/subconscious problem ("what am I really signaling?")? One thing that jumps to mind from this silly example is that whether something is a cue vs signal depends on who it's intended for. It's not just a relationship between an agent and an expression, but between an agent, an expression, and that expression's receiver. In other words, there is not just an expression, but many expressions, one for each person who receives it. The "unit" under consideration is tripartite.
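To make the tripartite framing concrete, here is a minimal Python sketch (the names `Expression` and `classify`, and the flag scenario itself, are my own illustration, not Goffman's): the same expression classifies as a signal or a cue depending on which receiver you ask.

```python
from dataclasses import dataclass

# Hypothetical sketch: classifying one expression per receiver.
# Names and the flag scenario are illustrative, not from Goffman.

@dataclass(frozen=True)
class Expression:
    sender: str
    description: str
    intended_receivers: frozenset  # who the sender means to inform

def classify(expr: Expression, receiver: str) -> str:
    """An expression is a signal *to* a receiver the sender intends,
    and a cue *to* everyone else who happens to observe it."""
    return "signal" if receiver in expr.intended_receivers else "cue"

flag_wave = Expression(
    sender="me",
    description="orange flag: cut left",
    intended_receivers=frozenset({"Hazard"}),
)

print(classify(flag_wave, "Hazard"))     # the intended teammate
print(classify(flag_wave, "Snav"))       # the opposing observer
print(classify(flag_wave, "inspector"))  # the uninvolved evaluator
```

The point of the sketch is only that the classification function takes the receiver as an argument: there is no receiver-independent fact about whether the flag wave "is" a signal.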
Some of the texts I'm gonna peruse to try to answer this cues/signals question:
- Donath 2011: Signals Cues and Meaning
- Saleh et al 2007: Distinguishing signals and cues: bumblebees use general footprints...
- Garcia et al 2018: Signal or cue—the role of structural colors in flower pollination
- Lehman et al 2014: From Cues to Signals—Evolution of Interspecific Communication via Aposematism and Mimicry in a Predator-Prey System
- Jamie 2017: Signals, cues, and the nature of mimicry
If folks have suggestions, please add!
I'll give another example of the cue/signal confusion I'm trying to work through: we might, naively, begin with a definition of signals as intentional or voluntary communications to a receiver-in-mind (and a receiver-in-reality), and of cues as unintentional or involuntary communications to an observer (who may be known or unknown to the cueing organism). This works fine for humans, if you're willing to do some handwaving around conscious vs unconscious intent. But what about plants?
Angle dependent colors, such as iridescence, are produced by structures present on flower petals changing their visual appearance. These colors have been proposed to act as signals for plant–insect communication. However, there is a paucity of behavioral data to allow for interpretations of how to classify these colors either as a signal or a cue when considering the natural conditions under which pollination occurs.
Here, this definition just won't work. Instead, we have an evolutionary relationship of benefit and preservation. If the cue is advantageous to the flower, for pollinating purposes, it sticks around. That's all. There is no "intent" or "motivation." Here, we can only rely on consequence.
The use of the term signal when referring to angle dependent colors in plants implies that these colors allow for an effective visual communication between plant (sender) and insect (receiver). More precisely, these type of colors should comply with 3 conditions to be considered as a signal: (a) effectively transmit information from the signaler to the receiver, (b) have evolved for this particular purpose, and (c) both parties should benefit from producing and monitoring these colors (Smith and Harper 2003; Bradbury and Vehrencamp 2011). Visual traits producing stimuli that do not meet the fore mentioned 3 criteria may be defined as a cue... Unlike signals, cues have not specifically evolved for communication purposes and may be produced as a secondary effect or byproduct of inherent anatomical characteristics to the emitter
Aha, this may help a lot. Both those references are full-blown books, but I'll see what I can do.
OTOH, if they're defining a signal as necessarily an honest signal, this seems... parochial and not standard? Quick recap: honest signals are mutually beneficial for observer and signaler (if the bright red frog is poisonous, I don't wanna eat it). Dishonest signals are good for the signaler but bad for the observer (if the bright red frog is mimicking poison dart frogs, but is perfectly edible, I do wanna eat it, and observing its don't-eat-me signal will come at my own cost as a predator). In these authors' frame, a signal must be honest per criterion (c).
Donath 2011 gives us a more human-centric understanding of signals:
Many of the things we want to know about each other are not directly perceivable. These qualities include emotional states (are you happy?), innate abilities (are you smart?), and the likelihood of acting a particular way in the future (will you be a loyal friend?). Instead, we must rely upon signals, which are perceivable indicators of these not directly observable qualities.
And she makes almost no distinction between intentionally communicated information as signals, and unintentional expression (e.g. an accent as giving away where you're from—the lack of ability to selectively disclose this means it'll often be disadvantageous to the speaker, such as when visiting a foreign country and hoping to blend in or avoid scammers):
We rely on signals when direct evaluation of the quality is too difficult or dangerous. [...] Saying “yes, I would like another helping of your special Tuna-Delight” can signal either hunger or politeness while the accent with which it is said can signal country of origin and social class. Indeed, much of our communication, whether it is with words, gestures, or displays of possessions, consists of signaling information about who we are and what we are thinking.
Her piece does do a good job of summarizing the symmetrical asymmetry of relationships, however, where it's in an observer's interest to know the "truth" of their observed subject, and in the observed's interest to put on the most advantageous appearance. (By symmetrically asymmetrical, I mean, there is an observed-->observer relationship, with conflicting or competing goals, and that relationship is mirrored in almost all interaction, e.g. the job interviewer is also being observed by the candidate to decide whether he wants to take the job, just as the interviewer is evaluating to decide whether to offer it).
even within cooperative relationships there are elements of competition and conflicts of interest about plans and identity: I wish to present myself in the best possible light while you want to know what I am really thinking and what I really can and will do.
When it comes time to distinguish signals from cues, she avoids the complicated problem of intent vs consequence with an "either":
We will use the term “cue” to refer to all the things we perceive that indicate some other hidden state or intention and we will reserve the word “signal” for those cues that are meant to serve as communication, either because they have evolved for that purpose or because they are intentionally communicative.
To define cues, she cites Maynard Smith and Harper:
"any feature of the world, animate or inanimate, that can be used … as a guide to future action" [...] Everything that we use to infer a hidden quality is a cue. A cue is a signal only if it is intended to provide that information.
Cues are a kind of "evidence"; we do not choose to emit CO2 in order to guide a mosquito to us, but it guides the mosquito all the same. "The purpose of a signal is communication and its goal is to alter the receiver’s beliefs or behavior." I dig—this echoes the line I like that all communication is manipulation—but I can't help but think she is continuing to dodge or elide the key question here: what does it mean for a piece of information to have a purpose, or intent, or to be meant for some end? If we can answer this, we can also perhaps answer important questions about the nature of a game's "spirit," since many of the relevant games are subject to the same evolutionary issues as cues. How can we define a purpose, or an intent, in an evolutionary system?
Dennett here might say bah humbug—animals do have intents, there is a goal of their collective cells:
Agents, in this carefully limited perspective, need not be conscious, need not understand, need not have minds, but they do need to be structured to exploit physical regularities that enable them to use information (following the laws of computation) to perform tasks, beginning with the fundamental task of self-preservation, which involves not just providing themselves with the energy needed to wield their tools, but the ability to adjust to their local environments in ways that advance their prospects.
The other amazing thing that happens when cells connect their internal signalling networks is that the physiological setpoints that serve as primitive goals in cellular homeostatic loops, and the measurement processes that detect deviations from the correct range, are both scaled up. In large cell collectives, these are scaled massively in both space (to a tissue- or organ-scale) and time (larger memory and anticipation capabilities, because the combined network of many cells has hugely more computational capacity than the sum of individual cells’ abilities). This means that their goals – the physicochemical states that serve as attractors in their state space – are also scaled up from the tiny, physiological homeostatic goals of single cells to the much larger, anatomical homeostasis of regeneration and development.
I'm still left unsatisfied, though: this kind of intentionality doesn't "click" into place w/r/t problems like "is X mutation that helps an organism by attracting pollinators meant to attract pollinators," or, in the immortal lines of Van Morrison, "There's no why why why / It just is"—it just does happen to help the organism, and that's all there is to it. In human terms, we might want to know that X helps accomplish goals, and then we can choose purposefully to keep doing it, for the accomplishing of this goal. But in evolutionary terms, behavior that helps an organism is just statistically more likely to reproduce itself.
Maybe our best heuristic for this problem is, would the behavior still be performed, or the expression still exist, if there was no receiver?
When a signal loses its audience – perhaps because it is too unreliable – eventually it ceases to be produced, since with no audience, it has no benefit. Non-signaling cues – such as the CO2 one emits – operate outside this system. Even if there are no mosquitoes around, we still give off carbon dioxide.
My face’s mirroring of internal emotions like stress, anguish, or joy seems to occur regardless of whether others are present. But we can also imagine older signals persisting in the genome because there has not yet been adequate selection pressure to weed them out.
A single stimulus, obviously, has many meanings, a meaning relative to each receiver who observes and interprets it. Thus, a signal exists between a signaler and a projected receiver or theoretical audience. It is an effort that we may call more or less successful if the received meaning more or less matches the intended meaning. (See the rule of pragmatic reader-response.)
A feature may function simultaneously as a signal and as an unintentional cue. One might intentionally display a signal for one receiver only to have it be picked up as evidence by another. One may dress in furs as a signal of success and wealth – but a robber may interpret this same clothing as evidence that waylaying the fur-wearer will net a hefty haul of fine jewelry. Or, the intended receiver may interpret a signal in unintended ways. The fur-wearer may intend to signal wealth, taste and success to someone she hopes to impress, but that person may instead interpret the furs as evidence that she is cruel to animals.
A feature may be evidence in one context and used as a signal in another. Wrinkled hands are usually evidence of old age, their appearance results from the loss of collagen, elastin and subcutaneous fat, not from any communicative purpose. Yet in situations where being old is advantageous, ranging from ticket booths that give senior citizen discounts in our otherwise youth obsessed society to cultures where old age is revered and respected, one might choose to show off gnarled and wrinkled hands, amplifying their appearance to signal advanced years. All cues, both signals and evidence, provide a means to infer some quality. Signals are meant to communicate the quality; their purpose is to alter the receiver’s beliefs or behaviors in ways that benefit the signaler. Unintentional cues, or evidence, exist for other reasons and they may provide information detrimental to the one who reveals them.
Another strategy for differentiating signals and cues is the context-sensitivity of the behavior. Following Saleh et al 2007, if a bumblebee leaves a certain scent marker everywhere he goes, without context-based discretion, it is almost certainly a cue.
Alright, now that we have signals vs cues out of the way—maybe not compressed as nicely as I'd like, but at least the major nuances are laid out—I wanna talk about reliability and bottlenecking.
Classically, reliable signals are ones that can always be trusted (by a receiver) as honest. Unreliable signals may or may not be honest. Unreliable signals' efficacy is frequency dependent: the ratio of honest to dishonest systems in an ecosystem, modified by the relative costs of acting on vs. ignoring the signal, entails the optimal behavior of a receiver. If eating a poison dart frog is fatal, it may only take 5% of all bright red frogs being poisonous to protect the entire population from predators (that is, to protect the other 95% of free-riding, deceptive signalers). On the other hand, if eating a dart frog is merely unpleasant for the predator, it may take upwards of 95% honest signals in the ecosystem for that species to leave bright red frogs alone.
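The frequency-dependence above reduces to a simple expected-value threshold. A hedged sketch, with payoff numbers of my own choosing: a rational predator abstains once the expected penalty of eating outweighs the expected meal, which pins down the minimum honest fraction the ecosystem needs.

```python
# A toy expected-value model of the frequency-dependence described above.
# Payoff numbers are illustrative assumptions, not from Goffman or signaling theory texts.

def honest_fraction_needed(meal_value: float, penalty: float) -> float:
    """Minimum fraction p of honest (actually poisonous) red frogs at which
    a rational predator stops eating them: the predator abstains once
    p * penalty >= (1 - p) * meal_value, i.e. p >= meal / (meal + penalty)."""
    return meal_value / (meal_value + penalty)

# Fatal poison: enormous penalty, so a tiny honest fraction protects everyone.
print(honest_fraction_needed(meal_value=1.0, penalty=100.0))  # ~0.0099
# Merely unpleasant poison: the honest fraction must be far higher.
print(honest_fraction_needed(meal_value=1.0, penalty=0.05))   # ~0.95
```

The asymmetry in the post's 5%-vs-95% example falls straight out of the relative cost term: the deadlier the honest signal, the more free-riding mimics it can shelter.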
Transcending the reliable/unreliable binary, we can call signals with a high percentage of honesty, in the ecosystem, more reliable; signals that are only sometimes honest we'll call more unreliable.
What does this tell us? That, all else being equal—controlling for the relative cost of ignoring vs acting on a signal, as well as the relative cost of faking the signal—the most effective signals for a free-riding mimic to target are those which are already most reliable.
Goffman (ostensibly unknowingly, but he cites only rarely) recreates this insight from signaling theory in his own expression games. In expression games, many kinds of communication cannot be trusted by the observer. For instance, it is in individuals' self-interest to present themselves as flatteringly as possible (opticratics), rather than as honestly as possible; moreover, many individuals will fully fabricate biographical details for the sake of competitive advantage. In such a landscape, observers (i.e. evaluators) are forced to rely on certain "very special signs" on which they put great weight, and which they believe they can trust.
The very tendency of the observer to suspect the subject and try to seek out means of piercing the veil means that the observer will shift his reliance to the very special signs on which he puts great weight; and if these signs can be discovered and faked by the subject, the latter will find himself dealing, in effect, with an ingenuous opponent.
I'll call these "evaluative bottlenecks." In an old book club call with @thechickenman and @crispy, discussing McLuhan, I shared an anecdote about drug trafficking. When the means by which a subject will be observed or evaluated are limited ("bottlenecked"), and known in advance, they are at their most vulnerable. If a trafficker does not know how he will be evaluated—if the inspection agents may possibly have x-ray, metal detector, drug sniffing dogs, full body patting, etc—then it is difficult for him to optimize his deceitful self-expression. There are many different potential tests he will be subject to, and optimizing performance across all those tests gets increasingly difficult, as the space of possible solutions to all of them shrinks. However, if there is only a single type of test, and it is known in advance, we reach a bottleneck scenario in which the trafficker need only optimize his hiding place for a single "uncovering move" (Goffman's term for any move that attempts to unveil the truth below surface impressions).
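The trafficking anecdote can be turned into a toy probability model (the single-test-defeating budget and the numbers are my assumptions, not Goffman's): a smuggler with the resources to fully defeat exactly one test evades a known single test with certainty, but fares far worse when the test is randomized or when several are applied at once.

```python
# Toy model of the "evaluative bottleneck": the smuggler can fully defeat
# exactly one of the available tests. Assumptions are illustrative, not Goffman's.

def evasion_probability(n_tests: int, test_known: bool, simultaneous: bool) -> float:
    """Probability the smuggler evades detection, given he optimizes his
    hiding place against exactly one test."""
    if test_known and not simultaneous:
        return 1.0  # bottleneck: optimize fully for the one known test
    if simultaneous:
        # He must beat every test at once, but can only beat one.
        return 1.0 if n_tests == 1 else 0.0
    # One test drawn at random from n possibilities: he wins only if
    # the drawn test is the one he prepared for.
    return 1.0 / n_tests

print(evasion_probability(4, test_known=True,  simultaneous=False))  # 1.0
print(evasion_probability(4, test_known=False, simultaneous=False))  # 0.25
print(evasion_probability(4, test_known=False, simultaneous=True))   # 0.0
```

The model is crude, but it captures why both randomization and simultaneity help the evaluator: each breaks the smuggler's ability to concentrate his optimization on a single known point of failure.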
These kinds of highly valued tests, or uncovering moves, or measurements, or sought-after signs are weighted in proportion to their purported reliability. If we see a DNA test as authoritative, and our courts will throw out even witness testimonies and criminal confessions in favor of a DNA test, perceiving it to be always reliable, then there is now only one, single aspect a defense attorney must fabricate or tamper with in order to get his client off. (E.g. by paying off a DNA lab technician to certify the wrong result.)
Often, gameable bottlenecks arise because a single metric or sign has been made authoritative on the basis of its previous reliability; but the act of making it authoritative changes how reliable it is. For instance, in a college admissions regime where the SAT score was only a very small part of the admissions decision, it might be one of the strongest predictors of applicants' academic success. That is because very little evaluative pressure was being put on it, and "gaming" by applicants was relatively limited and infrequent. In other words, the low weighting of the SAT score in college admission decisions helped make it a reliable signal.
If colleges began putting enormous weight on that same SAT score, all of a sudden applicants would be highly incentivized to game their scores, be it through tutoring, paying test-takers, or other forms of cheating. Suddenly the measure would be less reliable. This is known as Campbell's Law, and is a kind of surrogation effect. Campbell 1976:
Achievement tests may well be valuable indicators of general school achievement under conditions of normal teaching aimed at general competence. But when test scores become the goal of the teaching process, they both lose their value as indicators of educational status and distort the educational process in undesirable ways. (Similar biases of course surround the use of objective tests in courses or as entrance examinations.)
A potential dishonest signaler, looking for his most effective deception strategy, ought to find signals that are perceived as uniquely reliable, or which hold uniquely authoritative weight, and bottleneck his own efforts into faking those signals. Long-term, given a steady environment, the game will work itself out—the most reliable signals will be those which are hardest to fake, and the least reliable will be the easiest. There will be a perfect correlation or equilibrium at hand. Of course, in the real world, and especially in human domains where technological and sociocultural drift means the game environment is constantly changing, this will not be the case: certain once-reliable signals will be vestigially relied upon as authoritative, and these will be the optimal exploits for those who wish to deceive their evaluators. If evaluators wish to counter this bottleneck exploit, they should (1) employ a multitude of evaluative techniques, deployed either randomly or simultaneously (complementarily), and (2) minimize transparency, so that the evaluated subject cannot focus his deception efforts on a single point of failure and instead must spread his resources across many possible tests.
TL;DR, highly reliable signals can become singularly authoritative in an evaluative process, creating a "bottleneck" or single point of failure in adversarial expression games.
Back to general discussion/notes/synopsis. Goffman sees personal style as a major source, for observers, of both intentional signals and unwitting cues:
Discursive statements seem inevitably to manifest a style of some kind, and can never apparently be entirely free of "egocentric particulars" and other context-tied meanings. Even a written text examined in terms of the semantic meaning of the sentences can be examined for expression that derives from the way a given meaning is styled and patterned, as when Izvestia and Pravda are read by our intelligence people "symptomatically," for what the Russians do not know they are exuding through the print.
While all expression contains some trace of style, it is face-to-face expression in particular that is most rich in information.
Goffman has a proto-predictive processing view of behavior, intelligence, and optimization:
All organisms after their fashion make use of information collected from their immediate environment so as to respond effectively to what is going on around them and to what is likely to occur.
And as it is in observers' interests to seek out information and act on its basis, it is in observed agents' interest to manipulate the information observers pick up, so as to bring about their own best interests:
Just as it can be assumed that it is in the interests of the observer to acquire information from a subject, so it is in the interests of the subject to appreciate that this is occurring and to control and manage the information the observer obtains; for in this way the subject can influence in his own favor responses to a situation which includes himself."
(In a phrase, "All communication is manipulation.")
This capacity is the capacity to "inhibit and fabricate expression"; it is exacerbated in "situations where an observer is dependent on what he can learn from a subject, there being no sufficient alternate sources of info." When evaluation bottlenecks on self-report, the observer is in trouble. "A contest over assessment occurs"; the interrogator/observer wishes to get the "truth," at least relevant to his ends, while the interrogated subject wishes for his own optimal outcome. (Which IMO can be meaningfully conceptualized as selection or non-selection.) Goffman, following ethnomethodology, calls subjects' self-representing communications "accounts" or "explanations"—tellings meant to alter the assessment an observer would otherwise make. Accounts can be produced spontaneously or in response to questioning, and either voluntarily or under pressure.
In these expression games, where an observer attempts to extract reliable information from a subject, and the subject attempts to manipulate or mask that information, there are several types of moves.
First, an unwitting move: "the subject's observable behavior is unoriented to the assessment an observer might be making." Second, a control move: "the intentional effort of an informant to produce expressions that he thinks will improve his situation," of which one kind is a "covering move."
On the observer's end, he may or may not perceive the subject's move as "naive," that is, as unwitting. If he perceives it as not being naive, that is, as being a control move, he may make an uncovering move.
Control moves are made through theory of mind calculations: "the subject turns on himself and from the point of view of the observer perceives his own activity in order to exert control over it" (cf Rochat's Others in Mind). This Goffman calls impression management.
Goffman goes on to differentiate between other deceptive moves and strategies, drawing on espionage theory. Overt secret ops admit their general agenda, but conceal the details. Covert ops carry on under a false cover agenda. Clandestine ops are totally hidden from view; their mere fact is concealed. Furthermore, there is a distinction between a feint, when one fakes a course of action, and a feign, when one fakes a belief, attitude, or preference.
Of the 50 American buyers at the Pitti Palace show of spring fashions, several admitted off the record that they never clap for the haute couture creations they like best. They don't want their competitors to know what they will order. At the same time, these buyers confessed, they applauded enthusiastically for sportswear, boutique items, and gowns that they wouldn't consider featuring in their own stores.
One system we have for preventing false accounts by malicious actors Goffman calls "identity tags." I've talked before about names, reputational ledgers, social security numbers, and credit scores as systems of tracking historical defection/cooperation behavior. Similarly, Goffman writes, these tags are "officially recognized seals which bond an individual to his biography"; they are "an admission that an expression game is being played, and that through identification devices the person who would misrepresent himself will be defeated."
In espionage, "pocket litter"—objects like pens, notes, billfolds, etc—are one means of verifying identities of foreign spies. But more subtle tests like questioning about biographical info and local lore, a subject's sense of fashion and orientation to local context, etc can constitute identity tags/tests.
Natives never appreciate how well-trained they are in the arts of detection until they find an alien among themselves who is trying to pass. Then, ways of doing things that had always been taken for granted stand out by virtue of the presence of someone who is inadvertently doing the same things differently, as when milk is put in a cup before the tea, or the numeral four is written with a crossbar, or pie eaten from the apex, not the side.
Tarantino fans, of course, will recognize an Inglourious Basterds scene in this description:
What are natives doing, in such a situation? They are separating the real from the fake, the realities from the appearances, the actuals from the imposters. (They will inevitably lean on surrogates.)
Alright, I might be willing to cede all credit for opticratics frames to Goffman, the guy clearly lit up this part of the discourse half-a-century ago. He even anticipates Trivers' self-deception arguments. Here's the film critic Henry Taylor on David Mamet's House of Games:
Akin to Erving Goffman's "presentation of self in everyday life" and the playing of roles in social behaviour, of theatrical conceptions of symbolic interaction in reality, Mamet's cinema draws attention to dramaturgical concepts such as proposed by Goffman in his distinction between storefront and backstage behaviour; on the backstage, however, we do not encounter undisguised, "true reality," but only further game-playing. In the context of Goffman's theorization of "strategic interaction," focusing on espionage and secret agents, and of the frame to analyse different spheres and boundaries of game-like human interaction, it would seem that Mamet's films not only lend multiple, fertile meanings to these concepts, but also to illustrate a creative and productive symbiosis of theatre and cinema. His theatrical cinema, non-realist both in the conception of acting as reflexive performance and of narrative filmmaking based on a formalist theory of montage—the "juxtaposition of uninflected images"—not only self-consciously highlights social role-playing, but envisions human interaction as strategic interaction in a world of simulation and dissimulation, reality as consisting of game-play on various orders. In this, his approach aligns with postmodernism's view of reality as a series of games being played, there being ultimately no accessible reality outside of these games: even death as a liminal event, as House of Games suggests, is not really available to us outside of the symbolic order. The theatricality of performance in Mamet is thus diegeticised into a filmic reality in which appearances are notoriously deceptive, and frames, cons, and double-crosses inform both the world his characters inhabit and, last but not least, the relationship of his films to the cinema audience. In this Chinese-box world of illusions, we seem to be perpetually stuck in Plato's cave.
Alright, moving on from the "Expression Games" essay now and into "Strategic Interaction," the titular and second of two essays in the collection.
Wherever students of the human scene have considered the dealings individuals have with one another, the issue of calculation has arisen: When a respectable motive is given for action, are we to suspect an ulterior one? When an individual supports a promise or threat with a convincing display of emotional expression, are we to believe him? When an individual seems carried away by feeling, is he intentionally acting this way in order to create an effect?
This strategic self-interest, Goffman writes, is "one sense in which an actor is said to be rational." The strategic situation does not take many assumptions to emerge—"Once nature, self-interest, and an intelligent opponent are assumed, nothing else need be; strategic interaction follows."
Goffman carves up four entities in strategic interaction:
- PARTY: "something with a unitary interest to promote"
- COALITION: 2+ parties united in provisional common interest
- PLAYER: works on behalf of a party or coalition, which is often himself
- PAWN: a player whose own welfare is placed in jeopardy by the interests of playing parties
Players, in turn, are characterized by their:
- Operational codes (their orientation to gaming)
- Normative constraints (self-imposed moral conditions for achieving goals)
- Styles of play
- Informational states (what they know relevant to the situation at hand)
- Resources and capacities (what they can do relevant to the situation at hand)
...and some of the skills that make "quality gamesmen" include the ability to:
- Pass up short-term incentives for long-term payoff
- Parse information effectively
- Simulate moves (predictively)
- Act under pressure and set aside personal feelings
Finally, this player and his interaction have a position: the intersection of the moves already made and the possible future moves they enable.
On to some of the moves and strategies and constraints these players face.
Players can make conditional and unconditional avowals (e.g. "I will X if you Y," or just "I will X irrespective of your move"). These avowals are assessed based on credibility, a composite evaluation of the avower's sincerity, resolve, and capacity to follow through. This is different, Goffman stresses, from "trust," which he sees as a moral concept covering sincerity and resolve alone. Just as crucial in a strategic assessment, separate from whether a party sincerely intends to follow through on a promise, is whether they successfully can.
"It is in the nature of words," Goffman writes, "that it will always be physically possible to employ them unbelievingly." Schelling, in Strategy of Conflict, concurs: players negotiating amidst play "have no way to persuade each other that they mean what they say, except by showing it in the way that they play." (And even here, there is the n-1 problem in game theory, where in iterated games cooperating players are incentivized to begin defecting near the final turns.)
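The n-1 unraveling can be made concrete with a back-of-envelope sketch in a finitely repeated Prisoner's Dilemma (payoff numbers here are the standard illustrative ordering, not anything from Goffman or Schelling):

```python
# Sketch of the end-game "n-1 problem": in a finitely repeated
# Prisoner's Dilemma with a known last round, backward induction
# makes defection optimal in every round.
# Assumed payoff ordering: T > R > P > S (temptation, reward,
# punishment, sucker) -- illustrative numbers only.

T, R, P, S = 5, 3, 1, 0

def best_last_round_move(opponent_move):
    # In the final round there is no future to protect, so defection
    # strictly dominates: T > R (vs. cooperate) and P > S (vs. defect).
    payoff = {"C": {"C": R, "D": S}, "D": {"C": T, "D": P}}
    return max("CD", key=lambda m: payoff[m][opponent_move])

# Whatever the opponent does, defect wins the last round...
assert best_last_round_move("C") == "D"
assert best_last_round_move("D") == "D"
# ...and since round n is therefore fixed at mutual defection,
# round n-1 becomes "the last round that matters," and the same
# argument repeats all the way back to round 1.
```

This is why showing "it in the way that they play" only persuades so long as no final turn is in view.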
How then do we convince others to believe us? It appears that we constantly make decisions based on others' words, and take others' words at least provisionally at face value. How does this work? In Goffman's phrasing, "since words can be faked, what grounds can self-respecting players have for putting faith in any of them?"
Goffman proposes several answers. First, in many of the situations where we appear to make high-cost decisions on words alone, we in fact do nothing of the sort. For instance, in auctions, we appear to take individual bids at face value, and to make decisions based on them. But, on closer inspection, we realize that nothing material is bought or sold in this process by words alone: the top bidder has merely won the right to purchase the item at the bid price. Similarly, in brokerage, if a client puts in an order for a stock over the telephone, a broker can "trustingly" put in the order for the stock, using the brokerage firm's funds, because the greatest risk is that the stock is re-sold a few days later, after non-payment, and the aggregate financial risk of such transactions is negligible. The client has not bought the stock verbally so much as they've verbally ordered the option to buy the stock from the brokerage firm. Goffman:
Verbal communication in the absence of a normative basis for trust ought to be possible whenever the speaker can show the listener that there is relatively little to lose in crediting his words—a reduction of the need for trust by a reduction of what is trusted.
(Sidenote: it's very weird he uses "trust" here instead of "credibility"; it seems to contradict his previous distinction.)
A second, somewhat less obvious case in which individuals can take communications at face value is any situation in which both parties are playing a purely coordinative (to Schelling, fixed-ratio, variable-sum) game. All harms and benefits are shared equally among participants. Importantly, the activity must not merely be purely coordinative; there must be common knowledge that it is purely coordinative, so that each party is not suspicious of the other's intentions or understanding—so that each knows that the other knows that each knows cooperation is in their rational self-interest.
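A minimal sketch of what "purely coordinative" means formally, with illustrative payoff numbers of my own (not Schelling's): both players receive identical payoffs in every outcome, so a stated intention can safely be taken at face value.

```python
# Pure coordination game: interests never diverge, so announcements
# are self-enforcing. Payoff numbers are illustrative assumptions.

# (row_payoff, col_payoff) for each pair of moves
game = {
    ("left", "left"):   (2, 2),
    ("left", "right"):  (0, 0),
    ("right", "left"):  (0, 0),
    ("right", "right"): (1, 1),
}

def purely_coordinative(g):
    # Purely coordinative iff both players' payoffs are equal
    # in every cell of the matrix.
    return all(a == b for a, b in g.values())

def best_reply(g, row_move):
    # If the row player credibly announces a move, the column player's
    # best reply is simply to match it: matching maximizes the shared payoff.
    return max(("left", "right"), key=lambda c: g[(row_move, c)][1])

assert purely_coordinative(game)
assert best_reply(game, "left") == "left"   # announcing "left" is self-enforcing
assert best_reply(game, "right") == "right"
```

The common-knowledge condition is what the code can't show: each party must also know the other sees this same matrix.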
Finally, there are iterated games. You build reputational credit in order to keep playing; deception is against your own self-interest because, if uncovered, you will be excluded from the game. Intriguingly, Goffman notes that, in the standard game-theoretic picture, rationally self-interested actors would "build up these trust credits until a time is found when the stakes are such as to make it worthwhile to expend all one's credits in a very profitable betrayal of one's world." And yet few, in reality, ever cash in this way. Why? One answer I've come up with is the mythological concept of legacy after death.
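Goffman's "profitable betrayal" has a standard textbook arithmetic behind it, which can be sketched roughly (the discounting framing and all numbers are my own illustrative assumptions, not Goffman's): betrayal pays only when the one-off stake exceeds the discounted value of all the cooperation it forfeits.

```python
# When does cashing in trust credits pay? Cooperating earns r per
# round; betraying earns a one-off stake s but ends the game
# (exclusion). Betrayal pays iff s exceeds the present value of all
# future cooperation. Illustrative numbers only.

def future_cooperation_value(r, delta):
    # Present value of cooperating forever, at discount factor delta:
    # r*delta + r*delta^2 + ... = r * delta / (1 - delta)
    return r * delta / (1 - delta)

def betray_now(stake, r, delta):
    return stake > future_cooperation_value(r, delta)

# With r = 3 and delta = 0.95, the future stream is worth 57:
assert not betray_now(stake=20, r=3, delta=0.95)   # keep cooperating
assert betray_now(stake=100, r=3, delta=0.95)      # cash in
```

On this picture, patient players (delta near 1) almost never betray, which is one un-mythological reason the cash-in is rarer than the standard story predicts; legacy after death is, in effect, a delta that survives the player.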
Alright, on to what I think is some of the most interesting stuff here, because it directly corresponds to surrogation stuff. The big distinction Goffman brings up is games with "intrinsic" payoffs and games with "extrinsic" payoffs.
Intrinsic payoffs are when
the course of action taken and the administration of losses and gains in consequence of play are part of the same seamless situation, much as in duels of honor, where the success of the swordsman's lunge and the administration of an injury are part of a single whole.
This is in contrast to socially mediated systems of extrinsic payoff.
A clear hit in mortal swordplay can perfectly well occur in a foggy night, the clarity of the hit having to do with its psychological consequence for the hit organism. But in games where hits are merely points, a move must often be terminated with an act of perceptual clarity, lest there be a dispute as to what, actually, happened.
Wherever "enforcement is part of implacable nature, cheating is not possible—it is not even thinkable. But where judges have to attend to points, trickery of various kinds will always be possible."
Thus, we might speculate, as practices and games become increasingly social, and less natural, they become more opticratic. Here's Lee Jussim, in an interview with Yoel Inbar & Michael Inzlicht (2021):
Success in our field... Academia and social science is fundamentally not a truth-seeking enterprise. It is a social evaluation seeking enterprise. You want approval from your colleagues. You need letters of recommendation to get into grad school, you need outside letters to get tenure and promotions, grant panels are evaluations, peer review are evaluations. None of this is objective reality, it's just everybody's opinions. Now, it's opinions that might be hinged to reality to some degree, but the prime mover is opinions.
In the reading docket:
Camerer 2015: A psychological approach to strategic thinking in games
Psychologists have avoided using game theory because of its unrealistic assumptions on human cognitive ability, such as perfectly accurate forecasting, and its large reliance on equilibrium analysis to predict behavior in social interactions. Recent developments in behavioral game theory address these limitations by allowing for bounded and heterogeneous thinking, recognizing limitations on people's forecasting abilities, while keeping models as generally applicable as those using equilibrium analysis. One such psychological approach is cognitive hierarchy (CH) modeling, in which players reason accurately only about those who think less. CH predicts non-equilibrium behaviors that have been observed in more than 100 laboratory experiments and several field settings, and has process implications that have been tested with eye-tracking and data from brain imaging.
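The abstract's core idea ("players reason accurately only about those who think less") is easy to see in the classic p-beauty contest. A toy level-k sketch, which the Poisson-weighted CH model refines (parameters here are the textbook 2/3 setup, chosen by me for illustration):

```python
# Level-k reasoning in the p-beauty contest: guess p times the
# average guess. Level-0 players guess at random (mean 50 on
# [0, 100]); level-k best-responds to level-(k-1). Illustrative
# parameters; the full CH model mixes levels with Poisson weights.

P = 2 / 3  # target fraction of the average

def level_guess(k, level0_mean=50.0):
    guess = level0_mean
    for _ in range(k):
        guess *= P  # best response to a population one level below
    return guess

for k in range(5):
    print(k, round(level_guess(k), 2))
# Guesses shrink toward the equilibrium of 0 as reasoning deepens,
# but real subjects mostly stop at one or two steps -- which is why
# observed play sits well above equilibrium, as the abstract notes.
```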
Griessinger 2015: The neuroeconomics of strategic interaction
We describe here the theoretical, behavioral and neural bases of strategic interaction — multiagent situations where the outcome of one’s choice depends on the actions of others. Predicting others’ actions requires strategic thinking, thus thinking about what the others might think and believe. Game theory provides a canonical model of strategic thinking implicit in the notion of equilibrium and common knowledge of rationality. Behavioral evidence shows departures from equilibrium play and suggests different models of strategic thinking based on bounded rationality. We report neural evidence in support of non-equilibrium models of strategic thinking. These models suggest a cognitive-hierarchy theory of brain and behavior, according to which people use different levels of strategic thinking that are associated with specific neural computations.