Consider a pile of sand. Trickle more sand onto it from above, and eventually it will undergo a phase transition: an avalanche will cascade down the pile.
As the sand piles up, the slope at different points on the surface of the pile grows steeper, until it passes the critical point at which the phase transition takes place. The trickle of sand, whatever its source, is what causes the dynamical system to evolve, driving the slope ever back up toward the critical point. Thanks to that property, the critical point is also an attractor. However, crucially, the overall order evident in the pile arises entirely from local interactions among grains of sand. Criticality events are thus self-organized.
Wars are self-organized criticality events. So are bank runs, epidemics, lynchings, black markets, riots, flash mobs, neuronal avalanches in your own brain’s neocortex, and evolution, as long as the metaphorical sand keeps pouring. Sure, some of these phenomena are beneficial — evolution definitely has a lot going for it — but they’re all unpredictable. Since humans are arguably eusocial, it stands to reason that frequent unpredictability in the social graphs we rely on to be human is profoundly disturbing. We don’t have a deterministic way to model this unpredictability, but wrapping your head around how it happens does make it a little less unsettling, and can point to ways to route around it.
A cellular automaton model, due to Bak, Tang, and Wiesenfeld, is the classic example of self-organized criticality. The grid of a cellular automaton is (usually) a directed graph where every vertex has out-degree 4 — each cell has four neighbors — but the model generalizes just fine to arbitrary directed graphs. You know, like social graphs.
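To make the dynamics concrete, here's a minimal sketch of the Bak–Tang–Wiesenfeld sandpile on a square grid. The threshold of 4, the grid size, and the helper name `drop_and_relax` are my own choices for illustration; the essential behavior is that one dropped grain can trigger anything from zero topplings to a cascade spanning the whole grid.

```python
import random

def drop_and_relax(heights, n, threshold=4):
    """Drop one grain on a random cell of an n x n Bak-Tang-Wiesenfeld
    sandpile, then topple until every cell is stable again.
    Returns the avalanche size: the number of topplings triggered."""
    r, c = random.randrange(n), random.randrange(n)
    heights[r][c] += 1
    avalanche = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        while heights[i][j] >= threshold:
            # Topple: the cell sheds one grain along each of its four
            # out-edges. Grains pushed past the edge of the grid leave
            # the system -- the open boundary that keeps the pile from
            # growing without bound.
            heights[i][j] -= threshold
            avalanche += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    heights[ni][nj] += 1
                    if heights[ni][nj] >= threshold:
                        unstable.append((ni, nj))
    return avalanche

# Trickle sand onto an empty 20 x 20 grid. Small avalanches dominate,
# but arbitrarily large ones keep showing up: no external tuning drives
# the system to its critical point -- the trickle itself does.
random.seed(0)
n = 20
heights = [[0] * n for _ in range(n)]
sizes = [drop_and_relax(heights, n) for _ in range(50_000)]
```

Swapping the grid for an arbitrary directed graph only changes the neighbor lookup; the topple-until-stable loop stays the same, which is exactly why the model transfers so readily to social graphs.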
Online social ties are weaker than meatspace ones, but this has the interesting side effect of making the online world “smaller”: on average, fewer degrees separate two arbitrary people on Facebook or Twitter than two arbitrary people offline. On social media, users choose whether to share messages from one to another, so any larger patterns in message-passing activity are self-organized. One such pattern, notable enough to have its own name, is the internet mob. The social graph self-reorganizes in the wake of an internet mob. That reorganization is a phase transition, as the low become high and the high become low. But the mob’s target’s social status and ties are not the only things that change. Ties also form and break between users participating in, defending against, or even just observing a mob as people follow and unfollow one another.
Some mobs form around an explicit demand, realistic or not — the Colbert Report was never in any serious danger of being cancelled — while others identify no extrinsic goals, only effects on the social graph itself. Crucially, however, both forms restructure the graph in some way.
This structural shift always comes with attrition costs. Some information flows break and may never reform. The side effects of these local interactions are personal, and their costs arise from the idiosyncratic utility functions of the individuals involved. Often this means that the costs are incomparable. Social media also brings the cost of engagement way down; as Justine Sacco discovered, these days it’s trivial to accuse someone from halfway around the planet. But it’s worse than that; even after a mob has become self-sustaining, more people continue to pile on, especially when messages traverse weak ties between distant groups and kick off all-new avalanches in new regions of the graph.
Remember Conway’s law? All systems copy the communication structures that brought them into being. When those systems are made of humans, that communication structure is the social graph. This is where that low average degree of separation turns out to be a problem. By traversing weak ties, messages rapidly escape a user’s personal social sphere and propagate to ones that user will never intersect. Our intuitions prepare us for a social sphere of about a hundred and fifty people. Even if we’re intellectually aware that our actions online are potentially visible to millions of people, our reflex is still to act as if our messages only travel as far and wide as in the pre-social-media days.
This is a cognitive bias, and there’s a name for it: scope insensitivity. Like the rabbits in Watership Down, able to count “one, two, three, four, lots,” beyond a certain point we’re unable to appreciate orders of magnitude. Furthermore, weak long-distance ties don’t give us much visibility into the size of the strongly-tied subgraphs we’re tapping into. Tens of thousands of individual decisions to shame Justine Sacco ended in her being the #1 trending topic on Twitter — and what do you suppose her mentions looked like? Self-organized criticality, with Sacco at ground zero. Sure, #NotAllRageMobs reach the top of the trending list, but they don’t have to go that far to have a significant psychological effect on their targets. (Sociologist Kenneth Westhues, who studies workplace mobbing, argues that “many insights from [the workplace mobbing] literature can be adapted mutatis mutandis to public mobbing in cyberspace,” and I agree.)

In the end, maybe the best we can hope for is user interfaces that encourage us to sensitize ourselves to the scope of our actions — that is to say, to understand just how large of a conversation we’re throwing our two cents into. Would people refrain from piling on to someone already being piled on if they knew just how big the pile already was? Well, maybe some would. Some might do it anyway, out of malice or out of virtue-signaling. As Robert Kegan and Lisa Laskow Lahey point out in Immunity to Change, for many people, their sense of self “coheres by its alignment with, and loyalty to, that with which it identifies.” Virtue signaling is one way people express that alignment and loyalty to groups they affiliate with, and these days it’s cheap to do that on social media. Put another way, the mobbings will continue until the perverse incentives improve. There’s not much any of us can individually do about that, apart from refraining from joining in on what appears to be a mob.
That’s a decision characteristic of what Kegan and Lahey call the “self-authoring mind,” contrasted with the above-mentioned “socialized mind,” shaped primarily “by the definitions and expectations of our personal environment.” Not to put too fine a point on it, over the last few years, my social media filter bubble has shifted considerably toward the space of people who independently came to a principled stance against participation in mobs. However, given that the functional programming community, normally a bastion of cool reason and good cheer, tore itself apart over a moral panic just a few months ago, it’s clear that no community is immune to flaming controversy. Self-organized criticality means that the call really is coming from inside the house.
Here’s the moral question that not everyone answers the same way I do, which has led to some restructuring in my region of the graph, a local phase transition: when is it right to throw a handful of sand on the pile?
Some people draw a bright line and say “never.” I respect that. It is a consistent system. It was, in fact, my position for quite some time, and I can easily see how that comes across as throwing down for Team Not Mobbing. But one of the implications of being a self-authoring system is that it’s possible to revisit positions at which one has previously arrived, and, if necessary, rewrite them.
So here’s the core of the conundrum. Suppose you know of some information that’s about to go public. Suppose you also expect, let’s say to 95% confidence, that this event will kick off a mob in your immediate social sphere. An avalanche is coming. Compared to it, you are a pebble. The ground underneath and around you will move whether you do anything or not. What do you do?
I am a preference consequentialist, and this is a consequentialist analysis. I won’t be surprised if how much a person agrees with it correlates with how much of a consequentialist they are. I present it mainly in the interest of braindumping the abstractions I use to model these kinds of situations, which is as much in the interest of information sharing as anything else. There will be mathematics.
I am what they call a “stubborn cuss” where I come from, and if my only choices are to jump or be pushed, my inclination is to jump. Tor fell down where organizational accountability was concerned, at first, and as Karen Reilly’s experience bears out, had been doing so for a while. So that’s the direction I jumped. To be perfectly honest, I still don’t have anything resembling a good sense of what the effects of my decision were versus those of anyone else who spoke up, for whatever reason, about the entire situation. Self-organized chaotic systems are confounding like that.
If you observe them for long enough, though, patterns emerge. Westhues has been doing this since the mid-1990s. He remarks that “one way to grasp what academic mobbing is is to study what it is not,” and lists a series of cases. “Ganged up on or not,” he concludes of a professor who had falsified her credentials and been the target of student protests about the quality of her teaching, “she deserved to lose her job.” Appelbaum had already resigned before the mob broke out. Even if the mob did have an extrinsic demand, his resignation couldn’t have been it, because that was already over and done with.
Okay, but what about the intrinsic outcomes, the radical restructuring of the graph that ensued as the avalanche settled? Lovecruft has argued that removing abusers from opportunities to revictimize people is a necessary step in a process that may eventually lead to reconciliation. This is by definition a change in the shape of the social graph. Others counter that this is ostracism, and, well, that’s even true: that’s what it looks like when a whole lot of people decide to adopt a degrees-of-separation heuristic, or to play Exit, all at once.
Still others argue that allegations of wrongdoing should go before a criminal court rather than the court of public opinion. In general I agree with this, but when it comes to longstanding patterns of just-this-side-of-legally-actionable harm, criminal courts are useless. A bad actor who’s clever about repeatedly pushing ever closer to that line, or who crosses it but takes care not to leave evidence that would convince a jury beyond a reasonable doubt, is one who knows exactly what s/he’s doing and is gaming the system. When a person’s response to an allegation boils down to “no court will ever convict me,” as Tor volunteer Franklin Bynum pointed out, that sends a game-theoretically meaningful signal.
Signaling games are all about inference and credibility. From what a person says, what can you predict about what actions they’ll take? If a person makes a particular threat, how likely is it that they’ll be able to make good on it? “No court will ever convict me” is actually pretty credible when it comes to a pattern of boundary-violating behavior that, in many cases, indeed falls short of prosecutability. (Particularly coming from someone who trades on their charisma.) Courts don’t try patterns of behavior; they try individual cases. But when a pattern of boundary-pushing behavior is the problem, responding to public statements about that pattern with “you’ll never prove it” is itself an instance of the pattern. As signals go, to quite a few people, it was about the loudest “I’m about to defect!” signal Appelbaum could possibly have sent in a game where the players have memory.
Courts don’t try patterns of behavior, but organizations do. TQ and I once had an incredibly bizarre consulting gig (a compilers consulting gig, which just goes to show you that things can go completely pear-shaped in bloody any domain) that ended with one of the client’s investors asking us to audit the client’s code and give our professional opinion on whether the client had faked a particular demonstration. Out of professional courtesy, we did not inquire whether the investor had previously observed or had suspicions about inauthenticity on the client’s part. Meanwhile, however, the client was simultaneously emailing conflicting information to us, our business operations partner, and the investor — with whom I’d already been close friends for nearly a decade — trying to play us all off each other, as if we didn’t all have histories of interaction to draw on in our decision-making. “It’s like he thinks we’re all playing classical Prisoner’s Dilemma, while the four of us have been playing an iterated Stag Hunt for years already,” TQ observed.
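TQ's distinction is easy to see in the payoff structures themselves. The numbers below are standard textbook-style values of my own choosing, not anyone's actual utilities; what matters is the shape. In a Prisoner's Dilemma, defection is the best reply no matter what you expect the other player to do. In a Stag Hunt, cooperation is the best reply *if* you expect cooperation — which is precisely what years of iterated history give you grounds to expect.

```python
# Row player's payoffs, keyed by (my_move, their_move).
# 'C' = cooperate (hunt stag / stay silent), 'D' = defect (hunt hare / betray).
# Illustrative numbers only -- the ordering, not the magnitudes, is the point.
STAG_HUNT = {('C', 'C'): 4, ('C', 'D'): 0, ('D', 'C'): 3, ('D', 'D'): 3}
DILEMMA   = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def best_reply(payoffs, their_move):
    """The row player's best response to a fixed move by the other player."""
    return max('CD', key=lambda my_move: payoffs[(my_move, their_move)])
```

Run `best_reply` against both matrices and the asymmetry falls out: in the Dilemma the best reply is to defect regardless, while in the Stag Hunt, mutual cooperation is a stable equilibrium as long as each player credibly expects the other to cooperate. Treating long-term collaborators as one-shot Dilemma opponents, as the client did, throws that equilibrium away.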
Long story short (too late), the demo fell shy of outright fraud, but the client’s promises misrepresented what the code actually did to the point where the investor pulled out. We got a decent kill fee out of it, too, and a hell of a story to tell over beers. When money is on the line, patterns of behavior matter, and I infer from the investor’s action that there was one going on there. Not every act of fraud — or force, for that matter — rises to the level of criminality, but a pattern of repeated sub-actionable force or fraud is a pattern worth paying attention to. A pattern of sub-actionable force or fraud coupled with intimidation of people who try to address that pattern is a pattern of sociopathy. If you let a bad actor get away with “minor” violations, like plagiarism, you’re giving them license to expand that pattern into other, more flagrant disregard of other people’s personhood. “But we didn’t think he’d go so far as to rape people!” Of course you didn’t, because you were doing your level best not to think about it at all.
Investors have obvious strong incentives to detect net extractors of value accurately and quickly. Another organization with similarly strong incentives, believe it or not, is the military. Training a soldier isn’t cheap, which is why the recruitment and basic training process aims to identify people who aren’t going to acquire the physical and mental traits that soldiering requires and turn them back before their tenure entitles them to benefits. As everyone who’s been through basic can tell you, one blue falcon drags down the whole platoon. Even after recruits have become soldiers, though, the military still has strong incentives to identify and do something about serial defectors. Unit cohesion is a real phenomenon, for all the disagreement on how to define it, and one or a few people preying on the weaker members of a unit damages the structure of the organization. The military knows this, which is the reason its Equal Opportunity program exists: a set of regulations outlining a complaint protocol, and a cadre trained and detailed to handle complaints of discriminatory or harassing behavior. No, it’s not perfect, by any stretch of the imagination. The implementation of any human-driven process is only as rigorous as the people implementing it, and as we’ve already discussed, subverting human-driven processes for their own benefit is a skill at which sociopaths excel. However, like any military process, it’s broken down into bite-sized pieces for every step of the hierarchy. Some of them are even useful for non-hierarchical structures.
Fun fact: National Guard units have EO officers too, and I was one. Again and again during the training for that position, they hammer on the importance of documentation. We were instructed to impress that not just on people who bring complaints, but on the entire unit before anyone has anything to bring a complaint about. Human resources departments will tell you this too: document, document, document. This can be a difficult thing to keep track of when you’re stuck inside a sick system, a vortex of crisis and chaos that pretty accurately describes the internal climate at Tor over the last few years. And, well, the documentation suffered, that’s clear. But now there’s some evidence, fragmentary as it may be, of a pattern of consistent and unrepentant boundary violation, intimidation, bridge-burning, and self-aggrandizement.
Even when the individual acts that make up a pattern are calculated to skirt the boundaries of actionable behavior, military commanders have explicit leeway to respond to the pattern with actions up to and including court-martial, courtesy of the general article of the Uniform Code of Military Justice:
Though not specifically mentioned in this chapter, all disorders and neglects to the prejudice of good order and discipline in the armed forces, all conduct of a nature to bring discredit upon the armed forces, and crimes and offenses not capital, of which persons subject to this chapter may be guilty, shall be taken cognizance of by a general, special, or summary court-martial, according to the nature and degree of the offense, and shall be punished at the discretion of that court.
It’s the catch-all clause that Kink.com installed a bunch of new rules in lieu of, an exception funnel that exists because sometimes people decide that having one is better than the alternative. Realistically, any form of at-will employment implicitly carries this clause too. If a person can be fired for no reason whatsoever, they can certainly be fired for a pattern of behavior. Companies have this option; organizations that don’t maintain contractual relationships with their constituents face paths that are not so clear-cut, for better or for worse.
But I take my cues about exception handling, as I do with a surprisingly large number of other life lessons, from the Zen of Python:
Errors should never pass silently.
Unless explicitly silenced.
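In code, the two maxims look something like the sketch below (the function name and file path are hypothetical, purely for illustration): an error is either surfaced and handled where it occurs, or suppressed by a deliberate, visible decision at the call site — never swallowed by accident.

```python
import contextlib

def parse_count(text):
    """Parse a user-supplied integer, surfacing failures instead of
    swallowing them: "errors should never pass silently"."""
    try:
        return int(text)
    except ValueError as exc:
        print(f"bad input, skipping: {exc}")  # reported loudly, handled locally
        return None

# "Unless explicitly silenced": suppression is fine when it is deliberate
# and visible right at the call site.
with contextlib.suppress(FileNotFoundError):
    with open("/no/such/path") as f:  # hypothetical path; absence is expected
        print(f.read())
```

The `contextlib.suppress` block is the honest version of a bare `except: pass` — the silencing is explicit, named, and scoped to exactly one anticipated error.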
When a person’s behavior leaves a pattern of damage in the social fabric, that is an exception going silently unhandled. The whisper network did not prevent the damage that has occurred. It remains to be seen what effect the mob-driven documentation will have. Will it achieve the effect of warning others about a recurring source of error (I suppose nominative determinism wins yet again), or will the damaging side effects of the phase transition prove too overwhelming for some clusters of the graph to bear? Even other consequentialists and I might part ways here, because of that incomparability problem I mentioned earlier. I don’t really have a good answer to that, or to deontologists or virtue ethicists either. At the end of the day, I spoke up because of two things: 1) I knew that several of the allegations were true, and 2) if I jumped in front of the shitstorm and got my points out of the way, it would be far harder to dismiss as some nefarious SJW plot. Sometimes cross-partisanship actually matters.
I don’t expect to change anyone’s mind here, because people don’t develop their ethical principles in a vacuum. That said, however, situations like these are the ones that prompt people to re-examine their premises. Once you’re at the point of post-hoc analysis, you’re picking apart the problem of “how did this happen?” I’m more interested in “how do we keep this from continuing to happen, on a much broader scale?” The threat of mobs clearly isn’t enough. Nor would I expect it to be, because in the arms race between sociopaths and the organizations they prey on, sociopath strategies evolve to avoid unambiguous identification and thereby avoid angry eyes. “That guy fucked up, but I won’t be so sloppy,” observes the sociopath who’s just seen a mob take another sociopath down. Like any arms race, it is destined to end in mutually assured destruction. But as long as bad actors continue to drive the sick systems they create toward their critical points, there will be avalanches. Whether you call it spontaneous order or revolutionary spontaneity, self-organized criticality is a property of the system itself.
The only thing that can counteract self-organized aggregate behavior is different individual behavior that aggregates into a different emergent behavior. A sick system self-perpetuates until its constituents decide to stop constituting it, but just stopping a behavior doesn’t really help you if doing so leaves you vulnerable. As lousy of a defense as “hunker down and hope it all goes away soon” is over the long term, it’s a strategy, which for many people beats no strategy at all. It’s a strategy that increases the costs of coordination, which is a net negative to honest actors in the system. But turtling is a highly self-protective strategy, which poses a challenge: any proposed replacement strategy that lowers the cost of coordination among honest actors also must not be significantly less self-protective, for idiosyncratic, context-sensitive, and highly variable values of “significantly.”
I have some thoughts about this too. But they’ll have to wait till our final installment.