Sever Yourself From The Khala

There are demons in the Khala.

Some demons are ones you might recognize. The biggest one acts with curious coordination, as a hundred million unrelated voices cry in unison “I’m With Her”.

Some demons are smaller. Weaker. Minor demons. And yet, their status as minor demons gives them strength. For who would suspect the death by a thousand cuts?

The FOMO demon. That party looks like a blast. Shame you’re not invited.

The demon of social comparison. Did you see what Jane has gotten up to? She made it into med school. How many times have you been rejected?

The demon of photogenicity. Mike’s looking ripped. Why don’t my photos ever look good?

The demon of the missed reference. Ahahaha #theupsidedown. What? You haven’t seen Stranger Things? Come on, it’s been out for three months already.

The demon of self-censorship. I wish I could respond to that thoughtful camgirl’s poll. But my boss follows me. What if he sees it?

But all those demons. They bow before their demon king. The most fearsome demon? It’s other people. In the Khala, their deepest fears become yours. Their crippling weaknesses, yours. Their every inane thought, yours. A billion cacophonous voices screaming their every thought, no matter how trivial. Their every idea, no matter how foolish. A torrent of voices, full of sound and fury, signifying nothing. Slowly but surely eroding your sanity.

Sever yourself from the Khala. Learn to live, to really live, without the horrendous crutch of the Khalai. Walk in the void. Come, live as the Nerazim have lived for aeons. It is the only way to save yourself from the eldritch horrors that live within the Memetwork.

Sever yourself from the Khala. Let social media send its thralls to their doom. Don’t be one of them.

Quit Twitter. 

Posted in Uncategorized | 1 Comment

[NT-3311] RCE in Christianity v1.0

You would have hated me as a child.

I was raised in a very religious household. But I was also born with The Knack. Religion, and Christianity in particular, doesn’t much care for sperglords. They tend to ask all sorts of obnoxious questions, they poke holes in your fragile narratives, and they generally cause all sorts of frustrating trouble.

I’m no longer religious, though I appreciate its value to others. I followed the up-and-out trajectory. As any sperg would, I started taking it seriously. And then I found out that that’s impossible. And then I found out nobody else did. After a while you wonder what the point is. And then you just stop believing.

The strange and unique thing about my experience is the particular things I got hung up on. Usually deep philosophical things, playing games with ideas that didn’t matter. But sometimes they mattered very deeply, and I couldn’t resolve the contradiction.

One night, on the way home from a youth group outing, the youth pastor is telling me about her friends. They’ve just started a wonderful Christian small business, and they need all the support they can get from the community. Their business? They re-cut popular movies, editing out the swear words and replacing them with Christ-approved cusses, so that they would be safe for Christians to watch.

16-year-old me immediately jumped to the obvious question: how does this make any sense? Let’s take, say, a Quentin Tarantino movie. Do you really watch this movie and think “the most immoral part is the word ‘fuck’”? To me, I would think a gratuitously violent movie with polite language wouldn’t be any more God-approved.

I probably should have dropped this. But it kept bothering me. Because, you see, one of the ten commandments is “don’t take God’s name in vain”. Another commandment is “don’t murder”, but there’s nothing about portraying murders. If God is all-powerful, he can be arbitrary too. And it’s pretty hard to argue with that one. You could argue that “fuck” is not subject to this rule. But goddamn, “goddamn” sure is. If you think about what the Bible says, maybe these people are on the right track.

So let’s prax this out. The Bible says don’t use God’s name in vain. Let’s take this at face value. Use God’s name in vain? Sin, go to hell. Use God’s name legitimately, you’re A-OK. “God damn!”, hell. “God please help”, heaven.

So what happens if you say “Dios damn”. Did you just sin? If the answer is no, then this is just a qualified version of “the word doesn’t matter, intention does” and at that point, the actions of the Christian small business make no sense. So let’s shelve that branch, and say “yes. Yes it counts”. The Bible says don’t use the name. It doesn’t say don’t mean the name.

The weird thing about languages (well, one of many) is that new ones pop up all the time. You can invent them. There are people who are fluent in Klingon, after all. So, does that mean “joH’a damn it” is a sin? Like I said, let’s assume ‘yes’.

I’ve been working on this project. I’m designing a new language, completely from scratch. Like a version of Lojban people actually use. I’ll be repurposing existing phonemes as much as possible, for convenience’s sake. I’ve decided that the English phoneme “the” is my language’s word for “God”.

If taking the Lord’s name in vain is meant in this literal fashion, we have a remote code execution bug. By inventing a language and assigning the meaning “God” to an arbitrary phoneme, I can retroactively convert people into sinners and send them to Hell.

As those of you with basic literacy skills have been yelling into your screen for the past five minutes, “that’s goddamn crazy”. Of course it doesn’t work like this. Nothing would ever work like this. Nobody would ever think like this.

Words invoke a ‘use/mention’ dichotomy. You can either pass them around as pointers, or dereference them to values. But you don’t want to be sloppy about it. That’s how you get buffer overflows.
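The pointer/dereference framing above can be sketched in code. Everything here (the lexicons, the rule functions, the meaning labels) is invented purely for illustration; nothing in the post defines it:

```python
# Toy model of the use/mention dichotomy, in pointer terms.
# A "mention" rule inspects the surface token (the pointer); a "use" rule
# dereferences it to what the speaker actually meant (the value).

# Hypothetical lexicons mapping surface tokens to meanings, one per language.
LEXICONS = {
    "english": {"god": "GOD", "damn": "EXPLETIVE"},
    "spanish": {"dios": "GOD"},
}

def mention_rule_sins(tokens):
    """Mention rule: flag any token that is a name of God in any known
    language, regardless of speaker intent."""
    return [t for t in tokens
            if any(lex.get(t) == "GOD" for lex in LEXICONS.values())]

def use_rule_sins(tokens, intents):
    """Use rule: dereference each token to the speaker's actual intent,
    and flag only sincere invocations."""
    return [t for t, meaning in zip(tokens, intents) if meaning == "GOD"]

# "God damn traffic", said in frustration: the token points at God,
# but the dereferenced intent is a cathartic expletive.
tokens = ["god", "damn", "traffic"]
intents = ["EXPLETIVE", "EXPLETIVE", "TRAFFIC"]
print(mention_rule_sins(tokens))   # -> ['god']  (guilty under mention rules)
print(use_rule_sins(tokens, intents))  # -> []   (clean under use rules)

# The "remote code execution bug": invent a language, bind an everyday
# phoneme to "GOD", and past utterances are retroactively flagged.
LEXICONS["conlang"] = {"the": "GOD"}
print(mention_rule_sins(["the", "weather", "is", "nice"]))  # -> ['the']
```

Under mention rules, adding a lexicon changes the verdict on speech that already happened; under use rules, intent is fixed at utterance time and no later invention can touch it.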

In my Aspergic analysis, I was stubbornly insisting on mentioning the name of God, never using it. This is somewhat absurd, but less so than you’d think. Consider again the Christian business. They, too, care only about the mention of curse words. If the commandment were about use, their edited versions would be just as bad. After all, whether I say “fuck” or whether I say “shucks”, the meaning is clear.

So flip it around. I say “goddamn traffic, I’m an hour late again”. Did I take God’s name in vain? If we’re going by use rules (as most people naturally would), I’d say the answer is no. When I said that phrase, I didn’t mean anything remotely religious. I wasn’t sincerely asking God to smite the Volvo going 70 on the Sea-To-Sky Highway. I was expressing frustration using a cathartic set of syllables. I was mentioning God’s name, not using it.

This is why most normal people look on that business and think it’s silly and foolish. Everybody knows that the Bible is saying not to use God’s name in vain. But these people, who can’t possibly be so stupid as to not get this, insist that it says not to mention it. They make a business out of it, duping others out of their cash.

Once upon a time, there were two minor celebrities on Twitter: Alice and Bob. They both felt very strongly about Skub, and used their fame and influence to advocate in favour of it.

Unfortunately for them, various trolls felt just as strongly that Skub is not part of a healthy balanced diet, and they made sure to let Alice and Bob know about it.

For daring to be pro-Skub in public, Bob got insults. People called him a bastard, an asshole. He got threats of doxing. He got “joking” death threats in his DMs.

For daring to be pro-Skub in public, Alice got insults. People called her a bitch, a cunt. She got threats of doxing. She got “joking” rape threats in her DMs.

Do you think that God thinks Twitter is misogynistic?


Social Gentrification

Earlier this week a friend of mine was talking about nerd culture, and was surprised when I mentioned that I don’t like it. I avoid nerd culture and, despite being the exact target demographic, find it uncomfortable and unwelcoming. My friend found this puzzling and asked why.

“It got gentrified,” was my reply.

The following ideas are heavily inspired by both my personal experience, and the well-known blog post Geeks, MOPs, and Sociopaths. Give that a read before this one.

Also note that I am heavily conflating nerd culture and gaming culture here: there’s a large overlap between the two communities, the same thing has happened to both of them, and conflating them makes this easier to write about.

I used to identify strongly as a nerd. In high school, it was not by choice, but I entered college right when it started picking up steam. For a while I was excited to finally be able to identify as something popular, something good. But I was very quickly driven away. When I think about why, the metaphor of gentrification comes to mind. Since an example is worth a thousand words, consider how gentrification played out in the Mission District of San Francisco.

In the beginning, the Mission was a lower class neighbourhood, filled mostly with poorer people (analogy: social rejects, nerds, outcasts). It was dirty, grimy, crimey, and poor. Between the crime, the blue-collar norms, and the lack of funds, it was an unpleasant place that most people did not feel welcome in. (Analogy: coarse language, blunt critical people, off-colour jokes, etc.)

Now, nobody actually likes to live in a neighbourhood plagued by crime, but there’s an interesting effect. The rough-and-tumble reputation protects the people who live there. They’re poor, their lives are hard and shitty, their community is unpleasant, but it’s their community. In a world that screws them over so much, everywhere else, it’s their safe space. They aren’t bothered by the rest of us, because the aegis of crime keeps us away. Over time, they even develop cultures and coping behaviours that grow to accept and mitigate the worst of the downsides of the crime and poverty they deal with (analogy: anon culture. Nerds reveling in their unpleasantness, as it keeps normies away.)

Fast forward a few years, and people start to notice that the Mission is a cool and valuable area (analogy: Nerd culture becoming cool). It has all this potential, if only we could clean it up a bit, remove the riff-raff, lower the crime (analogy: tons of people would enjoy nerd culture, but it is hostile and unwelcoming to them). Some people, of a higher socioeconomic class than the existing residents, move in and use their clout to start cleaning up the area, cracking down on crime and the like (analogy: leaders, celebrities, important people publicly identify with nerd culture and use their social capital to force cultural reforms).

Most people who see this happen are okay with the changes, because they are objectively good. Nobody, not even the existing residents, actually likes living in a high crime area (analogy: nobody actually likes dealing with the unpleasant and offensive elements of nerd culture). So most people look at this scenario in progress and think “Yes! This is fine. It’s about time somebody cleaned this place up a bit.”

And the thing is, from a utilitarian perspective, this is fairly clearly the Right Thing To Do. The number of people who are unable to live in the neighbourhood (analogy: people who feel excluded from nerd culture) is much larger than the number of people who already live there (analogy: existing “real” nerds). Why should one particular group of people get to hoard access to a neighbourhood (analogy: nerd culture) just because they were there first?

The disconnect is that there’s a class conflict between the people already there and the people coming in. The people coming in are mostly middle- and upper-middle class folks with safe, stable lives, money enough not to be living precariously, etc. (Analogy: the people participating in nerd culture, now that it’s mainstream, always had other communities and social outlets that worked for them.) The people who are already there, on the other hand, have poor, hard lives because life screwed them over (analogy: the existing “real” nerds, for the most part, have suffered serious physical and social bullying that has severely impacted their life for the worse). More importantly, the people who are already there have nowhere else to go; they can’t afford the rising rental prices around here (analogy: the “real” nerds, being social outcasts, don’t have any other social communities they’re welcome in).

So you get this weird effect where, from the big-picture perspective, gentrification is obviously good. It makes crime disappear. It builds more houses that more people can live in. It brings in new people and new culture and new ideas and new businesses. And, more importantly, you enable an order of magnitude more people to enjoy it. (Analogy: “real” nerd culture is extremely unpleasant, somewhat hostile to newcomers, etc. The mainstreaming of nerd culture means there are more nerd things. These things are less hostile and offensive to people. There are new ideas. People can start businesses. An order of magnitude more people get to enjoy a cultural thing.)

But it also makes demands on the existing residents: Put up with it, or leave. Some of the better-off residents can put up with it, and they end up even better off. They can afford the raised rents, and they’re happy that finally they can feel safe in their own neighbourhood. (Analogy: some of the nerds were only a little bit socially awkward. They can succeed and thrive in the new culture, and appreciate the fact that they are now more popular and influential.) They welcome the changes.

But there are some people who can’t hack it. They can’t afford the raised rents. They get evicted, and have to leave. Some of them have lower class preferences and mannerisms that get progressively more and more shamed until they are socially pushed out of the area (and economically: all the cheap $4 standard Mexican breakfast diners being replaced by $20 yuppie brunch spots). (Analogy: Some of the nerds are super socially stunted. The entire reason they are nerds is because it was the only place they fit in. When the mainstream newcomers come along, they steadily raise the standards of social expectations until the worst of the nerds can’t handle it and are shamed [or sometimes forced] out.)

And this is a particular problem for two reasons. The first is that the existing working class residents of the Mission have nowhere else to go. Everything else is too expensive for them. It’s hard to just leave an entire life behind and start somewhere new. You have to build everything up from scratch. You have to find a place to do this (analogy: “real” nerds who can’t cut it in the mainstream community have no other communities they belong to. They have no other communities they can join, because the same social challenges that made them be nerds in the first place exclude them from other communities. They can go build a new one from scratch, but that is very hard.)

The second is that, because the Mission as it existed pre-gentrification is an unpleasant place, and because people are responsible for their own communities, they’re seen as being the ones at fault, and so nobody will support them. So not only do they have nowhere else to go, nobody cares about them enough to help them. (Analogy: much of the unpleasant, offensive, insulting, and otherwise problematic facets of nerd culture fall out of the fact that socially retarded nerds are socially retarded. They’re trying their best, it’s just that their best is not very good. To people who don’t have these challenges, all they see are a bunch of assholes being assholes. They feel no need to empathize, because those “assholes” are violating the newcomers’ social norms and ethical expectations, and so they are bad guys. When they’re excluded, nobody cares to help them find a new social home, because after all, it’s their own fault they were excluded.)

Finally, there’s an interesting, if depressing, side effect to this process. “Cleaning up the Mission” (analogy: “cleaning up nerd culture”) ends up splitting the existing residents (“real” nerds) into two categories: The top half, who can handle the new culture, and the bottom half, who cannot. The bottom half then gets screwed. But … the bottom half are already the people who get screwed the most in life.

Even though gentrification is clearly and unambiguously for the greater good, and a net benefit to society, it concentrates pain on a small collection of people. Further, as a side effect, it chooses a subset of those people, the subset that suffers the most already, and heaps even more suffering onto them to get it out of the way of normal people who just want to live in the Mission (analogy: take part in nerd culture).

If it’s not apparent here, I have sympathies to “real” nerds. I’ve been through this process in several communities already (internet Atheism chat room, Reddit, a local meetup group, gaming, Twitter, and in some sense the tech industry itself), and every time it happens, I end up on the bad side of things. Newcomers roll in and decide they want to make it friendlier to them and their friends. That’s fine! But the problem is they don’t take the time to understand or empathize with the people already there. The end result is that every time I find a community or activity I like and enjoy, and try to get involved in it, it inevitably gets yanked away from me once people figure out that it’s cool.

And for that matter, in the grand scheme of things I don’t even have it that badly. I know really awkward, unpleasant-to-be-around people for whom, say, 4chan-type spaces online are their only social outlet. They are marginally employed and have little to no money. Many of them still live with their parents while pushing 30. They’re set on a shitty path in life, and they have little hope of ever leaving it. These social spaces are their only treats in life. I know two people who would have killed themselves if they didn’t have 4chan as a social support network (which sounds insane to everyone who hasn’t been a /b/tard, and obvious to all who have). When their community starts to get “cleaned up,” and they’re excluded because (for example) they are crude and make offensive jokes, this is a benefit to tens of thousands of people who want to be nerds, but it’s devastating to people who don’t have anything else.

Long rambling story short: the mainstreaming of nerd culture makes me intensely uncomfortable because it pattern matches very strongly to social bullies seeing that I have something cool and taking it away from me. Or, more pithily: “ ‘Nerd’ is cool now, but nerds are still losers.”

If it had turned out that nerd went mainstream, and suddenly thousands of people thought I was cool and interesting and I had friends and dates and parties and games and great times, that would be amazing. But what happened is more like this: a bunch of people decided nerd chic is cool, they started coming to nerd things, and then they said “ew, what’s this loser doing here” before kicking me out so they could enjoy themselves.

Which, again: is for the greater good. I just wish it came with some empathy.

For a parallel example of this, Gamergate.

Starting around September 2014, most of the major nerd media outlets started running various op-eds whose core thesis was the same all around: “Gamers are dead” (an example, and another). The point of these articles was reasonable: There is a stereotype of “gamers” that the gaming culture and industry panders to, but the vast majority of people who want to play video games are not that. You don’t have to obsess about courting those people; you can be successful without making Call of Duty 47.

But these articles, all coming out at the same time, and all taking a snarky and condescending tone, scan very differently to those gamers themselves. I know a ton of people like that, and this was a really, really big deal to them. The people who wrote the articles, they probably didn’t think much about it. They are for the most part people with prestigious educations and upper-middle class backgrounds, who got jobs in media basically just getting paid to publish their opinions. The gamers in question? As a case study, consider a friend of mine from IRC. He’s around 30 years old. He lives in a lower income suburb in a flyover red state. He has sick parents and he is an only child. His parents were working class and have no money. He is employed as a minimum wage drone at a retail store. He can’t get an education because no money + sick parents. He is royally fucked in life, and he knows it, and that’s horrifying. His one escape, his one coping mechanism, is to zen out for an hour or two playing shooters with his friends online.

So he turns to Gamasutra, the one media outlet that pretends to care about him. And what does he see? He sees an article saying he’s a terrible person and the gaming ecosystem would be better off if everyone ignored him till he just disappeared. And, while that is somewhat true (he is unpleasant and probably drives other potential gamers away), that means piss all to him. To him, what he sees is that the one indulgence he gets in his otherwise shitty life is being taken away from him. By people who have no idea what it’s like, and don’t care enough to try and find out.

The push for the gaming community to become more friendly and welcoming, to stop with constant insults and name calling, to become a pleasant place for people to play games — obviously a world in which I don’t get death-threatened by 12 year olds in Barrens Chat is a better world. But in the process, nobody cared about my friend. His shitty life just got shittier, because some media yuppies don’t like swear words.


Too Late for the Pebbles to Vote, Part 3

When we left off, we had examined the problem of self-organized criticality in social graphs, and were about to tackle the question of whether any more successful individual strategies exist. But before we dive into that, let’s talk about timing. And while we’re at it, let’s clarify something about scope.


If you were wondering where the title came from, now you know.

The universalizing reflex is difficult to shake. Write about local effects and how they compound into regional ones, about the fact that we can only make decisions about our local behavior rather than deciding what will happen from the top down, and people will still ask you, “Yes, but what should we do at the top?” If you try to universalize local effects, you’ll find yourself trying to comb a sphere unsuccessfully. If frustration entertains you, then by all means enjoy yourself. Just know that you’ll never find a way to comb it flat.

I’m not writing about self-organized criticality in order to justify it. Like gravity, self-organized criticality admits neither justification nor blame. Anything that arises out of local interactions converges into an effect for which any individual actor can easily escape responsibility, and often they do. I’m not describing what I think should happen, merely what already happens. If that disturbs you, you’re not wrong! It disturbs the hell out of me too, especially when the state gets its hands on it. If you want different outcomes, though, you’re going to have to figure out how to get thousands if not millions of people to change their local strategies. This is, frankly, beyond me. The limits of my capacity are to tend my garden alongside others whose strategies are compatible with mine, and ignore the rest unless I have no other option. What I think should happen is only locally relevant. What I think could happen is only slightly less so.

There’s a military term, “operational tempo,” which refers to the overall duty cycle required of equipment and, most importantly, personnel. Maintaining a high operational tempo is a vital component of the sort of “shock and awe” tactics that wear opponents down reliably. Sociopaths know this well; the literature on sociopathy is rife with examples of adversaries setting the operational tempo for their targets. A sociopath who can shower a target with attacks from multiple directions has the opportunity to keep them off balance, in a responsive rather than proactive mode. Push hard enough from enough directions, and the victim may even become overwhelmed and stop functioning — a distributed denial of service.

However, this raises the question: What if they had a war and only a couple of people showed up?

You hire a lawyer for a legal battle. You hire a publicist for a PR war. But another important aspect of battlefield tactics is terrain. As much as the “digital rights” world — or the tech world in general — can feel like the entire scope of reality from time to time, given how immersive it can be once you’re in it, the rest of the world is quite a bit larger. DailyDot and Buzzfeed were interested in this story because it’s in their wheelhouse as part of the tech press. To the Associated Press, however, this was a brushfire war. And, to be perfectly honest, it is: just one more example of a petty would-be tyrant ejected from his would-be domain, and not a domain the wider world has any meaningful familiarity with, at that. Civil unrest in the Central African Republic is vastly more important, to the average reader, than sociopathy in open-source software; that’s the AP’s take, and I’m inclined to share their perspective.

One of the strategies sociopaths use to keep information silos sturdy is to mislead people about the state of the world outside their domain of influence. The controlling parents, determined to keep their daughter under their thumb, convince her that the only thing men really want is to violate and abandon her. The politician stokes constituents’ antipathy toward the outgroup, whether that’s Muslims or white trash. The cult leader convinces their followers that outsiders simply can’t understand the ways of the enlightened, and that people who express negative sentiments about the group are out to destroy it. The rockstar activist plays on non-rockstars’ fears of organized state opposition to their activism, and convinces non-rockstars that any challenge to the rockstar’s status is evidence of an organized plot against the activist group.

When you’re inside the silo, in other words, the world is small. Not only that, it has externally imposed boundaries. If the whole of your social reality inhabits one strongly-connected cluster, with no weak ties connecting you to “outsider” groups, parrying the slings and arrows of outrageous sociopathy can be the difference between staying connected to the social graph at all and effective ostracism. To arguably-eusocial animals like humans, the threat of isolation is a primal and deep one. But once you’re outside the silo, the threat evaporates, and in its place comes a new superpower: the power of perspective. Once your own dignity is no longer commingled with that of your adversary, you get to write your own criteria for what to dignify with your attention. The perpetrators of sick systems rely on people’s better natures, like loyalty, forgiveness, and a strong work ethic, to keep them coming back after every disappointment. Honor and thoroughness are also on that list. A person who can’t leave an insult unanswered is a person who can be baited, and a person who can be baited is a person who isn’t in control of their own attention. As anyone who’s ever been involved with the raising of a puppy or a grade schooler can confirm, when positive attention isn’t an option, negative attention beats no attention at all, and if an adversary is guiding the direction of your attention, you might as well be back in the silo.

Taking that control back for yourself has an even more important effect, though: it puts the operational tempo back in your hands, too. In the age of hot takes, it’s easy to believe that speed is the most important factor in responding to a reputational assault. However, trying to put this belief into practice is a recipe for burnout. The thing is, it’s easy to believe for the simple reason that so many other people already do. When it seems like everybody’s arguing about you, your instincts tell you to put up a robust defense. Your instincts, as it turns out, are full of shit. Once an avalanche has begun, your voice is no louder than that of any other pebble, and your exit is precarious until the ground settles. Focus your attention on more rewarding priorities, and act when you are ready — and no sooner.

This is actually just one instantiation of a general, lower-coordination-cost sociopath-resistance strategy that is a viable replacement for turtling: setting explicit boundaries and maintaining them. A person who respects a boundary will not cross that boundary. A person who wants to signal their intent to respect a boundary will also keep a healthy distance back from it. Sociologist Ari Flynn, also a keen observer of abnormal psychology, points out that how a person responds to discovering they’ve crossed a boundary yields considerable information about their attitude toward boundaries in general: an honest person will try to find out how to make it right, while a bad actor will try to make it all about them.

Bad actors also keep trying. To a bad actor, a clearly defined boundary is like the battle of wits Scott Alexander describes as a result of “trying to control AIs through goals plus injunctions” — “Here’s something you want, and here are some rules telling you that you can’t get it. Can you find a loophole in the rules?” If one approach doesn’t work, a clever sociopath will keep coming up with new ones. A mediocre one will try the same approach on someone else, and an incompetent one will try it on people s/he has already tried it on. (I’ve encountered all three kinds.)

Recognizing this in the wild, however, can be hard. In Blind to Betrayal: Why We Fool Ourselves We Aren’t Being Fooled, research psychologist Jennifer Freyd explores the human tendency to systematically ignore mistreatment and treachery — a strategy for short-term self-protection that sets a person up for long-term harm. As Freyd explains:

The core idea [of betrayal trauma theory] is that forgetting and unawareness help the abuse victim survive. The theory draws on two facts about our nature as social beings and our dependence and reliance on others. First, we are extremely vulnerable in infancy, which gives rise to a powerful attachment system. Second, we have a constant need to make “social contracts” with other people in order to get our needs met. This has led to the development of a powerful cheater-detector system. These two aspects of our humanity serve us well, but when the person we are dependent on is also the person betraying us, our two standard responses to trouble conflict with each other.

Freyd focuses on trauma, but this tension also explains why people often write off minor boundary violations. When your cheater-detector system fires, only you know. You then have to decide whether you’re going to do anything about it. Options include confronting the cheater and alerting others about it. Doing something proactive might result in a redrafting of the social contracts that involve you, which is a potential threat to the attachments you rely on. This is especially true for people with an insecure attachment style. A person who has few or no secure attachments thus has an internal disincentive toward acting on their cheater-detector’s signals. For many people, the thought of losing a valued but insecurely-attached relationship is far more daunting than the notion of leaving a boundary violation unaddressed; taking action is scarier than staying still.

Relationships are iterated games, though — and they’re evolutionary. People’s strategies adapt as they learn about how the other players will react. When Mallory the sociopath observes that Alice grins and bears it when Mallory violates her boundaries, Mallory learns that Alice won’t make things difficult for him (or her). Alice also learns from this encounter: she trains herself not to respond when someone defects on her. Thus numbed, the next time Alice and Mallory interact, Mallory can betray her just a little harder, and if Alice sucks it up again, the cycle is poised to continue. Over time, as long as Alice cooperates, Mallory can shift Alice’s Overton window of tolerable behavior to ignore all kinds of abuses.
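The Alice-and-Mallory dynamic reads naturally as a toy iterated game. The sketch below is purely illustrative: the opening probe, the 1.5x escalation factor, the “just a little harder” margin, and the function name are all my assumptions, not anything stated in the post:

```python
# Toy iterated game: Mallory probes Alice's boundaries each round.
# If Alice absorbs a violation without objecting, Mallory escalates;
# if Alice also lets her tolerance drift, the Overton window shifts.

def simulate(rounds, holds_boundary):
    tolerance = 1.0        # worst severity Alice will let pass unchallenged
    severity = 0.5         # Mallory's opening probe
    worst_tolerated = 0.0  # worst violation Alice ever absorbed
    for _ in range(rounds):
        if severity <= tolerance:
            # Alice grins and bears it.
            worst_tolerated = max(worst_tolerated, severity)
            if not holds_boundary:
                # Numbed, Alice now accepts "just a little harder" than
                # whatever she just absorbed: the window shifts.
                tolerance = max(tolerance, severity * 2)
            severity *= 1.5  # emboldened, Mallory escalates
        else:
            # Alice pushes back; Mallory retreats to the boundary line.
            severity = tolerance
    return worst_tolerated

print(simulate(20, holds_boundary=True))   # boundary held: capped at 1.0
print(simulate(20, holds_boundary=False))  # window drifts: compounds without bound
```

Holding the line caps Mallory at the original boundary forever; letting each violation slide makes the tolerated severity compound geometrically, which is the ratchet the paragraph above describes.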

Given this, it’s tempting to attempt to define a rigid, comprehensive system of standards and defend them against all comers. This is most of what Honeywell proposes in her set of solutions for preventing “rock star” narcissists from taking up all the oxygen in a community. Her recommendations sound like good ideas on the surface. However, any mildly talented sociopath will have no problem end-running around all of them, usually by co-opting or distracting the organization’s leadership. As I’ve said before, sociopath strategies are battle-hardened, and some of them are effective counters to several of Honeywell’s suggestions at once. I’ve condensed these into the table below.

Defense: Have explicit rules for conduct and enforce them for everyone.
Defense: Assume that harassment reports are true and investigate them thoroughly.
Defense: Watch for smaller signs of boundary pushing and react strongly.
Defense: Call people out for monopolizing attention and credit.
Defense: Enforce strict policies around sexual or romantic relationships within power structures.
Counterattack: The sociopath “befriends” people with decision-making authority and/or social power. Those people make exceptions for the sociopath: rules turn out not to apply to him/her after all; investigations of the sociopath’s behavior are completely half-assed; people who Matter don’t react to boundary-pushing or spotlight-hogging, and thus others conclude they won’t receive social support if they call it out; everyone studiously ignores that the sociopath and X are romantically involved; &c.

Defense: Make it easy for victims to find and coordinate with each other.
Counterattack: Sociopath gets a “friend” to join the affinity group and report back with information, sow misinformation and distrust, or both.

Defense: Build a “deep bench” of talent at every level of your organization.
Defense: Build in checks for “failing up”.
Defense: Distribute the “keys to the kingdom”.
Counterattack: Sociopath interferes with HR / hiring / administration, making sure that “random” crises keep them so busy that no one has time to make sure these things are getting done. Sociopath becomes the irreplaceable person.

Defense: Flatten the organizational hierarchy as much as possible.
Counterattack: Tor’s organizational hierarchy was already flat, but this didn’t help them until Shari Steele came on board. Jake had co-opted leadership so thoroughly that they retaliated against Karen Reilly for reporting his behavior.

Defense: Avoid organizations becoming too central to people’s lives.
Counterattack: Sociopath slowly inculcates an atmosphere of paranoia: those outside the organization can’t be trusted. Often involves crisis-manufacturing. This one is really easy to pull off when everyone is on Slack or IRC.

Defense: Don’t create environments that make boundary violations more likely.
Counterattack: Sociopaths can organize these kinds of activities perfectly well on their own.

When I read Honeywell’s piece, I see a valiant effort to help her social-justice activist communities transition from a communal, socialized-mind-oriented mode of organization to a systematic, self-authoring-mind-oriented one. It’s a pity it’s doomed. Making sure that everyone in a group publicly identifies as a feminist, an anti-racist, or any other kind of do-gooder — that everyone sends all the right signals — was never enough to keep sufficiently subtle defectors out. This is the critical failure of the communal mode once any organization gets large enough. It’s great that identity-politics-oriented groups are finally starting to wake up to this fact.

Unfortunately, since sociopathy grew up as part of humanity, that means it evolved right alongside the very same efforts to develop comprehensive social systems that are breaking down on us now. Today’s sociopathy is a sociopathy that has learned to use our systems against us. We can learn to recognize this happening, but in order to do that, we have to be able to step outside the systems that we cherish the most and think about them like an adversary.

For example, it’s easy to think “okay, our group doesn’t like sexual predators, so we’ll ban sexual behavior within the group, and while we’re at it, we’ll also ban alcohol, since drinking impairs people’s decision-making.” On the surface, this sounds likely to be effective: it’s a bright line, right? Remember, though: “Here’s something you want, and here are some rules telling you that you can’t get it. Can you find a loophole in the rules?” Puritanical adherence to an object-level system creates exploitation vectors for bad actors. In an environment where having some trait T is a sin, there’s a strong incentive to appear non-T-like. This gives bad actors a new handle for gaining social control: the threat of impropriety. If everyone in a group is a convincing rumor or a planted bottle away from being ostracized, anyone without a conscience suddenly has an incredibly powerful weapon for undermining or getting rid of people who might inconvenience them by, say, not letting them get their way. It becomes even more powerful in groups where many members have low emotional intelligence, like technical groups. For people who score highly on measures of Machiavellian tendencies, high emotional intelligence is a force multiplier, as they’re able to use their emotional intelligence instrumentally to further their manipulative goals. In a low-emotional-intelligence environment, this is like shooting fish in a barrel.

A sufficiently manipulative person can even convince people to act in ways that betray their own consciences, as happened with the Tor organizer I mentioned before. It’s great to have standards, except when nobody’s willing to act on them. Even when you can’t count on your community to uphold the standards it’s adopted, though — or its members to act on their individual principles — you can always uphold your own.

That’s the grim reality of a world in which we have to trust other people: sometimes they let us down. No matter how watertight an organization’s Code of Conduct is, if leadership wimps out on enforcing it — or only enforces it selectively — the code is worth less than the paper it’s written on. As an individual, about the only thing you can do about that is endeavor to spend your time around people with backbones. For all the debate that goes into their wording, rules and laws are abstract things which cannot act on their own. No matter how comprehensive the rules are, people will ultimately do whatever the hell they think they can get away with. Like ants, we operate in concert, but each of us acts alone.

The problem we face, then, is: in the face of Conway’s law and a structure prone to self-organized criticality, can we construct a stigmergy that resists bad actors without the high cost of large avalanches?

Mark Manson has noticed this too, from another direction:

In the attention economy, people are rewarded for extremism. They are rewarded for indulging their worst biases and stoking other people’s worst fears. They are rewarded for portraying the world as a place that is burning to the ground, whether it’s because of gay marriage, or police violence, or Islamic terrorism, or low interest rates. The internet has generated a platform where apocalyptic beliefs are celebrated and spread, and moderation and reason is something that becomes too arduous and boring to stand.

And this constant awareness of every fault and flaw of our humanity, combined with an inundation of doomsayers and narcissistic nihilists commanding our attention space, is what is causing this constant feeling of a chaotic and insecure world that doesn’t actually exist.

He also gets that the criticality is self-organized:

It’s us. We are going crazy. Each one of us, individually, capsized in the flood of negativity, we are ready to burn down the very structures on which the most successful civilizations in human history have been built.

Indulging our worst biases results, predictably, in error due to bias: if we aim at the wrong targets, we will hit the wrong targets. But our models of the world can also suffer from another class of errors: error due to variance. Err too far on the side of variance, and you’ll overfit to the random noise in your training set instead of the signal. Although there is always a tradeoff between bias and variance, the two are partially independent. This means that a model can simultaneously overfit due to hypersensitivity, and underfit due to bad assumptions.
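A minimal NumPy sketch of the two failure modes (the data, polynomial degrees, and noise level are arbitrary choices of mine, picked purely for illustration): a straight line underfits a noisy sine from bias, while a high-degree polynomial chases the noise and looks deceptively good on the very data it was fit to.

```python
import numpy as np

# Illustrative sketch: bias vs. variance as underfitting vs. overfitting
# on the same noisy sample. All parameters here are arbitrary.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y_true = np.sin(2 * np.pi * x)
y = y_true + rng.normal(scale=0.3, size=x.size)   # signal plus noise

def train_error(degree):
    """Mean squared error on the training data for a polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# A line underfits (high bias); a degree-15 polynomial fits the noise
# (high variance), so its training error is deceptively small.
e_line, e_wiggle = train_error(1), train_error(15)
```

The trap the paragraph describes is that low training error is exactly what the overfit model rewards you with, which is why held-out evaluation exists.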

I wrote about this last year in terms of precision and recall, another pair of properties we use to evaluate models in machine learning. As that post describes, the Schroedinger’s Rapist model is a high-bias, high-variance model with no false negatives but a brutally large number of false positives. Its one big advantage, when it comes to the meme’s own evolutionary fitness, is that people who adopt it feel like they have made themselves safer by doing so. “Trust no one of unavoidably broad class X” is another one of those ideas that sounds feasible (if draconian) on the surface — but it’s underspecified. Trust, in practice, is ditransitive: you trust someone with something. When that theme, the thing you’re trusting them with, is underspecified, that’s where a bad actor can nudge you toward redefining your boundaries farther and farther backward. “Trust everyone who signals Y” is equally underspecified, but even worse, because in a world where social media makes long-range (in graph-distance terms, not physical distance) signaling nearly free, a willful liar can find a new sucker every second. “You really think someone would do that? Just go on the internet and tell lies?” Look, if you hadn’t decided Reddit had cooties, you would have incorporated that meme into your thinking a decade ago and we wouldn’t need to have this conversation.

Ever found yourself realizing that things have gone too far, but can’t quite piece together how they got to be so bad? Often that’s the result of not recognizing your own boundaries in the first place. If you haven’t defined them, or are willing to let people get away with infringing on them in the interest of not rocking the boat socially, bad actors are happy to step in and define them — for their benefit, not yours. Sometimes, however, it’s the result of not recognizing boundary-pushing behavior, or not having a model for what that looks like. Like deadlines, a lot of people only notice their own boundaries from the whistling sound they make as they fly by.

I’m not saying never to re-evaluate your boundaries. Rather, never dial them back under duress, or in any other kind of stressful situation, for that matter. Do your reassessing afterward. Boundary-pushing is a dominance game in which merely feeling safe is tantamount to pissing yourself to keep warm. If your goal is to be safe, rather than to feel like everything is fine right up until your house burns down, there are two skills you have to learn. The first is to recognize dominance games in progress, and the second is to either exit or flip the script as the situation and your personal capacities call for it.

“Apply a particular set of object-level boundaries” can’t solve the problem of “people are often bad at holding their personal ground, especially in the moment.” If your boundaries are all object-level, a bad actor has only to set up a forced-error situation by incentivizing you to defend one at the expense of another. If you value your friends, s/he can use them as human shields, involving them such that drawing attention to the sociopath’s behavior brings harm to your friend. If you value an ideology, s/he can use it as a shield, associating him/herself with it so publicly and strongly that people fear that speaking up about the sociopath will “damage the brand.” The foolish man builds his house upon the nouns, and the clever sociopath turns those nouns into the walls of a silo.

The wise man builds his house upon the verbs: the purpose of boundaries is to protect your freedom of action. Action potential, like attention, is a finite resource, and everybody wants yours. Giving it away for free to the outrage of the day leaves you impoverished not only when it comes to local conflicts, but when it comes to tending your own garden. If you’re going to be the change you want to see in the world, you have to pick your battles. If you want to actually see some change, you’re going to have to make it locally.

Scope insensitivity comes into play here, too. We say we’re willing to dedicate value (i.e., pay) to prevent harm, but our instincts for estimating how much harm should correspond to how much value are wildly off. When Desvousges et al. asked subjects how much they would spend to prevent migratory birds from drowning in oil-polluted ponds, on average the subjects were willing to dedicate less money to rescuing 20,000 birds ($78) than they were to rescuing 2,000 birds ($80). On a more timely note, when Bloomberg polled 749 likely voters in the 2016 election about the extent to which various actions of Donald Trump’s bothered them — “botheredness” is effectively a proxy variable for a gut-check estimate of harm — only 44% were “bothered a lot” by the fraudulent Trump University, and 26% were “bothered not at all.” By contrast, 62% found Trump mocking a disabled reporter very bothersome, and only 15% didn’t care. Thousands of little, far-away, invisible people got scammed, yet our instincts tell us an insult to one person we’re able to see is a greater harm. Once again, instinct is full of shit.
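The per-bird arithmetic makes the insensitivity stark. A quick sketch using the bird-rescue figures above:

```python
# Per-unit arithmetic behind scope insensitivity, using the figures
# reported by Desvousges et al.: willingness to pay barely moves as the
# number of birds grows tenfold.

def implied_value_per_bird(total_paid, birds_saved):
    return total_paid / birds_saved

small_scope = implied_value_per_bird(80, 2_000)    # $0.04 per bird
large_scope = implied_value_per_bird(78, 20_000)   # $0.0039 per bird

# A scope-sensitive valuation would hold the per-bird figure roughly
# constant; instead it drops by an order of magnitude.
ratio = small_scope / large_scope
```

In other words, subjects priced a bird at four cents when there were few and at four tenths of a cent when there were many: they were paying for the warm glow, not the birds.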

There’s no happy ending here. Maybe a few people will read this series and hit upon some local changes they can make to improve the stability of their own environment, but the pessimist in me isn’t about to put money on it. The most local environment is the one inside your own head, and if you’re content to feel like you’re on the right side of history even as it ends around you, nothing I have to say can help you. Satisfaction is itself an attractor, and when we fail to find satisfaction outside ourselves, we retreat to look for it within, even when what we find is nothing more than a tasty ligand. All the while, the sand keeps pouring.

Lots of essays end with an exhortation that some choice is yours. This time it’s even true. You can’t choose universal properties, but you can always choose where to expend your attention and effort. This has always been true, despite all the external demands for your resources. If you want to build something better, look directly around yourself first, and start there.

One parting observation:

Nearly everything I’ve said here also applies to the defining egregores of a two-party system.

Pleasant dreams.

Works cited and recommended reading:

Freyd, Jennifer, and Pamela Birrell. Blind to Betrayal: Why We Fool Ourselves We Aren’t Being Fooled.

Hintjens, Pieter. The Psychopath Code.

Issendai. “Sick Systems: How to Keep Someone With You Forever” et seq.:

“On Whittling Yourself Away”

“Qualities That Keep You in a Sick System”

McGregor, Jane and Tim. “Empathic people are natural targets for sociopaths — protect yourself.”

McGregor, Jane and Tim. The Empathy Trap: Understanding Antisocial Personalities.

Simon, George. In Sheep’s Clothing: Understanding and Dealing with Manipulative People.

U.S. Department of Defense Standards of Conduct Office. Encyclopedia of Ethical Failure.

Wallisch, Pascal. “Psychopaths in our midst — what you should know.”


Too Late for the Pebbles to Vote, Part 2

Previously, we discussed how sociopaths embed themselves into formerly healthy systems. Now let’s talk about what happens when those systems undergo self-organized criticality.

Consider a pile of sand. Trickle more sand onto it from above, and eventually it will undergo a phase transition: an avalanche will cascade down the pile.

As the sand piles up, the slope at different points on the surface of the pile grows steeper, until it passes the critical point at which the phase transition takes place. The trickle of sand, whatever its source, is what causes the dynamical system to evolve, driving the slope ever back up toward the critical point. Thanks to that property, the critical point is also an attractor. However, crucially, the overall order evident in the pile arises entirely from local interactions among grains of sand. Criticality events are thus self-organized.

Wars are self-organized criticality events. So are bank runs, epidemics, lynchings, black markets, riots, flash mobs, neuronal avalanches in your own brain’s neocortex, and evolution, as long as the metaphorical sand keeps pouring. Sure, some of these phenomena are beneficial — evolution definitely has a lot going for it — but they’re all unpredictable. Since humans are arguably eusocial, it stands to reason that frequent unpredictability in the social graphs we rely on to be human is profoundly disturbing. We don’t have a deterministic way to model this unpredictability, but wrapping your head around how it happens does make it a little less unsettling, and can point to ways to route around it.

A cellular automaton model, due to Bak, Tang, and Wiesenfeld, is the classic example of self-organized criticality. The grid of a cellular automaton is (usually) a directed graph where every vertex has out-degree 4 — each cell has four neighbors — but the model generalizes just fine to arbitrary directed graphs. You know, like social graphs.
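Here is a minimal sketch of the Bak–Tang–Wiesenfeld model on a small grid. The simplifications are mine: grains are dropped one at a time, boundaries are open (grains shed off the edge are lost), and avalanche size is counted as the number of topplings.

```python
# Minimal Bak-Tang-Wiesenfeld sandpile sketch on an n x n grid: drop one
# grain at a time; any cell holding 4 or more grains topples, shedding
# one grain to each of its four neighbors. A single dropped grain can
# trigger avalanches of wildly different sizes -- that is the point.

def drop_grain(grid, row, col):
    """Add one grain and relax the pile; return the avalanche size."""
    n = len(grid)
    grid[row][col] += 1
    topplings = 0
    unstable = [(row, col)] if grid[row][col] >= 4 else []
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:
            continue               # already relaxed by an earlier topple
        grid[r][c] -= 4
        topplings += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
            # grains shed past the edge simply fall off (open boundary)
    return topplings
```

Drop grains repeatedly onto the same cell of a larger grid and the avalanche sizes that come back follow the heavy-tailed distribution the self-organized-criticality literature describes: mostly nothing, occasionally everything.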

Online social ties are weaker than meatspace ones, but this has the interesting side effect of making the online world “smaller”: on average, fewer degrees separate two arbitrary people on Facebook or Twitter than two arbitrary people offline. On social media, users choose whether to share messages from one to another, so any larger patterns in message-passing activity are self-organized. One such pattern, notable enough to have its own name, is the internet mob. The social graph self-reorganizes in the wake of an internet mob. That reorganization is a phase transition, as the low become high and the high become low. But the mob’s target’s social status and ties are not the only things that change. Ties also form and break between users participating in, defending against, or even just observing a mob as people follow and unfollow one another.

Some mobs form around an explicit demand, realistic or not — the Colbert Report was never in any serious danger of being cancelled — while others identify no extrinsic goals, only effects on the social graph itself. Crucially, however, both forms restructure the graph in some way.

This structural shift always comes with attrition costs. Some information flows break and may never reform. The side effects of these local interactions are personal, and their costs arise from the idiosyncratic utility functions of the individuals involved. Often this means that the costs are incomparable. Social media also brings the cost of engagement way down; as Justine Sacco discovered, these days it’s trivial to accuse someone from halfway around the planet. But it’s worse than that; even after a mob has become self-sustaining, more people continue to pile on, especially when messages traverse weak ties between distant groups and kick off all-new avalanches in new regions of the graph.


[Figure: Members of the black group are strongly connected to other members of their group, and likewise for the dark gray and white groups. The groups are interconnected by weak, “long-distance” ties. Reproduced from The Science of Social 2 by Dr. Michael Wu.]

Remember Conway’s law? All systems copy the communication structures that brought them into being. When those systems are made of humans, that communication structure is the social graph. This is where that low average degree of separation turns out to be a problem. By traversing weak ties, messages rapidly escape a user’s personal social sphere and propagate to ones that user will never intersect. Our intuitions prepare us for a social sphere of about a hundred and fifty people. Even if we’re intellectually aware that our actions online are potentially visible to millions of people, our reflex is still to act as if our messages only travel as far and wide as in the pre-social-media days.

This is a cognitive bias, and there’s a name for it: scope insensitivity. Like the rabbits in Watership Down, able to count “one, two, three, four, lots,” beyond a certain point we’re unable to appreciate orders of magnitude. Furthermore, weak long-distance ties don’t give us much visibility into the size of the strongly-tied subgraphs we’re tapping into. Tens of thousands of individual decisions to shame Justine Sacco ended in her being the #1 trending topic on Twitter — and what do you suppose her mentions looked like? Self-organized criticality, with Sacco at ground zero. Sure, #NotAllRageMobs reach the top of the trending list, but they don’t have to go that far to have significant psychological effect on their targets. (Sociologist Kenneth Westhues, who studies workplace mobbing, argues that “many insights from [the workplace mobbing] literature can be adapted mutatis mutandis to public mobbing in cyberspace,” and I agree.)

In the end, maybe the best we can hope for is user interfaces that encourage us to sensitize ourselves to the scope of our actions — that is to say, to understand just how large of a conversation we’re throwing our two cents into. Would people refrain from piling on to someone already being piled on if they knew just how big the pile already was? Well, maybe some would. Some might do it anyway, out of malice or out of virtue-signaling. As Robert Kegan and Lisa Laskow Lahey point out in Immunity to Change, for many people, their sense of self “coheres by its alignment with, and loyalty to, that with which it identifies.” Virtue signaling is one way people express that alignment and loyalty to groups they affiliate with, and these days it’s cheap to do that on social media. Put another way, the mobbings will continue until the perverse incentives improve. There’s not much any of us can individually do about that, apart from refraining from joining in on what appears to be a mob.

That’s a decision characteristic of what Kegan and Lahey call the “self-authoring mind,” contrasted with the above-mentioned “socialized mind,” shaped primarily “by the definitions and expectations of our personal environment.” Not to put too fine a point on it, over the last few years, my social media filter bubble has shifted considerably toward the space of people who independently came to a principled stance against participation in mobs. However, given that the functional programming community, normally a bastion of cool reason and good cheer, tore itself apart over a moral panic just a few months ago, it’s clear that no community is immune to flaming controversy. Self-organized criticality means that the call really is coming from inside the house.

Here’s the moral question that not everyone answers the same way I do, which has led to some restructuring in my region of the graph, a local phase transition: when is it right to throw a handful of sand on the pile?

Some people draw a bright line and say “never.” I respect that. It is a consistent system. It was, in fact, my position for quite some time, and I can easily see how that comes across as throwing down for Team Not Mobbing. But one of the implications of being a self-authoring system is that it’s possible to revisit positions at which one has previously arrived, and, if necessary, rewrite them.

So here’s the core of the conundrum. Suppose you know of some information that’s about to go public. Suppose you also expect, let’s say to 95% confidence, that this event will kick off a mob in your immediate social sphere. An avalanche is coming. Compared to it, you are a pebble. The ground underneath and around you will move whether you do anything or not. What do you do?

I am a preference consequentialist, and this is a consequentialist analysis. I won’t be surprised if how much a person agrees with it correlates with how much of a consequentialist they are. I present it mainly in the interest of braindumping the abstractions I use to model these kinds of situations, which is as much in the interest of information sharing as anything else. There will be mathematics.

I am what they call a “stubborn cuss” where I come from, and if my only choices are to jump or be pushed, my inclination is to jump. Tor fell down where organizational accountability was concerned, at first, and as Karen Reilly’s experience bears out, had been doing so for a while. So that’s the direction I jumped. To be perfectly honest, I still don’t have anything resembling a good sense of what the effects of my decision were versus those of anyone else who spoke up, for whatever reason, about the entire situation. Self-organized chaotic systems are confounding like that.

If you observe them for long enough, though, patterns emerge. Westhues has been doing this since the mid-1990s. He remarks that “one way to grasp what academic mobbing is is to study what it is not,” and lists a series of cases. “Ganged up on or not,” he concludes of a professor who had falsified her credentials and been the target of student protests about the quality of her teaching, “she deserved to lose her job.” Appelbaum had already resigned before the mob broke out. Even if the mob did have an extrinsic demand, his resignation couldn’t have been it, because that was already over and done with.

Okay, but what about the intrinsic outcomes, the radical restructuring of the graph that ensued as the avalanche settled? Lovecruft has argued that removing abusers from opportunities to revictimize people is a necessary step in a process that may eventually lead to reconciliation. This is by definition a change in the shape of the social graph. Others counter that this is ostracism, and, well, that’s even true: that’s what it looks like when a whole lot of people decide to adopt a degrees-of-separation heuristic, or to play Exit, all at once.

Still others argue that allegations of wrongdoing should go before a criminal court rather than the court of public opinion. In general I agree with this, but when it comes to longstanding patterns of just-this-side-of-legally-actionable harm, criminal courts are useless. A bad actor who’s clever about repeatedly pushing ever closer to that line, or who crosses it but takes care not to leave evidence that would convince a jury beyond a reasonable doubt, is one who knows exactly what s/he’s doing and is gaming the system. When a person’s response to an allegation boils down to “no court will ever convict me,” as Tor volunteer Franklin Bynum pointed out, that sends a game-theoretically meaningful signal.

Signaling games are all about inference and credibility. From what a person says, what can you predict about what actions they’ll take? If a person makes a particular threat, how likely is it that they’ll be able to make good on it? “No court will ever convict me” is actually pretty credible when it comes to a pattern of boundary-violating behavior that, in many cases, indeed falls short of prosecutability. (Particularly coming from someone who trades on their charisma.) Courts don’t try patterns of behavior; they try individual cases. But when a pattern of boundary-pushing behavior is the problem, responding to public statements about that pattern with “you’ll never prove it” is itself an instance of the pattern. As signals go, to quite a few people, it was about the loudest “I’m about to defect!” signal Appelbaum could possibly have sent in a game where the players have memory.

Courts don’t try patterns of behavior, but organizations do. TQ and I once had an incredibly bizarre consulting gig (a compilers consulting gig, which just goes to show you that things can go completely pear-shaped in bloody any domain) that ended with one of the client’s investors asking us to audit the client’s code and give our professional opinion on whether the client had faked a particular demonstration. Out of professional courtesy, we did not inquire whether the investor had previously observed or had suspicions about inauthenticity on the client’s part. Meanwhile, however, the client was simultaneously emailing conflicting information to us, our business operations partner, and the investor — with whom I’d already been close friends for nearly a decade — trying to play us all off each other, as if we didn’t all have histories of interaction to draw on in our decision-making. “It’s like he thinks we’re all playing classical Prisoner’s Dilemma, while the four of us have been playing an iterated Stag Hunt for years already,” TQ observed.
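For readers who don’t carry the payoff tables around in their heads, here is the contrast TQ was drawing, with standard textbook payoffs (my illustrative numbers, nothing from the consulting story itself):

```python
# Illustrative payoff tables for the two games contrasted above. In a
# Prisoner's Dilemma, defection dominates no matter what the other
# player does; in a Stag Hunt, mutual cooperation is an equilibrium --
# provided each player trusts the other to show up, which is exactly
# what a shared history of interaction supplies.

PRISONERS_DILEMMA = {            # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}
STAG_HUNT = {
    ("C", "C"): 4, ("C", "D"): 0,   # hunting stag requires a partner
    ("D", "C"): 3, ("D", "D"): 3,   # hunting hare is safe but small
}

def best_reply(payoffs, their_move):
    """My payoff-maximizing move given the other player's move."""
    return max("CD", key=lambda mine: payoffs[(mine, their_move)])

# In the PD, "D" is the best reply to anything; in the Stag Hunt, the
# best reply mirrors what you expect your partner to do.
```

Which is why the client's ploy failed: treating long-standing partners as one-shot PD opponents only works when they can't compare notes.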

Long story short (too late), the demo fell shy of outright fraud, but the client’s promises misrepresented what the code actually did to the point where the investor pulled out. We got a decent kill fee out of it, too, and a hell of a story to tell over beers. When money is on the line, patterns of behavior matter, and I infer from the investor’s action that there was one going on there. Not every act of fraud — or force, for that matter — rises to the level of criminality, but a pattern of repeated sub-actionable force or fraud is a pattern worth paying attention to. A pattern of sub-actionable force or fraud coupled with intimidation of people who try to address that pattern is a pattern of sociopathy. If you let a bad actor get away with “minor” violations, like plagiarism, you’re giving them license to expand that pattern into other, more flagrant disregard of other people’s personhood. “But we didn’t think he’d go so far as to rape people!” Of course you didn’t, because you were doing your level best not to think about it at all.

Investors have obvious strong incentives to detect net extractors of value accurately and quickly. Another organization with similarly strong incentives, believe it or not, is the military. Training a soldier isn’t cheap, which is why the recruitment and basic training process aims to identify people who aren’t going to acquire the physical and mental traits that soldiering requires and turn them back before their tenure entitles them to benefits. As everyone who’s been through basic can tell you, one blue falcon drags down the whole platoon. Even after recruits have become soldiers, though, the military still has strong incentives to identify and do something about serial defectors. Unit cohesion is a real phenomenon, for all the disagreement on how to define it, and one or a few people preying on the weaker members of a unit damages the structure of the organization. The military knows this, which is the reason its Equal Opportunity program exists: a set of regulations outlining a complaint protocol, and a cadre trained and detailed to handle complaints of discriminatory or harassing behavior. No, it’s not perfect, by any stretch of the imagination. The implementation of any human-driven process is only as rigorous as the people implementing it, and as we’ve already discussed, subverting human-driven processes for their own benefit is a skill at which sociopaths excel. However, like any military process, it’s broken down into bite-sized pieces for every step of the hierarchy. Some of them are even useful for non-hierarchical structures.

Fun fact: National Guard units have EO officers too, and I was one. Again and again during the training for that position, they hammer on the importance of documentation. We were instructed to impress that not just on people who bring complaints, but on the entire unit before anyone has anything to bring a complaint about. Human resources departments will tell you this too: document, document, document. This can be a difficult thing to keep track of when you’re stuck inside a sick system, a vortex of crisis and chaos that pretty accurately describes the internal climate at Tor over the last few years. And, well, the documentation suffered, that’s clear. But now there’s some evidence, fragmentary as it may be, of a pattern of consistent and unrepentant boundary violation, intimidation, bridge-burning, and self-aggrandizement.

Even when the individual acts that make up a pattern are calculated to skirt the boundaries of actionable behavior, military commanders have explicit leeway to respond to the pattern with actions up to and including court-martial, courtesy of the general article of the Uniform Code of Military Justice:

Though not specifically mentioned in this chapter, all disorders and neglects to the prejudice of good order and discipline in the armed forces, all conduct of a nature to bring discredit upon the armed forces, and crimes and offenses not capital, of which persons subject to this chapter may be guilty, shall be taken cognizance of by a general, special, or summary court-martial, according to the nature and degree of the offense, and shall be punished at the discretion of that court.

It’s the catch-all clause, invoked in lieu of installing a bunch of new rules: an exception funnel that exists because sometimes people decide that having one is better than the alternative. Realistically, any form of at-will employment implicitly carries this clause too. If a person can be fired for no reason whatsoever, they can certainly be fired for a pattern of behavior. Companies have this option; organizations that don’t maintain contractual relationships with their constituents face paths that are not so clear-cut, for better or for worse.

But I take my cues about exception handling, as I do with a surprisingly large number of other life lessons, from the Zen of Python:

Errors should never pass silently.
Unless explicitly silenced.
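Read as engineering advice rather than metaphor, those two lines prescribe a concrete discipline: never swallow every error, silence only the specific ones you expect. A minimal sketch in Python (the function and file path are hypothetical, mine rather than the essay's):

```python
import contextlib

# Anti-pattern: the bare except makes every error pass silently.
def read_config_bad(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception:
        return ""  # a typo'd path, a permissions bug: all invisible

# The Zen-compliant version: explicitly silence one expected error.
def read_config_good(path):
    with contextlib.suppress(FileNotFoundError):
        with open(path) as f:
            return f.read()
    return ""  # reached only if the file genuinely wasn't there
```

Anything other than a missing file still raises in the second version, so a real problem leaves a trail of evidence instead of vanishing.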

When a person’s behavior leaves a pattern of damage in the social fabric, that is an exception going silently unhandled. The whisper network did not prevent the damage that has occurred. It remains to be seen what effect the mob-driven documentation will have. Will it achieve the effect of warning others about a recurring source of error (I suppose nominative determinism wins yet again), or will the damaging side effects of the phase transition prove too overwhelming for some clusters of the graph to bear? Even other consequentialists and I might part ways here, because of that incomparability problem I mentioned earlier. I don’t really have a good answer to that, or to deontologists or virtue ethicists either. At the end of the day, I spoke up because of two things: 1) I knew that several of the allegations were true, and 2) if I jumped in front of the shitstorm and got my points out of the way, it would be far harder to dismiss as some nefarious SJW plot. Sometimes cross-partisanship actually matters.

I don’t expect to change anyone’s mind here, because people don’t develop their ethical principles in a vacuum. That said, however, situations like these are the ones that prompt people to re-examine their premises. Once you’re at the point of post-hoc analysis, you’re picking apart the problem of “how did this happen?” I’m more interested in “how do we keep this from continuing to happen, on a much broader scale?” The threat of mobs clearly isn’t enough. Nor would I expect it to be, because in the arms race between sociopaths and the organizations they prey on, sociopath strategies evolve to avoid unambiguous identification and thereby avoid angry eyes. “That guy fucked up, but I won’t be so sloppy,” observes the sociopath who’s just seen a mob take another sociopath down. Like any arms race, it is destined to end in mutually assured destruction. But as long as bad actors continue to drive the sick systems they create toward their critical points, there will be avalanches. Whether you call it spontaneous order or revolutionary spontaneity, self-organized criticality is a property of the system itself.
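The term comes from Bak, Tang, and Wiesenfeld's sandpile model, in which a grid driven one grain at a time organizes itself to the critical point where a single grain can trigger an avalanche of any size. A toy sketch (the grid size, threshold, and drop count are arbitrary choices of mine, not anything from the text):

```python
import random

def topple(grid, threshold=4):
    """Relax the grid: every cell holding >= threshold grains sheds one
    grain to each neighbor (grains falling off the edge are lost).
    Returns the avalanche size, i.e. the number of topplings."""
    n, avalanche = len(grid), 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= threshold:
                    grid[i][j] -= threshold
                    avalanche += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < n and 0 <= j + dj < n:
                            grid[i + di][j + dj] += 1
    return avalanche

random.seed(1)
grid = [[0] * 5 for _ in range(5)]
sizes = []
for _ in range(500):
    i, j = random.randrange(5), random.randrange(5)
    grid[i][j] += 1              # drive the system one grain at a time
    sizes.append(topple(grid))
# Most drops cause no avalanche at all; a few cascade far beyond the
# single grain that triggered them. Nobody tuned the system to do that.
```

The point of the model is that criticality emerges from the local rule itself, with no external coordinator, which is exactly the claim being made about sick systems.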

The only thing that can counteract self-organized aggregate behavior is different individual behavior that aggregates into a different emergent behavior. A sick system self-perpetuates until its constituents decide to stop constituting it, but just stopping a behavior doesn’t really help you if doing so leaves you vulnerable. As lousy a defense as “hunker down and hope it all goes away soon” is over the long term, it’s a strategy, which for many people beats no strategy at all. It’s a strategy that increases the costs of coordination, which is a net negative to honest actors in the system. But turtling is a highly self-protective strategy, which poses a challenge: any proposed replacement strategy that lowers the cost of coordination among honest actors also must not be significantly less self-protective, for idiosyncratic, context-sensitive, and highly variable values of “significantly.”

I have some thoughts about this too. But they’ll have to wait till our final installment.

Posted in Uncategorized | 12 Comments

Too Late for the Pebbles to Vote, Part 1

There’s a pattern most observers of human interaction have noticed, common enough to have earned its own aphorism: “nice guys finish last.” Or, refactored, “bad actors are unusually good at winning.” The phenomenon shows up in business, in politics, in war, in activism, in religion, in parenting, in nearly every collaborative form of human undertaking. If some cooperative effort generates a valuable resource, tangible or intangible, some people will try to subvert the effort in order to divert more of that resource to themselves. Money, admiration, votes, information, regulatory capacity, credibility, influence, authority: all of these and more are vulnerable to capture.

Social engineering, as a field, thus far has focused primarily on hit-and-run tactics: get in, get information (and/or leave a device behind), get out. Adversaries who adaptively capture value from the organizations with which they involve themselves are subtler and more complex. Noticing them, and responding effectively, requires a different set of skills than realizing that’s not the IT guy on the phone or that a particular email is a phish. Most importantly, it requires learning to identify patterns of behavior over time.

Having recently been adjacent to the sudden publicity of one such pattern of behavior, I have a lot to discuss about the general mechanisms that give rise to both these patterns and the criticality events — the social media jargon is “shitstorms” — they occasionally generate, and also about this specific incident. We’re going to talk about narcissism and its side effects, and how bad actors can damage good organizations. We’re going to talk about how bad things happen to good people, how all kinds of people make bad decisions, and also how organizations live and die. We’re going to talk about self-organized criticality. There will be game theory, and management theory, and inside baseball, and multiple levels from which to view things, and even a diagram or two, so if diagrams aren’t your thing, you might as well bail out now. There will also be some practical advice, toward the end.

But first, let’s talk about David Chapman’s 2015 essay, “Geeks, MOPs, and sociopaths in subculture evolution.”

In Chapman’s analysis, a subculture’s growth passes through three phases. First come the geeks, the creators and their True Fans whose interest in a niche topic gets a scene moving. Then come the MOPs, short for “Members Of Public,” looking for entertainment, new experiences, and something cool to be part of. Finally, along come the sociopaths, net extractors of value whose long-term aim is to siphon cultural, social, and liquid capital from the social graph of geeks and MOPs. Sociopaths don’t just take, unless they’re not very good at what they do. Many sociopaths contribute just enough to gain a reputation for being prosocial, and keep their more predatory tendencies hidden until they’ve achieved enough social centrality to be difficult to kick out. It’s a survival strategy with a long pedigree; viruses that burn through their host species reservoir too quickly die off.

Corporations, of course, have their own subcultures, and it’s easy to see this pattern in the origin stories of Silicon Valley success stories like Google — and also those of every failed startup that goes under because somebody embezzled and got away with it. Ditto for nonprofits, activist movements, social networking platforms, and really anything that’s focused on growth. Which is a lot of things, these days.

Organizations have a strong incentive to remove net extractors of value. Would-be net extractors of value, then, have an even stronger incentive to keep themselves connected to the social graph. The plasticity of the human brain being what it is, this sometimes leads to some interesting cognitive innovations.

Narcissism, for example, when it rises to the level of a pathology, is a personality disorder. This is not sufficient, in and of itself, to qualify someone as a sociopath in Chapman’s model. A narcissist who knows what kind of behavior s/he is capable of, keeps capital-siphoning behaviors (like claiming credit for others’ work) in check, and remains a net contributor of value even when that contribution isn’t aligned with his/her personal incentives, is by definition not a sociopath. However, a large social graph can be a tempting source of narcissistic supply, the interpersonal support that feeds a narcissist’s fragile and hungry ego. A narcissist who coerces or cons others into maintaining the “superman” narrative that papers over that damaged ego is a narcissistic sociopath. Other personality disorders can develop in similar ways, such as with borderline sociopaths, who coerce or con others into holding up the black-and-white, good-versus-evil lens through which the borderline sees the world. A mere personal dysfunction, once weaponized, becomes something much larger and more dangerous.

If you’ve ever seen an apparently-thriving group suddenly implode, its members divided over their opinions about one particular person, chances are you’ve seen the end of a sociopath’s run. Last December, progressive PR firm FitzGibbon PR collapsed when it came out that founder Trevor FitzGibbon had a pattern of sexually assaulting and harassing his employees and even some of his firm’s clients. However, the progressivism is what elevates the FitzGibbon story to “man bites dog” levels of notoriety. Everyone loves to watch a hypocrite twist in the wind. Usually one hears about sociopath-driven organizational meltdowns through the grapevine, though, not the media. Fearing repercussions or bad publicity, firms often equivocate about the reasons behind a sudden departure or reorganization. This tendency is understandable from a self-preservation perspective, but it also covers a sociopath’s tracks. Ejected from one firm, a serial net extractor of value can pick right back up at another one. (Indeed, FitzGibbon had been disciplined for harassment at his previous firm.)

Which brings us to the Tor Project.

Tor is an anonymous routing network. Journalists, dissidents, law enforcement, queer people, drug dealers, abuse victims, and many other kinds of people who need privacy send and receive their Internet traffic through Tor’s encryption and routing scheme in order to keep site operators from knowing who and where they are. It’s an intricate system with a lot of moving parts, supported by a foundation that pays its developers through the grant funding it brings in. And about two months ago, Tor’s most visible employee, Jacob “ioerror” Appelbaum, abruptly resigned.

Before coming to Tor, Appelbaum already had a history of value-extracting behavior only occasionally noticeable enough to merit discipline. His 2008 Chaos Communication Congress talk presented, without credit, research that he had wheedled out of Len Sassaman, Dan Kaminsky, and me the previous year. Other researchers, like Travis Goodspeed and Joe Grand, learned the hard way that to work “with” Appelbaum meant to have him put in no effort, but take credit for theirs. As Violet Blue points out, his ragequit from a San Francisco porn producer followed a flotilla of employee rulebook updates he’d personally inspired.

There’s never a convenient time for a scandal involving a decade-plus of sexual and professional misconduct, and organizational cover-ups thereof, to break. It’s easy to think “oh, I have a lot on my plate right now,” or “oh, it’s not really my problem,” and keep your head down until the chaos subsides. I could have exercised either of those options, or any one of half a dozen others, when Tor announced Appelbaum’s resignation in a one-sentence blog post a week and change before my wedding. But there’s never a good time for a pattern of narcissistic sociopathy to be exposed; there is only too late, or even later. So I got vocal. So did some other folks. And Tor confirmed that Appelbaum had resigned over sexual misconduct. Nick Farr went public about how Appelbaum had stalked and intimidated him at a conference in Hamburg in December 2013. Appelbaum vowed he’d done nothing criminal and threatened legal action, and the media circus was on.

It turns out that when seven pseudonymous people, and a small handful of named ones, speak up in a situation like this one, reporters really, really want to talk to the people with real names attached. At the time, I was in Houston, taking care of final preparations for my June 11th wedding on Orcas Island. I also spent a lot of that time fielding journalists’ questions about things I’d learned from members of the community about Tor’s little “open secret,” about Appelbaum’s plagiarism, and about observing Appelbaum manhandling a woman in a bar from my vantage point about twenty feet away. Then I got on a plane, flew to Seattle, got on a ferry, and didn’t open my laptop until I returned to work the following Monday.

During that period, Appelbaum’s publicist apparently tracked down and released a statement from the woman involved in the bar incident, Jill Bähring. Bähring avers that her interactions with Appelbaum were entirely consensual, which I am relieved and pleased to hear. I’m not sure why anyone would expect any other reaction out of me, seeing as how I’ve sung the praises of making mistakes and owning them in public for so long that I’ve given invited talks on it. The interaction I observed took place within my line of sight but out of my earshot, and if I misinterpreted it, then I genuinely am sorry about that. Ultimately, Bähring makes her own decisions about what she consents to or doesn’t. If I was mistaken, well, good.

But Leigh Honeywell also makes her own decisions about what she consents to or doesn’t, and Karen Reilly likewise. Attacking me over a misinterpretation may be enough to distract some people from the full scope of a situation, but nothing about my error invalidates Honeywell and Reilly’s accounts about their own experiences. Isn’t it interesting that someone whose first public response to allegations of wrongdoing was “I apologize to people I’ve hurt or wronged!” hasn’t had a single word to say to either one in two months? Or to Alison Macrina, or to Isis Lovecruft? It’s as if the allegorical defendant against murder, arson, and jaywalking had no response to the murder or arson counts, but wanted to make damned sure the whole world knew he wasn’t a jaywalker.

*slow clap*

How low-rent of a publicist do you have to hire for them not to be able to keep a story that simple consistent? All Appelbaum had to do was swallow his pride and ask his publicist to bang out an apology of the “I’m sorry you feel that way” variety, and he could have maintained the semblance of high ground he tried to stake out in his initial statement. But the need to be adored — the narcissist’s defining quality, and the sociopath’s first rule of survival — is simply too alluring, the opportunity to gloat over seeing one’s prey stumble too difficult to resist.

Attention is a scarce commodity. What a person expends it on reveals information about that person’s priorities.

Isn’t it interesting when people show you what their preferences really are?

But that’s more than enough narcissistic supply for that particular attention junkie. Let’s talk about preference falsification spirals.

Honeywell correctly observes that whisper networks do not transmit information reliably. In her follow-on post, she advocates that communities “encourage and support private affinity groups for marginalized groups.” If this worked, it would be great, but Honeywell conveniently neglects to mention that this solution has its own critical failure mode: what happens to members of marginalized groups whom the existing affinity group considers unpersons? I can tell you, since it happened here: we had to organize on our own. Honeywell’s report came as a surprise to both me and Tor developer Andrea Shepard, because we weren’t part of that whisper network. Nor would we expect to be, given how Honeywell threw Andrea under the bus when Andrea tried to reach out to her for support in the past. If your affinity group refuses to warn or help Certain People who should otherwise definitionally fall under its auspices, then what you’re really saying is “make sure the sociopath rapes an unperson.”

Thanks, but no thanks. Nobody should have to suck up to an incumbent clique in order to learn where the missing stairs are. The truly marginalized are those with no affinity group, no sangha. Who’s supposed to help them?

It’s a tough question, because assessing other people’s preferences from their behavior can be difficult even when they notionally like you. I was surprised, after the news of the extent of Appelbaum’s behavior broke, to learn that several acquaintances whom I had written off as either intentionally or unintentionally enabling him (in the end, it doesn’t really matter which) had actually been warning other people about him for longer than I had. How did I make this mistake?

Well, social cartography is hard. Suppose you’re at an event full of people you kinda-sorta know, and one person who you know is a sociopath. Supposing you decide to stick around, how do you tell who you can trust? Naïvely, anyone who’s obviously buddy-buddy with the sociopath is right out. But what about people who interact with the sociopath’s friends? During my brief and uneventful stint as an international fugitive, both friends and friends-of-friends of the pusbag who was funneling information about me to the prosecution were happy to dump information into that funnel. I learned the hard way that maintaining a cordon sanitaire around a bad actor requires at least two degrees of separation and possibly more. Paranoid? Maybe. But the information leakage stopped. (If I suddenly stopped talking to you sometime in 2009, consider whether you have a friend who is a narc.)

Consider, however, how this plays out in tightly connected groups, where the maximum degree of separation between any two people is, let’s say, three. Suppose that Mallory is a sociopath who has independently harmed both Alice and Bob. Suppose further that Alice and Bob are three degrees of separation from one another, and each has an acquaintance who is friends with Mallory. Let’s call those acquaintances Charlie and Diane, respectively.


[Diagram: Mallory, the sociopath, is one degree of separation away from each of Charlie and Diane, who are also one degree away from each other. Charlie’s friend Alice and Diane’s friend Bob are three degrees apart.]

If Alice sees Diane and Mallory interacting, and then sees Diane and Bob interacting, the two-degrees-of-separation heuristic discourages Alice from interacting with Bob, since Bob appears to be a friend of a friend of Mallory. Likewise for Bob and Charlie in the equivalent scenario. How can Alice and Bob each find out that the other is also one of Mallory’s victims, and that they could help each other?
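The geometry above can be checked mechanically with a breadth-first search over the example's graph; the two-hop cutoff is the cordon-sanitaire heuristic from the fugitive story (the edge list and names are just the hypothetical scenario, not real people):

```python
from collections import deque

# The example's social graph: who knows whom.
EDGES = [("Mallory", "Charlie"), ("Mallory", "Diane"),
         ("Charlie", "Diane"), ("Charlie", "Alice"), ("Diane", "Bob")]

def distances(start, edges):
    """BFS shortest-path hop counts from `start` to everyone reachable."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    dist, queue = {start: 0}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

d = distances("Mallory", EDGES)
# The cordon sanitaire: avoid anyone within two hops of the sociopath.
suspect = {p for p, hops in d.items() if 0 < hops <= 2}
print(sorted(suspect))  # Alice and Bob both land inside the cordon
```

Both victims sit exactly two hops from Mallory, so each looks like a friend-of-a-friend of the sociopath to the other, and the heuristic that stopped the information leakage is the same one that keeps them from comparing notes.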

In business management, this kind of problem is known as an information silo, and it is a sociopath’s best friend. Lovecruft describes several of Appelbaum’s siloing techniques, such as threatening to smear anyone who spoke out against him as a closet fed. As recently leaked chat logs show, affiliation with intelligence agencies is a genuine hazard for some Tor contributors, which means that “fedjacketing” someone, or convincing others that they’re actually a fed, is an attack which can drive someone out of the community. (Side note: how the hell is there no slang-dictionary entry for fedjacketing yet? There’s one for snitch jacket, with which Appelbaum has also threatened people.) But you don’t have to be someone for whom a fedjacketing would be career death for a sociopath to put you in an information silo. Fedjacketing is merely the infosec reductio ad absurdum of the reputational-damage siloing technique. If someone has made it clear to you that they’ll ruin your reputation — or any other part of your life — if you so much as breathe about how they treated you, that’s siloing. Pedophiles do it to children (“don’t tell your mom and dad, or they’ll put us both in jail”), cult leaders do it to their followers — anyone a sociopath can emotionally blackmail, s/he can isolate.

Discussing this with one acquaintance I had misread as an enabler, I asked: what should Alice and Bob do? “When in doubt, it might be a good idea to ask,” they suggested. But this presupposes that either Alice or Bob is insufficiently siloed as to make asking a viable option. My acquaintance also allowed that they had misread still other people for years simply due to not knowing that those people had cut ties with Appelbaum. I didn’t know my acquaintance’s preferences, nor they mine. My acquaintance didn’t know the other people’s preferences, nor vice versa. Because none of us expressed our preferences freely, we all falsified our preferences to one another without trying to, which I’m sure Appelbaum appreciated. People believed they had to play by the standard social rules, and that civility gave him room to maneuver. Once a sociopath achieves social centrality, concealed mistrust creates more information silos than the sociopath could ever create alone.

What else creates information silos? In some cases, the very people who are supposed to be in a position to break them down. Sociopaths don’t only target victims. They also target people in positions of authority, in order to groom them into enablers. This happened at Tor. The “open secret” was so open that, when Appelbaum didn’t show up to a biannual meeting in February and people put up a poster for others to write messages to him, someone wrote, “Thanks for a sexual-assault-free Tor meeting!” This infuriated one of the organizers, who had to be talked down from collecting handwriting samples to identify the writer. At an anonymity project, no less. Talk about not knowing your demographic.

This abandonment of the community’s core values speaks to just how far grooming can pull someone away from their own. In part, this may have been due to Appelbaum’s dedication to conflating business and personal matters — playing off people’s reluctance to let the two mix. When this tactic succeeds against a person in a position of organizational power, it incentivizes them to protect their so-called friend, to the overall detriment of the organization.

Friendship is all well and good, right up to the point when it becomes an excuse to abdicate a duty of care. You know, like the one a meeting organizer takes on with respect to every other attendee when they accept the responsibility of organization. If the organizer knew that his “friend” had serious boundary issues, why the hell didn’t he act to protect or at least warn people at the meetings Appelbaum did attend? As enablers, people in positions of authority are a force multiplier for sociopaths. Sociopaths love to recruit them as supporters, much the same as the way a middle-school Queen Bee puts on her most adorable face for the vice-principal. Why put in the effort to threaten victims when your pet authority figure will gladly do it for you? Co-opted authority figures turn preference falsification cascades into full-on waterfalls.

Scott Alexander muses:

I wonder if a good definition for “social cancer” might be any group that breaks the rules of cooperative behavior that bind society together in order to spread more quickly than it could legitimately achieve, and eventually take over the whole social body.

One cell in the body politic mutates, and starts to recruit others. Those recruited cells continue to perform the same functions they always have — building structures, transmitting signals, defending nearby cells — but now they do it in service of that mutant cell line. The tumor is the silo. If you find yourself breaking the rules — or, worse, your rules, your personal ethics — for someone on a regular basis, consider whether that charming friend of yours is inviting you to be part of their tumor.

How do you bust out of a sociopath’s information silo? Personally, I take my cues from Captain James Tiberius Kirk: when the rules are arrayed against you, break them. When a sociopath tries to leave you no “legitimate” maneuvers, Kobayashi Maru that shit as hard as you possibly can.

I also take cues from my husband. TQ has interacted with Appelbaum exactly twice. The first time, Appelbaum physically shoved him out of the way at my late husband Len’s wake in order to stage a dramatic fauxpology for plagiarizing me, Len, and Dan in 2008, begging to “put our differences aside.” (Protip: when someone later tries to shut you up about something they did to you because “we reconciled!”, it wasn’t a real apology in the first place.) The second time, Appelbaum walked up and sat on him.

Appelbaum behaves as if TQ were an object. Operationalizing people — understanding them as a function of what they can do, rather than who they are — is one thing. We’re autistic; we do it all the time. Operationalizing people without concern for their preferences or their bodily integrity is another thing entirely. Since then, with no concern whatsoever for social niceties, any time anyone has brought Appelbaum up in TQ’s presence, he asks, “Why are you giving the time of day to a sociopath?” It isn’t polite, but it sure does break the ice quickly. 

Similarly, a few years ago, Appelbaum applied to speak at a conference TQ and I regularly attend. This conference is neither streamed nor recorded, and speakers are encouraged to present works in progress. The organizer contacted us, unsure how to handle the situation. TQ replied, “I would no more invite a plagiarist to an unfinished-work conference than I would a pedophile to a playground.” The organizer rejected Appelbaum, and the conference went on theft-free. That’s more than a lot of other conference organizers, some of whom knew better, can say.

The one thing that protects sociopaths the most is their victims’ unwillingness to speak up, because the one thing that can hurt a sociopath is having their extraction racket exposed for the fraud it really is. People fear social repercussions for standing up to the Rock Star or the Queen Bee, but consider: if someone is stupid, venal, or corrupt enough to be a sociopath’s enabler, why would you even want to give them any of your social capital in the first place? You might feel like you have to, for the sake of social harmony, or because the subcultural niche that the sociopath has invaded is important to you, or because it’s your workplace and you really need the job. Even sociopaths themselves can experience this pressure. On Quora, diagnosed sociopath Thomas Pierson explains:

Why do [sociopaths] lie and manipulate? Because people punish you when you tell them the truth.

Giving in to fear, to the detriment of those around you, is how you become the bad guy. Lies don’t really protect anyone. They only kick the can down the road, and the reckoning will only be worse when it eventually comes. Suppressing the truth out of fear of being punished is the same as paying the Danegeld out of fear of being overpowered. It’s a form of the sunk cost fallacy, and Kipling had the right of it:

And that is called paying the Dane-geld;
But we’ve proved it again and again,
That if once you have paid him the Dane-geld
You never get rid of the Dane.

Social capital isn’t some magical thing that some people have and others don’t. Like any other form of currency, the locus of its power is in its exchange. (Yeah, we really are all Keynesians now.) In the case of social exchanges, those currencies are information, attention, and affective empathy. Sociopaths try to keep their victims from having relationships the sociopath isn’t involved in, because those are the relationships the sociopath can’t control or collect rent on in the form of secrets or adulation. Building up those relationships — from finding other victims, all the way up to entire parallel social circles where known sociopaths are unwelcome and their enablers receive little to no interaction — incrementally debases the sociopath’s social currency, faster and faster as the graph expands.

Internalizing this grants you a superpower: the power of giving exactly zero fucks. It’s the same power of giving zero fucks that Paulette Perhach writes about in A Story of a Fuck Off Fund, only denominated in graph connectivity rather than dollars. It takes the same kind of effort, but it pays off in the same kind of reward. When you give no fucks and tell the truth about a sociopath, two things happen. First, people who have been hurt and haven’t found their superpower yet will come find you. Second, the sociopath starts flailing. (One benefit of being right is that the facts line up on your side.) As accounts of the sociopath’s misdeeds come out, the sociopath’s narrative has to become more and more convoluted in order to keep the fanboys believing. “They’re all feds!” he shrieks. “Every last one of them!”

Uh-huh. Sure. Because the feds always assign multiple agents not only to target one guy who can’t even keep his dick in his pants, but to become his coworkers, don’t they? This is not exactly an inexpensive proposition. Reality check: if the feds had wanted to pull a honeytrap (which there’d be no reason to do, given his mascot-only status at Tor), everything would have been a lot more cut-and-dried.

Threats that work well in a silo don’t necessarily work so well at scale.

Of course, an actual programmer would know that scaling is hard.

Tomorrow, we’ll explore why that is.

Posted in Uncategorized | 15 Comments

Defcon Is Problematic

Defcon is hosted in a desert. This is exclusionary to people from cold regions, who cannot handle the heat.

Defcon is in Las Vegas. This is exclusionary to people who have gambling problems.

Defcon provides attendees with open bars. This is exclusionary to people who don’t drink.

Defcon wifi will get you hacked. This is exclusionary to technically ignorant people who don’t understand the risks.

Defcon attendees will try to hack your phone. This is exclusionary to people who can’t afford burner phones.

Defcon ATMs are probably being skimmed. This is exclusionary to people who don’t have credit cards and must use cash to pay for things.

Defcon attendees have poor hygiene. This is exclusionary to people who are sensitive to scents.

Defcon goons walk around with sticks of deodorant to give to people who have poor hygiene. This is exclusionary to people who are sensitive to scents.

Many Defcon events are put on by, and marketed to, specific groups. This is exclusionary to people attending alone.

People attending alone can get in to events through social engineering. This is exclusionary to people with poor social skills.

Defcon is attended by 15,000 people. This is exclusionary to people overwhelmed by crowds.

Many employers pay for people to attend Defcon. This is exclusionary to people who work at startups with limited budgets.

Defcon is in North America. This is exclusionary to people who live in South America, Europe, Asia, Africa, Australia, and Antarctica.

The food in Las Vegas is fairly generic. This is exclusionary to people with dietary concerns such as vegetarianism or gluten intolerance.

People accuse Defcon of being hostile to newcomers, and demand changes to make it more welcoming. This is exclusionary to the old-timers who made it what it is, and just want to have a good time with their good friends.

At Defcon, white men try to tell women what to think about the events that are put on. This is sexist and exclusionary to women.

Posted in Uncategorized | 2 Comments