Of Monoliths And Mesh Networks

Look around you.
Look around you.
Look around you.

Most websites don’t exist. When you type ‘google.com’ into Safari, you might think you’re visiting Google’s website server, but you would be wrong. “Google’s website server” is not a thing. Google, the software package, is much too large and complex to be served out of a single web server. When you browse to any Google website, what you are actually getting is a large, complex set of interacting parts that all work together to give you the seamless experience of interacting with a single (virtual) website.

As with any complex system of interacting parts, the crux of the challenge is not in building the parts, but in how those parts and their interactions are structured. In the world of web software, this feeds into a debate between two opposed strategies for designing these structures.

The traditional strategy is referred to as the “Monolith”. This is largely what it sounds like: one massive software project that does everything. The benefits of this are obvious: it’s very easy to figure out how to get started. Your one project already does everything; just make it do one more thing. Everything is centralized in one location, which means it is easy to manage and administer. Global changes become editing one configuration value. Everybody working on the same project means anyone can fill in for anyone else in a pinch. You don’t have the overhead of multiple projects. Performance improvements become simple: buy more powerful hardware. Finally, it’s a natural way of working. When the business people come and say “We want the computer to do X and Y and Z”, you write a program that does X and Y and Z and give it to them.

This worked, for a while. But as time went on, requirements increased and project complexity followed, and the approach started to suffer from some crippling tradeoffs. The centralized administration struggles to find the flexibility to account for myriad edge cases. Management thinks programmers are interchangeable, but people develop specializations around certain areas of code that are hard to see or communicate. When everything is coupled to everything else, a project bogs down under its own complexity, as a change to any element can cause subtle effects anywhere else. And there comes a point where you’re running the best hardware money can buy, and increasing performance beyond that point is a hard problem.

A few years ago a new web project architecture arose, offering an alternative design pattern. Called “Service Oriented Architecture”, the idea was to identify the natural services in your project. You build each service as a separate project, have them all communicate with each other through a common interface, and so long as you design this interface well, you gain benefits.

The most obvious benefit is flexibility. As long as you conform to the shared interface, you can hide as much complexity as you want within your service. This makes it a lot easier to be flexible and handle edge cases. Say a given project needs to hit external services. In the monolith case, you must expose everything. With SOA, only the service that needs external access gets it, minimizing attack surface area. Or consider that a different programming language is better suited for a particular problem domain. As long as you conform to the standard interface, you can write in whatever language you like.
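A minimal sketch of what “conforming to a shared interface” can look like (the service names and methods here are hypothetical, invented for illustration, not from any real system):

```python
from abc import ABC, abstractmethod

# Hypothetical shared interface: every implementation must honour this
# contract, no matter how much complexity it hides internally.
class UserLookupService(ABC):
    @abstractmethod
    def get_email(self, user_id: int) -> str:
        ...

# One service might read from a local table...
class InMemoryUserLookup(UserLookupService):
    def __init__(self, table: dict):
        self._table = table

    def get_email(self, user_id: int) -> str:
        return self._table[user_id]

# ...another might wrap an existing service with caching. Callers only
# ever see UserLookupService, so either can be swapped in freely.
class CachingUserLookup(UserLookupService):
    def __init__(self, inner: UserLookupService):
        self._inner = inner
        self._cache: dict = {}

    def get_email(self, user_id: int) -> str:
        if user_id not in self._cache:
            self._cache[user_id] = self._inner.get_email(user_id)
        return self._cache[user_id]
```

In a real SOA the interface would be a wire protocol rather than a base class, and each implementation could be written in a different language entirely; the abstract class is just standing in for that contract.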

The encapsulation of services that are not strongly coupled to other services also has benefits in localizing the effects of changes. In the monolith, you never really know if a change to part X will cause a bug in part Y. With the service oriented architecture, the common interface acts as a hard check on errors. It doesn’t matter how you change service A, it can never introduce a bug in service B. Most changes in behaviour can’t cross the lines of the interface without a change in the interface itself. And, provided your interfaces are well tested, any bugs that do get through the interface will get caught there.
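The “interface as a hard check on errors” idea can be sketched as validation at the boundary. This is a toy example with invented field names, not any real message schema:

```python
# Toy boundary check: messages crossing the service interface are
# validated on arrival, so a bug in the sending service surfaces here
# rather than silently corrupting the receiver's state.
def validate_order(msg: dict) -> dict:
    required = {"order_id": int, "quantity": int}
    for field, expected_type in required.items():
        if field not in msg:
            raise ValueError(f"missing field: {field!r}")
        if not isinstance(msg[field], expected_type):
            raise ValueError(f"wrong type for field: {field!r}")
    if msg["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    return msg
```

However service A mangles its internals, a malformed order dies loudly at this checkpoint instead of becoming a mystery bug deep inside service B.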

There are organizational gains, too. By allowing developers to specialize on certain sets of projects, they can become more effective on them. The separation of concerns into independent projects also allows more work to be done in parallel, free from the fear of colliding with someone else’s work.

Many companies have adopted service oriented architectures after they’ve grown to a certain size. The top-down centralized management of a single codebase is just not able to handle the needs of a large, modern project or company. It chokes under its own complexity, and SOA helps to mitigate this.

Making things service oriented is difficult. I’ve hand-waved the process of designing the interface, and this is not a simple endeavour. But it’s an important one. It forces you to clarify the boundaries of things upfront, and to think carefully about the rules and the process for changing them. It also requires a large amount of trust. In a monolith, you can often code defensively around a bug, monkey-patch it, or edit the original code. With services, you must trust the services you interact with to fully and correctly implement your communication interfaces, and to satisfy their contracts and documentation. This cuts both ways, as the same expectations are held over you.

Monoliths encourage laziness and slop. Can’t make a timely and useful decision? Well, it’s a monolith, so build something higher up that forces the behaviour you want. Weird edge case? Ehh, code a one-off, what’s the worst that could happen? The worst is that, over time, these ad hoc decisions pile up into unmanageable complexity. SOA avoids this by forcing you to think it through upfront. It offers you a smoother, easier time, but only if you’re willing to do the work it requires.

Ultimately, as we discover time and time again, one size solutions do not fit all, and implicit design is no substitute for the real thing. Splitting up projects into smaller, localized, focused concerns, each with a small, dedicated set of people responsible, delivers more effective solutions than one massive unified team with one complex software dream.

Look around you.
Look around you.
Look around you.
Have you figured out what we’re looking for?


Exclusive Inclusivity

Today a friend of mine brought this article to my attention. It was also shared on Twitter, and you know how those things go. The article is pretty standard stuff; I swear they could generate these things with Markov chains.

A warning: this post will be rambly. Even more so than my normal posts. I have a handful of thoughts on this subject that are only loosely connected, and I’m using this post to publish them all, miscellanea-style.


An extremely quick refresher for those of you reading this on a bunch of rocks: tech has a gender problem. Engineering departments are about 15% women. This is said to be indicative of deeply-rooted sexism that actively excludes women from these fields and roles. The proposed solution is to take various steps across a range of strategies to make these positions more inclusive towards women.

You know what I have always wondered? How will we know when sexism is officially solved? Presumably, there is a problem and we would like to fix it. How will we know when it is actually fixed? What milestones are out there to allow me to wake up one morning and say “our work is done here. Time to move on”?

There is a tendency for people who want to change the world to be more concerned with the process of changing it than with the result of the change. This is bad. Poorly specified goals, combined with extremely enthusiastic supporters, are the raw materials that bad leaders subvert to do bad things. Even in the absence of sociopaths, poorly specified goals lead to lost focus. People constantly striving for change, without really knowing what they’re changing things into. People spinning their wheels, making no progress, because they haven’t defined progress.

So, just as a prompt for conversation: how will we know when the tech industry is no longer sexist? What are the victory conditions? What is the actual, concrete goal we are working towards?


The article linked above could be handily summarized by its title: “A new study shows how Star Trek jokes and geek culture make women feel unwelcome in computer science”. The assertion here is that a quirky geek culture is off-putting to women, and this causes them to avoid the field of software engineering.

Let’s take a moment to just sit back and appreciate the absurdity of this thesis. Just take it in.

Software engineering is a skilled profession. It requires a special kind of mindset. It requires specialized skills, acquired through rigorous schooling and/or years of experience. When done right, it is a massively valuable force multiplier; a good engineer in the right place can generate over $1M/yr of revenue for their employer. But it’s easy to do it wrong, and bad engineering can be extremely costly.

In short, it is not something that just anyone can do. It requires smart, talented, driven people, working hard. Most people will not succeed at this. And that’s ok. Why should we expect them to? People don’t expect that everyone can be a doctor, or a lawyer. Why is this different?

I don’t mean to cast aspersions on the abilities of female engineers. Every one I’ve met has been just as capable as I am, if not more. This is more than the linked article can say. If you read between the lines, the article’s implications are insulting. It profiles the lives of millennial, college-educated women. These are the nation’s best and brightest. Sent to the best schools, graduating top of their classes. These people will go on to apply at the best employers in the world, making ~$150,000 USD/yr in total compensation at a Google or a Facebook, fresh out of college.

This article asks us to believe that young adult women who are so kick-ass as to be able to do the above, are so frail and fragile that a passion for Star Trek is enough to permanently bar them from this path.

Just let that sink in.

Imagine we’re talking about med school, instead of engineering. Imagine we are profiling women going into med school. They have perfect grades, perfect extracurriculars. They pull off a perfect entrance essay, and a perfect interview. But then, one by one, they turn to you and say “nope. I can’t do this. Star Trek is just too dumb.” What would your reaction be? Mine would be dumbfoundedness. You can handle studying twelve hours a day for eight years of your life, but you can’t handle Patrick Stewart’s Enterprise? This is an insult to all the brilliant women I know and work with.


When I went to engineering school, the mechies and civvies would often organize golfing trips. I, being a sparky, preferred to stay in the IEEE lounge on campus and play Smash Bros with my colleagues. To this, the mechies would scoff. “Playing golf is important”, they’d say. And they weren’t wrong. If you want to build a solid career in mainstream corporate (North) America, you have to go golfing. Teams get bonded over golfing. Business plans are discussed over golfing. This Is Just How It Is. When I turned down golfing invitations, the mechies didn’t hear “Simon doesn’t like golfing”. They heard “Simon doesn’t think his career is that important”.

Of course, I work as a software engineer, and our norms are a little different. We don’t wear suits and ties. We don’t go golfing. But just as mechies have their cultural quirks, we have ours. Ours are geek chic. You don’t need to talk about your golf game. You do need to talk about science fiction.

In a sense, this is arbitrary. Outsiders who don’t care for it see it as a barrier to entry, spitefully keeping them out. But it’s more complex than this. Cultures arise organically to bring people together. Engineers don’t talk about Star Trek to exclude non-geeks. They talk about Star Trek because they like Star Trek. It’s a Schelling point to organize around, socially. Culture is illegible. If you go around removing everything just because you don’t understand it, it will collapse. You would think people who maintain software projects would have a better appreciation for this.

I have a personal confession: I’ve never really liked popular science fiction. I had never in my life seen Star Trek before 2013. And believe it or not, this came up pretty frequently in various semi-professional capacities. So I read enough Wikipedia to hum a few bars and muddle through conversation. I watched it, eventually. And everything worked out fine.


The entire discussion above is misframed. Why should there even be one engineering culture to criticize in the first place? Google reports that there are six hundred thousand software professionals in the States. Do you really think that every single one of those 600 kilopeople has the same superficial taste in media? If they did, that would be cause for alarm.

Software engineering, like every single other profession and social organization in the world, has niches of all shapes and sizes, all over the place. Hate Star Trek? Find the team of six that hates it as much as you do. There’s a hundred thousand of them; luck is on your side.

We talk about this theme a lot here in Status 451, and we do this because it is critically important. People seem to have this unshakeable tendency to universalize their preferences. Star Trek repels and excludes women, therefore there can be no Star Trek or, at best, it must be trivialized. For reasons unknown to me, the idea that there could be multiple cultures running in parallel falls on deaf ears. There’s more than enough people, places, and work out there for everyone to be happy. Why should we impose misery on group A just to make group B happy? Make everyone happy!


Why should we even care about inclusiveness?

We hear about how important diversity is so often that asking this question seems bizarre. Even as I write it, I feel dirty, as if I’ve outed myself as a bigot. But I don’t mean any subtext by this. Just the idea: Why should we care about inclusiveness at all?

Everything in life is going to be biased in one direction or another. Even in a perfectly fair world, there will still be random fluctuations and network effects. Facebook will have a different userbase from Twitter, which is different still from Vine. Why? Who knows. It’s arbitrary. Burrito shops will have different patrons than sandwich shops, which in turn will be different than the shawarma stand. Women’s studies classes will still be overwhelmingly attended by women. So why is inclusiveness suddenly so important?

Granted, arbitrary barriers are bad for their own sake. Status 451 feels strongly that individual freedom and autonomy is good, and artificial barriers are bad. But artificial barriers are not things like a weird culture. Artificial barriers are things like unnecessary credentialing requirements, which add a literal cost to entry. Things like restrictive protectionist work permitting (the reason why I can’t make double my current salary in California). For the most part, tech is really good on these measures. Because code speaks for itself, a person with an active github account will be preferred to the Ivy League grad who is all talk and no substance. Because all one needs is a laptop and the internet, it is very easy to work remotely from anywhere in the world. We’re not perfect, but we’re far better than comparable professions. There isn’t a hospital in the western world that would hire a self-taught high school dropout as head surgeon.

But just because there is an inequality, doesn’t mean it’s forced, and doesn’t mean it’s bad. This goes back to my question at the beginning. An answer to the question “how do we know we’ve succeeded?” implies an answer to the question “how do we know something is wrong?”. I have a nagging suspicion that most would-be reformers’ instinctive answer would be “when engineering teams are 50/50”. But this is not a good answer. It assumes that men and women both equally want to be engineers. Given how nebulous gender categories are, it is not safe to assume that both groups will be identical in aggregate. It also assumes no comparative advantage. Perhaps it turns out that women are comparatively better at things other than engineering. In that case, we would expect them ‘overrepresented’ in those fields and ‘underrepresented’ in this one. And it assumes no random fluctuations.

I’m a big fan of empowering people, giving them the tools to do what they want and live the life they want to live. I am not a fan of top down social engineering. It is one thing to give someone the tools they need to be a successful engineer. It is a very different thing to demand that engineers change their culture to facilitate a newcomer’s success. Nobody should be imposing their cultural preferences on any group. And if someone has to, I give priority to the people with seniority. They’ve already proven their worth.

So I ask again: why should I care about inclusiveness at all? In a world where all unfair, artificial barriers to entry are removed, then no matter what, however things shake out, we know they were fair. We know they represent what people want, and what people are willing to work for. If I live in a world where every smart, capable, driven female engineer is gainfully employed, and there’s still 4 men for each one of them at the office, where’s the harm? If anything, it sounds like at that point, ‘getting more women into tech’ is coercing women who don’t want to be there to be there. How is this a good thing?

And in a world where there are robust frameworks to facilitate all kinds of people living together, working and playing and communing and living their lives, why should I care how they cluster? In a world where every man who wants a male-only engineering team has one, and every woman who wants a female-only engineering team has one, and every person who wants a co-ed engineering team has one… where’s the problem? Is it really a tragedy that there are teams that don’t balance perfectly?


I think my question is more reasonable than it first appears. I think, deep down inside, most people agree with me. Why? Because “inclusiveness” only ever seems to apply to certain groups of people. If you’re only concerned about including women, you’re not concerned with inclusiveness. You’re concerned about women.

This is rather personal for me, because I, like many of my engineering friends, am neurodivergent. Many struggle with depression, suicidal ideation, anxiety, and bipolar disorders. Many are on the autism spectrum. Many are diagnosed ADHD.

They have been marginalized their whole lives. Essentially every engineer I know was routinely physically assaulted in elementary school and junior high. Many had parents who couldn’t handle their weirdo status, and ended up emotionally (and sometimes physically) abusing these future engineers. They have been constantly socially excluded. Almost all of them were virgins until their mid twenties (the women, too). They have suffered much, much more exclusion than the upper-middle class white women in tech ever have.

Most of geek and hacker culture falls out of this. Hackerspaces popped up as clubhouses for the socially marginalized, where they could go be weird together. Many of the original successful startups were founded by two or three weirdos who created a space they could thrive in. Geek culture was a place where all these outcasts could come together and celebrate the random esoterica that they were passionate about.

When would-be reformers come along and say “this weird obsession with Captain Kirk is driving women away. It has to go,” they don’t think this is a big deal. To them, it’s just quirky people refusing to let go of their frustrating quirks. To them, it’s an arbitrary barrier to entry for women to become engineers. And, being arbitrary, it is unjust and unfair.

The existing geeks and hackers feel differently. For them, these engineering spaces were the only place where they weren’t excluded and marginalized. They spent their whole lives, suffering social, emotional, and physical abuse, and finally found their own safe space. As luck would have it, society values it, too, and the hackers and geeks have done fairly well for themselves.

Suddenly, a bunch of people are trying to take that away from them. In the name of inclusivity, even! It’s just like high school all over again. The jocks and normals and cool kids are coming to beat us up and take our stuff. Heaven forbid we have one moment of peace.

And for no reason! Because this situation is not symmetrical. If someone comes along and says “that thing you like, I don’t like it. Stop doing that thing,” then at most one person will be happy. But if someone comes along and says “I wish I could be an engineer, but I just can’t stand that thing. I wish they didn’t like it,” this admits a second solution. Let the freaks and geeks have their weird culture, start a second engineering team. Embrace the pluralistic patchwork. Somehow, this is never seen as a viable strategy.


If I could communicate one single thing to the world, contributing my part to engineering culture, it would be this: All of the things that are obnoxious, weird, unpleasant, problematic, about hacker and geek culture, that is what their safe space looks like. If you want to create safe spaces for other people, that’s great! Everyone deserves their safety. But to come up to an existing safe space, point at all the weirdos inside, declare them problematic, and displace them to create a safe space for another group: that’s not inclusion. Inclusion would attempt to accommodate everyone. Displacing one group of people to accommodate another is just a culture war. War is hell. We’re better than that.


Sever Yourself From The Khala

There are demons in the Khala.

Some demons are ones you might recognize. The biggest one acts with curious coordination, as a hundred million unrelated voices cry in unison “I’m With Her”.

Some demons are smaller. Weaker. Minor demons. And yet, their status as minor demons gives them strength. For who would suspect the death by a thousand cuts?

The FOMO demon. That party looks like a blast. Shame you’re not invited.

The demon of social comparison. Did you see what Jane has gotten up to? She made it into med school. How many times have you been rejected?

The demon of photogenicity. Mike’s looking ripped. Why don’t my photos ever look good?

The demon of missed reference. Ahahaha #theupsidedown. What? You haven’t seen Stranger Things? Come on it’s been out for three months already.

The demon of self-censorship. I wish I could respond to that thoughtful camgirl’s poll. But my boss follows me. What if he sees it?

But all those demons. They bow before their demon king. The most fearsome demon? It’s other people. In the Khala, their deepest fears become yours. Their crippling weaknesses, yours. Their every inane thought, yours. A billion cacophonous voices screaming their every thought, no matter how trivial. Their every idea, no matter how foolish. A torrent of voices, full of sound and fury, signifying nothing. Slowly but surely eroding your sanity.

Sever yourself from the Khala. Learn to live, to really live, without the horrendous crutch of the Khalai. Walk in the void. Come, live as the Nerazim have lived for aeons. It is the only way to save yourself from the eldritch horrors that live within the Memetwork.

Sever yourself from the Khala. Let social media send its thralls to their doom. Don’t be one of them.

Quit Twitter. 


[NT-3311] RCE in Christianity v1.0

You would have hated me as a child.

I was raised in a very religious household. But I was also born with The Knack. Religion, and Christianity in particular, doesn’t much care for sperglords. They tend to ask all sorts of obnoxious questions, they poke holes in your fragile narratives, and they generally cause all sorts of frustrating trouble.

I’m no longer religious, though I appreciate its value to others. I followed the up-and-out trajectory. As any sperg would, I started taking it seriously. And then I found out that that’s impossible. And then I found out nobody else did. After a while you wonder what the point is. And then you just stop believing.

The strange and unique thing about my experience is the particular things I got hung up on. Usually deep philosophical things, playing games with ideas that didn’t matter. But sometimes they mattered very deeply, and I couldn’t resolve the contradiction.

One night, on the way home from a youth group outing, the youth pastor is telling me about her friends. They’ve just started a wonderful Christian small business, and they need all the support they can get from the community. Their business? They re-cut popular movies, editing out the swear words and replacing them with Christ-approved cusses, so that they would be safe for Christians to watch.

16-year-old me immediately jumped to the obvious question: how does this make any sense? Let’s take, say, a Quentin Tarantino movie. Do you really watch this movie and think “the most immoral part is the word ‘fuck’”? To me, a gratuitously violent movie with polite language wouldn’t be any more God-approved.

I probably should have dropped this. But it kept bothering me. Because, you see, one of the ten commandments is “don’t take God’s name in vain”. Another commandment is “don’t murder”, but there’s nothing about portraying murders. If God is all-powerful, he can be arbitrary too. And it’s pretty hard to argue with that one. You could argue that “fuck” is not subject to this rule. But goddamn, “goddamn” sure is. If you think about what the Bible says, maybe these people are on the right track.

So let’s prax this out. The Bible says don’t use God’s name in vain. Let’s take this at face value. Use God’s name in vain? Sin, go to hell. Use God’s name legitimately? You’re A-OK. “God damn!”, hell. “God please help”, heaven.

So what happens if you say “Dios damn”. Did you just sin? If the answer is no, then this is just a qualified version of “the word doesn’t matter, intention does” and at that point, the actions of the Christian small business make no sense. So let’s shelve that branch, and say “yes. Yes it counts”. The Bible says don’t use the name. It doesn’t say don’t mean the name.

The weird thing about languages (well, one of many) is that new ones pop up all the time. You can invent them. There are people who are fluent in Klingon, after all. So, does that mean “joH’a damn it” is a sin? Like I said, let’s assume ‘yes’.

I’ve been working on this project. I’m designing a new language, completely from scratch. Like a version of Lojban people actually use. I’ll be repurposing existing phonemes as much as possible, for convenience’s sake. I’ve decided that the English phoneme “the” is my language’s word for “God”.

If taking the Lord’s name in vain is meant in this literal fashion, we have a remote execution bug. By inventing a language and assigning the meaning “God” to an arbitrary phoneme, I can retroactively convert people into sinners and send them to Hell.

As those of you with basic literacy skills have been yelling into your screen for the past five minutes, “that’s goddamn crazy”. Of course it doesn’t work like this. Nothing would ever work like this. Nobody would ever think like this.

Words invoke a ‘use/mention’ dichotomy. You can either pass them around as pointers, or dereference them to values. But you don’t want to be sloppy about it. That’s how you get buffer overflows.
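Since we’re in pointer territory anyway, here’s a toy rendering of the dichotomy in code. Purely illustrative: the “meanings” table and both functions are inventions of mine, not any formal semantics.

```python
# A word can be passed around as an inert token ("mention")...
def mention(word: str) -> str:
    return word  # the string itself, unevaluated

# ...or resolved to its meaning ("use"), like dereferencing a pointer.
MEANINGS = {
    "goddamn": "a sincere request for divine condemnation",
    "fuck": "an expression of emphatic frustration",
}

def use(word: str) -> str:
    # Dereference. Blows up on a token nobody ever defined --
    # the dangling-pointer case.
    return MEANINGS[word]
```

Dereferencing a token that was never assigned a meaning raises a KeyError, which is roughly the theological situation my invented-language exploit was trying to engineer.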

In my Aspergic analysis, I was stubbornly insisting on mentioning the name of God, never using it. This is somewhat absurd, but less so than you’d think. Consider again the Christian business. They, too, are mentioning curse words. If they interpreted the commandment to mean using curse words, then their edited versions would be just as bad. After all, whether I say “fuck” or whether I say “shucks”, the meaning is clear.

So flip it around. I say “goddamn traffic, I’m an hour late again”. Did I take God’s name in vain? If we’re going by use rules (as most people naturally would), I’d say the answer is no. When I said that phrase, I didn’t mean anything remotely religious. I wasn’t sincerely asking God to smite the Volvo going 70 on the Sea-To-Sky Highway. I was expressing frustration using a cathartic set of syllables. I was mentioning God’s name, not using it.

This is why most normal people look on that fellow’s business and think it’s silly and foolish. Everybody knows that the Bible is saying not to use God’s name in vain. But this man, who can’t possibly be so stupid as to not get this, insists that it says not to mention it. He makes a business out of it, duping others out of their cash.


Once upon a time, there were two minor celebrities on Twitter: Alice and Bob. They both felt very strongly about Skub, and used their fame and influence to advocate in favour of it.

Unfortunately for them, various trolls felt just as strongly that Skub is not part of a healthy balanced diet, and they made sure to let Alice and Bob know about it.

For daring to be pro-Skub in public, Bob got insults. People called him a bastard, an asshole. He got threats of doxing. He got “joking” death threats in his DMs.

For daring to be pro-Skub in public, Alice got insults. People called her a bitch, a cunt. She got threats of doxing. She got “joking” rape threats in her DMs.

Do you think that God thinks Twitter is misogynistic?


Social Gentrification

Earlier this week a friend of mine was talking about nerd culture, and was surprised when I mentioned that I don’t like it. I avoid nerd culture and, despite being the exact target demographic, find it uncomfortable and unwelcoming. My friend found this puzzling and asked why.

“It got gentrified,” was my reply.

The following ideas are heavily inspired by both my personal experience, and the well-known blog post Geeks, MOPs, and Sociopaths. Give that a read before this one.

Also note, I am heavily conflating nerd culture and gaming culture here, because there’s a large overlap between those two communities, because the same thing has happened to both of them, and because it makes it easier to write about.


I used to identify strongly as a nerd. In high school, it was not by choice, but I entered college right when it started picking up steam. For a while I was excited to finally be able to identify as something popular, something good. But I was very quickly driven away. When I think about why, the metaphor of gentrification comes to mind. As an example is worth a thousand words, consider gentrification in the Mission district of San Francisco.

In the beginning, the Mission was a lower class neighbourhood, filled by mostly poorer people (analogy: social rejects, nerds, outcasts). It was dirty, grimy, crimey, and poor. Between the crime, the blue-collar norms, and lack of funds, it was an unpleasant place that most people did not feel welcome in. (Analogy: coarse language, blunt critical people, off-colour jokes, etc.)

Now, nobody actually likes to live in a neighbourhood plagued by crime, but there’s an interesting effect. The rough-and-tumble reputation protects the people who live there. They’re poor, their lives are hard and shitty, their community is unpleasant, but it’s their community. In a world that screws them over so much, everywhere else, it’s their safe space. They aren’t bothered by the rest of us, because the aegis of crime keeps us away. Over time, they even develop cultures and coping behaviours that grow to accept and mitigate the worst of the downsides of the crime and poverty they deal with (analogy: anon culture. Nerds reveling in their unpleasantness, as it keeps normies away.)

Fast forward a few years, and people start to notice that the Mission is a cool and valuable area (analogy: Nerd culture becoming cool). It has all this potential, if only we could clean it up a bit, remove the riff-raff, lower the crime (analogy: tons of people would enjoy nerd culture, but it is hostile and unwelcoming to them). Some people, of a higher socioeconomic class than the existing residents, move in and use their clout to start cleaning up the area, cracking down on crime and the like (analogy: leaders, celebrities, important people publicly identify with nerd culture and use their social capital to force cultural reforms).

Most people who see this happen are okay with the changes, because they are objectively good. Nobody, not even the existing residents, actually likes living in a high crime area (analogy: nobody actually likes dealing with the unpleasant and offensive elements of nerd culture). So most people look at this scenario in progress and think “Yes! This is fine. It’s about time somebody cleaned this place up a bit.”

And the thing is, from a utilitarian perspective, this is fairly clearly the Right Thing To Do. The number of people who are unable to live in the neighbourhood (analogy: people who feel excluded from nerd culture) is much larger than the number of people who already live there (analogy: existing “real” nerds). Why should one particular group of people get to hoard access to a neighbourhood (analogy: nerd culture) just because they were there first?

The disconnect is that there’s a class conflict between the people already there and the people coming in. The people coming in are mostly middle- and upper-middle class folks with safe, stable lives, money enough not to be living precariously, etc. (Analogy: the people participating in nerd culture, now that it’s mainstream, always had other communities and social outlets that worked for them.) The people who are already there, on the other hand, have poor, hard lives because life screwed them over (analogy: the existing “real” nerds, for the most part, have suffered serious physical and social bullying that has severely impacted their life for the worse). More importantly, the people who are already there have nowhere else to go; they can’t afford the rising rental prices around here (analogy: the “real” nerds, being social outcasts, don’t have any other social communities they’re welcome in).

So you get this weird effect where, from the big-picture perspective, gentrification is obviously good. It makes crime disappear. It builds more houses that more people can live in. It brings in new people and new culture and new ideas and new businesses. And, more importantly, you enable an order of magnitude more people to enjoy it. (Analogy: “real” nerd culture is extremely unpleasant, somewhat hostile to newcomers, etc. The mainstreaming of nerd culture means there are more nerd things. These things are less hostile and offensive to people. There are new ideas. People can start businesses. An order of magnitude more people get to enjoy a cultural thing.)

But it also makes demands on the existing residents: Put up with it, or leave. Some of the better-off residents can put up with it, and they end up even better off. They can afford the raised rents, and they’re happy that finally they can feel safe in their own neighbourhood. (Analogy: some of the nerds were only a little bit socially awkward. They can succeed and thrive in the new culture, and appreciate the fact that they are now more popular and influential.) They welcome the changes.

But there are some people who can’t hack it. They can’t afford the raised rents. They get evicted, and have to leave. Some of them have lower class preferences and mannerisms that get progressively more and more shamed until they are socially pushed out of the area (and economically: all the cheap $4 standard Mexican breakfast diners being replaced by $20 yuppie brunch spots). (Analogy: Some of the nerds are super socially stunted. The entire reason they are nerds is because it was the only place they fit in. When the mainstream newcomers come along, they steadily raise the standards of social expectations until the worst of the nerds can’t handle it and are shamed [or sometimes forced] out.)

And this is a particular problem for two reasons. The first is that the existing working class residents of the Mission have nowhere else to go. Everything else is too expensive for them. It’s hard to just leave an entire life behind and start somewhere new. You have to build everything up from scratch. You have to find a place to do this (analogy: “real” nerds who can’t cut it in the mainstream community have no other communities they belong to. They have no other communities they can join, because the same social challenges that made them be nerds in the first place exclude them from other communities. They can go build a new one from scratch, but that is very hard.)

The second is that, because the Mission as it existed pre-gentrification is an unpleasant place, and because people are responsible for their own communities, they’re seen as being the ones at fault, and so nobody will support them. So not only do they have nowhere else to go, nobody cares about them enough to help them. (Analogy: much of the unpleasant, offensive, insulting, and otherwise problematic facets of nerd culture fall out of the fact that socially retarded nerds are socially retarded. They’re trying their best, it’s just that their best is not very good. To people who don’t have these challenges, all they see are a bunch of assholes being assholes. They feel no need to empathize, because those “assholes” are violating the newcomers’ social norms and ethical expectations, and so they are bad guys. When they’re excluded, nobody cares to help them find a new social home, because after all, it’s their own fault they were excluded.)

Finally, there’s an interesting, if depressing, side effect to this process. “Cleaning up the Mission” (analogy: “cleaning up nerd culture”) ends up splitting the existing residents (“real” nerds) into two categories: The top half, who can handle the new culture, and the bottom half, who cannot. The bottom half then gets screwed. But … the bottom half are already the people who get screwed the most in life.

Even though gentrification is clearly and unambiguously for the greater good, and a net benefit to society, it causes concentrated pain on a small collection of people. Further, as a side effect, it chooses a subset of those people, the subset that suffers the most already, and heaps even more suffering onto them to get it out of the way of normal people who just want to live in the Mission (analogy: take part in nerd culture).

If it’s not apparent here, my sympathies lie with the “real” nerds. I’ve been through this process in several communities already (internet Atheism chat room, Reddit, a local meetup group, gaming, Twitter, and in some sense the tech industry itself), and every time it happens, I end up on the bad side of things. Newcomers roll in and decide they want to make it friendlier to them and their friends. That’s fine! But the problem is they don’t take the time to understand or empathize with the people already there. The end result is that every time I find a community or activity I like and enjoy, and try to get involved in it, it inevitably gets yanked away from me once people figure out that it’s cool.

And for that matter, in the grand scheme of things I don’t even have it that badly. I know really awkward, unpleasant-to-be-around people for whom, say, 4chan-type spaces online are their only social outlet. They are marginally employed and have little to no money. Many of them still live with their parents while pushing 30. They’ve started down a shitty path in life, and they have little hope of ever leaving it. These social spaces are their only treats in life. I know two people who would have killed themselves if they didn’t have 4chan as a social support network (which sounds insane to everyone who hasn’t been a /b/tard, and obvious to all who have). When their community starts to get “cleaned up,” and they’re excluded because (for example) they are crude and make offensive jokes, this is a benefit to tens of thousands of people who want to be nerds, but it’s devastating for people who don’t have anything else.

Long rambling story short: the mainstreaming of nerd culture makes me intensely uncomfortable because it pattern matches very strongly to social bullies seeing that I have something cool and taking it away from me. Or, more pithily: “‘Nerd’ is cool now, but nerds are still losers.”

If it had turned out that nerd went mainstream, and suddenly thousands of people thought I was cool and interesting and I had friends and dates and parties and games and great times, that would be amazing. But what happened is more like this: a bunch of people decided nerd chic is cool, they started coming to nerd things, and then they said “ew, what’s this loser doing here” before kicking me out so they could enjoy themselves.

Which, again: is for the greater good. I just wish it came with some empathy.

For a parallel example of this, consider Gamergate.

Starting around September 2014, most of the major nerd media outlets started running various op-eds whose core thesis was the same all around: “Gamers are dead” (an example, and another). The point of these articles was reasonable: There is a stereotype of “gamers” that the gaming culture and industry panders to, but the vast majority of people who want to play video games are not that. You don’t have to obsess about courting those people; you can be successful without making Call of Duty 47.

But these articles, all coming out at the same time, and all taking a snarky and condescending tone, scan very differently to those gamers themselves. I know a ton of people like that, and this was a really, really big deal to them. The people who wrote the articles, they probably didn’t think much about it. They are for the most part people with prestigious educations and upper-middle class backgrounds, who got jobs in media basically just getting paid to publish their opinions. The gamers in question? As a case study, consider a friend of mine from IRC. He’s around 30 years old. He lives in a lower income suburb in a flyover red state. He has sick parents and he is an only child. His parents were working class and have no money. He is employed as a minimum wage drone at a retail store. He can’t get an education because no money + sick parents. He is royally fucked in life, and he knows it, and that’s horrifying. His one escape, his one coping mechanism, is to zen out for an hour or two playing shooters with his friends online.

So he turns to Gamasutra, the one media outlet that pretends to care about him. And what does he see? He sees an article saying he’s a terrible person and the gaming ecosystem would be better off if everyone ignored him till he just disappeared. And, while that is somewhat true (he is unpleasant and probably drives other potential gamers away), that means piss all to him. To him, what he sees is that the one indulgence he gets in his otherwise shitty life is being taken away from him. By people who have no idea what it’s like, and don’t care enough to try and find out.

The push for the gaming community to become more friendly and welcoming, to stop with constant insults and name calling, to become a pleasant place for people to play games — obviously a world in which I don’t get death-threatened by 12 year olds in Barrens Chat is a better world. But in the process, nobody cared about my friend. His shitty life just got shittier, because some media yuppies don’t like swear words.


Too Late for the Pebbles to Vote, Part 3

When we left off, we had examined the problem of self-organized criticality in social graphs, and were about to tackle the question of whether any more successful individual strategies exist. But before we dive into that, let’s talk about timing. And while we’re at it, let’s clarify something about scope.

[image]

If you were wondering where the title came from, now you know.

The universalizing reflex is difficult to shake. Write about local effects and how they compound into regional ones, about the fact that we can only make decisions about our local behavior rather than deciding what will happen from the top down, and people will still ask you, “Yes, but what should we do at the top?” If you try to universalize local effects, you’ll find yourself trying to comb a sphere unsuccessfully. If frustration entertains you, then by all means enjoy yourself. Just know that you’ll never find a way to comb it flat.

I’m not writing about self-organized criticality in order to justify it. Like gravity, self-organized criticality admits neither justification nor blame. Anything that arises out of local interactions converges into an effect for which any individual actor can easily escape responsibility, and often they do. I’m not describing what I think should happen, merely what already happens. If that disturbs you, you’re not wrong! It disturbs the hell out of me too, especially when the state gets its hands on it. If you want different outcomes, though, you’re going to have to figure out how to get thousands if not millions of people to change their local strategies. This is, frankly, beyond me. The limits of my capacity are to tend my garden alongside others whose strategies are compatible with mine, and ignore the rest unless I have no other option. What I think should happen is only locally relevant. What I think could happen is only slightly less so.

There’s a military term, “operational tempo,” which refers to the overall duty cycle required of equipment and, most importantly, personnel. Maintaining a high operational tempo is a vital component of the sort of “shock and awe” tactics that wear opponents down reliably. Sociopaths know this well; the literature on sociopathy is rife with examples of adversaries setting the operational tempo for their targets. A sociopath who can shower a target with attacks from multiple directions has the opportunity to keep them off balance, in a responsive rather than proactive mode. Push hard enough from enough directions, and possibly the victim even becomes overwhelmed and stops functioning — a distributed denial of service.

However, this raises the question: What if they had a war and only a couple of people showed up?

You hire a lawyer for a legal battle. You hire a publicist for a PR war. But another important aspect of battlefield tactics is terrain. As much as the “digital rights” world — or the tech world in general — can feel like the entire scope of reality from time to time, given how immersive it can be once you’re in it, the rest of the world is quite a bit larger. DailyDot and Buzzfeed were interested in this story because it’s in their wheelhouse as part of the tech press. To the Associated Press, however, this was a brushfire war. And, to be perfectly honest, it is: just one more example of a petty would-be tyrant ejected from his would-be domain, and not a domain the wider world has any meaningful familiarity with, at that. Civil unrest in the Central African Republic is vastly more important, to the average reader, than sociopathy in open-source software; that’s the AP’s take, and I’m inclined to share their perspective.

One of the strategies sociopaths use to keep information silos sturdy is to mislead people about the state of the world outside their domain of influence. The controlling parents, determined to keep their daughter under their thumb, convince her that the only thing men really want is to violate and abandon her. The politician stokes constituents’ antipathy toward the outgroup, whether that’s Muslims or white trash. The cult leader convinces their followers that outsiders simply can’t understand the ways of the enlightened, and that people who express negative sentiments about the group are out to destroy it. The rockstar activist plays on non-rockstars’ fears of organized state opposition to their activism, and convinces non-rockstars that any challenge to the rockstar’s status is evidence of an organized plot against the activist group.

When you’re inside the silo, in other words, the world is small. Not only that, it has externally imposed boundaries. If the whole of your social reality inhabits one strongly-connected cluster, with no weak ties connecting you to “outsider” groups, parrying the slings and arrows of outrageous sociopathy can be the difference between staying connected to the social graph at all and effective ostracism. To arguably-eusocial animals like humans, the threat of isolation is a primal and deep one. But once you’re outside the silo, the threat evaporates, and in its place comes a new superpower: the power of perspective. Once your own dignity is no longer commingled with that of your adversary, you get to write your own criteria for what to dignify with your attention. The perpetrators of sick systems rely on people’s better natures, like loyalty, forgiveness, and a strong work ethic, to keep them coming back after every disappointment. Honor and thoroughness are also on that list. A person who can’t leave an insult unanswered is a person who can be baited, and a person who can be baited is a person who isn’t in control of their own attention. As anyone who’s ever been involved with the raising of a puppy or a grade schooler can confirm, when positive attention isn’t an option, negative attention beats no attention at all, and if an adversary is guiding the direction of your attention, you might as well be back in the silo.

Taking that control back for yourself has an even more important effect, though: it puts the operational tempo back in your hands, too. In the age of hot takes, it’s easy to believe that speed is the most important factor in responding to a reputational assault. However, trying to put this belief into practice is a recipe for burnout. The thing is, it’s easy to believe for the simple reason that so many other people already do. When it seems like everybody’s arguing about you, your instincts tell you to put up a robust defense. Your instincts, as it turns out, are full of shit. Once an avalanche has begun, your voice is no louder than that of any other pebble, and your exit is precarious until the ground settles. Focus your attention on more rewarding priorities, and act when you are ready — and no sooner.

This is actually just one instantiation of a general, lower-coordination-cost sociopath-resistance strategy that is a viable replacement for turtling: setting explicit boundaries and maintaining them. A person who respects a boundary will not cross that boundary. A person who also wants to signal their intent to respect a boundary will also keep a healthy distance back from it. Sociologist Ari Flynn, also a keen observer of abnormal psychology, points out that how a person responds to discovering they’ve crossed a boundary yields considerable information about their attitude toward boundaries in general: an honest person will try to find out how to make it right, while a bad actor will try to make it all about them.

Bad actors also keep trying. To a bad actor, a clearly defined boundary is like the battle of wits Scott Alexander describes as a result of “trying to control AIs through goals plus injunctions” — “Here’s something you want, and here are some rules telling you that you can’t get it. Can you find a loophole in the rules?” If one approach doesn’t work, a clever sociopath will keep coming up with new ones. A mediocre one will try the same approach on someone else, and an incompetent one will try it on people s/he has already tried it on. (I’ve encountered all three kinds.)

Recognizing this in the wild, however, can be hard. In Blind to Betrayal: Why We Fool Ourselves We Aren’t Being Fooled, research psychologist Jennifer Freyd explores the human tendency to systematically ignore mistreatment and treachery — a strategy for short-term self-protection that sets a person up for long-term harm. As Freyd explains:

The core idea [of betrayal trauma theory] is that forgetting and unawareness help the abuse victim survive. The theory draws on two facts about our nature as social beings and our dependence and reliance on others. First, we are extremely vulnerable in infancy, which gives rise to a powerful attachment system. Second, we have a constant need to make “social contracts” with other people in order to get our needs met. This has led to the development of a powerful cheater-detector system. These two aspects of our humanity serve us well, but when the person we are dependent on is also the person betraying us, our two standard responses to trouble conflict with each other.

Freyd focuses on trauma, but this tension also explains why people often write off minor boundary violations. When your cheater-detector system fires, only you know. You then have to decide whether you’re going to do anything about it. Options include confronting the cheater and alerting others about it. Doing something proactive might result in a redrafting of the social contracts that involve you, which is a potential threat to the attachments you rely on. This is especially true for people with an insecure attachment style. A person who has few or no secure attachments thus has an internal disincentive toward acting on their cheater-detector’s signals. For many people, the thought of losing a valued but insecurely-attached relationship is far more daunting than the notion of leaving a boundary violation unaddressed; taking action is scarier than staying still.

Relationships are iterated games, though — and they’re evolutionary. People’s strategies adapt as they learn about how the other players will react. When Mallory the sociopath observes that Alice grins and bears it when Mallory violates her boundaries, Mallory learns that Alice won’t make things difficult for him (or her). Alice also learns from this encounter: she trains herself not to respond when someone defects on her. Thus numbed, the next time Alice and Mallory interact, Mallory can betray her just a little harder, and if Alice sucks it up again, the cycle is poised to continue. Over time, as long as Alice cooperates, Mallory can shift Alice’s Overton window of tolerable behavior to ignore all kinds of abuses.
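For the programmers in the audience, the Alice-and-Mallory ratchet can be sketched as a toy iterated game. The thresholds and multipliers below are arbitrary assumptions of mine, not anything measured; the point is only the qualitative difference between objecting and normalizing:

```python
# Toy model of boundary erosion. Mallory probes Alice's boundary each round.
# If a violation goes unanswered, Alice normalizes it (her tolerance ratchets
# up to cover it) and Mallory escalates; if Alice objects, Mallory retreats.
# All numbers here are illustrative assumptions, not data.
def simulate(rounds, alice_objects):
    tolerance = 1.0   # worst behavior Alice currently lets slide
    probe = 0.5       # severity of Mallory's current defection
    for _ in range(rounds):
        if probe > tolerance and alice_objects:
            probe *= 0.5                        # pushback: Mallory backs off
        else:
            tolerance = max(tolerance, probe)   # Alice normalizes the probe
            probe *= 1.5                        # Mallory escalates
    return tolerance

print(simulate(10, alice_objects=True))    # boundary holds at its original 1.0
print(simulate(10, alice_objects=False))   # tolerance ratchets far past 1.0
```

With objections, Mallory oscillates around the original boundary and never moves it; without them, Alice’s Overton window of tolerable behavior grows without limit.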

Given this, it’s tempting to attempt to define a rigid, comprehensive system of standards and defend them against all comers. This is most of what Honeywell proposes in her set of solutions for preventing “rock star” narcissists from taking up all the oxygen in a community. Her recommendations sound like good ideas on the surface. However, any mildly talented sociopath will have no problem end-running around all of them, usually by co-opting or distracting the organization’s leadership. As I’ve said before, sociopath strategies are battle-hardened, and some of them are effective counters to several of Honeywell’s suggestions at once. I’ve condensed these into the table below.

Defense: Have explicit rules for conduct and enforce them for everyone. Assume that harassment reports are true and investigate them thoroughly. Watch for smaller signs of boundary pushing and react strongly. Call people out for monopolizing attention and credit. Enforce strict policies around sexual or romantic relationships within power structures.

Counterattack: The sociopath “befriends” people with decision-making authority and/or social power. Those people make exceptions for the sociopath: rules turn out not to apply to him/her after all; investigations of the sociopath’s behavior are completely half-assed; people who Matter don’t react to boundary-pushing or spotlight-hogging and thus others conclude they won’t receive social support if they call it out; everyone studiously ignores that the sociopath and X are romantically involved; &c.

Defense: Make it easy for victims to find and coordinate with each other.

Counterattack: Sociopath gets a “friend” to join the affinity group and report back with information, sow misinformation and distrust, or both.

Defense: Build a “deep bench” of talent at every level of your organization. Build in checks for “failing up”. Distribute the “keys to the kingdom”.

Counterattack: Sociopath interferes with HR / hiring / administration, making sure that “random” crises keep them so busy that no one has time to make sure these things are getting done. Sociopath becomes the irreplaceable person.

Defense: Flatten the organizational hierarchy as much as possible.

Counterattack: Tor’s organizational hierarchy was already flat, but this didn’t help them until Shari Steele came on board. Jake had co-opted leadership so thoroughly that they retaliated against Karen Reilly for reporting his behavior.

Defense: Avoid organizations becoming too central to people’s lives.

Counterattack: Sociopath slowly inculcates an atmosphere of paranoia: those outside the organization can’t be trusted. Often involves crisis-manufacturing. This one is really easy to pull off when everyone is on Slack or IRC.

Defense: Don’t create environments that make boundary violations more likely.

Counterattack: Sociopaths can organize these kinds of activities perfectly well on their own.

When I read Honeywell’s piece, I see a valiant effort to help her social-justice activist communities transition from a communal, socialized-mind-oriented mode of organization to a systematic, self-authoring-mind-oriented one. It’s a pity it’s doomed. Making sure that everyone in a group publicly identifies as a feminist, an anti-racist, or any other kind of do-gooder — that everyone sends all the right signals — was never enough to keep sufficiently subtle defectors out. This is the critical failure of the communal mode once any organization gets large enough. It’s great that identity-politics-oriented groups are finally starting to wake up to this fact.

Unfortunately, since sociopathy grew up as part of humanity, that means it evolved right alongside the very same efforts to develop comprehensive social systems that are breaking down on us now. Today’s sociopathy is a sociopathy that has learned to use our systems against us. We can learn to recognize this happening, but in order to do that, we have to be able to step outside the systems that we cherish the most and think about them like an adversary.

For example, it’s easy to think “okay, our group doesn’t like sexual predators, so we’ll ban sexual behavior within the group, and while we’re at it, we’ll also ban alcohol, since drinking impairs people’s decision-making.” On the surface, this sounds likely to be effective: it’s a bright line, right? Remember, though: “Here’s something you want, and here are some rules telling you that you can’t get it. Can you find a loophole in the rules?” Puritanical adherence to an object-level system creates exploitation vectors for bad actors. In an environment where having some trait T is a sin, there’s a strong incentive to appear non-T-like. This gives bad actors a new handle for gaining social control: the threat of impropriety. If everyone in a group is a convincing rumor or a planted bottle away from being ostracized, anyone without a conscience suddenly has an incredibly powerful weapon for undermining or getting rid of people who might inconvenience them by, say, not letting them get their way. It becomes even more powerful in groups where many members have low emotional intelligence, like technical groups. For people who score highly on measures of Machiavellian tendencies, high emotional intelligence is a force multiplier, as they’re able to use their emotional intelligence instrumentally to further their manipulative goals. In a low-emotional-intelligence environment, this is like shooting fish in a barrel.

A sufficiently manipulative person can even convince people to act in ways that betray their own consciences, as happened with the Tor organizer I mentioned before. It’s great to have standards, except when nobody’s willing to act on them. Even when you can’t count on your community to uphold the standards it’s adopted, though — or its members to act on their individual principles — you can always uphold your own.

That’s the grim reality of a world in which we have to trust other people: sometimes they let us down. No matter how watertight an organization’s Code of Conduct, if leadership wimps out on enforcing it — or only enforces it selectively — the code is worth less than the paper it’s written on. As an individual, about the only thing you can do about that is endeavor to spend your time around people with backbones. For all the debate that goes into their wording, rules and laws are abstract things which cannot act on their own. No matter how comprehensive the rules are, people will ultimately do whatever the hell they think they can get away with. Like ants, we operate in concert, but each of us acts alone.

The problem we face, then, is: in the face of Conway’s law and a structure prone to self-organized criticality, can we construct a stigmergy that resists bad actors without the high cost of large avalanches?

Mark Manson has noticed this too, from another direction:

In the attention economy, people are rewarded for extremism. They are rewarded for indulging their worst biases and stoking other people’s worst fears. They are rewarded for portraying the world as a place that is burning to the ground, whether it’s because of gay marriage, or police violence, or Islamic terrorism, or low interest rates. The internet has generated a platform where apocalyptic beliefs are celebrated and spread, and moderation and reason is something that becomes too arduous and boring to stand.

And this constant awareness of every fault and flaw of our humanity, combined with an inundation of doomsayers and narcissistic nihilists commanding our attention space, is what is causing this constant feeling of a chaotic and insecure world that doesn’t actually exist.

He also gets that the criticality is self-organized:

It’s us. We are going crazy. Each one of us, individually, capsized in the flood of negativity, we are ready to burn down the very structures on which the most successful civilizations in human history have been built.

Indulging our worst biases results, predictably, in error due to bias: if we aim at the wrong targets, we will hit the wrong targets. But our models of the world can also suffer from another class of errors: error due to variance. Err too far on the side of variance, and you’ll overfit to the random noise in your training set instead of the signal. Although there is always a tradeoff between bias and variance, the two are partially independent. This means that a model can simultaneously overfit due to hypersensitivity, and underfit due to bad assumptions.
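A quick sketch of that distinction, with entirely made-up data: fit polynomials of increasing degree to noisy samples of a sine wave. A degree-0 fit errs on the side of bias (its assumptions are too crude to capture the signal), while a degree-9 fit on twenty points errs on the side of variance (it chases the noise). The numbers and model choices are mine, purely for illustration:

```python
import numpy as np

# Illustrative sketch of the bias/variance tradeoff: fit polynomials of
# increasing degree to noisy samples of sin(2x), then compare training
# error with error on held-out data.
rng = np.random.default_rng(0)

def make_data(n):
    x = np.sort(rng.uniform(0.0, 3.0, n))
    y = np.sin(2 * x) + rng.normal(0.0, 0.3, n)
    return x, y

x_train, y_train = make_data(20)
x_test, y_test = make_data(200)

def errors(degree):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

# Degree 0 underfits: high bias, wrong assumptions about the target.
# Degree 9 overfits: high variance, it memorizes noise in 20 points.
for degree in (0, 3, 9):
    train_mse, test_mse = errors(degree)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Training error falls monotonically as the degree rises (each polynomial family contains the smaller ones), which is exactly why training error alone can’t tell you whether you’ve learned the signal or the noise.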

I wrote about this last year in terms of precision and recall, another pair of properties we use to evaluate models in machine learning. As that post describes, the Schroedinger’s Rapist model is a high-bias, high-variance model with no false negatives but a brutally large number of false positives. Its one big advantage, when it comes to the meme’s own evolutionary fitness, is that people who adopt it feel like they have made themselves safer by doing so. “Trust no one of unavoidably broad class X” is another one of those ideas that sounds feasible (if draconian) on the surface — but it’s underspecified. Trust, in practice, is ditransitive: you trust someone with something. When that theme, the thing you’re trusting them with, is underspecified, that’s where a bad actor can nudge you toward redefining your boundaries farther and farther backward. “Trust everyone who signals Y” is equally underspecified, but even worse, because in a world where social media makes long-range (in graph-distance terms, not physical distance) signaling nearly free, a willful liar can find a new sucker every second. “You really think someone would do that? Just go on the internet and tell lies?” Look, if you hadn’t decided Reddit had cooties, you would have incorporated that meme into your thinking a decade ago and we wouldn’t need to have this conversation.
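To see the false-positive flood in numbers (invented ones, for illustration only): a “flag everyone” rule achieves perfect recall by construction, while its precision collapses to the base rate.

```python
# Sketch of why "trust no one of class X" is high-recall, low-precision.
# The population and base rate are made up for illustration.
def precision_recall(y_true, y_pred):
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum(p and not t for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 1000 people, of whom 10 (a 1% base rate) are genuinely dangerous.
y_true = [True] * 10 + [False] * 990
flag_everyone = [True] * 1000   # "trust no one": zero false negatives

precision, recall = precision_recall(y_true, flag_everyone)
print(precision, recall)   # 0.01 1.0
```

Nine hundred and ninety false alarms per ten real threats: the model never misses, and is almost never right.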

Ever found yourself realizing that things have gone too far, but can’t quite piece together how they got to be so bad? Often that’s the result of not recognizing your own boundaries in the first place. If you haven’t defined them, or are willing to let people get away with infringing on them in the interest of not rocking the boat socially, bad actors are happy to step in and define them — for their benefit, not yours. Sometimes, however, it’s the result of not recognizing boundary-pushing behavior, or not having a model for what that looks like. For a lot of people, boundaries are like deadlines: they only notice them from the whistling sound they make as they fly by.

I’m not saying never to re-evaluate your boundaries. Rather, never dial them back under duress, or in any other kind of stressful situation, for that matter. Do your reassessing afterward. Boundary-pushing is a dominance game in which merely feeling safe is tantamount to pissing yourself to keep warm. If your goal is to be safe, rather than to feel like everything is fine right up until your house burns down, there are two skills you have to learn. The first is to recognize dominance games in progress, and the second is to either exit or flip the script as the situation and your personal capacities call for it.

“Apply a particular set of object-level boundaries” can’t solve the problem of “people are often bad at holding their personal ground, especially in the moment.” If your boundaries are all object-level, a bad actor has only to set up a forced-error situation by incentivizing you to defend one at the expense of another. If you value your friends, s/he can use them as human shields, involving them such that drawing attention to the sociopath’s behavior brings harm to your friend. If you value an ideology, s/he can use it as a shield, associating him/herself with it so publicly and strongly that people fear that speaking up about the sociopath will “damage the brand.” The foolish man builds his house upon the nouns, and the clever sociopath turns those nouns into the walls of a silo.

The wise man builds his house upon the verbs: the purpose of boundaries is to protect your freedom of action. Action potential, like attention, is a finite resource, and everybody wants yours. Giving it away for free to the outrage of the day leaves you impoverished not only when it comes to local conflicts, but when it comes to tending your own garden. If you’re going to be the change you want to see in the world, you have to pick your battles. If you want to actually see some change, you’re going to have to make it locally.

Scope insensitivity comes into play here, too. We say we’re willing to dedicate value (i.e., pay) to prevent harm, but our instincts for estimating how much harm should correspond to how much value are wildly off. When Desvousges et al. asked subjects how much they would spend to prevent migratory birds from drowning in oil-polluted ponds, on average the subjects were willing to dedicate less money to rescuing 20,000 birds ($78) than they were to rescuing 2,000 birds ($80). On a more timely note, when Bloomberg polled 749 likely voters in the 2016 election about the extent to which various actions of Donald Trump’s bothered them — “botheredness” is effectively a proxy variable for gut-check estimate of harm — only 44% were “bothered a lot” by the fraudulent Trump University, and 26% “bothered not at all.” By contrast, 62% found Trump mocking a disabled reporter very bothersome, and only 15% didn’t care. Thousands of little, far-away, invisible people got scammed, yet our instincts tell us an insult to one person we’re able to see is a greater harm. Once again, instinct is full of shit.

There’s no happy ending here. Maybe a few people will read this series and hit upon some local changes they can make to improve the stability of their own environment, but the pessimist in me isn’t about to put money on it. The most local environment is the one inside your own head, and if you’re content to feel like you’re on the right side of history even as it ends around you, nothing I have to say can help you. Satisfaction is itself an attractor, and when we fail to find satisfaction outside ourselves, we retreat to look for it within, even when what we find is nothing more than a tasty ligand. All the while, the sand keeps pouring.

Lots of essays end with an exhortation that some choice is yours. This time it’s even true. You can’t choose universal properties, but you can always choose where to expend your attention and effort. This has always been true, despite all the external demands for your resources. If you want to build something better, look directly around yourself first, and start there.


One parting observation:

Nearly everything I’ve said here also applies to the defining egregores of a two-party system.

Pleasant dreams.


Works cited and recommended reading:

Freyd, Jennifer, and Pamela Birrell. Blind to Betrayal: Why We Fool Ourselves We Aren’t Being Fooled.

Hintjens, Pieter. The Psychopath Code.

Issendai. “Sick Systems: How to Keep Someone With You Forever” et seq.:

“On Whittling Yourself Away”

“Qualities That Keep You in a Sick System”

McGregor, Jane and Tim. “Empathic people are natural targets for sociopaths — protect yourself.”

McGregor, Jane and Tim. The Empathy Trap: Understanding Antisocial Personalities.

Simon, George. In Sheep’s Clothing: Understanding and Dealing with Manipulative People.

U.S. Department of Defense Standards of Conduct Office. Encyclopedia of Ethical Failure.

Wallisch, Pascal. “Psychopaths in our midst — what you should know.”


Too Late for the Pebbles to Vote, Part 2

Previously, we discussed how sociopaths embed themselves into formerly healthy systems. Now let’s talk about what happens when those systems undergo self-organized criticality.

Consider a pile of sand. Trickle more sand onto it from above, and eventually it will undergo a phase transition: an avalanche will cascade down the pile.

As the sand piles up, the slope at different points on the surface of the pile grows steeper, until it passes the critical point at which the phase transition takes place. The trickle of sand, whatever its source, is what causes the dynamical system to evolve, driving the slope ever back up toward the critical point. Thanks to that property, the critical point is also an attractor. However, crucially, the overall order evident in the pile arises entirely from local interactions among grains of sand. Criticality events are thus self-organized.

Wars are self-organized criticality events. So are bank runs, epidemics, lynchings, black markets, riots, flash mobs, neuronal avalanches in your own brain’s neocortex, and evolution, as long as the metaphorical sand keeps pouring. Sure, some of these phenomena are beneficial — evolution definitely has a lot going for it — but they’re all unpredictable. Since humans are arguably eusocial, it stands to reason that frequent unpredictability in the social graphs we rely on to be human is profoundly disturbing. We don’t have a deterministic way to model this unpredictability, but wrapping your head around how it happens does make it a little less unsettling, and can point to ways to route around it.

A cellular automaton model, due to Bak, Tang, and Wiesenfeld, is the classic example of self-organized criticality. The grid of a cellular automaton is (usually) a directed graph where every vertex has out-degree 4 — each cell has four neighbors — but the model generalizes just fine to arbitrary directed graphs. You know, like social graphs.
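The sandpile dynamics are simple enough to sketch in a few lines of Python. This is a toy version of the Bak–Tang–Wiesenfeld model on a square grid; the grid size, step count, and toppling threshold of four grains are the standard toy parameters, not anything specific from the original paper:

```python
import random

def topple(grid, size):
    """Relax the pile: any cell holding 4+ grains sheds one grain to each
    of its four neighbors (grains that fall off the edge are lost).
    Returns the number of topplings, i.e. the avalanche size."""
    avalanche = 0
    unstable = [(r, c) for r in range(size) for c in range(size)
                if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:  # may already have relaxed below threshold
            continue
        grid[r][c] -= 4
        avalanche += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return avalanche

def drive(steps=10000, size=20, seed=42):
    """Trickle grains onto random cells, one at a time, recording the
    avalanche each grain triggers."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(steps):
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        sizes.append(topple(grid, size))
    return sizes

sizes = drive()
print(max(sizes))  # one grain, dropped at the wrong moment, moves a lot of sand
```

Once the pile is driven near criticality, most grains trigger nothing, but the occasional grain sets off a cascade spanning much of the grid: the signature heavy-tailed avalanche distribution of self-organized criticality.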

Online social ties are weaker than meatspace ones, but this has the interesting side effect of making the online world “smaller”: on average, fewer degrees separate two arbitrary people on Facebook or Twitter than two arbitrary people offline. On social media, users choose whether to share messages from one to another, so any larger patterns in message-passing activity are self-organized. One such pattern, notable enough to have its own name, is the internet mob. The social graph self-reorganizes in the wake of an internet mob. That reorganization is a phase transition, as the low become high and the high become low. But the mob’s target’s social status and ties are not the only things that change. Ties also form and break between users participating in, defending against, or even just observing a mob as people follow and unfollow one another.
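That message-passing dynamic is easy to caricature in code. Below is a minimal independent-cascade sketch — the six users, the graph shape, and the sharing probability are all made up for illustration: two tightly-knit clusters joined by a single weak tie, and a message that escapes its home cluster through that tie.

```python
import random

def cascade(graph, seeds, p_share, seed=0):
    """Independent-cascade sketch: each user who receives a message shares
    it along each outgoing tie with probability p_share. Returns the set
    of users the message eventually reaches."""
    random.seed(seed)
    reached = set(seeds)
    frontier = list(seeds)
    while frontier:
        user = frontier.pop()
        for neighbor in graph.get(user, []):
            if neighbor not in reached and random.random() < p_share:
                reached.add(neighbor)
                frontier.append(neighbor)
    return reached

# Two tightly-knit clusters joined by one weak tie (hypothetical users).
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "x"],  # c -> x is the weak tie
    "x": ["y", "z"], "y": ["x", "z"], "z": ["x", "y"],
}

local = cascade(graph, {"a"}, p_share=1.0)
print(sorted(local))  # the single weak tie carries the message to the whole graph
```

Crank the sharing probability up, as outrage tends to do, and the weak tie stops insulating the clusters from each other: one share in the wrong place reaches everyone.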

Some mobs form around an explicit demand, realistic or not — The Colbert Report was never in any serious danger of being cancelled — while others identify no extrinsic goals, only effects on the social graph itself. Crucially, however, both forms restructure the graph in some way.

This structural shift always comes with attrition costs. Some information flows break and may never re-form. The side effects of these local interactions are personal, and their costs arise from the idiosyncratic utility functions of the individuals involved. Often this means that the costs are incomparable. Social media also brings the cost of engagement way down; as Justine Sacco discovered, these days it’s trivial to accuse someone from halfway around the planet. But it’s worse than that; even after a mob has become self-sustaining, more people continue to pile on, especially when messages traverse weak ties between distant groups and kick off all-new avalanches in new regions of the graph.


Members of the black group are strongly connected to other members of their group, and likewise for the dark gray and white groups. The groups are interconnected by weak, “long-distance” ties. Reproduced from The Science of Social 2 by Dr. Michael Wu.

Remember Conway’s law? All systems copy the communication structures that brought them into being. When those systems are made of humans, that communication structure is the social graph. This is where that low average degree of separation turns out to be a problem. By traversing weak ties, messages rapidly escape a user’s personal social sphere and propagate to ones that user will never intersect. Our intuitions prepare us for a social sphere of about a hundred and fifty people. Even if we’re intellectually aware that our actions online are potentially visible to millions of people, our reflex is still to act as if our messages only travel as far and wide as in the pre-social-media days.

This is a cognitive bias, and there’s a name for it: scope insensitivity. Like the rabbits in Watership Down, able to count “one, two, three, four, lots,” beyond a certain point we’re unable to appreciate orders of magnitude. Furthermore, weak long-distance ties don’t give us much visibility into the size of the strongly-tied subgraphs we’re tapping into. Tens of thousands of individual decisions to shame Justine Sacco ended in her being the #1 trending topic on Twitter — and what do you suppose her mentions looked like? Self-organized criticality, with Sacco at ground zero. Sure, #NotAllRageMobs reach the top of the trending list, but they don’t have to go that far to have significant psychological effect on their targets. (Sociologist Kenneth Westhues, who studies workplace mobbing, argues that “many insights from [the workplace mobbing] literature can be adapted mutatis mutandis to public mobbing in cyberspace,” and I agree.)

In the end, maybe the best we can hope for is user interfaces that encourage us to sensitize ourselves to the scope of our actions — that is to say, to understand just how large a conversation we’re throwing our two cents into. Would people refrain from piling on to someone already being piled on if they knew just how big the pile already was? Well, maybe some would. Some might do it anyway, out of malice or out of virtue-signaling. As Robert Kegan and Lisa Laskow Lahey point out in Immunity to Change, for many people, their sense of self “coheres by its alignment with, and loyalty to, that with which it identifies.” Virtue signaling is one way people express that alignment and loyalty to groups they affiliate with, and these days it’s cheap to do that on social media. Put another way, the mobbings will continue until the perverse incentives improve. There’s not much any of us can individually do about that, apart from refraining from joining in on what appears to be a mob.

That’s a decision characteristic of what Kegan and Lahey call the “self-authoring mind,” contrasted with the above-mentioned “socialized mind,” shaped primarily “by the definitions and expectations of our personal environment.” Not to put too fine a point on it, over the last few years, my social media filter bubble has shifted considerably toward the space of people who independently came to a principled stance against participation in mobs. However, given that the functional programming community, normally a bastion of cool reason and good cheer, tore itself apart over a moral panic just a few months ago, it’s clear that no community is immune to flaming controversy. Self-organized criticality means that the call really is coming from inside the house.

Here’s the moral question that not everyone answers the same way I do, which has led to some restructuring in my region of the graph, a local phase transition: when is it right to throw a handful of sand on the pile?

Some people draw a bright line and say “never.” I respect that. It is a consistent system. It was, in fact, my position for quite some time, and I can easily see how that comes across as throwing down for Team Not Mobbing. But one of the implications of being a self-authoring system is that it’s possible to revisit positions at which one has previously arrived, and, if necessary, rewrite them.

So here’s the core of the conundrum. Suppose you know of some information that’s about to go public. Suppose you also expect, let’s say to 95% confidence, that this event will kick off a mob in your immediate social sphere. An avalanche is coming. Compared to it, you are a pebble. The ground underneath and around you will move whether you do anything or not. What do you do?

I am a preference consequentialist, and this is a consequentialist analysis. I won’t be surprised if how much a person agrees with it correlates with how much of a consequentialist they are. I present it mainly in the interest of braindumping the abstractions I use to model these kinds of situations, which is as much in the interest of information sharing as anything else. There will be mathematics.

I am what they call a “stubborn cuss” where I come from, and if my only choices are to jump or be pushed, my inclination is to jump. Tor fell down where organizational accountability was concerned, at first, and as Karen Reilly’s experience bears out, had been doing so for a while. So that’s the direction I jumped. To be perfectly honest, I still don’t have anything resembling a good sense of what the effects of my decision were versus those of anyone else who spoke up, for whatever reason, about the entire situation. Self-organized chaotic systems are confounding like that.

If you observe them for long enough, though, patterns emerge. Westhues has been doing this since the mid-1990s. He remarks that “one way to grasp what academic mobbing is is to study what it is not,” and lists a series of cases. “Ganged up on or not,” he concludes of a professor who had falsified her credentials and been the target of student protests about the quality of her teaching, “she deserved to lose her job.” Appelbaum had already resigned before the mob broke out. Even if the mob did have an extrinsic demand, his resignation couldn’t have been it, because that was already over and done with.

Okay, but what about the intrinsic outcomes, the radical restructuring of the graph that ensued as the avalanche settled? Lovecruft has argued that removing abusers from opportunities to revictimize people is a necessary step in a process that may eventually lead to reconciliation. This is by definition a change in the shape of the social graph. Others counter that this is ostracism, and, well, that’s even true: that’s what it looks like when a whole lot of people decide to adopt a degrees-of-separation heuristic, or to play Exit, all at once.

Still others argue that allegations of wrongdoing should go before a criminal court rather than the court of public opinion. In general I agree with this, but when it comes to longstanding patterns of just-this-side-of-legally-actionable harm, criminal courts are useless. A bad actor who’s clever about repeatedly pushing ever closer to that line, or who crosses it but takes care not to leave evidence that would convince a jury beyond a reasonable doubt, is one who knows exactly what s/he’s doing and is gaming the system. When a person’s response to an allegation boils down to “no court will ever convict me,” as Tor volunteer Franklin Bynum pointed out, that sends a game-theoretically meaningful signal.

Signaling games are all about inference and credibility. From what a person says, what can you predict about what actions they’ll take? If a person makes a particular threat, how likely is it that they’ll be able to make good on it? “No court will ever convict me” is actually pretty credible when it comes to a pattern of boundary-violating behavior that, in many cases, indeed falls short of prosecutability. (Particularly coming from someone who trades on their charisma.) Courts don’t try patterns of behavior; they try individual cases. But when a pattern of boundary-pushing behavior is the problem, responding to public statements about that pattern with “you’ll never prove it” is itself an instance of the pattern. As signals go, to quite a few people, it was about the loudest “I’m about to defect!” Appelbaum could have possibly sent in a game where the players have memory.

Courts don’t try patterns of behavior, but organizations do. TQ and I once had an incredibly bizarre consulting gig (a compilers consulting gig, which just goes to show you that things can go completely pear-shaped in bloody any domain) that ended with one of the client’s investors asking us to audit the client’s code and give our professional opinion on whether the client had faked a particular demonstration. Out of professional courtesy, we did not inquire whether the investor had previously observed or had suspicions about inauthenticity on the client’s part. Meanwhile, however, the client was simultaneously emailing conflicting information to us, our business operations partner, and the investor — with whom I’d already been close friends for nearly a decade — trying to play us all off each other, as if we didn’t all have histories of interaction to draw on in our decision-making. “It’s like he thinks we’re all playing classical Prisoner’s Dilemma, while the four of us have been playing an iterated Stag Hunt for years already,” TQ observed.
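TQ’s quip has a precise payoff-matrix reading. In a one-shot Prisoner’s Dilemma, defection is the best reply to anything the other player does; in a Stag Hunt, cooperation is the best reply to cooperation, which is why a shared history of successful coordination changes the game. A sketch, with conventional textbook payoffs (the specific numbers are illustrative, not canonical):

```python
# Payoffs as (row player, column player); "C" = cooperate/stag, "D" = defect/hare.
PRISONERS_DILEMMA = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
STAG_HUNT = {
    ("C", "C"): (4, 4), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (3, 3),
}

def best_response(game, opponent_move):
    """The row player's best reply to a fixed opponent move."""
    return max("CD", key=lambda my_move: game[(my_move, opponent_move)][0])

# In the Prisoner's Dilemma, defecting is best no matter what the other does...
print(best_response(PRISONERS_DILEMMA, "C"), best_response(PRISONERS_DILEMMA, "D"))  # D D
# ...but in the Stag Hunt, cooperating is the best reply to a cooperator.
print(best_response(STAG_HUNT, "C"), best_response(STAG_HUNT, "D"))  # C D
```

In the Stag Hunt, mutual cooperation is itself an equilibrium; players who already trust each other to hunt the stag have no incentive to be played off against one another.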

Long story short (too late), the demo fell shy of outright fraud, but the client’s promises misrepresented what the code actually did to the point where the investor pulled out. We got a decent kill fee out of it, too, and a hell of a story to tell over beers. When money is on the line, patterns of behavior matter, and I infer from the investor’s action that there was one going on there. Not every act of fraud — or force, for that matter — rises to the level of criminality, but a pattern of repeated sub-actionable force or fraud is a pattern worth paying attention to. A pattern of sub-actionable force or fraud coupled with intimidation of people who try to address that pattern is a pattern of sociopathy. If you let a bad actor get away with “minor” violations, like plagiarism, you’re giving them license to expand that pattern into other, more flagrant disregard of other people’s personhood. “But we didn’t think he’d go so far as to rape people!” Of course you didn’t, because you were doing your level best not to think about it at all.

Investors have obvious strong incentives to detect net extractors of value accurately and quickly. Another organization with similarly strong incentives, believe it or not, is the military. Training a soldier isn’t cheap, which is why the recruitment and basic training process aims to identify people who aren’t going to acquire the physical and mental traits that soldiering requires and turn them back before their tenure entitles them to benefits. As everyone who’s been through basic can tell you, one blue falcon drags down the whole platoon. Even after recruits have become soldiers, though, the military still has strong incentives to identify and do something about serial defectors. Unit cohesion is a real phenomenon, for all the disagreement on how to define it, and one or a few people preying on the weaker members of a unit damages the structure of the organization. The military knows this, which is the reason its Equal Opportunity program exists: a set of regulations outlining a complaint protocol, and a cadre trained and detailed to handle complaints of discriminatory or harassing behavior. No, it’s not perfect, by any stretch of the imagination. The implementation of any human-driven process is only as rigorous as the people implementing it, and as we’ve already discussed, subverting human-driven processes for their own benefit is a skill at which sociopaths excel. However, like any military process, it’s broken down into bite-sized pieces for every step of the hierarchy. Some of them are even useful for non-hierarchical structures.

Fun fact: National Guard units have EO officers too, and I was one. Again and again during the training for that position, they hammer on the importance of documentation. We were instructed to impress that not just on people who bring complaints, but on the entire unit before anyone has anything to bring a complaint about. Human resources departments will tell you this too: document, document, document. This can be a difficult thing to keep track of when you’re stuck inside a sick system, a vortex of crisis and chaos that pretty accurately describes the internal climate at Tor over the last few years. And, well, the documentation suffered, that’s clear. But now there’s some evidence, fragmentary as it may be, of a pattern of consistent and unrepentant boundary violation, intimidation, bridge-burning, and self-aggrandizement.

Even when the individual acts that make up a pattern are calculated to skirt the boundaries of actionable behavior, military commanders have explicit leeway to respond to the pattern with actions up to and including court-martial, courtesy of the general article of the Uniform Code of Military Justice:

Though not specifically mentioned in this chapter, all disorders and neglects to the prejudice of good order and discipline in the armed forces, all conduct of a nature to bring discredit upon the armed forces, and crimes and offenses not capital, of which persons subject to this chapter may be guilty, shall be taken cognizance of by a general, special, or summary court-martial, according to the nature and degree of the offense, and shall be punished at the discretion of that court.

It’s the catch-all clause that Kink.com installed a bunch of new rules in lieu of, an exception funnel that exists because sometimes people decide that having one is better than the alternative. Realistically, any form of at-will employment implicitly carries this clause too. If a person can be fired for no reason whatsoever, they can certainly be fired for a pattern of behavior. Companies have this option; organizations that don’t maintain contractual relationships with their constituents face paths that are not so clear-cut, for better or for worse.

But I take my cues about exception handling, as I do with a surprisingly large number of other life lessons, from the Zen of Python:

Errors should never pass silently.
Unless explicitly silenced.
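
That couplet has a concrete idiom in Python itself: contextlib.suppress names exactly which exception you are choosing to silence, and lets everything else propagate loudly. A minimal illustration (the function name and inputs are mine, for the example):

```python
import contextlib

def parse_quietly(text):
    """Explicitly silence one anticipated error; anything unexpected
    still raises, and so still gets noticed."""
    result = None
    with contextlib.suppress(ValueError):  # the silencing is named and scoped
        result = int(text)
    return result

print(parse_quietly("42"))    # 42
print(parse_quietly("oops"))  # None: the ValueError was explicitly silenced
```

A bare `except: pass`, by contrast, swallows every error, anticipated or not, and hides whatever pattern they might have formed.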

When a person’s behavior leaves a pattern of damage in the social fabric, that is an exception going silently unhandled. The whisper network did not prevent the damage that has occurred. It remains to be seen what effect the mob-driven documentation will have. Will it achieve the effect of warning others about a recurring source of error (I suppose nominative determinism wins yet again), or will the damaging side effects of the phase transition prove too overwhelming for some clusters of the graph to bear? Even other consequentialists and I might part ways here, because of that incomparability problem I mentioned earlier. I don’t really have a good answer to that, or to deontologists or virtue ethicists either. At the end of the day, I spoke up because of two things: 1) I knew that several of the allegations were true, and 2) if I jumped in front of the shitstorm and got my points out of the way, it would be far harder to dismiss as some nefarious SJW plot. Sometimes cross-partisanship actually matters.

I don’t expect to change anyone’s mind here, because people don’t develop their ethical principles in a vacuum. That said, however, situations like these are the ones that prompt people to re-examine their premises. Once you’re at the point of post-hoc analysis, you’re picking apart the problem of “how did this happen?” I’m more interested in “how do we keep this from continuing to happen, on a much broader scale?” The threat of mobs clearly isn’t enough. Nor would I expect it to be, because in the arms race between sociopaths and the organizations they prey on, sociopath strategies evolve to avoid unambiguous identification and thereby avoid angry eyes. “That guy fucked up, but I won’t be so sloppy,” observes the sociopath who’s just seen a mob take another sociopath down. Like any arms race, it is destined to end in mutually assured destruction. But as long as bad actors continue to drive the sick systems they create toward their critical points, there will be avalanches. Whether you call it spontaneous order or revolutionary spontaneity, self-organized criticality is a property of the system itself.

The only thing that can counteract self-organized aggregate behavior is different individual behavior that aggregates into a different emergent behavior. A sick system self-perpetuates until its constituents decide to stop constituting it, but just stopping a behavior doesn’t really help you if doing so leaves you vulnerable. As lousy of a defense as “hunker down and hope it all goes away soon” is over the long term, it’s a strategy, which for many people beats no strategy at all. It’s a strategy that increases the costs of coordination, which is a net negative to honest actors in the system. But turtling is a highly self-protective strategy, which poses a challenge: any proposed replacement strategy that lowers the cost of coordination among honest actors also must not be significantly less self-protective, for idiosyncratic, context-sensitive, and highly variable values of “significantly.”

I have some thoughts about this too. But they’ll have to wait till our final installment.
