Autism and war crimes: Turing's moral character in The Imitation Game

Last night I attended a packed screening of The Imitation Game. My thoughts on the movie are below, but tl;dr: I thought the film was great. If you have any interest in mathematics, cryptography, or the history of computing, you will love this film. But this isn't just a movie for nerds. The drama of the wartime setting and the arresting performance from Cumberbatch make this film entertaining and accessible to almost everyone, despite the fact that it's a period war drama with almost no action or romance that doesn't pass the Bechdel test.

Of course, as a philosopher I have questions and criticisms. But don't let that dissuade you: go see this film. Turning history's intellectual heroes into media's popular heroes is a trend I'd like to reinforce.

Turing's story is timely and central for understanding the development of our world. I'm happy to see his work receive the publicity and recognition it deserves. Turing is something of a hero of mine; I spent half my dissertation wrestling with his thoughts on artificial intelligence, and I've found a way to work him into just about every class I've taught for the last decade. I know many others feel just as passionately (or more!) about his life and work. I have been looking forward to this film for a long time and my expectations were high. I was not disappointed. The Oscar buzz around this film is completely appropriate.

Spoilers will obviously follow. There are minor inaccuracies in the film: Knightley mispronounces Euler's name; Turing's paper is titled "Computing Machinery and Intelligence", not "The Imitation Game"; Turing's bombe machine (a descendant of the Polish bomba) was eventually named Victory, never Christopher. But I'm not so interested in that sort of critique.

I'd instead like to talk about two subtle but important themes in the film: first, Turing's eccentric behavior is depicted in such a way as to strongly suggest that he was on the autism spectrum. Second, the film raises issues of Turing's involvement and moral culpability in the war that are never resolved. Although I think the film (in both the script and Cumberbatch's performance) successfully incorporates these themes into a convincing character, I'm not sure how much this character reflects the historical Turing.

Just to be clear, there's more to the movie than what I'm focusing on here, and in any case I'm not the kind of movie snob that would complain about embellishing the drama a bit for the screen. But I was already pretty familiar with the story of Turing before seeing the film, and these two themes stood out as deserving further investigation.

Turing and Autism

The film shows Turing, both as a child and an adult, being told that he's different from other people, and that these differences make him unique and capable of doing extraordinary things. Turing explicitly synthesizes this lesson during the film's very brief discussion of artificial intelligence. In the course of a police interrogation, Turing argues that we talk of "thinking" in a way that admits of great differences between thinkers, and may even extend to the thinking performed by copper wire and steel. I'll say more about this scene at the end. The central lesson of the scene, that Turing is different from 'normal' people, is also found in the film's repeated and somewhat clunky motto: "Sometimes it is the people no one imagines anything of who do the things that no one can imagine." The phrase functions like the "with great power comes great responsibility" line in the Turing origin mythology the film portrays, told to him as a child and repeated by him to Knightley's Clarke as an adult. The superheroic setup of Turing's genius makes his untimely end all the more tragic.

But Turing's abilities aren't merely depicted as extraordinary or eccentric; they are depicted as explicitly cognitively atypical. As a child Turing is shown obsessively separating his peas and carrots into neat piles, a behavior for which he is bullied by his peers. He avoids eye contact and refuses to engage in pleasantries with his colleagues, resulting in many awkward social situations. He has few friends and difficulty making new ones. He doesn't get jokes. He works obsessively and in isolation, seeming to care more for his machines than his own health. He stutters frequently, especially when he gets passionate. I was watching the movie with a psychologist who agreed that the performance in the film was suggestive of autism spectrum behaviors.

I had never considered whether Turing was autistic before seeing the film. Of course autism was not diagnosed in Turing's time, but some research online shows that several people have attempted retrospective diagnoses of Turing. For instance, this book devotes a chapter to Turing as a case of Asperger Syndrome. This article in the Guardian from 2012 about Turing's childhood and family life states very matter-of-factly that Turing would be diagnosed with Asperger Syndrome today. We should be clear that with the DSM-5 Asperger Syndrome has been folded into the autism spectrum, so no one has been diagnosed with Asperger Syndrome since 2013. In any case, both sources seem rather light on evidence for the diagnosis, drawing the conclusion simply from Turing's somewhat eccentric habits and interest in mathematics.

The best resource I found on Turing and autism was this blog post, which discusses an article published in a psychology journal in 2003 attempting a retrospective diagnosis from biographical details of Turing's life. The post lists the evidence cited in support of the diagnosis in that paper. Unfortunately I don't have full academic superpowers and can't find a copy of the original article. But if the post is accurate, the case seems rather thin and doesn't fully support the portrayal Cumberbatch provides. I'm compelled to believe the conclusion of the blog post:
The difficulty with making historical diagnoses is that there's no opportunity to ask further, more targeted questions. What happened if Turing didn't get his nightly apple? Did it bother him, or did he eat some other fruit?
A proper diagnostic interview might uncover further evidence that would provide a more compelling and watertight case for diagnosis. Even so, Turing's case highlights the subjective nature of diagnosis. This is particularly true around the edges of the autism spectrum where, as Lorna Wing put it, Asperger syndrome "shades into eccentric normality".
Attempts to diagnose Turing arguably reveal more about our current fuzzy concepts of autism than they do about Turing the man. And they make plain why we're still a long way from understanding the enigma that is autism.

If the diagnosis of autism is questionable for the historical Turing, that raises some issues about the performance in the film. Turing's social isolation in particular seems overplayed. The historical Turing had a decidedly more active social and family life, as evidenced, for instance, by his letter to a friend in distress and the memoir by his brother. Turing was undoubtedly unique and eccentric, but the film's presentation of Turing as socially and intellectually isolated and autistic is stronger than the historical record seems to support.

In Cumberbatch's hands, Turing's isolation manifests as a speech impediment and an apparent inability to grasp the basic dynamics of human engagement. He requires simple instruction and repeated correction from colleagues in order to navigate a social space. A few times in the film he seems incredibly savvy about people's intentions and opinions, as when he convinces Knightley's Clarke to come to Bletchley. But most of his actions are insensitive to the social context and the feelings of others. At several points he justifies an apparently cruel decision by saying that it is "the only logical choice", making him sound more Vulcan than human. And for this behavior he's called a "monster", "inhuman", and a "fragile narcissist" by his only friends in the film.

In other words, the film directly ties Turing's asocial behavior to his moral character. Presenting Turing as autistic thus influences how we interpret that character. His portrayal in the film is of someone who is coldly rational, calculating with complete emotional detachment from everything besides his machines. Since the movie depicts him as almost single-handedly defeating the Nazis, he doesn't come off as an evil person. Indeed, the movie credits him with saving some 14 million lives and shortening the war by two years. In his darkest depression at the end of the film, the camera still looks up to him, and the narrative turns on the slightest of his gestures. He's clearly presented as a hero whose actions hold great power and ultimately result in the greater good.

Nevertheless, the film's Turing comes off as principled to a fault, rational to the point of compromising the respect and support of his colleagues. He is recognized by his closest friends as capable of extreme cruelty for the sake of his pet projects. Which brings me to the second theme.

Turing and War

When he first arrives at Bletchley, Turing tells the Commander that he's "agnostic about violence". In fact, Turing repeats a theory of violence several times in the film: "Humans find violence deeply satisfying." This is not idle philosophy; the insight is shown as having direct utility in Turing's life. It is how Turing explains both his childhood bullying and the techniques he used to survive it. It is also how he rationalizes the violence done to him and his machine by his skeptical colleagues and superiors at Bletchley. Turing is the film's hero, but his relationship to violence makes his character complex and challenging.

It bears repeating: this film is a period war drama. The enemy in the film is identified explicitly as time, and is represented occasionally by the relentless onslaught of the Nazi war machine. Turing's triumphant victory is in cracking the Enigma code at Bletchley, where the heart of the movie lies. Some time in the film is spent with Turing as a boy, and the film ends tragically, as did Turing's life, with his persecution by an intolerant culture. So there is ample room to reflect on the cruel treatment he and other gay men experienced at the time. Coupled with Clarke's struggle to be taken seriously as a brilliant female mathematician, the film makes a powerful statement about our culture's deep hostility toward gender and sexual identity.

But the central drama of the story is not Turing's sexuality or his tragic end; it is Turing's involvement in the war. Turing's character in the film is depicted as nearly asexual, more interested in math than in interpersonal relationships. Although Turing does identify as gay, his sexuality is almost completely absent from the film. That this character would have the street smarts or the physical drive to seek out a prostitute is outside the scope of the film, which gives us little room for evaluating Turing's character in that episode. The film does not put Turing's sexuality on trial, nor should it have, but it ultimately leaves us with a very narrow slice of the life of this complex human being.

Similarly, although Turing is shown in the grips of a deep depression at the end of the film, he is never depicted as explicitly suicidal. There are no representations of him even contemplating suicide, much less acting on it. His one on-screen encounter with cyanide, the chemical that would eventually take his life, shows him taking at least basic safety precautions and instructing others to do the same. Although the reasons for his depression are made clear, we're not given any insight into his decision to take his own life. So the film also restricts us from using this behavior to evaluate Turing's character.

Instead, we're left only with Turing's relationship with his wartime colleagues, in the context of the British blockade, on which to evaluate his character. The Guardian recently ran an article complaining that the film portrays Turing as a traitor by having him conceal the presence of a known Soviet spy at Bletchley. I think the complaint is rather silly; at best, the film shows Turing caught up in political battles much larger than those at Bletchley. But the responsibility for the spy doesn't seem to fall on Turing in the film. Instead, it falls on the mysterious MI6 agent, who claims to have deliberately placed the spy in Turing's working group. It's not clear from the film that a more radically patriotic Turing would have done anything differently.

My reaction was almost the opposite: the film shows Turing as more capable of callously engaging in strategic militaristic thinking than I would have expected. Historically, Turing participated in the anti-war movement but was never radicalized, and eventually came to work in support of the war at Bletchley Park. In the film, Turing's choice to work at Bletchley was almost entirely self-serving: it gave him the chance to work on the hardest cryptographic puzzle around and the opportunity to fund the construction of his universal computing machine. Turing's "agnosticism" towards violence allows him to become a wartime opportunist, convincing the military to fund his pet science project. In the end Turing was right and his computer helped win the war, but at no time in the film does Turing appeal to such consequentialist reasoning to justify his actions. 

On the contrary, his actions are always justified by what is "logical", usually expressed in stuttering and exasperated tones strongly suggestive of autism. Two actions in particular demonstrate this aspect of his moral character, and in both cases he is called an inhuman monster and is physically assaulted for the position he defends. 

The first happens shortly after the team cracks the Enigma code. Using freshly decoded messages, they build a map of all the German and British ships, and quickly deduce that the Germans are planning an attack on a civilian convoy carrying family members of Turing's own team. Turing nevertheless prevents the team from alerting the military authorities, reasoning that if the British appeared to anticipate the German threat, their success at cracking Enigma would be revealed. His team pleads with him about the civilians on the ships, but Turing ignores their pleas. He argues that exposing their advantage to save one convoy would waste years of work without gaining any strategic advantage. He even calls directly for the Germans to sink the ships, for the sake of his work. For this he is punched in the face, though his colleagues eventually and begrudgingly come to agree with his view.

This action does have historical basis; the British used the intelligence from Bletchley strategically, and Turing had significant influence over the process. The film shows Turing single-handedly making significant and costly tactical military decisions, and I'm not sure how accurate that is. But Turing was certainly an active agent in the war machine, and this deserves discussion if we are to treat him like a hero. During his interrogation Turing asks, "Am I a war hero? Am I a criminal?" The attentive audience, actively calculating permutations, can't help but hear: "Am I a war criminal?" Presenting a hero in this light certainly makes this film more challenging than it might first appear.

It is worth noting that Turing's counterpart in the States, John von Neumann, was actively working on the Manhattan Project at the time. The point being made subtly but distinctly by the film is that this history of computing is intimately tied to the history of warfare in the twentieth century. Given the aggressive use of robotics and artificial intelligence by the military today, the film provides an important opportunity for the computer science community to reflect on its continued relationship to the war machine. 

The second of Turing's questionable moral actions occurs later in the film. After Turing learns of the spy at Bletchley and the risks to himself and Clarke, he confesses his sexual orientation to his fiancée. He calls off the engagement and urges her to leave Bletchley. Clarke refuses. She says their relationship is different and she likes it just as it is; they both care about each other in their own way, and that's all that matters to her. Turing responds that in fact he doesn't care about her, and that he was only using her to help crack Enigma. He essentially confirms himself to be the monster they always thought him to be. For this act of cruelty he is again slapped in the face.

The film suggests we read this exchange as Turing lying to Clarke in order to persuade her to leave. At his engagement party Turing admits to caring for Clarke, and not much about their relationship had changed besides the increasing pressure at Bletchley. The implication is that Turing cares so deeply for Clarke that he's willing to suffer through his cruel actions toward her for her own safety and well-being. In an earlier scene, Turing tries to brush off a police investigation with a similar kind of misdirection.

But this reading would require interpreting Turing as deeply emotionally invested in his relationships and engaged in a kind of inartful social deception to achieve these ends, and that sits uncomfortably with the portrayal of Turing as autistic and cognitively incapable of such a complex social dance. In other words, we'd have to imagine that Turing is secretly a social genius who manipulates his social environment by merely imitating emotional incapacity.

The more straightforward reading of his action is that Turing wanted Clarke gone, and he was willing to be cruel to achieve his ends, regardless of how it made her feel. This reading fits with Turing's coldly calculating character and his opportunistic use of military resources, but it leaves Turing looking like exactly the inhuman monster his friends considered him to be. This also seems to be the only reading of Turing's character compatible with his breakdown at the end of the film, where his emotional isolation seems beyond imitation. 

The result is a difficult and complex portrayal of Turing's moral character, one that is somewhat buried under the more superheroic, lionizing frame adopted by the film. Given that some of the tech elite were given advance private screenings, we might read the moral of the film cynically as follows: give the eccentric geniuses what they want, even if you think they are doing something monstrous and cruel, because they know better than everyone else and in the end they'll be right. One can see why this message would ring loudly in Silicon Valley.

I think the more responsible reading of the film is that cruelty and violence are everywhere, even in our heroes. The British government defeating the Nazis is cause for celebration, but that same institution can destroy its most precious intellectual asset for no reason at all. Similarly, Turing can save millions of lives and bring an end to a world war, but will callously insult and alienate his friends and colleagues in the process. 

And this violence has no resolution or ultimate explanation, as evidenced by Turing's childhood friend Christopher dying of an illness without any emotional closure. Turing's death likewise feels meaningless and absurd, a senseless waste. Turing makes several atheistic claims in the film, most notably: "Was I God? No. Because God didn't win the war. We did." But of course that same "we" caused the war, and fought in it, and profited from it. Turing's heroism arises not from overcoming the tragedy of war but from participating directly in it. Here lies tragedy, but here lies glory too.

Coda: Fair play for machines

The film's celebration of Turing's life and accomplishments leaves me wanting to see more. I'd love to see Cumberbatch reprise the role of Turing in a live-action Logicomix, for instance. But I'd also like to see a more direct discussion of Turing's views on artificial intelligence. The brief interrogation scene, I think, captures the heart of Turing's position: that thinking can take many interesting forms, despite our biases against apparent differences. Although I feel the film's portrayal of Turing as autistic was too strong, it ultimately works to make the same point quite effectively. However, this lesson is only obliquely related to the question of thinking machines, and audiences may not fully appreciate how deeply Turing understood the connection between the two.

The issue of AI comes up in another form in the film, through Turing's relationship with Christopher, his bombe machine. Turing names the machine after his boyhood friend, and attributes to it agency and intelligence at many points in the film. His breakdown at the end of the film is triggered by the thought of the authorities confiscating the machine. Reaching towards Christopher, he cries: "I don't want to be alone".

This behavior again complicates the historical Turing's views on thinking machines. There's a strand of commentary that focuses on the gendered nature of the original imitation game. On this reading, Turing's discussion says more about his struggle with his own sexual identity than about the abilities of machines. The performance in the film strongly corroborates this reading, suggesting that his attachment to machines was more a product of his compromised psychological state than of any real attribute of the machine. Christopher the bombe machine doesn't do anything resembling human intelligence; whatever personality it acquires in the film is entirely a reflection of Turing.

If Turing's attachment to his machines is a product of autism, it casts a skeptical light on his conclusion that machines can think, since it appears merely to be the result of his warped brain, and suggests nothing of the attributes of machines themselves. Turing might have been a genius and a hero, but according to the film he was also a pretentious asocial jerk. If he's urging "fair play for machines", that's just another eccentricity we don't have to take seriously. 

I think this reading would be incredibly unfortunate, given the theoretical centrality with which Turing treated the concept of fair play. After Turing was arrested, he sent a letter to a friend describing his distress:
I'm afraid that the following syllogism may be used by some in the future.
Turing believes machines think
Turing lies with men
Therefore machines do not think
In this letter, Turing is explicitly worried that people will appeal to facts from his personal life and use them as evidence for disbelieving his conclusions about thinking machines. Surely Turing's sexuality is no cause for disbelieving his conclusions, and the same should be said about autism as well, should we accept the film's portrayal. 

Turing's work on artificial intelligence was done in the late 1940s, with "Computing Machinery and Intelligence" published in 1950, well before his eventual arrest and psychological breakdown. In other words, Turing's work on AI came at the height of his intellectual powers and public visibility. That a war hero would return from solving the hardest problem in the world and immediately devote his efforts to defending the social status of thinking machines, knowing full well that he would face overwhelming popular disagreement for at least another fifty years, says a lot, I think, about his ethical character and the conviction of his beliefs.

Showing Turing as accepting of cognitive diversity is a nice bow to tie on an ethically complicated character in this short film. But as a culture we have yet to fully process the overwhelming influence that Turing has had on the dynamics and values that shape the modern world. I hope that Turing's story is one that we continue to tell. 


Our social networks are broken. Here's how to fix them.


You can't really blame us for building Facebook the way we have. By "we" I mean we billion-plus Facebook users, because of course we are the ones who built Facebook. Zuckerberg Inc. might take all the credit (and profit) for Facebook's success, but all the content and contacts on Facebook (you know, the part of the service we users actually find valuable) were produced, curated, and distributed by us: by you and me and our vast network of friends. So you can't blame us for how things turned out. We really had no idea what we were doing when we built this thing. None of us had ever built a network this big and important before. The digital age is still mostly uncharted territory.

To be fair, we've done a genuinely impressive job given what we had to work with. Facebook is already the digital home to a significant fraction of the global human population. Whatever you think of the service, its size is nothing to scoff at. The population of Facebook users today is about the same as the global human population just 200 years ago. Human communities of this scale are more than just rare: they are historically unprecedented. We have accomplished something truly amazing. Good work, people. We have every right to be proud of ourselves.

But pride shouldn't prevent us from being honest about the things we build; it shouldn't make us complacent, or blind us to the flaws in our creation. Our digital social networks are broken. They don't work the way we had hoped they would; they don't work for us. This problem isn't unique to Facebook, so throwing stones at only the biggest of the silicon giants won't solve it. The problem is with the way we are thinking about the task of social networking itself. To use a very American analogy, our existing social networking tools suffer from the equivalent of a transmission failure: we can get the engine running, but we are struggling to put that power to work. We see the potential of the internet, but we're at a loss as to how to direct all this activity into genuinely positive social change. What little social organization the internet has made possible is fleeting and unreliable, more likely to raise money for potato salad than to confront (much less solve) any serious social problem. Arguably, our biggest coordinated online success to date has been the Ice Bucket Challenge; even if we grant that the meme has had a positive impact, what change to the social order has come with it? What new infrastructure or social conscience was left in its wake? In terms of social utility, the IBC was like a twitching finger from an otherwise comatose patient: it may give us some hope, but it tells us little about the prospects of recovery.

Of course, many opportunists have found clever ways to capitalize on the existing network structure, and a few have made a lot of money in the process. The economy is certainly not blind to the latent power of the internet. But as a rule, these digital opportunities are leveraged for purely private gain. The best the public can hope for is that successful digital businesses will turn out cheap services that we can shackle ourselves to like domesticated animals. There have been enough major successes of this model that in the year 2014 we’ve come to accept our fate as unpaid digital domestic labor. There is no longer any hope of using the internet to reorganize the people from post-capitalist consumers into fully empowered digital citizens, because it has become clear that our digital tools have simply been used to standardize the post-capitalist consumer lifestyle on a global scale.

We need to realize that half a million human bodies walking down a street with cell phones and hand-written signs still have more political power than a Facebook group or Twitter stream ten million strong. We still live in an age where an afternoon walk with a few like-minded people can outrun the social influence of a digital collective an order of magnitude larger. You might have expected a digital population to overwhelm our naked ancestors, but if anything the opposite has proven true. When TwitchPlaysPokemon rallied 1.16 million people to beat Pokemon in 16 days, everyone who participated recognized that we had accomplished an amazing thing. But we also had to acknowledge, without any cognitive dissonance, that each of us could have beaten the game ourselves in about a day and a half.

Okay, okay, so our social networks are broken, and we haven't even begun to count the ways. There are niche digital communities accomplishing amazing feats of cooperation, but all of us with all our gadgets are not yet as strong as some of us plain old boring people doing the things we've been doing for centuries, like voting and assembling. Why not?

Our social networks were originally designed to function like an interactive digital rolodex: a system for managing and engaging a list of social and professional contacts. To someone thinking about life in the digital age around the turn of the century, the idea made a lot of sense: how else would we find our friends in a place as wild and disorganized as the internet without a book of contacts? Social networks today vary only slightly from this original design. Some networks emphasize interpersonal relationships and others emphasize content engagement, but the differences in networking tools ultimately have little to do with the liveliness of the communities they serve. Users are willing to put up with a lot of UI nonsense in order to engage with the communities they care about. A passionate community might thrive on a poorly designed network, and a high-end design might fail to attract any community at all. From the user's point of view, these communities are attractive for two reasons: their members and their interests. Who is on this network, and what are they talking about?

So if we’re being honest with ourselves then this is the unflinching truth: the growth of social networking happened despite the tools we’ve built, not because of them. We are social creatures; we want to share ourselves with each other. In the age of industry and capital, satisfying this need to share had become almost impossible. When our digital tools offered the promise of overcoming our alienation and reconnecting with each other we jumped at the opportunity. We became refugees fleeing failed states on wifi. As digital immigrants we have suffered through the privacy violations, UI disasters, and the untold hells of political irrelevance that are common to all immigrant stories. And we’ve done it for nothing more than for than the promise to connect with each other, if only to share a picture of our pets.The idea that any one company or service would take credit for the epic digital migration we’ve collectively accomplished over the last decade is ludicrous; we’re the species who figured out how to communicate through tin cans and string. The growth of social networking is what happens when you give the internet to enough huddled masses yearning to breathe free.

But it turns out that we don't use our social networks like an interactive rolodex. In fact, the relationships we used to have with the people in our rolodex tend to be one-dimensional and alienating in exactly the way we came to these digital spaces to avoid. Instead of a list-management tool, people's online behavior suggests they want something more like a living room, or (depending on where and how you live) a porch or kitchen table or stoop: a space to visit with each other; where we can showcase our triumphs, complain about our problems, share our hopes, gossip about our friends, and discuss the happenings of the day; where the atmosphere is jovial and hospitable and supportive. In short, we are trying to build a home, in the midst of a community of homes, together with the people we want in our lives. In some homes you can talk about politics or religion, and in others you can't; in some you'll be subject to hundreds of photos of vacations and babies and pets, and in others you'll find the accumulated markings and detritus of a real life lived. A relationship planner with multimedia messaging is nice, but what we really want are digital living spaces where we can be together as a community.

It shouldn’t surprise anyone that we’d approach the task of community-building in this way: by carving out spaces for ourselves and our friends around focal objects. This is how we humans have always developed our communities: not by managing lists of people through which to channel our communication, but by organizing spaces that can accommodate all of our activity. Communication is but one tool in service of that shared activity. I’m not just talking about a “digital commons”, in the sense of a public space for cooperating beyond the purview of one’s home. We don’t even have our own spaces managed right; we are still shackled to these monolithic centralized services that own and manage our relationships according to their grand designs. In such an atmosphere we have trouble motivating even our friends and family to action, much less something as quixotic and ethereal as “the public”. Expecting these digital communities to engage seriously in politics is like expecting toddlers to engage seriously in politics.

So let’s be concrete. Now that we’ve all gathered around the digital hearth and can hear each other speak, let’s think again about what is missing from this generation of social networking tools, and what we need to see in the next.  


Today’s social networks are centralized. Our homes are decentralized.

The kind of activity and commotion that’s common in one person’s home might be intolerable in another’s. That’s fine; we’re different people entertaining different communities, and we build our homes to accommodate our specific community needs. Any social network must be as sensitive to these variations as we are. A network under central management is forced to ignore the differences between us and homogenize our interactions to maintain order at large scales. This leveling of variation is a necessary feature of any centrally-managed network, and it can have a number of unfortunate consequences for the communities we can form on them. Some standardization is good, even healthy. But the details matter, and the wrong standards can destroy a community. The larger the network, the more likely any central management will simply be insensitive to community-level needs.

Putting our data in the hands of a central network authority also makes it all the more likely that the information will be released without our consent, either deliberately or accidentally, and this alone can be a deciding factor in whether and to what extent a person will participate in a network. But the problem with centralized network management is even more fundamental than privacy. When we share something with a friend on a centralized network, we’re also implicating the central management in that exchange. It is because central management plays a role in every network exchange that they are in a position to violate our privacy in the first place. This ubiquitous presence can become a dominant influence on our interaction, making our relationships develop according to the needs and interests of the network managers, which may diverge arbitrarily from our own. The effect is a little like trying to manage your home with a state official looking over your shoulder and archiving all your activities, filtering not just for legality but also for targeted advertising. As digital immigrants we’ve come to accept that we’re being overseen, but we should also realize that these are not the conditions under which we do our best work. When unknown third parties with unknown interests are not only present in our interactions, but can radically disrupt the structure of those relationships without notice, then we’re not very likely to devote serious time and effort to cultivating digital spaces to meet our cooperative needs. As a result, our networks remain flimsy, makeshift, liable to blow away at any second--these are no conditions in which to build a home where we can do anything meaningful.

Building a community of homes means building spaces that can self-organize in response to the needs of our various overlapping communities without oversight and central control. There is no center to our vast network of friends; there is no vantage point from which to micromanage our relationships but our own. The point is not that our networks cannot be managed; the point is that we need to be the management. We need a network where our data remains ours, and where the terms and conditions of our social lives are set exclusively by us.

With today’s networks, my identity is an option in a pull-down menu. In our homes, we develop who we are through what we do and who we do it with.

A rolodex is a centralized leveling tool: a person’s critical details are made to fit on a small standardized card in a roll of functionally identical cards. It is left to the user to construct the network from these details: to evaluate the strength of the relationship, the relative importance they might have for our projects, and the way they fit into the larger fabric of our social lives. Today’s social network continues the tradition of encouraging people to fit into cookie-cutter identities to maximize advertising revenue. No consideration is paid to how these constraints on identity formation might impact our ability to form and sustain a vibrant community. This helps to explain why people mostly use online social networking to manage relationships they began offline, where they have more direct control over their identity and reputation. Relationships that exist only online typically take much longer to develop familiarity and trust, simply because we are witness to substantively less activity from the other. Talking to grandma online is easy enough because who we are and what we mean has already been established elsewhere. The same familiarity isn’t available generally: a random internet person could be anyone and want anything. This cannot be the basis for social cooperation.

Functional social networks develop through the construction of differentiated reputations. We each have different strengths and weaknesses, and by working together we learn how we each fit into all our overlapping projects. By forcing us into pre-fab identities, we lose the ability to track how we might best cooperate, or how our identities evolve as a result of what we’ve done together. Instead, we’re left to cobble together a pale imitation of reputation from what little data we have access to, in terms of likes, shares, and followers (or their equivalents), as if the quality and utility of our work depended only on the number of people who saw it. There’s nothing wrong with followers and likes per se, but when these are our only resources for organizing we end up with bizarre distortions of a healthy community. In such an environment we tend to overvalue the activity of celebrities and become suspicious of everyone else, simply because we have no other common resources for making finer distinctions. None of these tools reveal how our networks might be put towards our various social ends, because ultimately it is not our ends these networks serve.

Building a community of homes requires building identities with reputations we control through the work we do and the communities we engage with. When we control our identities, and when the feedback we receive reflects the value of the work that we do, then we will finally feel the responsibility and commitment that only a functional community can generate. We need a social network that can provide the tools for managing our reputations across the many diverse communities we engage with, that understands how these reputations change with context, and how our collective strengths can be stitched together to compose a much greater whole.

Today, a successful social networking campaign achieves virality. A successful home achieves a successful life.

We have no other tools for judging the success of our activity online except in terms of raw audience size. In this degenerate capacity we can conceive of no other strategic goals but virality: spreading a message quickly and widely. The goal of virality admits up front that we are powerless to effect change ourselves. Instead, the best we can hope for is that prolific exposure through synchronized spamming will bring the message to the feet of the people with the resources to do something about it. As digital immigrants our own voices do not carry far enough, and we are in no position to do anything about the cries of our neighbors. So instead, we’ve relegated ourselves to being the messenger in our own social networks, delivering notes between the already-powerful and pretending to live in the same communities as them.

In a functioning community, the strength of the signal tends to correlate with the urgency of the message. The messages that spread the fastest are usually the biggest emergencies requiring the most immediate attention. The messages that spread the widest tend to be the information people need for coordinating their activities across great distances. But most of our cooperation is local and not terribly urgent, and therefore doesn’t depend on raw signal strength. Viral appeals to our collective attention cannot be the only tool in our kit for getting our messages across.

Although our attention is among the most precious of our limited resources, we nevertheless produce it continuously and nearly without effort. We eagerly give it away to the things we find interesting and worthwhile without expecting anything in return. It is by paying attention that we imbue our world with structure and meaning; this is ultimately what we are all here to do. Our collective attention is distributed across an enormous variety of projects and communities, and that distribution reflects our self-organized division of labor and value: what we consider worthwhile enough to spend our time doing. We use that distribution to decide where we will spend our attention next, and through this collective management of attention we are capable of organizing all of our productive social systems. So when we engage each other on existing social networks, when we each like and share according to our own interests and tastes, we expect that the resulting community will reflect some consensus of our participation-- that the network will be “better”, according to the standards of “better” as indicated by our contributions.

Existing social networks don’t function that way at all: our engagement is harvested for advertisers, and whatever feedback it generates is lost in the noise of the greater economy. There’s no reason whatsoever to hope that our networks will develop in response to our activity and values, because we know that they are responding to other values and using our activity for other purposes. They’ve hijacked the spaces we’ve selected for our homes and they are exploiting us for all we’re willing to give. Meanwhile, all the attention we pay goes to waste, utterly failing to secure the expected return on investment, having been traded away for a dollar of ad space. We still dismiss hashtag campaigns as slacktivism, as if our impotence were a character flaw. The truth is we’re doing the best with the tools we’re given, and ultimately these social networks were never built to work.

Building a community of homes, one that really works for all of us, requires a whole new approach to the economy of attention, one that understands how the organization of the system emerges from the activity of its many distinct parts. We need new tools for networking, not just to make connections but to hook the right communities up in the right way so that we can all accomplish what none of us could alone. The digital homes described above do not yet exist; we have yet to build them. As digital immigrants we’ve been tossed between halfway homes for years, so the significance of this challenge might not have fully registered. Partial solutions exist, but only piecemeal and scattershot across the available networks; no solution has met these problems with the elegance and comprehensiveness necessary to bring social networking into a new era.

But that’s about to change. 

People are obviously thinking about the next generation of social networking, and for the last few weeks I’ve been working with a team of developers on a distributed networking service built on the blockchain, one that bakes security, reputation, and community management directly into the basic feature set. We’re set to announce within the next few days, when I hope to tell you much more about the details of the project. Until then I hope the comments here give some insight into our philosophical approach to the design. 

If you've made it this far, I hope you stay tuned to hear more. 

You can engage with a public GDocs version of this essay here.


Bruno Latour is talking about Gaia

A few weeks ago I saw Bruno Latour give a talk called "Gaia Intrudes" at Columbia. I've struggled with the term "Gaia" since I came across Lovelock's Gaia Hypothesis while studying complex systems a few years ago. On the one hand, Lovelock is obviously correct that we can and should treat the (surface of the) Earth and its inhabitants as an interconnected system, whose parts (both living and nonliving) all influence each other. On the other hand, the term "Gaia" has a New Agey, pseudosciencey flavor (even if Lovelock's discussion doesn't) that makes me hesitant to use the term in my public discussions of complexity theory, and immediately skeptical when I see others use it. Since my skepticism seems to align with the consensus position in the sciences, I've never bothered to resolve my ambivalence about the term.

And to be completely honest, while I admired Latour's work (he's mentioned in my profile!), going into this talk I was also a little skeptical of _his_ use of the term. I've been thinking pretty seriously about the theoretical tools required for understanding the relationship between an organism, its functional components, and its environment, what I have been calling "the individuation problem". As far as I can tell, not even the sciences are thinking about this problem systematically across the many domains and scales where it arises. That same week I had written a critique of Tegmark's recent proposal for a physical theory of consciousness; my core critique centered on his failure to distinguish the problems of integration and individuation. So to hear that Latour was approaching the discussion using the vocabulary of Gaia made me apprehensive, if not outright disappointed. I was worried that he would just muddy the waters of an already fantastically difficult discussion, and that it would make my interest in actor network theory all the more obscure and profane to the communities of scientists I wanted to be talking to. 

But my skepticism was entirely misplaced. Latour knows exactly what he's doing, and he's thinking about these issues in precisely the right way. This talk is fantastic. Watch it. It is one of the best treatments of the individuation problem I've come across. Latour dismantles the mereological presumptions that underlie both our sciences and our politics, and explains why our deep conceptual confusion over the relationship between parts of a system (organs and organisms, organisms and environments) lie at the root of our biggest social challenges today (climate change, digital politics). 

Latour starts by noting that etymologically, "gaia" and "geo" are different forms of the same word: γῆ, the Greek word for "earth". We have no problem talking about geology or geography or geopolitics, but if you ask geo-* scientists if they study gaialogy (or gaiagraphy or gaiapolitics) they have the exact same skepticism and hesitance I felt coming into the talk. 

Latour traces this skepticism to both a scientific and a political/theological worry about holism: the term "gaia" evokes an image of Nature as a kind of goddess-being that sits over and above its inhabitants. "Geo", by contrast, functions to refer to the more-or-less stable background on which some activity takes place, and therefore doesn't implicate any holism. "Geo" isn't a god-figure orchestrating our politics; geopolitics describes the politics that takes place on the global stage, but the stage is not itself an actor in the politics. So "geo" is compatible with reductionist science, and "gaia" is not. 

From the scientific perspective, this image of Gaia as holistic-Nature-entity flies in the face of the strict reductionism that is characteristic of the sciences: we cannot talk about the whole except by way of describing the mechanical operation of its parts, so at best the use of "Gaia" adds nothing to our existing scientific theories. At worst, "Gaia" sneaks in a kind of natural theology: that nature is somehow being "steered" from above. Both suggestions are anathema to scientific methodology. 

Moreover, there is a long tradition in political theology of appealing to a Nature-entity in a providential way: Nature does not steer blindly, but aims at its own preferred optima as a self-correcting, self-regulating system. Politically this image of nature is used in both positive and negative ways: on the one hand, Latour talks of climate activists appealing to "what nature wants" or "what is best for nature". On the other, we are deeply skeptical of any such holistic aim; thus, we talk about the "tragedy of the commons", where there is no systemic regulation over and above the actions of the parts. 

However, Latour gives a careful reading of Lovelock to show that he (Lovelock) was conscious of these worries and careful to develop the metaphor of Gaia to explicitly address them. Latour emphasizes that Lovelock was a reductionist: there is no entity called Gaia over and above the cooperation of its parts. Gaia just is that system constructed from that cooperation, and so it resists being taken as an entity independent of that cooperation. Margulis puts the point in Darwinian terms: you can't evaluate the fitness of an organism independent of some environment, so organisms and environments must together be the unit of biological significance. The point can be put more generally: the function of any component must be understood in terms of the system it is a component of.

The upshot is that the system isn't something over and above its parts but is constituted by them. In other words, the "geo" assumption that we can distinguish the actors from the stage is wrong. Latour reads the term "Gaia" as an attempt to bring the system to the forefront without treating it as an independent, providential Nature-Godhead, but instead as something more mundane: a system which we all constitute and to which we are all actively contributing. 

On Latour's view, what the criticisms of Lovelock and "Gaia" reveal is a paucity in our conceptual apparatus. We're locked into a "two-level" way of thinking: either we're talking about the operation of parts, or we are talking about wholes conceived of holistically. The former is the domain of strict reductionist science, and the latter is the domain of providential political theology. We have no way to talk about systems of interconnected parts whose cooperation constitutes a whole. Latour wants to embrace the talk of "Gaia" as a genuinely novel attempt to address exactly this conceptual paucity. I still don't think I like the word.

[24:50] "Whenever you stop talking about the individual *in* an environment, then it means you are for providential 'nanny' Gaia, which is the all-powerful thermostat. We are so deprived of an alternative, that whenever we begin to say 'we don't want an organism *in* an environment, when the environment is transformed by the organism', then immediately people hear 'it is a holistic argument, and I'm sure it's wrong.'"

[35:17] "There is nothing global about Gaia. It is a whole where the very notion of what is a whole and what is a part has changed."

[47:30] "The common is impossible when it is thought of as a whole. The common is possible precisely because it is an extraordinarily difficult set of skills to compose. But if you have a two level standpoint, it means there will never be any alternative; this is the tragedy of the commons. It doesn't matter if we talk about biology or economics or society, the problem is the same. The problem isn't in the data, the problem is in the way we borrow a definition of association from politics and theology." 

I don't know that I like the term Gaia any better, but hearing this talk definitely resolved any ambivalence or discomfort I had about the use of the term or this thread in complexity theory. I think Latour's analysis is completely on target. 

As I said, I've been thinking about these issues myself for a long time. I've personally grown fond of talking about organisms at different scales, which I think can systematize the discussion across the sciences. If I'm uncomfortable with "Gaia", it's because the term points to a logically possible organism, but not any specific one. Within the framework of organisms, "Gaia" represents something like "the most complete organism on Earth", and I'm not exactly sure which organism that is, or what standard we're using to judge "completeness". I don't think organisms can be individuated easily enough to yield any one "Gaia", so unless we're talking about some specific organism I'm not sure the term is useful. 

A few notes:

1) Latour's analysis is completely critical, and without positive recommendation (aside from a theatrical play he produced in conjunction with this research). He never says "here, use this vocabulary instead". His own vocabulary seems inconsistent: sometimes talking in terms of parts/wholes and other times calling holism a "poison". He seems to equate "systems" with "wholes", which I think is a mistake. So let me help: there are no parts or wholes; mereology is alchemy. There are only components and the organisms they compose, and this relationship cannot be expressed with a container metaphor. The proper vocabulary for talking about the relationships among components is network theory-- not ANT, but graph theory in the style of Bechtel and Baez.
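To make the contrast concrete, here is a minimal sketch of what a component/organism vocabulary might look like in code. This is my own hypothetical illustration (the class and method names are inventions, not anything from Latour, Bechtel, or Baez): an organism is represented as nothing but a graph of interacting components, so there is no container "whole" over and above the interactions themselves.

```python
from collections import defaultdict

class Organism:
    """An organism is constituted by its components and their
    interactions; it is not a container sitting over and above them."""

    def __init__(self):
        # Each component maps to the set of components it interacts with.
        self.interactions = defaultdict(set)

    def interact(self, a, b):
        # Record a symmetric interaction between two components.
        self.interactions[a].add(b)
        self.interactions[b].add(a)

    def components(self):
        return set(self.interactions)

    def is_connected(self):
        """Whether the components compose a single organism is a fact
        about connectivity, not containment: remove the interactions
        and there is no organism left."""
        nodes = self.components()
        if not nodes:
            return True
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            stack.extend(self.interactions[n] - seen)
        return seen == nodes

gaia = Organism()
gaia.interact("oceans", "atmosphere")
gaia.interact("atmosphere", "biosphere")
gaia.interact("biosphere", "oceans")
print(gaia.is_connected())  # True: the system just is this web of interactions
```

On this representation, asking whether something is "in" the organism is replaced by asking whether it participates in the web of interactions; delete the edges and the organism simply disappears, which is exactly the point of resisting the container metaphor.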

2) I suspect that our two-level tendency in this case is related to Dennett's discussion of the design and intentional stances. In fact, I think Dennett might be wrong to mark three levels of distinction; I think it's probably only two levels: design and intention, with the physical stance just being a special case of design. On this reading, the design stance is the reductionist stance, understanding the operation of a thing in terms of the mechanics of its parts. The intentional stance is the holistic stance, treating the system as an entity-in-itself, with all the systemic dispositions that might entail. Reworking Dennett's discussion of the intentional stance around a two-level distinction would take some work, but it might also go some way to explain why we struggle with the conceptual apparatus for talking about systems and components. In other words, this might not merely be a conceptual failure; it might be a genuine cognitive bias. We might have a Gaia-blindspot.

3) In any case, Latour's discussion suggests that working out these conceptual confusions will go at least some way towards resolving our biggest challenges today. If we saw the climate not as some independent system hanging over our heads but as one that directly arises from the actions we take, then maybe climate change wouldn't be so difficult from a policy perspective. If our political system were designed to not just represent our interests but to respond directly to them, maybe we'd be able to organize our political infrastructure to take action on such policies. 

In other words, I'm being optimistic because of contemporary French philosophy. Yowza.