The human factor — why data is not enough to understand the world
A couple of years ago, staff at a Google “tech incubator” called Jigsaw made an important breakthrough: they realised that while their company has come to epitomise the power of technology, there are some problems that computers alone cannot solve. Or at least not without humans.
Jigsaw, wrestling with the problem of online misinformation, quietly turned to anthropologists. These social scientists have since fanned out across America and Britain to do something that never occurred to most techies before: meet conspiracy theorists face-to-face — or at least on video platforms — and spend hours listening to them, observing them with the diligence that anthropologists might employ if they encountered a remote community in, say, Papua New Guinea.
“Algorithms are powerful tools. But there are other approaches that can help,” explains Yasmin Green, director of research and development at Jigsaw, which is based in an achingly cool, futuristic office in Manhattan’s Chelsea district, near the High Line. Or, as Dan Keyserling, Jigsaw chief operating officer, puts it: “[We’re using] behavioural science approaches to make people more resilient to misinformation.”
The results were remarkable. Previously, groups such as anti-vaxxers seemed so utterly alien to techies that they were easy to scorn — and it was hard to guess what might prompt them to change their minds. But when the Jigsaw team brought in anthropologists from a consultancy called ReD Associates, who listened to people with open-minded curiosity, it became clear that many of the engineers’ prior assumptions about causation in cyber space were wrong.
For example, the techies had assumed that “debunking” sites needed to look professional, since that was what they associated with credibility. But conspiracy theorists thought that “smart” sites looked like they were manufactured by the elite — something that matters if you want to counter such theories.
So these days Google staff are trying to blend anthropology with psychology, media studies and, yes, data science to create tools that might “inoculate” more internet users against dangerous misinformation. “We can’t do this just based on what we assume works. We need empathy,” says Beth Goldberg, Jigsaw research project manager, who was trained in political science but has now also acquired anthropology skills.
Will it fix the issue? Sadly not by itself, given the deep-seated societal roots of the problem. Nor will a dose of anthropology magically remove the anger that many people feel about the power of tech giants, and the sometimes irresponsible ways in which they have behaved. But the experiment has already had one benefit: it has made some Google techies understand what they don’t understand with their data tools — and why techies sometimes need “fuzzies”, or people with qualitative, not quantitative, analyses. As Twitter co-founder Jack Dorsey has observed, Silicon Valley would have probably built a much better internet and social media world if it had employed social scientists alongside computer scientists at the outset.
This is not just a tale about tech, however. Far from it. The real issue at stake is tunnel vision. Today most professions encourage their adherents to adopt intellectual tools that are at best neatly bound or at worst one-dimensional. Economic models are, by definition, bounded by their inputs, and everything else is deemed an “externality” (which was how climate change used to be perceived). Corporate accountants are trained to relegate things not directly linked to profits and losses (such as gender ratios) to the footnotes of company accounts. Political pollsters or consumer surveys often operate with pre-determined questions.
These tools are often very useful, if not indispensable. But they have a flaw: if the wider context outside that economic model, company, political poll or Big Data set is changing, that bounded tool and neat quantitative analysis might not work. Pinning all your faith on an economic model alone, say, is like walking through a dark wood at night with a compass and only staring at the dial; no matter how brilliant that compass may be, if you do not look up and employ some lateral vision you will walk into a tree. Context matters.
And that is where anthropology can help, particularly as we grapple with pandemic-sparked disruptions and contemplate how we might live and work in the future. For at the heart of this endeavour is a basic truth: even in a digitised world, humans are not robots, but gloriously contradictory, complex, multi-layered beings, who come with a dazzling variety of cultures. We cannot afford to ignore this diversity, even after a year in which we have been cloistered in our own homes and social tribes; least of all given the fact that global connections leave us all inadvertently exposed to each other. So in a world shaped by one AI, artificial intelligence, we need a second AI, too — anthropology intelligence.
Anthropology might seem like an unexpected place to find fresh 21st-century ideas. The word derives from anthropos, Greek for “human”, and one of the first quasi-anthropologists was the Greek scholar Herodotus, who became curious about the different cultures of tribes in the fifth-century BC Greco-Persian wars and tried to analyse them.
The discipline was established in its modern form by 19th-century Victorian intellectuals who wanted to study the far-flung colonial subjects of the European empires. Since these intellectuals were heavily influenced by Charles Darwin’s theory of evolution, and part of an imperialist power structure, their analyses were usually overtly racist — to the enduring shame of modern anthropologists. An entity known as the Cannibal Club, established in London in 1863, epitomised this dark past: although Cannibal Club members said they were searching for the essence of “mankind” by peering at so-called “primitives”, this research was primarily directed towards proving the supposed superiority of white men.
However, in the 20th century, the discipline underwent two dramatic intellectual U-turns: instead of fostering imperial racism, it tried to become a beacon of anti-racist thought; and instead of just studying supposedly “exotic” cultures in far-flung lands, anthropologists turned the lens on western cultures too.
The trigger for this volte-face was that anthropologists began to leave the safety of their ivory towers — or colonial verandas — and went to live among the people they studied. An intense German-born American academic called Franz Boas was one of the first. In the 1880s he was stranded — by accident — among the Inuit in the frozen north, and that cultural immersion left him concluding that “the more I see of [Inuit] customs, I find that we [Europeans] really have no right to look down upon them contemptuously . . . [since] we ‘highly educated’ people are relatively much worse”.
It was a shocking concept at the time, and Boas struggled for years to find an academic post in New York before founding the anthropology department at Columbia University. The Nazis later burnt his books. But in the 20th century this vision of “cultural relativism” — to use the phrase coined by Margaret Mead, one of Boas’s disciples — spread. And today, to cite the Canadian anthropologist Wade Davis, anthropologists like to view their craft as “the antidote to nativism, the enemy of hate [and] vaccine of understanding, tolerance and compassion that can counter the rhetoric of demagogues”. Or, as the Swedish anthropologist Ulf Hannerz puts it: “Diversity is our business.”
The second U-turn — studying western cultures — arose from cultural relativism. Once you accept that all cultures are apt to seem weird, or “exotic”, to someone else, it makes sense to use the same tools in familiar settings too. After all, as another anthropologist, Ralph Linton, noted: “The last thing a fish would notice would be the water”; it is hard for us to evaluate our own cultural assumptions. Familiarity creates blind spots, and outsiders can see things that insiders ignore. The goal of anthropology, then, is to be an insider-outsider — to have empathy for a culture and a sense of critical detachment.
This insider-outsider perspective can be invaluable — as I know from my own career. Thirty years ago I did doctoral work on anthropology at Cambridge university, and spent a year in a mountain village doing research in the (then) Soviet Republic of Tajikistan. Subsequently, I became a journalist and tried to flip the lens, using the same methodology to look at worlds that might seem more familiar to FT readers: credit derivatives, American corporate life, the White House, Silicon Valley and my own world of the media. It was often revealing. Focusing on rituals, symbols, social boundaries and what anthropologists call “social silences” (ie what people don’t talk about) helped me to see some of the financial risks that were developing in credit derivatives before 2007, as well as the risk of a Silicon Valley “techlash”.
Other anthropologists have used the same skills in all manner of different settings: General Motors, JPMorgan, Japan Airlines, the US military, the British health service, Japan’s central bank, the American nuclear industry and the German tech scene, to name but a few. And these studies proffer answers to a dazzling range of questions. Why do masks stop pandemics? Why do Uber drivers hate AI tools? Why do consumers really buy dog food? Why do financiers find it hard to work from home?
Frustratingly, such studies are not at all well known outside the discipline. And even when companies have employed anthropologists to offer advice, these messages are sometimes discounted, particularly when anthropologists try to study the “familiar” (ie how western companies work), rather than “strange” (ie how somebody else might behave). It is easier for Google executives to embrace the idea of using anthropologists to observe conspiracy theorists than to turn the lens on themselves. Powerful elites rarely want to stare at themselves critically — and the “problem” with anthropology, notes Lucy Suchman, a professor of anthropology at Lancaster University, is that “it often makes people uncomfortable”.
But this is also why it is needed. And it would be nice to think — or hope — that the pandemic has created more willingness to do this. After all, the shock of the lockdown has already prompted policymakers to embrace some once-unthinkable ideas and shown corporate leaders why they need lateral — not tunnel — vision to evaluate risks. Indeed, one way to interpret the rise of environmental, social and governance (ESG) concerns and “stakeholderism” is that many corporate leaders recognise the need for a wider lens. The pandemic has also shown us that in a globalised world it is dangerous to ignore or deride other cultures when we are all so tightly entwined. We need more empathy for strangers to survive and thrive.
And lockdown has created another type of wake-up call too: in the past year we have all been forced to re-examine the daily rituals, social boundaries and unstated cultural assumptions that we used to ignore. And as we return to “normal” in the next year (hopefully), we will also need to work out what cultural and social patterns we want to preserve in a more digitised world.
The answers may yet surprise us — even (or especially) among techies. A couple of years ago, for example, I watched a group of computer engineers hold a meeting in a drab hotel on Edgware Road, London, which discussed whether or not to introduce new internet protocols to counter hacks on western utilities such as energy systems.
For hours, they debated an anti-hacking protocol with the unwieldy name “draft-rhrd-tls-tls13-visibility-01”. Then came the moment of truth: a white-bearded engineer named Sean Turner solemnly addressed the crowd: “Please hum now if you support adoption [of this tool].”
A collective hum, like a Tibetan chant, erupted, and then Turner asked those who opposed the move to hum as well. A second — far louder — sound erupted. “So at this point there is no consensus to adopt this,” he declared. The protocol was put on ice.
This might seem odd; after all, the Internet Engineering Task Force is the group that built the internet and computer geeks appear to live in a “rational”, maths-based world. But the IETF has embraced this “fuzzy” ritual in recent years because the techies like being able to sense the mood of the entire group via humming — and get the type of multidimensional information that simple “yes-no” votes cannot reveal.
Indeed, these engineers are so attached to this ritual that they were very upset when they lost the ability to hum together during the Covid-19 lockdown — and although they tried to replicate what they liked about group humming with computer code, they realised it was impossible.
So at some point, when in-person IETF meetings resume, the geeks will almost certainly start humming together again. When they do, this — like Jigsaw’s study of conspiracy theorists — will be another reminder of a fundamental and ultimately reassuring fact of modern life: there are some things that can only be analysed, solved or predicted by humans.
Gillian Tett is chair of the FT editorial board and editor-at-large, US. Her book ‘Anthro-Vision: How Anthropology Can Explain Business and Life’ is published on June 8 by Penguin Random House in the UK and Simon & Schuster in the US