The late 2018 congressional hearing of Google CEO Sundar Pichai felt like a replay. Replace him with Facebook’s Mark Zuckerberg or Twitter’s Jack Dorsey from earlier hearings that year, and a familiar narrative emerges: senators asking the most basic of technological questions, tech leaders struggling to explain the social repercussions of their products, and both sides sidestepping their accountability to users and the public.
Yet, while it’s easy to berate these senators for their technological ignorance, many of the rest of us are equally ill-informed. Technology is moving so fast, becoming so complex, that it seems like alchemy. We are living in a “Black Box Society” — as law scholar and author Frank Pasquale argues in his book of the same name — in which corporations abuse secrecy for profit, pushing us further into an incomprehensible world. If there’s one lesson to learn from these congressional hearings, it’s that technology itself isn’t the villain, but the opaque systems in which those who make it operate certainly could be.
Technologies don’t exist in a vacuum, but are shaped, guided and exploited by the powers that be. “No technology can be considered separately from the context in which it’s produced, and no science can be seen separately from that context,” says Kanta Dihal, a research associate at the Leverhulme Centre for the Future of Intelligence, an interdisciplinary AI research centre based in Cambridge. “Taking humans out of the loop — just saying it’s not in our hands — misrepresents the way technology is developed. The way to prevent a bleak future is to interrogate the people who are making the technology and see why they are deploying it — what they’re plugging into it.”
Dihal’s perspective is one that should be kept in mind as humanity develops what is arguably its most powerful technology yet: artificial intelligence. Described by Jack Clark, policy director at the research institute OpenAI, as an “omni-use” technology, artificial intelligence will impact almost every area of society. Indeed, to talk about AI only in the realm of super-intelligent machines is to confine it to a narrow frame of understanding, and to understate the social impact it is already having in day-to-day life. From border patrols to police departments, healthcare to immigration, the effects of these systems are already very real.
As artist and technologist James Bridle wrote in his book, New Dark Age:
If we do not understand how complex technologies function, how systems of technologies interconnect, and how systems of systems interact, then we are powerless within them, and their potential is more easily captured by selfish elites and inhuman corporations.
Google is one of AI’s greatest pioneers. Pichai himself has said that the company’s future efforts will be “AI first”, indicating that the world is at an inflection point with this technology. It has, to its credit, developed some of the most successful AI applications to date, from image recognition and translation to speech recognition and expert board-game play.
That last achievement came in March 2016, when AlphaGo — an AI system developed by researchers at Google-owned DeepMind — defeated professional Go player Lee Sedol in a match that shocked the world. Wired reporter Cade Metz recounted it as both sad and beautiful to watch. The 37th move of the second game, in particular, was described as non-human by Fan Hui, the European Go champion watching from the sidelines. “I’ve never seen a human play this move. So beautiful,” he said. Move 37 was so unexpected and original that Lee Sedol left the room to compose himself.
AlphaGo was followed by another breakthrough, AlphaGo Zero: a kind of tabula rasa system that taught itself to play Go from scratch — without data from human players — by competing against itself. Though this is an extraordinary achievement, it doesn’t suggest any significant progress towards more generally intelligent machines. The rules of Go are still coded by humans, and AlphaGo Zero is still limited to the narrow task of playing a single strategic game, even if it can be creative within it. Nevertheless, Simon Beard, a researcher at the Centre for the Study of Existential Risk, says it represents a “paradigm shift” in the capability of machines.
Until recently, systems were built with predefined rules and expertise in a single task — the most famous was IBM’s Deep Blue, which beat Garry Kasparov at chess in 1997. But in the past 10 years, algorithms known as neural networks — a family of machine-learning techniques loosely modelled on the way the human brain processes information — have taken off. This was largely thanks to increases in computational power and available data, which allowed these layers of interconnected “neurons” to learn: to identify patterns and make decisions without being explicitly programmed.
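To make that idea concrete, here is a minimal sketch — a toy network written in Python with NumPy, not any real system — of what “learning without being explicitly programmed” means. The network is shown only the inputs and outputs of the XOR function; the rule itself is never written down, yet the weights gradually organise themselves to reproduce it.

```python
# A toy two-layer neural network learning XOR — a minimal sketch with invented
# numbers; only NumPy is needed.
import numpy as np

rng = np.random.default_rng(0)

# Four example inputs and their XOR labels: output 1 only when the inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialised weights and biases: a hidden layer of eight "neurons"
# feeding one output neuron. No rule for XOR appears anywhere below.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

for step in range(20000):
    # Forward pass: each layer transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight slightly in the direction that
    # reduces the prediction error (gradient descent).
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden;    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(output.round(2).ravel())  # approaches [0, 1, 1, 0] without XOR ever being coded as a rule
```

Scale those few dozen weights up to millions, stack many more layers, and the same principle produces systems whose internal reasoning no one can easily read — which is where the black box problem begins.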
Beautiful as move 37 may be, Fan Hui’s judgement of it as non-human hints at the increasing alienness of AI in the wake of neural networks. We can see the move AlphaGo made, but we don’t understand how or why it made it. Even experts don’t always understand how these neural networks make their decisions. Beyond the context of Go, this opacity is a major cause for concern.
Beard says: “The basic fear is that this gives the intelligence the capacity to start running away from human ability to understand and control what’s going on. That doesn’t mean that it will do anything bad. Probably it won’t. But we worry about it at Existential Risk because of the simple fact that if anything did go wrong, how on earth would we stop it?”
On March 1 2017, a machine fired software engineer Ibrahim Diallo from his job. First, his microchipped pass failed to let him into his skyscraper office in LA. Then, his computer system login was disabled. Eventually, he was escorted out of the building. “I was fired,” he wrote on his blog a year later. “There was nothing my manager could do about it. There was nothing the director could do about it. They stood powerless as I packed my stuff and left the building.”
Although it was simply the result of an automated system gone awry, it’s shocking how quickly Diallo’s employers decided they were helpless, ceding decision-making control to a supposedly rational machine. They forgot that machines are still the product of imperfect humans. The machine may have fired Diallo, but human error was to blame.
Like humans, AI is capable of a multitude of mistakes, not least because its human developers unwittingly encode it with bias. The result is systems prone to decision-making that can exacerbate social, economic and political inequalities. Indeed, this reproduction of systematic errors is a much more concrete danger to humanity than super-intelligent killer robots — for most experts, the latter is a distraction, and frankly, over-discussed.
AI is only as good as the data given to it, and data is generated by humans. If AI is trained with data from the past, it will go on to reproduce history. This is worrying because, as artist and geographer Trevor Paglen says,
The past is a very racist place. And we only have data from the past to train artificial intelligence.
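Paglen’s point can be made concrete with a toy example. The sketch below — entirely invented data, not any real hiring system — trains a simple logistic-regression-style model on past hiring decisions in which one group was systematically penalised. The model learns a negative weight on group membership: it absorbs the historical discrimination along with everything else, and would keep applying it to future candidates.

```python
# A minimal sketch of bias being learned from "the past". The data, the penalty
# and the model are all hypothetical; only NumPy is needed.
import numpy as np

rng = np.random.default_rng(1)

n = 4000
skill = rng.normal(0, 1, n)        # what should determine the decision
group = rng.integers(0, 2, n)      # 0 or 1 — should be irrelevant
# Historical decisions: skill counted, but members of group 1 were penalised.
hired_past = (skill - 1.2 * group + rng.normal(0, 0.5, n)) > 0

# Fit a logistic-regression-style model to those decisions by gradient descent.
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - hired_past) / n

print("learned weight on group membership:", round(float(w[1]), 2))
# The clearly negative weight shows the model has absorbed the old penalty:
# trained on a discriminatory past, it reproduces that past.
```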
Racial bias in algorithms is already a well-documented issue. In early 2018, based on research into facial recognition systems from Microsoft, IBM and Megvii, The New York Times reported 99% accuracy for white men but markedly “more errors” for those with darker skin. Women and people of colour were therefore more likely to be falsely identified. In the context of law enforcement, that means these demographics are more likely to be stopped or treated as suspects. Alarmingly, 2018 figures from Big Brother Watch, a UK privacy watchdog, revealed that facial recognition software trialled by London’s Metropolitan Police returned false matches 98% of the time. More recently, Amazon’s facial recognition tool, Rekognition, misidentified 28 members of Congress as people who had been arrested for criminal activity. They were disproportionately people of colour.
Further complicating matters is the issue of representation. The AI community is overwhelmingly made up of white, western men, who unavoidably, and often inadvertently, carry their own unconscious biases into their work. This feeds a lack of diversity in datasets, which in turn shapes the output of algorithms. One of the projects at the Leverhulme Centre, Global AI Narratives, is trying to tackle this by bringing underrepresented voices into the global debate. The Centre recognises that unless these cultural and social power relations are understood and kept in check, our machines can only reproduce and exacerbate existing discrimination.
“It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems,” said John Giannandrea, Google’s AI chief, at a 2017 conference. “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”
There are two broad approaches to making AI transparent: “interpreting” the black box, and opening up the data on which it is trained. Interpretability is the technical challenge of explaining AI. One of the first official projects in this field is Explainable AI (XAI), run by the Defense Advanced Research Projects Agency (DARPA), the research arm of the US Department of Defense known for catalysing the internet as we know it today. The goal of XAI is to create techniques that let machines explain their processes without compromising performance — turning the black box into what is sometimes called a “glass box” — so that humans can place greater trust in them.
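What might “interpreting” a black box look like in practice? One of the simplest techniques to picture is permutation importance, sketched below in Python — a generic illustration, not DARPA’s XAI programme; the “loan” data, the stand-in model and the feature names are all invented. You shuffle one input feature at a time and measure how much the model’s accuracy drops: the features it genuinely relies on cause the biggest fall, even though you never look inside the model itself.

```python
# Permutation importance on an opaque model — a minimal sketch with made-up data.
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: 1,000 hypothetical loan applicants with three features.
n = 1000
income = rng.normal(50, 15, n)
debt = rng.normal(20, 8, n)
postcode = rng.integers(0, 10, n).astype(float)             # irrelevant to repayment here
X = np.column_stack([income, debt, postcode])
y = (income - debt + rng.normal(0, 5, n) > 30).astype(int)  # "repaid" label

def black_box(X):
    # Stand-in for a model whose internals we are not allowed to inspect.
    return (X[:, 0] - X[:, 1] > 30).astype(int)

baseline = np.mean(black_box(X) == y)
print(f"baseline accuracy: {baseline:.3f}")

for i, name in enumerate(["income", "debt", "postcode"]):
    X_shuffled = X.copy()
    X_shuffled[:, i] = rng.permutation(X_shuffled[:, i])
    drop = baseline - np.mean(black_box(X_shuffled) == y)
    print(f"shuffling {name:>8} costs {drop:.3f} accuracy")
# Income and debt matter; the postcode column changes nothing — a first,
# crude window into what the black box is actually doing.
```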
Openness of data involves looking at the data used to train an algorithm: the methods used in its collection, how it is stored and processed, and by whom. It also demands transparency around process — the assumptions and choices built into an AI, and how those choices are determined — and that the stakeholders and funders of AI research be known to the public, with their motives made clear. Mathematician Cathy O’Neil offers a service along these lines, performing third-party audits of algorithms that examine both the software and its developers in order to flag biases in the process.
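What does such an audit actually check? One of the most basic tests — a sketch of the general idea with invented numbers, not O’Neil’s own methodology — is to compare a system’s decision rates across demographic groups:

```python
# Comparing a hypothetical hiring model's positive-decision rates across two groups.
import numpy as np

rng = np.random.default_rng(3)

# Invented decisions for 5,000 applicants from two demographic groups.
group = rng.choice(["group_a", "group_b"], size=5000, p=[0.7, 0.3])
hired = np.where(group == "group_a",
                 rng.random(5000) < 0.30,   # assumed 30% positive rate for group A
                 rng.random(5000) < 0.18)   # assumed 18% positive rate for group B

rate_a = hired[group == "group_a"].mean()
rate_b = hired[group == "group_b"].mean()
print(f"selection rate A: {rate_a:.2f}  B: {rate_b:.2f}  ratio: {rate_b / rate_a:.2f}")
# A ratio well below 0.8 — the "four-fifths" rule of thumb used in US employment
# law — would flag the system for closer human scrutiny.
```

A real audit goes much further, into the training data, the choice of target variable and the people making those choices, but even this crude ratio shows how quickly disparities surface once someone is allowed to look.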
Ultimately, the goal of transparent AI is accountability. When things go wrong, someone has to answer for those mistakes. Current black box systems make that an impossibility. As it stands, according to the latest report by female-led institute AI Now, “frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grows, the lack of meaningful accountability and oversight — including basic safeguards of responsibility, liability, and due process — is an increasingly urgent concern.”
If 2018 was the year of tech’s reckoning, it was also the year of tech’s consciousness-raising. Zuckerberg’s infamous “move fast and break things” motto — once an expression of earnest entrepreneurialism — is now a sober warning that speed often comes at the expense of safety. The public is increasingly alert to the impact of the tools that shape their lives, while those responsible for creating technology grow wary of the ways in which their creations could be weaponised.
Last June, thousands of Google employees signed a petition — and some quit in protest — to stop their employer continuing artificial intelligence work on the Pentagon’s Project Maven, an initiative to process drone surveillance footage gathered in counterterrorism and counterinsurgency operations. Google conceded. Days later, workers at Microsoft, Amazon and Salesforce urged their CEOs to stop providing AI services to US Immigration and Customs Enforcement and local police departments.
This unrest points to an increasing awareness of the very real consequences of AI systems in the hands of bad actors. It is a provocation for others to question authority, and a demand for a seat at the table. More importantly, it recognises that power doesn’t lie within AI itself, but with those who control it. If and when things go wrong, “we call it error, not terror,” says Beard. “It’s us getting things wrong. It starts and ends with the people who designed and decided to initiate the system.”