THE THREAT
The fascinating thing about doing research in information security is that over the past thirty years it has been a rapidly expanding field. We started off thirty years ago with only a couple of areas that we understood reasonably well—the mathematics around cryptography, and how you go about protecting operating systems by means of access controls and policy models—and the rest of it was a vast fog of wishful thinking, snake oil, and bad engineering.
There were only a few application areas that people really worried about thirty years ago: diplomatic and military communications at one end, and the security of things like cash machines at the other. As we’ve gone about putting computers and communications into just about everything that you can buy for more than ten bucks that you don't eat or drink, the field has grown. In addition to cash machines, people try and fiddle taximeters, tachographs, electricity meters, all sorts of devices around us. This has been growing over the past twenty years, and it brings all sorts of fascinating problems along with it.
As we have joined everything up together, we find that security is no longer something that you can do by fiat. Back in the old days, thirty years ago, I was working for Barclays Bank looking after the security of things like cash machines, and if you had a problem it could be resolved by going to the lowest common manager. In a bureaucratic way, things could be sorted by policy. But by the late 1990s this wasn’t the case anymore. All of a sudden you had everything being joined up through the World Wide Web and other Internet protocols, and suddenly the level of security that you got in a system was a function of the self-interested behavior of thousands or even millions of individuals.
This is something that I find truly fascinating. We’ve got artifacts such as the world payment system to study, where you’ve got billions of cards in issue, millions of merchants, tens of thousands of banks, and a whole bunch of different protocols. Plus, you’ve got a lot of greedy people who, even if they aren’t downright criminal, are trying to maximize their own welfare at the expense of everybody else. Seeing this in the late ‘90s made us realize that we had to get economics on board. One of the phase changes, if you like, was that we started embracing social science. We did that not because multidisciplinary work was a trendy way to get grants, but because it was absolutely necessary. It became clear that to build decent systems, you had to understand game theory in addition to the cryptography, algorithms, and protocols that you used.
That came out of a collaboration with Hal Varian at Berkeley, who is now the chief economist at Google. In fact, across the tech industry an understanding of network economics is now regarded as a prerequisite for doing business. We now teach it to our undergraduates as well. If they’re going to have any idea about whether their start-up has got any chance whatsoever, or whether the firm that they’re thinking of joining might be around in five years’ time, then it’s useful to know these things.
It’s also important from the point of view of figuring out how you protect stuff. Although a security failure may be due to someone using the wrong type of access control mechanism or a weak cipher, the underlying reason for that is very often one of incentives. Fundamentally, the problem is that when Alice guards a system and Bob pays the cost of failure, things break. Put in those terms it’s simple and straightforward, but it’s often much more complicated when we start looking at how things fail in real life.
In the payment system, for example, you’ve got banks that issue cards to customers—issuing banks—and you’ve got acquiring banks, which are banks that buy in transactions from merchants and give them merchant terminals. If a bank gives a merchant cheaper terminals to save them money, there may be more fraud, but that fraud falls on the card-issuing banks. So you can end up with some quite unsatisfactory outcomes where there’s not much option but for a government to step in and regulate. Otherwise, you end up getting levels of fraud that are way higher than would be economically ideal.
The next thing that’s happened is that over the past ten years or so, we’ve begun to realize that as systems became tougher and more difficult to penetrate technically, the bad guys have been turning to the users. The people who use systems tend to have relatively little say in them because they are a dispersed interest. And in the case of modern systems funded by advertising, they’re not even the customer, they’re the product.
When you look at systems like Facebook, all the hints and nudges that the website gives you are towards sharing your data so it can be sold to the advertisers. They’re all towards making you feel that you’re in a much safer and warmer place than you actually are. Under those circumstances, it’s entirely understandable that people end up sharing information in ways that they later regret and which end up being exploited. People learn over time, and you end up with a tussle between Facebook and its users whereby Facebook changes the privacy settings every few years to opt everybody back into advertising, people protest, and they opt out again. This doesn’t seem to have any stable equilibrium.
Meanwhile, in society at large, what we have seen over the past fifteen years is that crime has gone online. This has been particularly controversial in the UK. Back in 2005, the then Labour government struck a deal with the banks and the police to the effect that fraud would be reported to the banks first and to the police afterwards. They did this quite cynically in order to massage down the fraud figures. The banks went along with it because they ended up getting control of the fraud investigations that were done, and the police were happy to have less work for their desk officers to do.
For a decade, chief constables and government ministers were claiming that “Crime is falling, we’re doing a great job.” Some dissident criminologists started to say, “Hang on a minute. Crime isn’t actually falling, it’s just going online like everything else.” A year and a half ago, the government started publishing honest statistics for the first time in a decade. They found, to their disquiet, that online and electronic crime is now several times the rate of the traditional variety. In fact, this year in Britain we expect about one million households will suffer a traditional property crime like burglary or car theft, and somewhere between three and four million—probably nearer four million—will suffer some kind of fraud, or scam, or abuse, almost all of which are now online or electronic.
From the point of view of policing, we got policy wrong. The typical police force, our Cambridgeshire constabulary for example, has one guy spending most of his time on cybercrime. That’s it. When we find that there’s an accommodation scam in Cambridge targeting new students, for example, it’s difficult to get anything done because the scammers are overseas, and those cases have to be referred to police units in London who have other things to do. Nothing joins up and, as a result, we end up with no enforcement on cybercrime, except for a few headline crimes that really annoy ministers.
We’ve got a big broken area of policy that’s tied to technology and also to old management structures that just don’t work. In a circumstance like this, there are two options for someone like me, a mathematician who became a computer scientist and an engineer. You can either retreat into a technical ghetto and say, “We will concentrate on developing better tools for X, Y, and Z,” or you can engage with the broader policy debate and start saying, “Let’s collect the evidence and show what’s being done wrong so we can figure out ways of fixing it.”
Over the years I found myself changing from a mathematician into a hardware engineer, into an economist, into a psychologist. Now, I'm becoming somebody involved with criminology, policy, and law enforcement. That is something that I find refreshing. Before I became an academic, in the first dozen years of my working life, I would change jobs every year or three just so I kept moving and didn’t get bored. Since I’ve become an academic, I’ve been doing a different job every two or three years as the subject itself has changed. The things that we’re worried about, the kind of systems that are being hacked, have themselves also changed. And there’s no sign of this letting up anytime soon.
How did I end up becoming involved in advocacy? First of all, there’s the cultural background in that Cambridge has long been a haven for dissidents and heretics. The puritans came out of Cambridge after our Erasmus translated the Bible and “laid the egg that Luther hatched.” Then there was Newton. More recently, there have been people such as James Clerk Maxwell, of course, and Charles Darwin. So we are proud of our ability to shake things up, to destroy whole scientific disciplines and whole religions and replace them with something that’s better.
In my particular case, the spur was the crypto wars of the 1990s. Shortly after he got elected, in 1993, Bill Clinton was pitched by the National Security Agency with the idea of key escrow. The idea was that America should use its legislative and other might to see to it that all the cryptographic keys in the world were available to the NSA and its fellow agencies so that everything encrypted could be spied on. This drew absolute outrage from researchers in cryptography and security and also from the whole tech industry. At the time, people were starting to gear up for what became the dot-com boom. We were starting to get more and more people coming online. If you don’t have cryptography to protect people’s privacy and to protect their financial transactions, then how can you build the platform of trust on which the world in which we now live ends up being built?
A whole bunch of us who were doing research in cryptography got engaged in giving talks, lobbying the government, and pointing out that proposals to seize all our cryptographic keys would have very bad effects on business. This worked its way out in different ways in different countries. Here in Britain we had tussles with the Blair government, which started off being against key escrow, but was then rapidly persuaded by Al Gore to get on board the American bandwagon. We had to push back on that. Eventually, we got what’s now the Regulation of Investigatory Powers Act. In the process, I was involved in starting an NGO called the Foundation for Information Policy Research, and later on, when it became clear that this was a European-scale issue as well, European Digital Rights, which was set up in 2002 or 2003.
Europe’s contributions to ending the crypto wars came in the late 1990s when the European Commission passed the Electronic Signature Directive, which said that you could get a presumption of validity for electronic signatures, provided that the signing key wasn’t known to a third party. If you shared your key with the NSA or with GCHQ, as these agencies wanted, you wouldn’t get this special legal seal of approval for the transactions that you made, whether it was to buy your lunch or to sell your house. That was one of the things that ended the first crypto war.
Following on from that, other issues came along: issues concerning copyright, privacy, and data protection. I got particularly involved in issues around medical records: whether they can be kept confidential in an age where everything becomes electronic, where records eventually migrate to cloud services, and where you also have pervasive genomics. This is something I’ve worked on, off and on, for twenty years.
In my case, working with real problems with real customers—and in the case of medicine, I was advising the BMA on safety and privacy for a while—puts things in perspective in a way that is sometimes hard if you’re just looking at the maths in front of a blackboard. It became clear looking at medical privacy that it’s not just the encryption of the content that matters, it’s also the metadata—who spoke to whom when. Obviously, if someone is exchanging encrypted emails with a psychiatrist, or with an HIV doctor, or with a physiotherapist, then that says something about them even if those emails themselves cannot be read.
So we started looking at the bigger picture. We started looking at things like anonymity and plausible deniability. And that, of course, is something that people in many walks of life actually want. They want to give advice without it being relied on by third parties.
Out of these political collisions and related engineering assignments, we began to get a much richer and more nuanced view of what information security is actually about. That was hugely valuable. Becoming involved in activism was something that paid off big time. Even though people like my dad will say, “No, don’t do that. You’ll make enemies,” it turned out in the end to have been not just the right thing to do, but also the right thing from the point of view of doing the research.
~ ~ ~ ~
Computing is different from physics in that physics is about studying the world because it’s there; computer science is about studying artifacts of technology, things that have been made by the computer industry and the software industry. If you work in computing, it’s not prudent to ignore the industry or to pretend that it doesn’t exist.
There’s a long Cambridge tradition of working with leading firms. The late Sir Maurice Wilkes, who restarted the lab after the war, consulted for Lyons and then eventually for IBM. My own thesis advisor, Roger Needham, set up Microsoft Research in Europe after he retired. I’ve worked for companies as diverse as IBM and Google, and I’ve consulted for the likes of Microsoft, and Intel, and Samsung, and National Panasonic.
This is good stuff because it keeps you up-to-date with what people’s real concerns are. It gets you involved in making real products. And as an engineer, I feel a glow of pride when I see my stuff out there in the street being used. Six years ago I took some sabbatical time and worked at Google, where the bulk of my effort went into what’s now Android Pay. That’s the mechanism whereby you can pay using your Android phone to get a ride on the tube or to buy a coffee in a coffee bar.
Twenty-five years ago, in fact, I worked on a project where we were designing a specification for prepayment electricity meters. That may be the thing I’ve done that’s had the most impact, because there are now over 400 million meters worldwide using this specification. We enabled, for example, Nelson Mandela to make good on his election promise to electrify two million homes in South Africa after he got elected in 1994.
More recently, when I went to Nairobi a few months ago, I found that they’re just installing meters of our type. And now that they’re all out of patent, the Chinese manufacturers are stamping these out at ridiculously low prices. Everybody’s using them. That’s an example of how cryptographic technology can be a real enabler for development. If you’ve got people who don’t even have addresses, let alone credit ratings, how do you sell them energy? Well, that’s easy. You design a meter which will dispense electricity when you type in a twenty-digit magic number. The cryptography that makes that work is what I worked on. You can get your twenty-digit magic number if you’re in downtown Johannesburg by going up to a cash machine and getting it printed out on a slip and your account debited. If you’re in rural Kenya, you use mobile money and you get your twenty-digit number on your mobile phone. It really is a flexible and transportable technology, which is an example of the good that you can do with cryptographic mechanisms.
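To make the mechanism concrete, here is a minimal sketch of how such a prepayment token scheme might work, assuming an HMAC-based keyed code rather than the actual cipher and token format used in the deployed standard; the key names, field sizes, and encoding are illustrative only.

```python
import hashlib
import hmac

# Illustrative only: not the real key hierarchy, cipher, or token format.
VENDING_MASTER_KEY = b"example-vending-master-key"  # assumed to be held by the vending system


def meter_key(meter_serial: str) -> bytes:
    """Derive a per-meter key from the master key (assumption: HMAC-based derivation)."""
    return hmac.new(VENDING_MASTER_KEY, meter_serial.encode(), hashlib.sha256).digest()


def make_token(meter_serial: str, kwh: int, token_id: int) -> str:
    """Vending side: issue a 20-digit credit token binding the meter, the amount
    of energy, and a one-time token number so the same token cannot be replayed."""
    msg = f"{meter_serial}|{kwh}|{token_id}".encode()
    tag = hmac.new(meter_key(meter_serial), msg, hashlib.sha256).digest()
    mac_digits = int.from_bytes(tag[:5], "big") % 10**12  # truncate the MAC to 12 digits
    return f"{kwh:04d}{token_id:04d}{mac_digits:012d}"    # 4 + 4 + 12 = 20 digits


def redeem_token(meter_serial: str, token: str, seen_ids: set) -> int:
    """Meter side: recompute the MAC with the shared key, reject replays,
    and return the kWh to credit (0 if the token is invalid)."""
    kwh, token_id, mac_digits = int(token[:4]), int(token[4:8]), int(token[8:])
    if token_id in seen_ids:
        return 0
    msg = f"{meter_serial}|{kwh}|{token_id}".encode()
    tag = hmac.new(meter_key(meter_serial), msg, hashlib.sha256).digest()
    if int.from_bytes(tag[:5], "big") % 10**12 != mac_digits:
        return 0
    seen_ids.add(token_id)
    return kwh


# Example: the utility sells 50 kWh for a (hypothetical) meter serial, token number 17.
if __name__ == "__main__":
    token = make_token("04512345678", 50, 17)
    print(token, redeem_token("04512345678", token, set()))
```

The essential property is that the vending system and the meter share a per-meter secret, so a twenty-digit number printed on a slip or sent to a phone is enough to carry value, while forged or replayed numbers are rejected.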
~ ~ ~ ~
If computer science is about anything at its core, it’s about complexity. It’s relatively straightforward to write short programs that do simple things, but when you start writing long, complex programs that do dozens of things for hundreds of people, then the things you’re trying to do start interacting, and the people that you’re serving start interacting; the whole thing becomes less predictable, less manageable, and more troublesome. Even when you start developing software projects that involve more than, say, a half-dozen people for a month or so, then the complexity of interaction between the engineers who are building the thing starts becoming a limiting factor.
We've made enormous strides in the past forty years in learning how to cope with complexity of various kinds at various levels. But there’s a feedback loop happening here. You see, forty years ago it was the case that perhaps 30 percent of all big software projects failed. What we considered a big software project then would nowadays be considered a term project for a half-dozen students. But we’re still having about 30 percent of big projects failing. It’s just that we build much bigger, better disasters now because we have much more sophisticated management tools.
The limiting factor here isn’t the number of thousands of lines of code; with better tools, you can cope with bigger programs. The limiting factor is social. If you’re running a business, then it’s your job to make a profit for the shareholders. Profit is the reward for risk, so it’s your job to take risks. So if a third of your projects fail, that’s okay. Where it starts to get interesting is when you observe that in public-sector procurement only 30 percent of large projects succeed, and yet ministers and civil servants try to be risk averse and behave in all sorts of strange ways in order to pass the buck and avoid liability. So why do you get this kind of perverse outcome?
You can’t explain complexity purely in terms of process, purely in terms of not using the right development methodology. You also have to understand how people interact in organizations, and how that interacts with the kind of things that you’re doing in projects. When we’re doing that kind of thing, we’re not hugely different from the kind of project management that you might see in a shipyard or on a very large construction site. In fact, we do end up having interesting conversations with people whose job it is to sort out big messes in big construction projects. We find that we’ve got lots to talk about! But IT brings its own complexities, in terms of network effects and technical lock-in and so on, which make it particularly difficult to manage big projects financially.
~ ~ ~ ~
Back in 1983, the big question was, do you develop software for the IBM PC first, or for the Mac first, or for both? For a while, those of us who were writing software would develop for both or for one or the other. But by the end of 1983 it was becoming clear that IBM was pulling ahead, because Shell, BP, Barclays Bank, and the civil service were all starting to buy IBM rather than Mac. Everybody started writing software for the IBM PC first and for the Mac second, if at all.
The IBM PC took off. In fact, Microsoft grabbed all that money because they were smarter than IBM. They realized that what was locking people in wasn’t the fact that the hardware was made by IBM, but that the operating system was made by Microsoft. IBM had thought that they would block this by offering three different operating systems, but of course one of these became predominant, the market tipped, and Microsoft ended up running off with all the cream.
This is an example of network effects. We now understand that they’re absolutely pervasive in the IT industry. It’s why we have so many monopolies. Markets tip because of technical reasons, because of two-sided markets, and also for social reasons. About ten years ago, I had a couple of new research students coming to me, so I asked them what they wanted to study. They said, “You won’t believe this, Ross, but we want to study Facebook privacy.” And I said, “You what?” And they said, “Well, maybe an old married guy like you might not understand this, but here in Cambridge all the party invitations now come through Facebook. If you’re not on Facebook, you go to no parties, you meet no girls, you have no sex, you have no kids, and your genes die out. It’s as simple as that. You have to be on Facebook. But we seem to have no privacy. Can that be fixed?” So they went away and studied it for a few months and came to the conclusion that, no, it couldn’t be fixed, but they had to be on Facebook anyway. That’s the power of network effects. One of the things that we’ve realized over the past fifteen years is that a very large number of the security failures that afflict us occur because of network effects.
Back in the early 1990s, for example, if you visited the Microsoft campus in Redmond and you pointed out that something people were working on had a flaw or could be done better, they’d say, “No, we’re going to ship it Tuesday and get it right by version three.” And that’s what everybody said: “Ship it Tuesday. Get it right by version three.” It was the philosophy. IBM and the other established companies were really down on this. They were saying, “These guys at Microsoft are just a bunch of hackers. They don’t know how to write proper software.”
But Bill had understood that in a world where markets tip because of network effects, it’s absolutely all-important to be first. And that’s why Microsoft software is so insecure, and why everything that prevails in the marketplace starts off by being insecure. People race to get that market position, and in the process they make it really easy for others to write software for their platform. They don’t let boring things like access controls or proper cryptography get in the way.
Once you have the dominant position, you then put the security on later, but you do it in a way that serves your corporate interests rather than the interests of your customers or your users. You do it in such a way that you lock in your customer base, your user base. Understanding that was a big “aha” moment for me, back in 2000 or 2001. It became immediately obvious that understanding network economics in detail was absolutely central to doing even a halfway good job of security engineering in the modern world.
~ ~ ~ ~
People talk about malicious AI and these science fiction stories about what happens if the robots take over. In fact, Martin Rees thinks it would be a good idea if the robots took over, because then they could fly to the stars, an enterprise for which we don’t live long enough. I’m a little bit more of an engineer than that. My concern is that right now you have people whose abilities, consciousness, and perception are enormously enhanced by the use of tools.
I began to realize this in 1996 when I first played with AltaVista, the first proper search engine. I was in the process of helping some lobbying of the government on privacy. We wanted to investigate some companies who appeared to be misbehaving. At the end of an afternoon in which I’d figured out, using AltaVista, how to find out everything about these companies, about their accounts, their directors, their directors’ hobbies and interests, I realized that with a search engine I had the same kind of power at my fingertips that, only a year earlier, the Prime Minister alone had had, with the security and intelligence agencies to do his bidding.
Since then, we’ve seen more of the same. People who are able to live digitally enhanced lives, in the sense that they can use all the available tools to the fullest extent, are very much more productive and capable and powerful than those who are still stuck in meatspace. It’s as if you had a forest where all the animals could see only in black and white, and suddenly, along comes a mutation in one of the predators allowing it to see in color. All of a sudden it gets to eat all the other animals, at least those who can’t see in color, and the other animals have got no idea what’s going on. They have no idea why their camouflage doesn’t work anymore. They have no idea where the new threat is coming from. That’s the kind of change that happens once people get access to really powerful online services.
So long as it was the case that everybody who could be bothered to learn had access to AltaVista, or Google, or Facebook, or whatever, then that was okay. The problem we’re facing now is that more and more of the really capable systems are no longer open to all. They’re open to the government, they’re open to big business, and they’re open to powerful advertising networks.
Twenty years ago, I could find everything about you that was on the World Wide Web, and you could do the same to me, so there was mutuality. Now, if you’re prepared to pay the money and buy into the advertising networks, you can buy all sorts of stuff about my clickstream, and find out where I’ve been staying, and what I’ve been spending my money on, and so on. If you’re within the tent of the intelligence agencies, as Snowden taught us, then there is very much more still. There’s my location history, browsing history, there’s just about everything.
This is the threat. This was a threat before Mr. Trump got elected president. Now that Mr. Trump has been elected, it must be clear to all that a government having very intrusive powers of surveillance is not something that necessarily sits well with a healthy, sustainable democratic society.
~ ~ ~ ~
One of the things that we’re thinking about hard now is the Internet of Things. A lot of people think that the security problems of the Internet of Things are just privacy, and there are plenty of those problems. We’ve seen, for example, a doll being banned in Germany because it’s basically an open mike in your kid’s bedroom, and it’s against privacy law. But the real transformative change with the Internet of Things will be safety. Security will be ever more about safety.
Last year, we did a big project for the European Commission trying to work out what happens to safety regulation in this new world. Europe regulates lots of stuff, just as Washington does. They’ve got agencies regulating cars, trains, planes, medical devices, electricity meters, all sorts of things. How does this change once everything is online? In the old days, car safety was about getting a maker to build a few prototypes, put the crash test dummies in them, bang them against the wall, film the results, analyze them, inspect the software in the engine management unit and the ABS, tick all the boxes, and then the car goes into production. So it’s pre-market test and inspection, and it has got about a ten-year cycle.
What’s starting now is that you’re getting software updates in one car after another. Tesla is updating regularly, for example; Ford is starting to update over the air; Toyota says they will by 2019. Within a few years, every car will be updating its software perhaps once a month.
Now this is both good and bad. It’s good because it means that if there’s a safety flaw, you can fix it in the entire car fleet without having to spend billions of dollars recalling them all to the garage. It’s bad because it’s going to be an enormous challenge to the safety regulators: they’re going to have to work at a hundred times the speed, on a time constant of one month rather than a time constant of ten years.
It’s also going to bring in enormous complexity in software updating, because a car isn’t just one central computer. A car might have a hundred different CPUs in it, and many of the critical subsystems aren’t made by the brand whose badge is on the front of the car. They’re not made by Mr. Mercedes, for example, but by Mr. Bosch or Mr. ZF or whoever. How do you go about managing all that? How do you do the testing? How do you do the integration? How do you see to it that the upgrades get shipped? It’s already hard enough to get upgrades for your mobile phone if it’s a device that’s no longer actively being sold. So we’ve got all these problems.
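As a concrete illustration of one small piece of that puzzle, here is a minimal sketch of how a vehicle might check an over-the-air update from a third-party supplier before installing it, assuming Ed25519-signed manifests via the Python `cryptography` package; the manifest fields and function names are hypothetical, not any manufacturer’s actual update protocol.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def check_update(manifest_bytes: bytes, signature: bytes, image: bytes,
                 supplier_key: Ed25519PublicKey):
    """Accept an update only if (a) the manifest is signed by a key the vehicle
    maker trusts for that supplier, and (b) the firmware image matches the hash
    named in the manifest. Returns the parsed manifest, or None on rejection."""
    try:
        supplier_key.verify(signature, manifest_bytes)  # raises on a bad signature
    except InvalidSignature:
        return None
    manifest = json.loads(manifest_bytes)
    if hashlib.sha256(image).hexdigest() != manifest["image_sha256"]:
        return None  # image was swapped or corrupted in transit
    return manifest


# Example: a hypothetical brake-controller supplier signs an update manifest.
if __name__ == "__main__":
    supplier_private = Ed25519PrivateKey.generate()
    image = b"\x00" * 1024  # stand-in for the firmware binary
    manifest = json.dumps({
        "ecu": "brake_controller",
        "version": "2.1.0",
        "image_sha256": hashlib.sha256(image).hexdigest(),
    }).encode()
    signature = supplier_private.sign(manifest)
    print(check_update(manifest, signature, image, supplier_private.public_key()))
```

Scaling something like this to a hundred CPUs from dozens of suppliers, with keys that have to be managed for the decades a car stays on the road, is where much of the engineering and regulatory difficulty lies.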
Why does this matter? If you get a safety flaw in a traditional car, think of the A-Class Mercedes, which would roll if you braked and swerved too hard to avoid an elk, the maker fixes it: Mercedes shipped a service pack that changed the steering geometry, and nobody died, so that’s okay. But if you’ve got a flaw that can be exploited remotely over the Internet, if you can reach out and put malware in ten million different Jeeps, then that’s serious stuff. This happened for the first time in public a couple of years ago, when a couple of guys drove a Jeep Cherokee off the road. Then the industry started to sit up and pay attention.
That can also be used as a diplomatic weapon. You want sanctions on Zimbabwe? Just stop all the black Mercedes motor cars that Mr. Mugabe hands out to his henchmen as payment. We raised that with the German government. What would your reaction be to an American demand to do that? Well, it was absolute outrage! So diplomacy comes in here.
Conflict also comes in. If I’m, let’s say, the Chinese government, and I’m involved in a standoff with the American government over some islands in the South China Sea, it’s nice if I’ve got things I can threaten to do short of a nuclear exchange.
If I can threaten to cause millions of cars in America to turn right and accelerate sharply into the nearest building, causing the biggest gridlock you’ve ever seen in every American city simultaneously, maybe only killing a few hundred or a few thousand people but totally bringing traffic to a standstill in all American cities—isn’t that an interesting weapon worth developing if you’re the Chinese Armed Forces R&D lab? There’s no doubt that such weapons can be developed.
All of a sudden you start having all sorts of implications. If you’ve got a vulnerability that can be exploited remotely, it can be exploited at scale. We’ve seen this being done by criminals. We’ve seen 200,000 CCTV cameras being taken over remotely by the Mirai botnet in order to bring down Twitter for a few hours. And that’s one guy doing it in order to impress his girlfriend or boyfriend or whatever. Can you imagine what you can do if a nation-state puts its back into it?
All of a sudden safety becomes front and center. And that, in turn, changes the policy debate. At present, the debate about access to keys, with Jim Comey’s grumblings in the USA and our own Investigatory Powers Act here in Britain, has been about whether the FBI or the British Security Service should be able to tap your iPhone, for example by putting malware on it. People might say, “Well, there’s no real harm if the FBI goes and gets a warrant and taps John Gotti’s phone. I’m not going to lose any sleep over that.” But if the FBI can crash your car? Do you still want to give the FBI a golden backdoor key to all the computers in the world? Even if it’s kept by the NSA, maybe the next Snowden doesn’t take the golden key to The Guardian; maybe he sells it to the Russian FSB.
We suddenly get into a very different policy terrain where the debates over who gets access to whom, and when, and how, and why, are suddenly sharp. It’s not just your privacy that’s on the line anymore, it’s your life.
~ ~ ~ ~
The Internet, like many other human artifacts, brings more blessings than curses, otherwise we wouldn’t keep it going. We’d turn it off! And like many things, it’s three steps forward and one step back. The industry is rather bad at recognizing the backward steps. It tends to hope that other people will clean up its messes, as previous industries have done in the past.
There are very interesting parallels with the history of the railways, for example, and the history of the canals before that, which eventually needed regulation. You needed regulation saying that the railways had to carry all freight at the same rate, and that you couldn’t discriminate and do sweetheart deals with companies that your brother-in-law owned. The early railway barons did that, and it was totally exploitative; they managed to extract a whole lot of the value from the areas that they served.
Similarly, you have to have the regulatory arms of government awake and on the job, seeing to it that they defend things like net neutrality, something that now appears to be under threat from the US administration. Do we want to slide back toward a Gilded Age, where a small number of robber barons manage to extract all the surplus from everybody else? Great, if you’re building vast houses in Newport or selling very fancy yachts, but ultimately that brings social costs. Ultimately, it brings pushback, and ultimately your FDR, or whoever the new revolutionary president is, has to push back on it big time.
It’s not the elected officials that create the inequality, it’s the nature of the business itself. To build railways, you have to have an Act of Parliament giving eminent domain to the railway company over a strip of land; otherwise the Duke of Roxburgh holds you to ransom for the whole value of a railway line between Scotland and England, because he owns that strip of land. You have to have eminent domain. But once you’ve got eminent domain, you’ve got a natural monopoly, and if that monopoly can then charge every customer at his marginal willingness to pay, it can extract all the value and then some.
~ ~ ~ ~
The big project that we have at the moment, the Cambridge Cyber Crime Center, has as its mission making cybercrime research a science. Up until now, basically no research has been repeatable. Somebody doing a PhD, for example, would go out and collect some data; he might spend a year or two persuading a company that he was sufficiently trustworthy to get access to some of their logs; write some programs; write his PhD thesis; then he’d go off to work for Facebook, and the data would no longer be available to anybody. If you looked at his paper and thought you could have done that analysis better, there was no way to get your hands on the data, or on equivalent data, so that you could run your own analysis.
We’ve raised the money and the support from various corporates to have a center that will run for five years with a half-dozen people. It will collect a whole lot of data from different sources, from takedown companies, from big service companies, from registrars, from all sorts of places, and will make this available to academics who are prepared to license its use.
Much of the data is slightly dirty. None of it is really sensitive personal data; we don’t touch things like credit card numbers, but much of it has got things like IP addresses, which do raise some concerns under the privacy regimes of some countries. So everybody has to sign an NDA. We’ve got standard incoming and outbound license agreements so that companies who are comfortable doing this can let us have the data, on the understanding that the only people who will access it will be bona fide academics who have signed an appropriate license agreement and NDA.
This means that for the first time, we’ve got a curated trove of data which will enable people to do research on the same basis as people inside Google, or Facebook, or the NSA who are working on cybercrime. This kind of research will be repeatable, it will be scalable, and it will be open to many different research teams to start competing with each other.
In 2020, when we go and pitch for our next helping of funding, we’ll judge its success not by how many papers we’ve written about frauds and scams and abuse online, but by how many papers other people have written. It’s the absence of this kind of shared resource that we believe has held up research on the subject for the past decade.
~ ~ ~ ~
What we have learned over the past few years is that all of the world’s conflicts are acquiring an online element. This is happening with crime, where instead of burglarizing your house someone may steal from your bank account remotely. Many people may think this is an improvement: you’re not at personal risk, and with any luck the bank will make good your loss. Similarly, conflicts that have a diplomatic or military edge are moving online. We’ve seen the use of online attacks and propaganda in conflicts in Georgia and elsewhere. We’ve seen the NSA and the Israelis attacking Iranian centrifuges using malware, which appears to have led to a reasonable outcome in terms of the Iran peace deal. It was certainly less destructive than sending in the warplanes to bomb the towns.
Is the use of cyber conflict instead of armed conflict using planes, and tanks, and drones an improvement? Well, we’re going to have to wait and see. There is the risk that the threshold for starting a cyber conflict will be lower. People will think that they can get away with it, that attribution is hard. They often make mistakes on that. The UK government made a very serious mistake when they thought that they could break into Belgacom in order to wiretap the European Commission, because Snowden blew the whistle on what they were doing. That seriously annoyed people in the European Commission, just at a time when Britain doesn’t really need that. There are hazards involved in lowering the threshold for conflict.
As far as electoral conflict is concerned, we have seen the progressive adoption of social media techniques and messaging, not just in national elections in America and Britain, going right back to Obama’s first election, but also in referenda. We saw it in the Scottish referendum in 2014, where nationalist supporters were hounding and abusing people who were in favor of remaining in the UK, because they were more militant and more active online.
We’ve seen the same thing in the Brexit referendum in the UK. And we’ve certainly seen, in the USA, how these techniques were used more effectively by Mr. Trump than by Mrs. Clinton.
What that's going to teach everybody is that if you’re in the business of politics, you have to get good at this stuff and you have to get good fast, otherwise you’re out of a job. There’s going to be a lot of rapid and aggressive development of techniques of intrusive surveillance, of psychological profiling of voters, and micro-targeting of political messages. And we don't know what the consequences of doing that will be.
We already know that there’s a tendency for people to clump into a red universe and a blue universe, where you hear only messages that are congenial to you, because Mr. Facebook and Mr. Google will direct to you the messages that keep you on their sites longer so that you’ll click on more ads. As a result, we have less of a polis, less of a political space where we can all interact and discuss things with people with whom we don’t necessarily agree all the time.
What happens when that goes micro-targeted? You’ve got somebody who’s interested in gun control and gets messages only about gun control; or somebody who’s opposed to gun control; or somebody who’s in favor of raising the retirement age, or of lowering it. Each of them gets these messages obsessively, all the time, at the cost of any broader political debate in society.
Is this perhaps what’s happening with fake news? Who’s to say? We’ve always had fake news, for as long as we’ve had tabloid newspapers, which has been a century. There have been newspaper editors who played the man not the ball. Is this going to become the new normal and, if so, what happens to democracy? These are the sorts of problems that we’re going to be wrestling with for the next decade.