The Bulletin of the Atomic Scientists, 3/4/17
Gone are the days of cipher machines and cleverly disguised pistols – today’s spies need more sophisticated tools. When American intelligence agencies encounter a head-scratcher like, “How do we predict a coup?” or, “How do we improve agents’ pattern-recognition skills?,” they turn to the Intelligence Advanced Research Projects Activity (IARPA). Its director, Jason Matheny, has one of the more interesting resumes in a city full of people with interesting resumes. He has a master’s degree in public health and a doctorate in economics. Before coming to IARPA, he worked for the Applied Physics Laboratory and the Future of Humanity Institute, and founded an organization that grows meat from cells.
Today his job is to make sure US spycraft stays on the cutting technological edge. Sadly, the IARPA office, just inside the Capital Beltway in College Park, Maryland, contains no Q Branch-style lab where outbound operatives pick up the latest gadgets – in fact, it contains no labs at all, just white boards and drawers full of contracts, according to Matheny. With a modus operandi similar to the one the Defense Advanced Research Projects Agency uses to serve defense needs, IARPA comes up with specific problems to solve, then signs up university labs, private companies, and other institutions to carry out the research, competing against one another to hit milestones.
In the decade since IARPA launched, it has become best known for its research programs in quantum computing and machine learning. Its biggest recent increase in investment, though, has been in biosecurity. That’s one of the three areas of defense technology Matheny worries about most as potential threats; the other two are cybertechnology and artificial intelligence. In this wide-ranging interview with Bulletin contributing editor Elisabeth Eaves, conducted by telephone in December 2016, Matheny also talks about the latest research on whistle-blowers, how to predict a cyber attack, and why IARPA seems surprisingly transparent compared to intelligence agencies of yore.
What is the process by which something goes from an idea in your head to a tool in the hands of a US intelligence agency?
It’s usually not an idea in my head. We have program managers in the government who are themselves outstanding scientists and engineers and have a vision for high-risk, high-payoff research programs that address critical needs for national intelligence. They design a solicitation for proposals that we then put out to the research community in industry and academia. We select and fund the strongest research teams, which then compete against one another on the same set of metrics and milestones. It’s a tournament model for research, which is something that distinguishes IARPA and DARPA [the Defense Advanced Research Projects Agency] from most other research organizations.
What is your relationship with DARPA?
DARPA serves the defense community and we serve the intelligence community. You can think of it as DARPA serving war-fighter needs, whereas IARPA serves national intelligence needs. We work closely with them. We have some DARPA program managers who become IARPA program managers and vice versa. We also serve on each other’s review panels. We very often will decide, “This seems like a research effort that’s more appropriate for DARPA,” or “This seems like a research effort that’s more appropriate for IARPA.” We’re also structured very similarly and use a lot of the same processes. We have term-limited program managers. We do these competitive programs. We exclusively fund external research. There’s no research that goes on in-house.
IARPA doesn’t have a lab of its own?
No, we don’t. Neither IARPA nor DARPA runs a lab, which is always a letdown for people who come and visit. They’re expecting to see a quantum computer in our basement. It’s just a whole lot of white boards and drawers full of research contracts.
Before they come up with ideas for areas of research, how do program managers first learn what intelligence agencies need?
The needs are collected from around the community in a few different documents. There’s something called the National Intelligence Priorities Framework. There’s also something called the Intelligence Community Science and Technology Strategic Plan. Those documents identify gaps between intelligence needs and current capabilities. The job of a program manager here is to translate that gap into a solicitation for research proposals. The job of the program manager is not to design the research, or come up with the technological innovation that can fill the gap, but instead to find an effective way of getting the best and brightest scientists out in the community to solve the problem. We leave the innovation to the people on the outside, but we try to find a way of eliciting that innovation and measuring it so that we can figure out what’s working and what’s not.
Is it hard to attract and retain good program managers? How do you do it?
It’s a continuous job, recruiting program managers, because they can stay here a maximum of five years and most stay a shorter amount of time, sometimes because there’s a term limit on their detail if they’re coming from another government agency. They also get a lot of offers from elsewhere, from industry and academia, because they do a great job and they develop a great network while they’re here. We know going into any program that we’re not going to keep the program manager for very long, so we’re constantly hiring. We hire out of universities. We hire out of industry labs. We hire out of national labs. That’s a big part of my job here.
The variety of different projects that IARPA funds is kind of mind-boggling. Is there one particular area that is getting the bulk of funding and attention right now?
Historically the work we’re best known for is advanced computing (especially quantum computing), machine learning, and human judgment. Those three areas are probably the ones most people who know us associate with our work. We’re still investing heavily in all three.
Our biggest increase in investment is in biosecurity, and that is because there is a gap between our current intelligence capabilities and the capabilities we feel we need to address future bio-risks.
Increase over what time period?
The last 12 months.
I read in a previous interview that synthetic biology is one of the things that keeps you up at night. With all your expertise across different areas, why does that one stand out to you?
Biology is hard for national intelligence for a few different reasons. I think we’re all confident that the developments in synthetic biology will be a net benefit for society, given the advances it will bring in bio-medicine, bio-materials, and agriculture. On the other hand, some of the same characteristics that make biology so useful also pose risks. Among those is the fact that it self-replicates, so you can turn a small inventory of weapons into a large arsenal. Then there is the fact that biotechnology is widely distributed. The technology and expertise needed to modify infectious agents are much more widely distributed today than they were, say, 10 years ago, and that trend continues. Techniques that were previously limited to grad-student-level research are now being taught in high schools, and similarly, the kinds of infrastructure you need in order to, say, edit genomes are much more widely accessible and much more scalable today than they were even a few years ago.
So it self-replicates, and the infrastructure and expertise are widely distributed. A third characteristic is that biotechnology lacks distinctive signatures, which makes it hard to distinguish between legitimate and malicious use. All three of those characteristics make synthetic biology different from, say, nuclear weapons. Not to belittle nuclear weapons – they’re still also a priority for us. But one positive thing you can say about nuclear weapons is that the materials and expertise are uncommon. They’re harder to hide, and most importantly, nuclear weapons don’t self-replicate. If you leave a nuclear weapon in a room and come back a month later, there’s still just one nuclear weapon. That means arsenals are harder to build.
I think that makes biology special. I personally worry more about biological accidents than I do about bioterrorism or biowarfare, because there are more people who don’t intend to do harm than people who do, so the potential for unintentional harm seems greater. Just playing the odds, it seems more likely that we’ll suffer from a biological accident than from an intentional attack.
An example of the kind of thing I do worry about would be accidentally conferring on an organism a characteristic that you didn’t intend – for instance, increased virulence, or resistance to existing vaccines, antibiotics, or antivirals. Or laboratory accidents that release an organism you knew was a potential threat, due to mishandling or accidental escape. Or the accidental modification of organisms that have important environmental functions – that are not pathogens, but play an important ecological role.
IARPA has had projects that do predictive tasks, such as predicting disease outbreaks. How do natural disease outbreaks dovetail with national security interests?
When we think about the security of the United States or other countries, or our global security, diseases are enormously disruptive. Quantitatively they are the most devastating events that humanity has suffered. The flu pandemic of 1918 killed more than 50 million people in a single year. Diseases affect global and national security in important ways, first through mortality and then also through morbidity, which causes economic distress. They also cause instability, which can lead governments and societies to make poor decisions under stress. Disease is a key national security issue.
IARPA holds a lot of competitions in which you request proposals and issue invitations that seem open to pretty much anyone in the world. How does this approach differ from the way the government used to develop intelligence tools? What are the pros and cons of this kind of openness?
We are unusual in the intelligence community because we’re so outwardly focused. That’s part of the reason IARPA was created – to be an organization that would interact with industry and academia in order to solve some of our hardest scientific and engineering challenges. That’s because our scientific problems are too complex for the intelligence community to solve within our own buildings. We have to go out to academia and industry to solve them. The advantage of that is that we get the best scientists and engineers in the world to help solve our problems, which typically means that we can solve them more quickly and we have a higher probability of solving them at all.
The disadvantage is that you need to determine well in advance whether it’s the sort of technology that you can share with the world. We spend a lot of effort assessing what research we can keep unclassified and what research we can’t. That involves risk assessment. It involves understanding how foreign potential adversaries could use the technology, how the technology could be used against us.
Fortunately, most of those assessments tell us that the research can be kept open. This may be because there isn’t an adversary – for example, our work on detecting disease outbreaks can be kept unclassified because there isn’t an intelligent adversary in diseases that is reading our publications and changing its strategies. If we’re focused on detecting natural disease outbreaks, we don’t have to worry about influenza reading the journal articles. A second reason we can keep research unclassified is because the particular stage of technology development may be too early for a potential adversary to benefit from it. Or we may make the assessment that the benefits of openness far outweigh the costs. Maybe it’s a technology that we depend on more critically than others do, and thus, developing a defense or shoring up the resilience of that technology is going to benefit us far more than it would others. But that is the disadvantage of doing research in the open: We have to invest so much in making sure we’re making the right determination.
Is the reason you need to go outward to get the best people that there’s just more knowledge in the world these days?
I think that’s true. I think there was a line about how Isaac Newton belonged to the last generation in which a single person could know most or all of the scientific knowledge that existed. That probably stopped being possible two or three centuries ago, and today is markedly different even from 30 years ago. You can’t expect that most of the scientific knowledge that’s relevant and important to national intelligence can be contained and leveraged by the employees of the intelligence community. It’s simply too vast a technological landscape. We’re firm believers in a principle sometimes attributed to [Sun Microsystems co-founder] Bill Joy, which is that most of the smartest people work for somebody else.
You mean not the government?
I mean not the government, not the intelligence community, not any single organization. There are far more people – far more scientists and engineers – on the outside than on the inside. All other things being equal, if you need more brainpower working on a problem, you’re going to need help.
Besides synthetic biology, what other emerging technologies do you think will become most important to defense and security in the next 20 years?
The three that I spend the most time worrying about are bio, cyber, and AI [artificial intelligence]. It’s hard not to sound like a Debbie Downer in talking about technologies to worry about, because there are plenty of others, too. There are nuclear weapons, which continue to be a serious threat. There are things like non-nuclear electromagnetic weapons. There is the increasing sophistication of automated technologies to influence public opinion. One can find lots of things to worry about, but bio, cyber, and AI – or applications of the three – are where I spend the most time.
All three have a few things in common. One is, they’re all dual-use technologies that rely on knowledge and infrastructure that’s widely available, so they don’t have distinctive signatures. Another is, all three have the ability to super-empower individuals. They can be used by states and organized groups, but they also can be used by lone wolves, who are especially challenging for national intelligence, which is structured to focus on nation states for the most part. Another related challenge is that the motivations of individuals can be so diverse compared to those of states or even organized groups. An individual can be a sociopath. An individual can be a malignant narcissist or an apocalyptic cult leader. That means they may not be deterrable, which means you have to find better ways of performing intelligence before they have the ability to act.
We have a global arms-control structure based on treaties and monitoring trade and movement of goods. But the three weapon technologies you mention – bio, cyber, and AI – have ingredients that are cheap and widely available. So how do we control them?
I’m not sure that we can control them using treaties that focus on the delivery of material or information. The technologies are too widely diffused, and there are too many legitimate reasons to allow the free flow of the underlying technologies. I don’t think that you can prevent, for example, deep-learning algorithms from being distributed freely just because someone could use them to develop automated targeting for commercial drones. [Computer scientist and AI pioneer] Stuart Russell and others have talked about the way in which an individual could possibly, at some point in the future, build his or her own air force out of sophisticated commercial technology. I don’t think we can prevent people from having access to the technology. I think we have to find new ways of assessing the intent of people who could use technologies in particularly destructive ways. I think we also have to find new ways of preventing and monitoring for accidents.
One thing that we fund here – which surprises people because it doesn’t sound like a national intelligence problem, but I think it is – is research on whistle-blowing in technical communities. How can you increase the likelihood that a foreign scientist will report misbehavior by a colleague? Not necessarily intentional misbehavior, but behavior that might be unsafe – perhaps somebody who isn’t following appropriate biosafety protocols. We actually run experiments to better understand mechanisms for reporting unsafe behavior in biology.
That’s something that hasn’t been a core national intelligence mission, but as we enter an era when catastrophic accidents can be caused by a single individual, we’ll have to find better mechanisms for drawing on bystander reporting. Globalizing the “see something, say something” message.
Does the program doing research on whistleblowing have a name?
It’s a study called BRITE: Bystander Research in Technical Environments.
You have some great acronyms over there.
Yeah, government agencies kind of overdo it on acronym generation.
What kind of technology is involved in your whistleblower research?
We’re trying a bunch of different things. Right now, one is just to understand the behavioral science. Nobody wants to blow the whistle on a colleague; being a tattletale is generally frowned upon. How can you increase the acceptability of saying something when you do see something? How can you make it a normal part of the course of work – just understood that if you see somebody not handling a biological agent properly, you’re going to report it? We’re running real experiments in real lab conditions to understand how we can improve the rate of reporting and thereby increase safety.
Are there other places where the latest neuroscience fits into intelligence needs?
Yeah. Our goals with neuroscience are to understand how computation occurs in the brain and how we can build new approaches to computing and machine learning that leverage insights from the brain. The brain is remarkably efficient. It operates on about 20 watts, one one-millionth of what most supercomputers run on. It’s capable of learning certain categories of pattern recognition much, much more efficiently. As a kid, you don’t need thousands of training examples to recognize what a cat is, yet the state of the art in deep learning requires thousands of training examples. I think there’s a lot to learn from the brain. There’s also a lot to learn about how human judgment can fall into certain kinds of pitfalls, whether cognitive biases or the heuristic shortcuts analysts make because we’re simply wired to make those kinds of shortcuts.

We have a program called SHARP that’s focused on improving the pattern-recognition ability of a healthy working adult without using invasive neurotechnology. Are there simple hacks that don’t require anything invasive, that can improve the ability of people to think about hard problems? That’s important to figure out, because not only do I need it, but we also have analysts who are working on hard, cognitively demanding problems. Figuring out how they can get a leg up from available technologies is something we’re interested in.
Would a drug come into play there?
No, we don’t test any drugs. Some medical organizations have funded research on the use of drugs to try to improve cognition – things like modafinil and donepezil and others. But that research suggests you’re not getting a much bigger effect than you would from caffeine, which is already pretty widely used.
We are interested in things like transcranial direct current stimulation with feedback from an electroencephalogram (EEG). It’s noninvasive, but it stimulates the parts of the brain that are responsible for pattern recognition.
Your CAUSE program aims to detect precursors to cyber attacks. We hear so much these days about cyber attacks – on governments, political organizations, and private companies – that it can seem like our defenses are not keeping up technologically. Would you agree with that?
I think between programs like IARPA’s CAUSE or DARPA’s recently completed Cyber Grand Challenge, we’re seeing a wave of creative approaches being led by government. There’s a lot taking place within government right now that makes me optimistic. Cybersecurity is a very hard problem, and one in which we will never be able to rest on our laurels, because the offensive technologies will continue to get more sophisticated. Some important areas for research – which are getting a lot of attention now in government, industry, and academia – include generalizable approaches for automated detection or anticipation of cyber attacks, automated patching, and systems that are intrinsically more resilient and robust.
What are some of the things that would tip one off that a cyber attack was coming?
For example, in the CAUSE program, one of our research teams noticed that the Mirai botnet exploit was getting a lot of discussion in hacker forums before it was used in that [fall 2016] distributed-denial-of-service attack launched from internet-of-things devices. So one way to try to anticipate attacks is to monitor chatter in places where people talk about new malware.
Another is to look at the economy of malware. The black market for malware is similar to other economies: if demand goes up without supply going up at the same rate, prices go up. You can actually see prices rise in these black markets for things like zero-day [exploits] as the number of buyers increases.
As a third example, cyber actors often conduct penetration testing against a network before they launch a major attack. Sometimes the testing shows up as help-desk tickets, because it might cause some anomaly that’s a nuisance to one user. One way of looking for this is to combine all of the help-desk tickets across an enterprise and look for these kinds of blips.
Another early signal comes when cyber actors piecing together an attack strategy try to map the IP lists they’ve assembled, using web search queries. If you look at trends in search queries, you might be able to spot these sorts of spikes in searches for certain IP addresses.
Those are examples of some approaches that are being looked at in the CAUSE program, and there are many more. There are dozens of these kinds of precursors that are hypothesized to be early indicators of cyber attacks. The goal of the CAUSE program is to test them in real time against real attacks using real data. That makes it somewhat unusual.
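[Editor’s note: several of the precursors Matheny describes reduce to the same statistical task – flagging an unusual spike in a time series, whether of forum mentions, help-desk tickets, search queries, or black-market prices. The sketch below illustrates that general idea with a rolling z-score; the window size, threshold, and data are illustrative assumptions, not parameters from the CAUSE program.]

```python
# Illustrative sketch only: flag spikes in a daily count series (forum
# mentions, help-desk tickets, searches for an IP address, or zero-day
# prices) using a rolling z-score. The window and threshold are
# assumptions for this example, not CAUSE-program parameters.
from statistics import mean, stdev

def spike_days(series, window=30, threshold=3.0):
    """Return indices where a value exceeds the trailing mean by more
    than `threshold` standard deviations of the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Example: 60 quiet days of chatter, then discussion of a new exploit surges.
daily_mentions = [5, 6, 4, 7, 5, 6] * 10 + [9, 18, 40]
print(spike_days(daily_mentions))  # [60, 61, 62] – the surge at the end
```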
Our general position at IARPA is to be skeptical of anyone who claims to be able to forecast something, and to test the claim by asking the person to forecast real events before they happen and then keep score. What we’ve found is that a lot of people claim they would have forecast an event if their system had been up and running. They’ll present PowerPoint slides showing that when they run their model backwards, it predicts history. It’s harder to predict things before you have the facts.
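[Editor’s note: “keeping score” on probabilistic forecasts is commonly done with a proper scoring rule such as the Brier score – the mean squared error between the probabilities a forecaster committed to in advance and what actually happened. The sketch below is a generic illustration, not IARPA’s exact scoring methodology.]

```python
# Illustrative sketch of scoring forecasts with the Brier score: the mean
# squared error between each stated probability and the outcome (1 if the
# event happened, 0 if not). Lower is better; always guessing 50% scores
# 0.25. A generic example, not IARPA's exact methodology.
def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 or 0 per event."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities must be recorded *before* the events resolve.
forecasts = [0.9, 0.2, 0.7, 0.1]  # e.g., "90% chance country X has a coup by June"
outcomes  = [1,   0,   0,   0]    # what actually happened
print(brier_score(forecasts, outcomes))  # 0.1375
```

Hindcasting – running a model backwards over history, as Matheny describes – cannot be scored this way, which is why forecasts have to be registered before the events occur.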
In recent years it seems like the tech industry has not had as much interaction with government as industries like aerospace and energy. Are the techies of Silicon Valley now getting more engaged in government work?
I don’t know if Silicon Valley businesses are getting more involved with government contracts generally, but I know they’re getting more involved with us. We’ve worked with about 500 organizations. About half of those have been colleges and universities, and a little over a quarter have been small businesses – and small businesses’ share of the work we fund is increasing. We’ve invested a lot in trying to do better outreach, in part just by going out and visiting, speaking at conferences where small businesses are likely to be, and running things like prize challenges, which are easier for small businesses than, say, federal contracts.
Is trust an issue when you are working with private companies?
It can be. I think the first thing a small company wants to know is whether it will get to retain its intellectual property – and in our case, it does. A second thing is, small businesses don’t have the front office with the big federally approved accounting system and the federal-contract specialists. They want to find some lightweight way of being able to do business with government. I think prize challenges are ideal for that, but we also use grants.
The Defense Department has DIUx, the Defense Innovation Unit Experimental, and the National Geospatial-Intelligence Agency sends out tech scouts. Do you do that sort of thing?
Where we have an office that’s located in Silicon Valley?
For instance.
We don’t. I think that model is a valuable one. It’s important, especially for cases where you have a need and you’re already confident that a technology exists to fulfill it. It’s already been developed for some commercial use, and you need to pay for it to be adapted to your particular use. That’s a bit different from what we focus on, which is things that might be five to 10 years away from being commercially viable. In our case, we’re paying small startups to do research.
Before I let you go, I want to ask you about some of your previous professional incarnations. You worked at Oxford’s Future of Humanity Institute, which is run by philosopher Nick Bostrom. His book Superintelligence presents what some people who work in artificial intelligence consider an alarmist view. There’s a debate in the AI community over whether we should worry that the robots are going to take over and kill us all, or whether that fear is overblown and premature. Where do you stand?
I think there are some legitimate concerns. I don’t think they’re ones that present immediate risks.
The notion of a highly complex system exhibiting behaviors that you didn’t plan for pretty much characterizes every advanced technology that we have experience with. There are always unintended consequences from technology development, and the more complex and powerful the technology, the greater the unintended consequences. I think that it makes sense to be thinking seriously about what the long-term trends are for technology development in AI and machine learning, and to think about what kinds of safety measures we should work on now because we might need them in 20 years. The US Government took this seriously enough to include discussions of safety in both of the recent White House reports, one led by [Deputy US Chief Technology Officer] Ed Felten and the other by the National Science Foundation. Both include discussions of safety that I think are quite reasonable. They include points like, we need to be making investments in transparency and explainability so that we understand why systems behave the way they do. We need to make investments in verification and validation so that we can better understand how a system performs before we introduce it into the wild. We need to invest in things like goal specification and alignment, so that if you have a system that’s modifying itself in some way, the goals don’t drift too far from those you intended.
These safety concerns are very different from popular depictions of AI risks, which often involve comparisons to things like Terminator or Skynet. I don’t think those depictions are the sort of thing that was motivating Nick Bostrom or others who have been looking at this problem. AI risks are more like digital Flubber than Terminator. The risks would be due to a technology that has been poorly programmed and is doing something that you literally instructed it to do, but didn’t intend.
I think some popular discussions have made it seem as though AI experts don’t worry, and that’s not true. I think they worry about different time horizons. People like Stuart Russell at Berkeley, Andrew Moore at Carnegie Mellon, Shane Legg at Google DeepMind, David McAllester at the Toyota Technological Institute at Chicago, Murray Shanahan at Imperial College London – these are researchers who say that on net, they believe AI is beneficial to society, but that precautions need to be taken to avoid potentially catastrophic risks, and that we need to start investing now in the AI safety we’ll need at some point in the next couple of decades.
You founded an organization, New Harvest, that develops cellular agriculture, the science of making animal products like meat and eggs from cell cultures. Do you think cellular agriculture has implications for resolving any of our environmental problems?
Agriculture is one of the areas we monitor for bio-risks. What are agriculture’s resource requirements, and how might those resources be affected by natural hazards like droughts? Or by pollution of waterways? And how vulnerable is agriculture to emerging threats, whether that means diseases that are natural in origin or malicious?
Having some alternative technologies that we can leverage for agriculture makes sense. Cellular agriculture is certainly one of those alternatives, but there are others. One thing I find encouraging is that the research community, the venture capital world, and philanthropy have started to look at agriculture again. I think for decades it was viewed as having plateaued. You had the Green Revolution; you had Norman Borlaug’s innovations that saved tens of millions of people. Since then, though, there hasn’t been as much interest in driving agricultural technologies to improve food security or environmental protection. I’m encouraged to see a lot of new agricultural technologies on the horizon.