Nadya Bliss: “Bound to fail.” Those are some powerful words. I have heard those words quite a few times in my life. Sometimes they take a slightly different form. A popular version is “you can’t do that,” or “you’re not supposed to do that.”
The first time I remember hearing those words, I was five or six, trying out for ballet in the former Soviet Union. They could tell that I wasn’t going to be particularly tall. I didn’t really look like a ballerina in the making. I didn’t end up doing ballet. But I also think that was the last time I let those words stop me.
When I was little, I wanted to be a mathematician. In the Soviet Union, being a mathy girl wasn’t weird or discouraged. But I realized things were culturally quite different when my family moved to the United States when I was a teenager. Yes, a great time to love math and change countries. As a high schooler I realized that computer science lets you leverage mathematical rigor in ways that often make the impact of your work tangible and beautiful.
In my high school programming class, I was one of very few girls. When I majored in computer science at Cornell, I was often one of four women in a 200-person class. When I decided to do my bachelor’s and master’s in four years, many of my friends thought I was crazy. I probably was, a bit. I survived and landed a dream job as a staff scientist at MIT Lincoln Laboratory, a national laboratory developing technology to address national security challenges.
There, I ended up being the youngest group leader in the more than sixty-year history of the lab. I founded the Computing and Analytics Group and led large-scale research initiatives addressing computational challenges for the Department of Defense and the intelligence community.
When I came to ASU, I decided it was important to write up my close to a decade’s worth of research in graph theory as a dissertation. And so I completed a PhD in about a year and a half while working full-time, first as an assistant vice president in Knowledge Enterprise Development and then as the director of the Global Security Initiative.
Along the way, there were always many people, often incredibly well-meaning, who would say that all of this was impossible. Or that it couldn’t be done. Or that no one had done it. Quite frankly, for me that simply fuels the fire. Don’t get me wrong. I realize today that ballet probably would not have been for me, and focusing on math was a much better choice. But from then on, I have always made sure that the choice was made by me, and not for me. I haven’t had what one would consider a traditional academic career. Yet I have always focused on taking the most innovative research and applying it to the most challenging problems in security. Those two components together, innovation and impact, are what drive me, and quite frankly have driven me for decades.
Today we face many highly complex challenges, both nationally and internationally. From the security of our information networks, to planning for and managing natural disasters, to the emergence of new infectious diseases, to social and political conflict throughout the world, these challenges are messy and highly interconnected. As an example, cybersecurity touches pretty much everything in today’s society. A rather simple vulnerability, like not checking the validity of a web form input, could potentially allow a compromise of our election databases.
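(For the technically inclined, here is a minimal sketch of that kind of vulnerability and its fix, in Python with SQLite and an entirely hypothetical voter table. The point is that a parameterized query treats form input strictly as data, never as part of the query itself.)

```python
import sqlite3

# Entirely hypothetical voter-registration table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (name TEXT, precinct TEXT)")
conn.execute("INSERT INTO voters VALUES ('Alice', '12'), ('Bob', '14')")

# Malicious web-form input: the classic SQL-injection payload.
user_input = "'; DROP TABLE voters; --"

# Vulnerable pattern: pasting the input into the SQL string hands the
# attacker control of the query itself (never do this).
unsafe_query = "SELECT * FROM voters WHERE name = '" + user_input + "'"

# Safer pattern: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT * FROM voters WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matches no name and changes nothing
```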
As another example, our energy delivery infrastructure requires resilience to both cyber attacks and natural disasters. At stake are confidential information, economic losses, damage to equipment, and power outages with broad socioeconomic impact, just to name a few. Similarly, it is impossible to talk about new epidemics without considering both environmental factors and the travel patterns of our citizens.
So we often try to simplify. We try to make these problems somewhat more tractable. I’m here to claim that it is precisely this desire to remove complexity, for fear of failure, that often prevents us from being ready to face these challenges.
So let’s get back to those words. Bound to fail. They actually come from the first sentence of the abstract of a 1973 research paper titled “Dilemmas in a General Theory of Planning” by Rittel and Webber. Why this paper? The context for those words is the authors’ claim that you cannot address these messy, interconnected problems with science and engineering. In fact, they define these problems as “wicked.” Not in an evil sense, and not because I’m from Boston, but as opposed to tame. As described in the paper, the properties of these problems include the lack of a well-scoped definition, no way to test whether a solution is the right one, and the fact that testing a solution has the potential to change the problem itself.
What does all this mean? Let’s consider something like securing the Internet. We can’t really start from scratch. We can’t make a fully secure processor without removing all of its functionality. And any solution we do deploy has the potential to set off a chain of unintended effects. One such effect could be a loss of privacy, as data collection is increased to better predict compromises of identity. Another could be a piece of software that validates code but slows down the application, leading to very frustrated users.
How about another example? The emergence of social and political instability. Again, this is not something that can be completely eliminated, and the root causes can be difficult to identify. As both established and emerging economies grow, they stress our food, energy, and water systems, creating competition for resources and contributing to resource insecurity. How do we disentangle radicalization, resource insecurity, and economic pressures? How do we know that our development programs provide relief to the areas of the world that are struggling?
Does that mean that all is hopeless? Are we bound to fail? I absolutely do not think so. You probably knew I was going to say that. But how does a computer scientist trained in an engineering college, who spent over a decade engineering technology for national security, make progress on something that has been declared unsolvable by STEM (Science, Technology, Engineering, and Mathematics) techniques?
First, we have to try. It is imperative that we increase the engagement of engineers and scientists in these messy problems. And not just engage, but have the STEM disciplines work with policymakers, social scientists, political scientists, and many others. It is absolutely impossible to address any of these problems from a single discipline. People often think that mathematicians, computer scientists, and engineers are narrow in their thinking and encourage simplification. But I’m standing here telling you to embrace the complexity instead. Not only that, I would claim that computer scientists specifically are well suited to this task. We’re taught to formally appreciate complexity at a very early stage in our training.
I also think that computer science is inherently collaborative and interdisciplinary. If we want to build an algorithm to do something of impact, we shouldn’t do it alone. My personal research is in the analysis of graphs, the mathematical structures that can encode relationships or connections between entities and concepts. So from where I’m standing, not only are the wicked problems tameable, we can leverage what we know from graph theory to help us on that path. A way to effectively manage complexity without ignoring it is to account for the interconnectedness of these problems. It is true that addressing all of the messiness at once is impossible, but that should not prevent us from making progress.
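(For the technically curious, here is a minimal sketch, with made-up nodes and edges, of what it means for a graph to encode the pieces of a wicked problem. Centrality, one of the simplest graph measures, already hints at which elements sit at the crossroads of the most connections.)

```python
import networkx as nx

# A toy graph of (hypothetical) elements of a wicked problem.
# Nodes are concepts; edges are the connections between them.
g = nx.Graph()
g.add_edges_from([
    ("drought", "food prices"),
    ("food prices", "economic pressure"),
    ("economic pressure", "instability"),
    ("drought", "migration"),
    ("migration", "instability"),
])

# Betweenness centrality scores each node by how many shortest paths
# run through it -- a first, crude cut at "where do the pieces meet?"
for node, score in sorted(
    nx.betweenness_centrality(g).items(), key=lambda kv: -kv[1]
):
    print(f"{node}: {score:.2f}")
```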
Second, we can observe that at the core of these challenges is the notion of planning. It is even in the title of the original paper: “Dilemmas in a General Theory of Planning.” Instead of responding to a disaster, whether it is a cyber breach, a natural disaster, or an epidemic, how do we plan for it? How do we become proactive instead of reactive in making our world more secure? This framing allows us to make measurable progress: progress towards better analytic and decision systems that account for the messiness of the real world without oversimplification.
As an example, we can develop anticipatory models of disease spread that are coupled to changing climate patterns. That is a challenging task: disease and climate data and models often come in inherently incompatible scales and formats. But if you bring together hydrologists, climate experts, disease experts, and computer scientists, you can start not just to anticipate where the next epidemic may arise but to plan the healthcare infrastructure to manage it.
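(What “incompatible scales” means in practice can be mundane. Here is a hedged sketch in Python with pandas and made-up numbers: daily rainfall standing in for climate data, weekly case counts standing in for disease data, aggregated onto one weekly grid so a coupled model can see both.)

```python
import pandas as pd

# Hypothetical stand-ins for two incompatible feeds: daily rainfall
# (climate) and weekly reported cases (disease).
rain = pd.Series(
    [0.0, 2.1, 0.5, 4.2, 0.0, 1.3, 0.8, 3.0, 0.0, 0.2, 5.1, 0.9, 0.0, 2.4],
    index=pd.date_range("2016-01-01", periods=14, freq="D"),
)
cases = pd.Series(
    [12, 19],
    index=pd.date_range("2016-01-03", periods=2, freq="W-SUN"),
)

# Aggregate rainfall onto the weekly grid of the case counts, then align;
# this mundane step is where "incompatible scales" gets resolved.
weekly_rain = rain.resample("W-SUN").sum()
combined = pd.concat({"rain_total": weekly_rain, "cases": cases}, axis=1)
print(combined)  # the last week has rainfall but no case report yet (NaN)
```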
In another effort at the Global Security Initiative, we’re working on tools to anticipate instability through the analysis of trade networks. In 2011, a drought in China’s wheat-growing regions contributed to a revolution in Egypt, partly because of trade interdependencies. We are developing an anticipatory methodology to identify other regions that could be susceptible to similar events. It turns out that patterns of trade provide insight into regional stability: the trade patterns of countries considered stable are drastically different from those of countries that are not. But what is even more significant is that the tools we’re developing can be used by a planner to potentially enable proactive intervention.
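(To give a flavor of the kind of network signal involved, and not the actual methodology: here is a minimal sketch with made-up countries and volumes. A country whose imports of a staple come overwhelmingly from one supplier inherits that supplier’s shocks, and a simple concentration index over the trade graph makes the difference visible.)

```python
from collections import defaultdict

# Toy, entirely hypothetical wheat-trade flows: (exporter, importer, volume).
flows = [
    ("A", "X", 90), ("B", "X", 10),                  # X leans on one supplier
    ("A", "Y", 34), ("B", "Y", 33), ("C", "Y", 33),  # Y's suppliers are spread out
]

def import_concentration(country):
    """Herfindahl index of a country's import shares. Values near 1.0 mean
    one supplier dominates, so a shock there (a drought, an export ban)
    propagates to the importer directly."""
    volumes = defaultdict(float)
    for exporter, importer, volume in flows:
        if importer == country:
            volumes[exporter] += volume
    total = sum(volumes.values())
    return sum((v / total) ** 2 for v in volumes.values())

for country in ("X", "Y"):
    print(country, round(import_concentration(country), 2))  # X 0.82, Y 0.33
```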
In cybersecurity, a proactive approach is a must. New vulnerabilities are constantly being discovered and built into brand-new attacks that can break into sensitive databases or take down servers. Attacks are often bought and sold for large amounts of money on Dark Web forums, online meeting places that can’t be reached with standard web browsers. Researchers in our Center for Cybersecurity and Digital Forensics scrape data from the Dark Web forums where exploits are sold and analyze them. Last year our research team found a never-before-seen attack before it was deployed in the wild. This gave the community a chance to plan its defenses.
Finally, we have to accept that none of us can do this alone. I have always wanted to do research precisely because I wanted to make a difference. Spectral graph theory may seem like a pretty esoteric field. And yet in all of the examples I’ve talked about, understanding the connections between different elements of a problem provides a way to see how the puzzle pieces fit together.
In addition to understanding connections, we see a few other common themes. Diverse time scales, to understand how historical events shape how we anticipate and plan for the future. Large, complex data sets coming from a variety of sources. And the need to bring together disciplines that traditionally do not work together. These commonalities allow us to apply what works in one area to others, thus making progress on what may seem unsolvable. They also allow us to fully embrace the complexity of the entire security landscape without compromising our goal of impact. And if our goal is research with impact, then failure, especially the kind where you learn something, get up, and keep going, is not a bad thing. It makes us tougher. It teaches us how to be better humans. And it allows us to make progress towards a more secure world.
Oh, and one more thing. My five-year-old daughter is currently doing ballet. Thank you.