Henning Schulzrinne: So I’ve been involved in the Internet technical community since the early 90s. Primarily in my academic role as faculty at Columbia, and previously as a researcher at Bell Labs and a German research lab here in Berlin, actually. And secondly, more recently, as a staff member of the Federal Communications Commission. And so in that role I’ve been participating in traditional academic research, primarily in the networking realm, but also working, primarily within the Internet Engineering Task Force, on standards development for Internet applications, primarily real-time applications.
Intertitle: Describe one of the breakthrough moments or movements of the Internet in which you have been a key participant.
Schulzrinne: The topics I have worked on probably the most are, as I said, the real-time Internet applications: voice over IP and real-time streaming applications. So voice over IP, delivery of phone calls over the Internet. And that led to a number of protocol developments that are now fairly commonly used in the industry. One is the Real-time Transport Protocol, which transports audio and video content across networks. It’s now often used for audio and video telephony within enterprises, but also increasingly in the wide area. So there’s a number of voice over IP providers, as well as what are known as 4G, or voice over LTE, systems, that use that type of technology.
And then there is a corresponding protocol that is used to control the session, the Session Initiation Protocol, SIP, that’s commonly used, again, in the enterprise space. Many of the new IP PBXs that drive desktop phones in offices typically use that, as do mobile phone carriers, as part of the IP Multimedia Subsystem, IMS.
I’ve also worked on a number of applications in public safety, on how to support emergency calls such as 112 or 911 in a new all-IP environment.
Intertitle: Describe the state of the Internet today with a weather analogy and explain why.
Schulzrinne: It’s really hard to answer that in generalities because the Internet has become such a diverse ecosystem. And it’s probably much more productive to think of it not as a single entity but, again, as an ecosystem, where parts of the ecosystem are quite healthy, and others…not so much. So, let me try to just give a few examples of that. Because we’re now seeing that when we talk about the Internet we’re really talking about two somewhat separate things: the technology, and the global infrastructure.
The technology, namely the protocols and other software artifacts and so on that use Internet protocols but may not actually be used on the Internet. They may be used in private networks, in data centers, in enterprises, in homes, without necessarily touching the Internet. I think that development has been robust and continues to progress pretty rapidly, where the major problems are probably in terms of robustness and reliability, and security-related problems as well. But the technology seems to be able to keep pace with demand.
The other one is the Internet as a network that you connect to, exchange data on, communicate with other people on. And there again…I think that in many countries and many regions, things are moving ahead quite nicely. Speeds are improving; the availability on mobile devices is dramatically increasing. But we also have simultaneous challenges. Just to name a few: again, the security challenges that increasingly make it difficult, particularly for individuals and small businesses, to know what information is truly secure and private, and whether their bank account or their private data, medical data, is at risk. And also, at a larger scale, for enterprises, being exposed to theft of their intellectual property. And I’m not talking primarily about music and videos here. I’m talking about blueprints, and chemical formulas, and customer lists, and all the other things that companies keep private in order to maintain their competitive position. That I think is a major challenge, simply because it doesn’t seem possible for ordinary individuals to keep up with the deficiencies in both protocol design and implementation to have reasonable certainty that the tools they use won’t be used against them.
There are also larger-scale challenges, namely the suppression of Internet freedoms in a number of countries. Issues of privacy. How do we balance free access to information and services on mobile devices with my desire to keep private information private?
Intertitle: What are your greatest hopes and fears for the future of the Internet?
Schulzrinne: Let me talk about security as one. First of all, I don’t want to just fall into the trap of saying the Internet is insecure, because that’s not really a helpful statement. It doesn’t differentiate enough between the various components. I would look at it in pieces: one piece is the underlying technology; the second piece is the implementation, software primarily, and hardware to some limited extent; and thirdly, the operational practices. And there are problems in all areas, but they’re very different problems.
I think there generally has been, for at least a decade, a fairly profound awareness on the design and engineering side that you need to design protocols for a hostile environment, and we have reasonable ideas on how to do that. I would say that most protocols that have been designed or enhanced somewhat recently have good to acceptable security mechanisms built in. So it is not so much a problem that Internet protocols are insecure, though there are some that certainly could use strengthening, particularly on the routing side and, on the access side, in the LAN protocols.
But the other areas are far less encouraging. On the implementation side we seem to have difficulty on two counts: namely, A, routinely designing reliable systems, that is, software engineering, often because it is not immediately obvious when something is insecure (it works just fine) until somebody attacks it.
And secondly, how to test systems and how to disincentivize people from building insecure ones. Currently, there seems to be a problem that many software developers, particularly smaller ones but certainly not limited to those, seem to have difficulty…I don’t know if it’s an engineering problem or a management problem…putting enough resources into creating secure systems: designing by good engineering practices, testing, and in particular relying not just on internal testing but also on external testing. In other areas where safety and security are at stake, think of vehicles or electric toasters, we have certifying bodies, because we don’t want to rely on the manufacturers themselves, as diligent as they may be, and trust completely that they will know whether they did a good job. So we have entities like Underwriters Laboratories for electrical equipment, or the TÜV in Germany and other countries for safety, covering just about anything, whether it’s elevators or cars or umbrellas, that has any type of even remote security or safety implication. We don’t do that for software, and it is fairly obvious that that isn’t really working.
Just to give you one example that I’ve encountered in my current line of work. In the United States we have a system called the Emergency Alert System, EAS, which is used to alert TV viewers of imminent threats to life or property. So think storms, or flash floods, tsunamis, all of those. Every TV station and cable system is obligated to have a device that allows a public safety authority to submit a request to broadcast a message telling viewers to take cover, to take appropriate action. So it’s obviously very important that this is a reliable system.
Until maybe five years ago, these systems weren’t connected to the Internet at all. There were some master stations that would broadcast the alert, and then it would be retransmitted down the line. More recently, for convenience and operational efficiency’s sake, TV stations have installed boxes that connect on one side to the Internet and on the other side intercept the TV signal, so that they can inject a crawl at the bottom of the screen, and audio, into the TV signal, because emergencies can happen at any time, even when there is no engineer on staff, for example.
Well, unfortunately, these are fairly specialty devices, and whoever designed them didn’t do a whole lot of testing. They violated just about every guideline known for designing secure systems. So what happened was somebody discovered you could find them on the Internet, you could Google them, just by searching for the login string. And the devices used a default password, which you could also easily Google just by looking at the manual. And the attackers injected into about a dozen TV stations, primarily smaller TV stations, a fake emergency alert about zombies emerging from the ground, saying that the population should take cover.
Obviously kind of funny the first time around, but it could easily be misused. In our case this happened just before the State of the Union address of the President of the United States, so there was grave concern that somebody would use that to sow panic, say by reporting a false terrorist attack.
And so that was an example where somebody had designed a system not thinking that it would be connected to the Internet, that people would not change the default password, and that there would be no other security protections in place. And there are many of these smaller systems. They could be home routers, electric meters, car systems, where there doesn’t seem to be a true appreciation of the dangers that could occur if somebody gets access to them. And we don’t seem to have a good way of dealing with that.
The third aspect, which I’ll briefly talk about, is the operational one. It used to be that many computing systems, or most of them, probably, were operated by trained system administrators who at least had some professional awareness. Their skill level probably varied, but at least many who worked in that field had education in computer science or maybe even some security training. But nowadays, many if not most computers are operated by individuals who have no technical training whatsoever, and they shouldn’t need to have. This is true for home networks; it’s true for small business networks, your dentist, your baker, that type of thing. Everybody has a computer, generally connected to the Internet. Your doctor’s office probably has one for electronic medical records. And none of those are operated by a trained system administrator. So it is very easy for these amateurs to make mistakes in operating those types of systems.
Again, we’ve designed systems without really anticipating the kinds of users who would actually use them, thinking, or maybe not even thinking, that they would be used in the same way they were in the 1980s and 1990s.
That doesn’t mean we should train everybody to be a system administrator. That just doesn’t work. We need to design systems that are secure out of the box, systems you just can’t make insecure without a lot of effort. And we haven’t really succeeded; it’s been far too difficult. The types of technologies that people use, like passwords and so on, are becoming increasingly user-unfriendly. And they’ve become increasingly unmanageable. That’s what I see as one of the challenges now: to make it easy both to build secure systems and to operate secure systems.
One particular hope is that the barrier to entry to creating new businesses, new content, has dropped dramatically. In the last decade or so it has become possible for a much wider variety of individuals to not just consume content. You could always do that; radio, TV, and all that have existed for a century. But you now have the possibility that ordinary individuals, without a large budget or maybe even without deep technical skill sets, can create very interesting content of all kinds. Just as examples: the Khan Academy for training materials. Individual small local groups that can distribute videos. Web sites and web applications that can be built. Apps on smartphones. All of those are now accessible to many more individuals than they were even a relatively short while ago. And that, I think, has probably been the greatest enabling capacity of the Internet: not so much as a distributor of high-cost, highly produced content, which has always been available, but as a means for distributing low-cost, low-effort, much more democratic if you like, content, for cultural as well as plain business uses, as well as educational ones.
Intertitle: Is there action that should be taken to ensure the best possible future?
Schulzrinne: One of the things I’ve been involved in at the Federal Communications Commission is ensuring an open Internet. Namely, almost by physical design, while everybody, or most everybody, can create content and applications, it is very difficult for most people to operate their own network. You just can’t string your own fiber or run your own cell towers. And so the number of operators in a particular region, in almost every country, tends to be very small; a handful, even if you count wireless operators. Typically you have your copper-based provider, your fiber- or coax-based provider, and then maybe a small number, three or four, of wireless operators, cell operators.
Because of the cost, billions of dollars to build a network, we can’t really rely purely on competition to ensure that users can access the legal content they want and create the content they want to create, because in some cases the content they want to access or create may well compete with other business ventures that a network provider has. Most network providers, at least in the US for example, also distribute their own video content. They may have applications of their own. They certainly have had voice applications, for example; that’s very common for almost every network operator. And so they have incentives to give themselves an advantage in order to compete with other providers of content and applications.
So I believe it continues to be important to have rules and mechanisms in place so that network providers cannot discriminate against providers of applications and content, because in many cases the network is essentially our primary means of accessing information of all kinds. That remains a long-term challenge: how to do that in ways that don’t unduly interfere with the expansion of a network and don’t unduly increase cost. In the US we have, as one current mechanism, the FCC Open Internet Order, which spells out at a high level some of the conditions for how that should work. But other regions and countries, such as Europe, are still trying to find that balance.
Intertitle: Is there anything else you would like to add?
Schulzrinne: One of the other challenges that I see is that the network has become, in both good ways and bad ways, a commodity. We all rely on it; it’s something that we notice mainly when it’s not around, as in, “I can’t get Internet access. What’s going on here?” We expect it in every hotel and every airport, certainly in most homes, schools, wherever. One of the things that I think is in some danger is a robust research infrastructure. Many of the major providers of hardware, software, and services used to have significant-sized research labs.
Just to give you one example that I heard recently: Nokia, which obviously does both network infrastructure and handsets, used to have 600 researchers in their lab. They’re now down to sixty. Verizon in its previous incarnations used to have large research labs in multiple facilities that did not just short-term but long-term research, through their Bell Atlantic and other research lab facilities. Telcordia, the same thing. They all used to have long-term research. They’ve largely discontinued that. There’s only really a relatively small number of companies left that still do networking-related research with more than just a six-month time horizon.
Universities continue to be a vibrant research community. But it can’t be universities by themselves, particularly because, for a variety of reasons, funding is no longer nearly as available as it used to be: both funding through governments and, because of the downsizing of corporate research activities, funding available through corporate sponsorship. If we don’t have a vibrant research community, the problems that I alluded to earlier (security, accessibility, the use of the Internet for content creation) will all suffer. We won’t notice it directly; we won’t know what we’re missing, since we don’t see it. But if we don’t have that research, I think it will be much harder to solve those problems, because in many ways those types of research efforts have often created artifacts that were widely distributed and had low cost to acquire, which means lots of people could use and adopt them. They tended to be non-proprietary. There tended to be an emphasis on making sure that the work was available. And if you don’t have that anymore, if you just have very small-scale, venture-capital-style research going on, we’re missing out on something.
I think it’s partially the competitive pressures. Namely research, almost by its definition, doesn’t just accrue benefits to whoever does it. It’s really hard to keep research secret so that nobody else benefits. You can do that in some areas such as pharmaceuticals, where the output is a single drug that is easily patented and you have a twenty-year protection horizon on that and it’s very difficult for somebody else to replicate exactly that prescription drug.
But if you look at networking or computer science research in general, most of the ideas you generate are…they’re hard to contain. They just distribute themselves, so to say, through students, through publications, and all the normal mechanisms, which is a good thing. We want that to happen. But from a purely local economic optimization perspective, it’s easy to say, “Hey, somebody else should do the research. I just get the benefit.” And if everybody does that, you don’t get any research done anymore.
And in the old days we always had, and this was more an accident than any planning, strong government funding, which isn’t concerned about those issues, except maybe at a national level, as to who benefits from research. Which in itself is a problem when you now have some people saying, “Well, let the other countries,” this is in the US, “let the other countries do the research and we’ll just basically build the stuff.” Or we just do shorter-term development work.
The other problem, or the other issue, is that in those environments you don’t really have a set of people who can continue to do that research, because some other areas have become kind of the go-to areas: big data, say, graphics in some cases. So we don’t have quite the same student population that we used to have available. It’s partially also because there aren’t as many research jobs out there in industry that people could go to. When people start a master’s or PhD program, they want to have some assurance that they will find a job afterwards, and industrial research was often a very attractive destination, because people recognized that only a very small fraction could become faculty, so what else are you going to do? And industrial research offered an opportunity for a creative outlet and so on. So there is this kind of feedback loop that’s not working very well right now, and it’s not clear how we can get out of it, given that government funding for research, both in Europe and the US, isn’t increasing, to put it very politely. And we have this separate decrease in industrial research, which then diminishes the supply of talented students who want to participate in that research.
Henning Schulzrinne profile, Internet Hall of Fame 2013