Carl Malamud: Internet Talk Radio, flame of the Internet.
Malamud: This is Geek of the Week and we’re talking to Noel Chiappa, originally one of the architects of the Proteon router, former staff member at MIT. I assume you were a student at MIT before that?
Noel Chiappa: Yeah, I was a student for a couple of years, and then I took all the computer science courses that interested me, had no interest in taking all the other math and physics and seventeen other courses. At basically the end of my junior year I went to… I was interested in operating systems at that point, actually. I went over to the group that had done all the operating systems work at MIT in the computer science lab there, and said “Gee, I want to come,” you know, “do stuff with you guys.” And it turned out they were just in the middle of being done with operating systems and getting into networking. That was 1977.
And they were working on a prototype 1 megabit-per-second ring together with some guys at UC Irvine, Dave Farber’s people. And it was a PDP-11 interface and they needed somebody who could program PDP-11s to do interface diagnostics. And of course they were all Multicians and they all knew Honeywell 6180s and Multics and they didn’t have any PDP-11 people. So the deal was that I would get to do some operating system stuff if I wrote their network diagnostics for their PDP-11 local network interface. And somehow I got into networks and you know, however many years it is later, sixteen years later I’m still doing networks. It’s just one of those things.
Malamud: Well routing has certainly been one of the areas you’ve specialized in, routing and addresses, which the two definitely go together.
Chiappa: Right.
Malamud: We’re currently looking at a next-generation Internet Protocol, and one of the questions that we’ve been looking at is the question of address space depletion. Maybe you can tell us a little bit about what an address is and what it oughta be, because I know you’ve thought a lot about this issue.
Chiappa: Yeah, um. Addresses to most people— Addresses— The address field in an IP version 4 packet performs at least three different functions. And it’s useful when thinking about potential future architectures to very carefully sort out those three different functions, because you may in fact want to split them up into different fields in the future.
The first function is what we’ve been calling the locator function, which is there’s some structure in that field, sort of like…a good example would be…an analogy would be a mail address. So my mail address is, you know, mumble mumble, such and such road, Grafton, Virginia. So there’s structure in the address which tells you where the thing is. So we call that the locator part of the functionality.
The second thing that an IP version 4 address does for you is it uniquely identifies the entity you’re talking to. So you can tell, for instance in the TCP connection, the TCP connection is identified by the source and destination IP addresses along with the ports.
And the third thing that it does for you is what we’re calling the selector function, which is it’s the field in the packet that the intermediate routers look at when the traffic’s passing through.
Now, other networks have made different choices here. I mean, for instance in an X.25 network, the locator is only seen in the call setup packets. And thereafter there’s a virtual circuit identifier which is in packets. And the selector is the virtual circuit identifier. And the locator is the full X.121 address that’s in the call setup packet.
So other architectures have split these functionalities apart. And although IP version 4, we put them together, it seems that perhaps in a next-generation IP architecture there will be reasons to once again split them apart.
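The three-way split Chiappa describes can be sketched in code. The following Python is purely illustrative (the type names, fields, and values are invented for this sketch, not taken from any real protocol): IPv4 collapses all three roles into one 32-bit field, while a split design could carry them separately.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Locator:
    """The structured 'where' -- like a postal address (road, town, state)."""
    path: tuple  # variable-depth hierarchy, e.g. ("provider", "site", "subnet")

@dataclass(frozen=True)
class EndpointId:
    """The unique 'who' -- identifies the entity independent of its location."""
    value: int

@dataclass(frozen=True)
class Selector:
    """What the intermediate routers actually switch on, packet by packet."""
    value: int

@dataclass
class Packet:
    src_id: EndpointId
    dst_id: EndpointId
    dst_locator: Locator   # in an X.25-style design, carried only at call setup
    selector: Selector     # a short per-hop handle, like a virtual circuit id

# A host could then renumber (get a new Locator) without breaking its
# connections, since a connection would be named by the stable EndpointIds.
p = Packet(EndpointId(1), EndpointId(2),
           Locator(("mit", "lcs", "net-26")), Selector(42))
```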
Malamud: One of the big questions we’ve been looking at is just how big an address should be.
Chiappa: Well…yes. Now, here’s where it gets tricky because if you in fact split those three different functions among different fields, the question is not just how big the address needs to be but you know, how big does the locator need to be, and how big does the host identifier need to be, and how big does the selector need to be.
So before you can answer the question of how big do they need to be you have to answer the question of which one of the three functions am I talking about. So let’s deal with them sequentially.
The locators, there’s some dispute about exactly how big locators need to be. Some people think that 64 bits are gonna be adequate. Other people think that something on the order of an NSAP length, i.e. 20 bytes or so, will be adequate. I personally don’t know exactly how big is enough in terms of bits, but I’m pretty sure that the following two things are true: that it has to have a variable number of levels in it, and also it’s very useful if each level is variably sized. Which means that you wind up with something that’s basically of variable length.
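One minimal way to get both properties Chiappa asks for (a variable number of levels, each level itself variable-sized) is length-prefixed encoding. This is a toy sketch, not any real wire format:

```python
def encode_locator(levels):
    """Encode a hierarchical locator as: <level count> then, per level,
    <length byte><bytes>.  Both the depth and each level's size can vary."""
    out = bytearray([len(levels)])
    for level in levels:
        data = level.encode("ascii")
        out.append(len(data))
        out.extend(data)
    return bytes(out)

def decode_locator(buf):
    n, i, levels = buf[0], 1, []
    for _ in range(n):
        ln = buf[i]
        i += 1
        levels.append(buf[i:i + ln].decode("ascii"))
        i += ln
    return levels

# Three levels of very different sizes still round-trip cleanly.
wire = encode_locator(["us", "va", "grafton"])
```

The cost Chiappa mentions is visible here too: a router parsing this has to walk the levels byte by byte instead of masking a fixed-width field.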
Now, the problem with that is that those things are very…tend to be very long so there’s a lot of overhead in the header if you carry them in every packet. And they tend to be expensive to parse.
Luckily, it seems that a lot of people now have this vision of a future internetwork which isn’t quite exactly the pure datagram network that we have now. Dave Clark had this idea back in about 1980 or so that if you look at the spectrum there are pure virtual circuit networks on one end—say an X.25 network—and there are pure datagram networks on the other—say you know, good old IP version 4. And each one has certain advantages and disadvantages. And the way you build a system that has the advantages of both and the disadvantages of neither is to build something that’s in the middle. And it’s what he’s calling “flows.”
And basically a flow network is a network where, rather than each datagram being an absolutely independent entity, and the intermediate switches making no relationship between previous packets and later packets so each packet is a completely independent entity, the switches basically have a certain amount of state in them about ongoing flows. And a flow is not necessarily just a TCP connection, because if I have an FTP where I have three or four TCP connections, you know, they might all be part of the same application and the same flow. But you can roughly think of a flow as something like either a TCP connection, or there are UDP-based protocols, for instance voice teleconferencing or packet—you know, video teleconferencing—where you get a sequence of packets. And even though they’re not part of a reliable end-to-end stream, they’re still obviously related. So what we basically need to do is put some state in the network, and have the switches recognize that certain packets are part of the ongoing associations that we call flows.
Now, let’s make it clear that the state in the network is not critical state, which is to say you could take any one of those switches and drop a bomb on it, and you can recover from that and keep going, invisibly to the higher-level applications at the ends. So it’s not like an X.25 network in that way. The state that’s in the network is what we call “soft state,” which is to say that you know, you can destroy it at any point and it can be recreated from the endpoints.
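The soft-state idea can be sketched as a per-flow cache that tolerates being wiped out at any moment. In this hedged Python illustration, the `hint` that lets a switch rebuild flow state from passing packets is an assumption of the sketch, not a feature of any deployed protocol:

```python
class FlowTable:
    """Soft state: per-flow entries that can be lost and silently rebuilt."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.flows = {}  # flow_id -> (state, last_seen)

    def packet(self, flow_id, hint, now):
        entry = self.flows.get(flow_id)
        if entry is None:
            # No entry -- either a new flow, or state lost in a crash.
            # Recreate it from the endpoints' hint; nothing fails hard,
            # unlike tearing down an X.25 virtual circuit.
            self.flows[flow_id] = (hint, now)
        else:
            self.flows[flow_id] = (entry[0], now)

    def expire(self, now):
        # Idle flows simply age out; no explicit teardown is required.
        self.flows = {f: (s, t) for f, (s, t) in self.flows.items()
                      if now - t < self.timeout}
```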
Malamud: Does it require a setup function, or does it automatically set itself up?
Chiappa: Um…[sighs loudly] The… It turns out that you may— What you may wind up doing is having a state— As having— There’d be an evolutionary path where the setup is implicit in the initial deployment and explicit later on. And the reason is that…there’s an informational loss. I mean if I try and look at a stream of packets and figure out from those packets what the flow associations are of those packets, there’s almost inevitably information that’s lost.
Go back to my example with the FTP where there’s three actual TCP connections that make up that FTP stream. It’s going to be very hard for you, looking at the packets flowing through a router, to figure out that those packets in those three TCP connections all belong to the same sort of…you know, application. So, I think eventually we’re gonna have to get to a point where the application does have to say something to the effect of you know “I’m explicitly setting up a flow here now.” I mean, there are more reasons than just that. There are a whole range of functions, including policy routing, quality of service considerations, resource reservation, where the application has to tell the internetworking layer something about the kind of service that it wants. And it’s almost inevitable that we have to have some sort of setup.
And I know that call setup tends to set people’s teeth on edge because they think…you know, we can’t do anything until this heavyweight call setup thing happens, but there’s two things to bring up. First is we’re not talking about getting rid of datagrams entirely. The network is still going to have a datagram load, there are lots of applications for which datagrams are still the right thing. For instance interrogating you know, some random DNS server you’ve never talked to before and never will talk to again.
And the other thing is that you know, there are plenty of ways to introduce something that looks like setup into the network without necessarily paying a big performance penalty. And the classic example that I’ve always used is the old ARPANET. Now, everybody thinks of the old ARPANET as a pure datagram network. Well, it turns out if you lifted the sheet and looked, it really wasn’t. Before you could send a packet to a destination IMP, you had to get a reservation for a reassembly buffer in the destination IMP. When they first built the system it didn’t have this feature and they found that the IMPs were all going into lockups because they were sort of full of half-reassembled packets.
So they had to change the system so that when you sent a large packet into the IMP, it was broken up into small pieces and forwarded through the network independently and reassembled in the IMP at the far end. And they had to do a resource reservation thing where before you could send a packet to a destination IMP you had to make sure that he had a buffer for you.
But, it wasn’t necessarily any more inefficient than the old way in terms of round-trip delay, because the request to allocate the buffer was sent with the first fragment. So that if the destination IMP had the buffer available, then you got the reservation back basically with only a half round-trip time instead of a full round-trip time.
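The saving Chiappa describes is simple latency arithmetic. The numbers below are illustrative assumptions, not measurements of the ARPANET:

```python
rtt = 100.0  # ms, assumed sender-to-destination-IMP round-trip time

# Explicit setup first: send the request (rtt/2), get the grant back
# (rtt/2), then the first fragment travels another rtt/2 before data
# starts arriving at the destination.
explicit_setup = rtt / 2 + rtt / 2 + rtt / 2

# Piggybacked setup: the reservation request rides on the first
# fragment, so when the buffer is free the data has already arrived
# by the time the grant comes back -- only rtt/2 before arrival.
piggybacked = rtt / 2

saved = explicit_setup - piggybacked  # a full round-trip saved
```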
So there are ways to…if you’re intelligent about it, to make the setup phase not be onerous or painful. But I don’t think we’re going to be able to provide a lot of the features that we want to provide in the Internet of the future without some sort of setup phase where the state that the routers— I mean, you’ve got to get the state that the routers need into them somehow. And it’s gonna be hard to get it all from just looking at the traffic randomly. You’re almost gonna have to do a setup phase, I think.
Malamud: Noel Chiappa, you’ve been talking a lot and thinking a lot about the routing issues and the complexities of the routing layer. There’s a school of thought out there that says that with the advent of ATM and the large data link cloud, a lot of our problems will go away. [Chiappa laughs] We can foist those off on someone else.
Chiappa: Um. Well I’m glad you put it as in “foist them off on somebody else” because problems…in the organization of large systems don’t just go away. You know, they have to be solved sooner or later.
My particular take on ATM is a sort of very unusual one, which is that I think that in some sense it is the great white hope. But I don’t think it’s going to be…the solution of the future in the way that a lot of ATM partisans think it is. There are a lot of large-scale system organization issues that the Internet community has learned in a very painful fashion. There’s a saying attributed to Ben Franklin that experience is a dear master but fools will learn at no other. And I think that pretty well describes the Internet community. We’ve had to learn the hard way about you know, things like very very large-scale routing and resource allocation in datagram networks, dah-dah dah-dah dah-dah, all this other stuff.
And if I look at what the ATM guys are doing, in a way I like it. I mean we talked about how Dave Clark has this theory that the optimal network is one that is not a pure virtual circuit nor a pure datagram. And I think that with that argument, you can make a reasonable case that it applies at the physical link layer as well as at the internetwork layer. Which says that the ATM model, which is sort of intermediate between virtual circuit and pure datagram, is in fact very close to the right thing. So in that sense I really really like ATM. I think it’s… The way you can do bandwidth guarantees and also latency guarantees and stuff like that with the ATM model is really really the right thing.
The thing about ATM that worries me— Well, it doesn’t so much worry me, it’s something that I’m aware of. You look at ATM, and you find a bunch of people who are designing a system from the bottom up. There are a group of people off talking about resource allocation, which they call traffic management. And there’s a group of people talking about routing, and there’s a group of people talking about…you know, various other things. But there isn’t a group of people who are sitting down saying “What is the whole system going to look like when it’s completed, and how are all the pieces gonna fit together.”
And it turns out that as you try and get a more and more advanced infrastructure, there are places where various subsystems need to interact. And the classic example I always give is once again from the old ARPANET. In the old ARPANET, the routing would route traffic around areas of congestion. And the way in which it did this was it made the congestion delay measurement part of the metric that the routing used. And it was a small enough network and they had it all tuned up just right that the routing was stable, even though it…you know, the routing system was actually routing traffic around congested portions of the network.
But, what happens is as the network gets larger and larger and larger you can’t run that as an integrated system anymore because the stabilization time becomes greater than the time change period within the network, so that the routing will just never stabilize. And the way it looks like we’re gonna have to do that function in the future if we want to do that is have two separate subsystems—the resource management subsystem and the routing subsystem—and have the two of them interact to have traffic routed around congested areas. Which means that you have to think carefully about how those two are gonna interact at the time you design them and I don’t see anybody in the ATM world who’s thinking heavily about how all their various subsystems are going to interact.
The other way which I think the ATM people are failing is that… There are certain functions which are almost by definition end-to-end. And the classic end-to-end function that people talk about is reliability. It doesn’t do any good at all to have you know, your particular physical network guarantee that its packets are always received correctly if you know, they can be dropped in the routers or something else like this. People are now starting to understand that it’s not useful to have an extremely reliable, or—not extremely— It’s not useful to have a completely reliable link-level network. Because in an internetwork, you still need end-to-end reliability and end-to-end checksums.
I reckon that a lot of the functions we’re talking about that we’re sort of discovering we need now in the Internet, such as resource allocation, and routing is the one I’m particularly familiar with, are also sort of end-to-end functions in which you want to do the whole thing on a system-wide basis. And what that says to me is there are two rational models for the future.
Rational model number one is that the ATM layer is the internetworking layer, and it includes the ability to incorporate a wide range of physical media and things like that, because…you know let’s face it, economics and physics always say that there’s going to be a range of various transmission media and various transmission systems. So either the ATM layer is the internetworking layer and it glues all these various networking technologies together into a seamless data transport layer, and we run TCP directly on top of the ATM layer. Or, we’re going to have an internetwork layer which is doing these functions on an end-to-end basis, at which point doing those functions, again, at a lower layer is simply a waste of time and energy, because if I have some sort of really hairy routing or resource allocation at the ATM layer, that’s simply a replication of functionality that I have to have at a higher layer anyway, and the lower layer solution is necessarily an incomplete solution because it’s not end-to-end.
So, to me the other rational model is to say well, the world of the future is gonna look as follows. We’re going to have small pieces of ATM mesh, tied together with boxes that have the following look. The bottom of the box is an ATM switch, and the top of the box is an internetwork router. And what happens is traffic doesn’t actually come in through the ATM mesh, get reassembled into a packet, forwarded up through the IP router and back down and disassembled and sent back out—that would be silly. What’s going to happen is that the ATM virtual circuits are gonna be plugged together directly end-to-end inside the ATM switch. But the entity that manages those virtual circuits and the [indistinct] decides which circuits to route things through and sets all the ATM mesh switch fabric up, is going to be the internetwork router.
In fact you can almost separate it into two different boxes. You know, there’s an ATM switch box, and there’s an internetwork router box. Which we’ll call… I don’t know, it’s not really a router anymore, it’s more like a…you know, a flow controller or something. And it controls the setup of the ATM switch.
So I think those are two rational models of the future. And I don’t think the former’s gonna happen because I don’t think the ATM guys, except for a few very brave individuals, are really willing to step up and say well ATM’s just going to be the internetwork and we’re gonna design all the mechanisms necessary to make it the internetwork.
So…you know, if that’s not the rational future then the only other rational future is the second one.
Malamud: We’re looking at a couple variants of that rational future. We’re looking at what the next-generation IP is going to look like. And currently there appear to be two large camps. There used to be a lot of small factions but now there is the Simple IP solution, or Steve’s IP Solution—
Chiappa: SIP, yeah.
Malamud: —depending on how you want to reverse engineer the acronym. And there’s TUBA, TCP and UDP with Big Addresses, but that really is the OSI Connectionless Network Protocol. Can you comment on those two solutions to the next generation IP?
Chiappa: Well. It’s important to realize that there is actually a third faction, which is “none of the above.” And I’m sort of one of the major…loud noises in the none of the above faction. You know, I originally actually did— When I was Inter— I was the Area Director for Internet on the IESG for some years. And when I first took the position I in fact did believe in the school that said we needed a new packet format real soon now. And…there was an IAB architecture retreat at San Diego where Van Jacobson put forward an argument that changed my mind on that. And what he said was… He said you know, we really don’t know what the network infrastructure’s gonna look like ten years out. You know, what it’s all going to need, everything is just simply…it’s just going to be very very different.
And his basic case was you know, we should put off the day of adopting a new packet protocol as long as possible. And I basically believe in that argument. I mean, you know, it’s clear that there’s a whole bunch of areas including security, resource allocation, dah dah dah-dah, where we’re still feeling our way. And exactly what we need in the new internetwork layer, I don’t really think we know yet.
Now, if IP version 4 were gonna run out of gas in four years, that would be one thing. I think we would have to sort of panic and get on with something new. But…you know, if you make the assumption that IP version 4 has more than…some minimal number of years of useful life left in it, then I think you can make a pretty good case that you’re better off leaving the design of a new protocol as late as you can. Because the later you leave it the more technical knowledge you’ll have about large-scale internetworking, and you know, the compounding of our knowledge over the years has just astounded me, and I can’t believe how much more we know now than we knew in 1977. It’s just…it’s astonishing to me that IP version 4 has—we’ve managed to sort of tweak it to work as well as it has and scale as well as it has. So I actually very much believe that we should you know, put off picking a new packet format as long as we can.
I mean, the other thing I’m gonna say is that… You know, if I look forward at the network of the future, it’s one of these flow-based networks. And what we’ve got here in SIP and TUBA are you know, two more internetworking—two more datagram protocols. And you know, sorry…you know, datagram protocols are not the wave of the future, I don’t think.
Malamud: Well isn’t the sky falling? I mean are we— What happens if Nintendo decides the next version of their game machine is TCP/IP, or AppleTalk converts, or Windows NT decides to emphasize that? Won’t we run out of addresses soon?
Chiappa: Well, there’s a comp— There’s a complicated series of answers to that. Um. Let’s just first consider the scenario where you know, we don’t try to put everybody’s television on the internetwork. The best bet that we have at this point— You know, now that this Classless Inter-Domain Routing, CIDR, is coming in and we’re allocating address space in smaller increments, the best bet that we have is that at the projections from the current rate of use, we’ve got at least ten years of life left in it.
If you look at the Internet address space by percentages, I think…27% or something is currently allocated? But a large chunk of that is in the form of a very small number of Class A network numbers. And the smaller network numbers are being allocated— You know, in terms of the percentage of the total address space, the smaller network numbers are actually not using very much of the address space at all. Basically the smaller the chunks of the address space we allocate it in we find, the more efficiently it’s used. So for instance you know, you look at a typical Class C network that’s been assigned and maybe it’s got fifteen hosts on it so that’s…you know, what, 7% utilization. And you look at a Class A network where it’s got 2^24 possible host addresses and it— And you know, look, sorry. Nobody has four million hosts on a Class A network. Nobody even has… You know, I’d be surprised if anybody even has forty thousand hosts on a Class A network. So you know, you’re down by an order of magnitude in utilization there. So getting rid of Class A allocations and going to CIDR is really gonna help.
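A quick back-of-the-envelope check of the utilization figures quoted here (the host counts are Chiappa’s illustrative guesses, and the exact percentages are my arithmetic, not his):

```python
class_c_hosts = 2**8 - 2    # 254 usable host addresses in a Class C
class_a_hosts = 2**24 - 2   # about 16.7 million in a Class A

c_util = 15 / class_c_hosts        # "maybe it's got fifteen hosts"
a_util = 40_000 / class_a_hosts    # his generous Class A guess

# Even a lightly-populated Class C beats a big Class A by well over
# an order of magnitude in utilization.
ratio = c_util / a_util
```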
Now, there’s two additional steps above and beyond that. The first step above and beyond that is to— Remember we talked about how the Internet address performs three different functions—there’s a locator, and a host identifier, and a selector. I personally think that the optimal evolution strategy’s— I mean everybody agrees that we need a new routing and addressing architecture by—and I’m using addresses in the locator sense here, ie. some sort of structured name that tells you where the thing is. Everybody agrees we need a new one. I don’t think there’s any dispute there. But rather than deploy a whole new packet format, and a whole new routing and addressing architecture at the same time, what I’d like to try and do is deploy a new routing and addressing format—and I obviously have one under consideration, and sort of use that as sort of a common ground between an old packet format and a new pack format. So to say, let’s get the new routing and addressing architecture deployed, and they we’ll design a new packet format that uses those new new-style locators. And so, this new routing and addressing architecture will be deployed as an adjunct to the current internetwork player. And then that sorts of the common piece between the current internetwork layer and a new internetwork layer. So it’s more of an evolution rather than a well you know, we’re gonna take a total step here from from system A to system B.
Malamud: How big is that network going to be in ten or twenty years? Do you have any thoughts?
Chiappa: Twenty years? In twenty years, there is not going to be a phone network. There is not going to be a television distribution network. There is not going to be any kind of separate communication network. There’s going to be one giant network which handles everything. And the paradigm— You know, it’s all going to be traveling in packets inside an internetwork system. I think you’ll find pretty broad agreement on that among most of the people here.
I’m not sure that all the people in the phone companies and the cable TV companies have bought into this vision yet. I mean, everybody believes in an integrated system I think, you know. That’s why you know, who is it, Bell Atlantic just bought up that cable TV company. I think everybody believes in the integrated system. It’s just that not everybody understands what the integrated system is gonna look like. But you go around and talk to all the people around here and they know exactly what it’s gonna be, and it’s gonna be a giant Internet with resource reservation.
Malamud: So if you go to an IETF, that’s what people think. If you go to a telephone company they think it’s gonna be the integrated broadband ISDN world. If you go to the cable people it’s everything’s going to be your cable box. You see all of these converging, and it really is going to be an internetwork?
Chiappa: Oh, I think so but I mean, you know, if you go to the cable TV guys and say you know, “Tell us about…” You know, they tend to give you the lower layers. I mean, they don’t tell you how… You know, you see all these wonderful diagrams with boxes and wires. But what they don’t tell you is you know, what’s the structure that’s gonna tie all this together. You know, it’s the system-level thinking that I find is missing there. You know, broadband ISDN… I mean how is broadband ISDN different from ATM anyway? I mean it seems to me that all the things that I said about how the ATM guys haven’t really got a clear view of the future that’s deployable and practical, probably applies to the telephone company guys as well—and I may get shot for saying that.
But, I don’t think there’s a realistic view for how to build a global communication network other than pretty much at this point the Internet one. And I think the Internet guys are the ones who’re— You know, maybe we’re all suffering from delusions of grandeur. But we’re at least thinking about what the whole system’s gonna look like as a system. And trying to design a system from the top down that has those capabilities.
Malamud: And how big is this network gonna be? How many nodes is the Internet gonna have?
Chiappa: Oh, twenty years from now? I don’t know, take the number of people on the planet and multiply by ten—I don’t know, something like that. I’m fully believing that I will live to see an Internet… You know…the problem is at that point you start to get into all sorts of other variables like you know, are we gonna have like, large-scale regional wars which reduce large-scale regions of the globe to poverty. I mean, if things like that happen clearly… This high-technology Internet can only spread through places that can support that kind of infrastructure. And you know, say the coastal regions of China today look like a good bet for being Internet-live in five to ten years. But if something happens inside China and they fall back into massive disarray and confusion and you know, the famines could come back, there could be civil wars. And you know, the Internet’s not gonna spread there if that happens.
So, I think… You know, if you can answer me the question of what areas of the globe are going to be technologically advanced and economically functional in 2010, I can tell you where the Internet’s going to be in 2010 and how big it’s gonna be.
Malamud: This has been Geek of the Week and we’ve been talking to Noel Chiappa.
Chiappa: Thanks a lot.
Malamud: This is Internet Talk Radio, flame of the Internet. You’ve been listening to Geek of the Week. You may copy this program to any medium and change the encoding, but may not alter the data or sell the contents. To purchase an audio cassette of this program, send mail to radio@ora.com.
Support for Geek of the Week comes from Sun Microsystems. Sun, The Network is the Computer. Support for Geek of the Week also comes from O’Reilly & Associates, publishers of the Global Network Navigator, your online hypertext magazine. For more information, send email to info@gnn.com. Network connectivity for the Internet Multicasting Service is provided by MFS DataNet and by UUNET Technologies.
Executive producer for Geek of the Week is Martin Lucas. Production Manager is James Roland. Rick Dunbar and Curtis Generous are the sysadmins. This is Carl Malamud for the Internet Multicasting Service, town crier to the global village.