Carl Malamud: Internet Talk Radio, flame of the Internet.
Malamud: This is Geek of the Week. We’re talking to Peter Ford, who’s a member of the technical staff at Los Alamos National Laboratory. Welcome to Geek of the Week, Peter.
Peter Ford: Thanks, Carl. It’s great to be here.
Malamud: You’ve been one of the instigators of one of the contenders for the next generation of IP, known as TUBA. Maybe you could tell us briefly what TUBA is.
Ford: Sure. TUBA is an acronym for “TCP and UDP with Bigger Addresses.” Originally the proposal was quite generic, in the sense of just looking at how to run the current Internet transport protocols, TCP and UDP, and of course everything above them like telnet and FTP, on top of some network layer protocol that had bigger addresses than the current 32-bit addresses that we have in IP version 4. As things progressed we sort of shopped around, and because of the experience of a lot of people in the working group that was involved with TUBA, we went with looking at a specific instantiation on top of CLNP, which is the connectionless network layer protocol that was basically standardized by the ISO community.
Malamud: And so did you basically take the ISO protocol and slap TCP on top of it and off we go, or did you make some changes, or…
Ford: Well basically we didn’t really change anything dramatically. In fact we didn’t change anything in CLNP the way it is today. What we did do is we specified how to encode the carriage of TCP and UDP on top of CLNP. The biggest trick there is that TCP and UDP specify what is known as a pseudo header which actually goes in and reaches into the network layer to get some other information when it generates a checksum for protecting that information. And so that was one of the major design points. We actually have a spec for it and it’s sitting out I guess as an experimental RFC right now.
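[Editor’s note: a minimal sketch of the pseudo-header idea Ford describes, assuming an illustrative field layout rather than the exact encoding in the TUBA spec; the checksum routine is the standard Internet checksum, and the NSAP values below are made-up examples.]

```python
# A toy sketch of the design point Ford describes: TCP and UDP compute
# their checksums over a "pseudo header" that reaches into the network
# layer for the source and destination addresses. Under TUBA those
# addresses are variable-length NSAPs instead of 32-bit IP addresses.
# The field layout here is illustrative, not the exact TUBA encoding.

def ones_complement_checksum(data: bytes) -> int:
    """The standard Internet checksum: 16-bit one's-complement sum."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

def pseudo_header(src_nsap: bytes, dst_nsap: bytes,
                  proto: int, tcp_length: int) -> bytes:
    """Illustrative pseudo header: length-prefixed NSAPs (since they are
    variable length), a protocol byte, and the TCP segment length."""
    return (bytes([len(src_nsap)]) + src_nsap +
            bytes([len(dst_nsap)]) + dst_nsap +
            bytes([0, proto]) + tcp_length.to_bytes(2, "big"))

# Two made-up 20-byte NSAPs and a stand-in for the TCP segment.
src = bytes.fromhex("4700058000a1b200000001000100aa0004000100")
dst = bytes.fromhex("4700058000a1b200000001000100aa0004000200")
segment = b"hello"
print(hex(ones_complement_checksum(
    pseudo_header(src, dst, 6, len(segment)) + segment)))
```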
Malamud: Is that really the only modification that you needed to make?
Ford: Well that’s really not a modification. I should— I want to clarify that. It was really just a specification of how to carry TCP and UDP on top of CLNP.
Malamud: So TCP and UDP remain the same, and CLNP remains the same.
Ford: Architecturally— Implementation-wise, CLNP remains the same. TCP and UDP implementations of course change, but only to handle the new pseudo header. One thing that the current specification of TUBA does not cover is the issue of letting a system originate TCP on an IP host and terminate on a TCP-speaking host that’s running on top of CLNP; it only covers the case where the two end systems are both running TCP on top of CLNP. That’s not to say it’s impossible to do that, it’s just that to date there isn’t a succinct spec that says how to do that.
Malamud: What were the motivations behind picking CLNP? What made you decide that that was an appropriate Internet protocol?
Ford: CLNP actually started when the American community took the Internet protocol to the ISO community, and not only tried to get it standardized but also, in a sense, brought the concept of connectionless datagram networking to the ISO community. As you probably know, and maybe many of your listeners know, the ISO community used to be very focused on strictly connection-oriented networking. Some brave, hearty Americans and some fellow compatriots in other countries basically brought CLNP to the ISO community.
The one thing that they did change significantly from the original IP spec was the whole area of addressing. Basically the concern was that if you’re going to build up a network, in this case a connectionless network, at a global scale, it was clear to the people in the committee at that period of time that the 32-bit address space that was in IP version 4 was not sufficient. And so this is the one area that shifts dramatically from IP version 4 to CLNP. Many other things carried forward: source and destination addressing, TTL, the way options are processed, things of that sort are very similar. They’re not identical, but they’re very similar to each other. Someone who knows IP version 4 will recognize most of their favorite features in CLNP.
Malamud: Now, the addresses I’ve heard are 20-byte addresses, and some people say gee, that’s an address for every proton in the universe. Could you explain some of the rationale behind [crosstalk] the address space?
Ford: Sure. That’s actually a very good question and I really appreciate it. It’s sort of like a softball being tossed up.
It turns out that NSAPs are actually variable-length addresses. The way an NSAP is represented in a CLNP header is: it has a length byte, it has a type, in a sense, called the Authority and Format Identifier (the AFI), and then the rest of the address. And at the very end of the address there’s a single byte called the NSEL, the end selector, which is equivalent to the proto field in IP version 4.
You can actually have addresses anywhere from, let’s say, one byte up to 255 bytes in the worst case in CLNP. The reason the number twenty keeps popping up is that the US GOSIP spec essentially profiles the NSAP address space for 20-byte addresses, and it lays out what the format is going to be and what each of the individual fields within that format is. What’s notable about that is that you then have a length indicator of 20, and you have an AFI that basically specifies that this is going to be coming out of a particular range of the address space—
Malamud: AFI is an Authority and Format, um—
Ford: Identifier, right. This is, you know, ISO-ese. A lot of terminology here.
The important thing is it basically specifies who’s in charge of the subsequent portion of the address. And then the remaining bytes are essentially very well-formatted. And they encode things like areas within a site and then sort of the relationship between a site and its provider in terms of how the address space is delegated.
So in fact you could actually have much shorter addresses using NSAPs, and CLNP could carry those, or, you know, worst case you could have even larger addresses if you wanted. So 20 bytes is currently what’s specified. It’s not mandated that you use that; you don’t have to. It just happens to be a profile that was done by the US GOSIP community.
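[Editor’s note: a minimal sketch of the 20-byte US GOSIP NSAP profile Ford describes, following the GOSIP Version 2 field layout; the widths and constants here should be treated as illustrative and checked against GOSIP itself, and the values in the example are made up.]

```python
# Build a 20-byte GOSIP-style NSAP from its component fields.
import struct

def gosip_nsap(aa: bytes, rd: int, area: int,
               system_id: bytes, nsel: int) -> bytes:
    """Assemble a GOSIP Version 2 NSAP.

    AFI  (1) = 0x47, "ICD, binary": who's in charge of what follows
    ICD  (2) = 0x0005, the US government's designator
    DFI  (1) = 0x80, the GOSIP format version
    AA   (3) = administrative authority (e.g. an agency or provider)
    Rsvd (2) = reserved
    RD   (2) = routing domain
    Area (2) = area within the routing domain
    ID   (6) = system identifier (often a MAC address)
    NSEL (1) = end selector, akin to IP's proto field
    """
    assert len(aa) == 3 and len(system_id) == 6
    nsap = (b"\x47" + struct.pack(">H", 0x0005) + b"\x80" + aa +
            b"\x00\x00" + struct.pack(">HH", rd, area) +
            system_id + bytes([nsel]))
    assert len(nsap) == 20                  # the GOSIP profile length
    return nsap

addr = gosip_nsap(aa=b"\x00\xa1\xb2", rd=1, area=7,
                  system_id=bytes.fromhex("00aa00040001"), nsel=0x1d)
print(addr.hex())
```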
Malamud: Does the group that is looking at a next-generation IP have any thoughts as to what that length should be, or are you gonna stick with the US GOSIP specs?
Ford: Well, I think what we have is a generic addressing and routing architecture, and that essentially indicates what the hierarchy should be. It’s felt that 20 bytes is not a bad choice, and in fact most of the systems that are out there today work with a 20-byte length. I don’t think we have any predilection to come up with a new addressing plan. In fact we have an addressing plan that was developed inside the IETF, RFC 1237, and we’re making some small changes to that. What’s notable about RFC 1237 is that it was actually the base document that was used to generate the most recent work on classless inter-domain routing, CIDR. So actually, if you look at the CIDR spec and you look at RFC 1237, you’ll notice an amazing amount of similarity in the two documents.
Malamud: Now, you have a 20-byte source address and a 20-byte destination address. Is that too big? Is that something that means all of a sudden each packet we send is gonna be incredibly huge?
Ford: Well, actually yes, you’re right, it does change the length of the packet. The worst case that people usually bring out today is telnet packets, where a single byte of data is carried per packet. In fact with IP that means that you’ve got, you know, less than 5% utilization of the packet. And with CLNP that does get a little bit worse. You know, I’m not going to mince facts there. That’s true.
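[Editor’s note: a back-of-the-envelope check of the telnet worst case. The IPv4 numbers are standard; the CLNP header size is an assumption here (a 9-byte fixed part plus two length-prefixed 20-byte NSAPs, roughly 51 bytes, with no optional parts).]

```python
# One telnet keystroke per packet, compared under IPv4 and CLNP.
data = 1                      # one byte of data per packet
ipv4_hdrs = 20 + 20           # IPv4 header + TCP header
clnp_hdrs = 51 + 20           # assumed CLNP header + TCP header

print(f"IPv4: {data / (ipv4_hdrs + data):.1%} utilization")   # ~2.4%
print(f"CLNP: {data / (clnp_hdrs + data):.1%} utilization")   # ~1.4%
```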
However, there are ways to compress that. In fact, at this year’s IETF in Houston, we just went through a proposal on how to do header compression, using CLNP, that would allow us to actually significantly reduce the size of the packet.
Malamud: Do you actually compress the address, is that what you do?
Ford: You compress the information that’s in there that doesn’t change. So for example if I have a system A talking to your system B, if A and B can agree on sort of a compression scheme on the information that doesn’t change as I’m talking to you over time, then I can significantly reduce the amount of information that has to go across the link. This is not dissimilar to what happens with compressed SLIP today, for example. And it’s just another scheme to get better utilization of line speed. Of course there are other mechanisms for doing that. And we’re investigating several of them.
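[Editor’s note: a toy sketch of the principle Ford describes, in the spirit of compressed SLIP (Van Jacobson header compression, RFC 1144): both ends cache the header fields that don’t change for a conversation, and the sender ships a small connection ID instead of the full addresses. None of the actual proposals before the TUBA Working Group is reproduced here; a real scheme would also handle ID exhaustion and resynchronization.]

```python
# Sender-side sketch: big unchanging NSAPs are sent once, then replaced
# by a one-byte connection ID on every subsequent packet.
class HeaderCompressor:
    def __init__(self):
        self.cache = {}        # (src, dst) -> connection ID
        self.next_id = 0

    def compress(self, src: bytes, dst: bytes, payload: bytes) -> bytes:
        key = (src, dst)
        if key in self.cache:
            # Addresses already known to the peer: send a 1-byte ID.
            return bytes([self.cache[key]]) + payload
        # First packet: send full length-prefixed addresses, which
        # implicitly establishes the connection ID at both ends.
        self.cache[key] = self.next_id
        self.next_id += 1
        return (b"\xff" + bytes([len(src)]) + src +
                bytes([len(dst)]) + dst + payload)

c = HeaderCompressor()
src, dst = b"\x47" * 20, b"\x48" * 20
first = c.compress(src, dst, b"x")    # 43 bytes of header overhead
later = c.compress(src, dst, b"x")    # 1 byte of header overhead
print(len(first), len(later))
```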
CLNP has actually been specified by several communities who are very concerned about efficient utilization of links, for example radio frequency links, where you typically go 9.6 kilobits per second, and specifically in the airline regulatory arena. And so there are several other proposals for doing this kind of compression. Our working group, the TUBA Working Group, is looking at several of these proposals and trying to determine if we have one that we think is, let’s say, best, and use it as the basis of a standard that we might push into the Internet community process.
Another group, CDPD, a very large consortium of cellular data network providers involving a lot of the regional Bell operating companies in the United States, also has a standard. And we’re going to study that as well.
Malamud: When we look at the network layer, we look at more than just the IP protocol or CLNP; we look at a variety of associated protocols. In the current IP world we have ICMP, for example. In the pure ISO spec there are things like End System to Intermediate System, ES-IS. Do you propose to bring some of those ISO protocols into the Internet as well as bringing CLNP in?
Ford: Sure. There’s been a tremendous amount of work done by the ISO community. And in fact the same people who work in the ISO community on routing protocols and CLNP actually participate, and have participated, in the IETF for many, many years. So again, there’s a lot of similarity that you’ll see between the protocols in the ISO community and the protocols in the Internet community. In fact, there was a debate a while ago about IS-IS and OSPF, and the fact is that those two protocols are probably far more similar than they are dissimilar. So with CLNP we propose to bring in ES-IS, which is in a sense a replacement for ARP and router discovery, and basically allows an end system to be connected into the network; IS-IS, which is essentially an intra-domain routing protocol; and IDRP, the Inter-Domain Routing Protocol, which just recently became an ISO standard.
What’s significant about IDRP is that it derives directly from work that was done within the IETF and Internet community on what’s called the Border Gateway Protocol. In fact, within the IETF community most of the people in the BGP Working Group (the Border Gateway Protocol Working Group) think of IDRP as the logical successor to BGP 4 for doing inter-domain routing, which is the routing done between providers of Internet connectivity. And we’re very excited about it becoming an international standard.
So there’s a whole family of protocols, as you said, that will be coming in with this. They’re international specs, they’re well-documented, you can get a hold of them in most cases—IDRP being the notable exception. They’ve been out there in the field, they’ve been tested, and they’re deployed and people actually use them every day.
Malamud: Do you see a political benefit in adopting an internationally-approved standard?
Ford: I don’t see it as a major political benefit. I think the fact is that the people who do the standards in both the Internet community and the ISO communities are really hard-working engineers. They sit down, they develop consensus standards, they do the best job they possibly can. In cases where they find flaws they try to get them fixed. These things are living documents, as you’re probably well aware, in terms of the way the standards processes work.
I think the greatest benefit that we have today, in the sense of, you know, the overall political scene, is basically to identify for the sake of the information technology…business, if you will—the telecommunication providers, the computer industry—that having a single network layer protocol for the development of infrastructure on a global scale is important. And I think in the Internet community what we’ve done is demonstrate that it can be done. The ARPANET going into the NSFNET, now going into what we call the Big‑I Internet, has proven that we can do it. It is possible. You can build a pan-world network that has many autonomous administrations, and it works quite well. And I think the thing is just to basically take this technology base, come up with a workable standard that everyone can agree on, a true consensus standard, and move it ahead.
Malamud: Do you believe that the SIP group and the TUBA group are going to be able to come together and reach a consensus solution?
Ford: I certainly hope so. I believe that we in the TUBA group see some things that are very good about the SIP proposal. You know, they’ve done a good job of engineering; it’s not a bad proposal. The weakest part of that proposal in comparison to the TUBA proposal, I think, is addressing: we would put forward that variable-length addressing is very important, because it basically allows you to anticipate the future, whereas the SIP proposal proposes fixed 64-bit addresses. One could imagine a very interesting combination of the two: variable-length addressing, with the ability to go larger than 64 bits, in something like the context of the SIP proposal. So I think it’s possible to see that. And I think that’s what we’re going to work on inside the IETF: developing a consensus on the requirements that we have to meet, and making sure that we can meet them with the protocol that comes out of the IETF.
Malamud: Let’s say such a consensus is reached. Would that consensus have to be entered back into the international process, and do you think the international process could accommodate a change in something so fundamental as its network layer protocol?
Ford: I think…you know, I think the whole thing is gated by what the information technology industry does. I don’t think that by fiat or de jure you can dictate what the future Internet is going to be. I think it’s basically people building great products, good operators turning it on, and the end users being able to satisfy their requirements for getting their jobs done, or for that matter having fun, with entertainment. So I think what both organizations have to do, both organizations in this case being the Internet community and the ISO community, is basically recognize that this is gonna happen, and that whatever technology base the consensus is developed on is going to be de facto. And they’re much better off standardizing it, so that a new company that wants to build an Internet product, if you will, will have a great document to start out with for building those products. And in effect, if they build true to that spec, they will be able to interoperate on the global infrastructure.
Malamud: Peter Ford, you’ve been a participant in the efforts to help define a new architecture for federal networking. In the past there was a backbone to the Internet: first the ARPANET, and then the NSFNET. And you’ve been one of the key participants in helping to define a new architecture based on NAPs, Network Access Points. Can you help explain what this architecture is?
Ford: Well, it’s an architecture that you can see inside the NSFNET solicitation that was issued earlier this year, in 1993. Essentially, the goal of NAPs is to provide good, neutral interconnect points for network service providers of the Internet. So if you will, if we had four or five transcontinental Internet providers in the United States, it would be nice if they interconnected, minimally, in two or three places. That basically helps them organize their routing, helps them understand how to do debugging of the network, and helps them build a more reliable network infrastructure in the inter-domain sense, so that in the event one of the neutral interconnects (let’s say the one in New York City) fell apart, you could still get traffic through, but it might go through one of the other interconnects in the system.
Essentially the providers pop their routers onto a shared Ethernet, and this gives them a place to interconnect. What makes this interesting, and global in the sense of the Global Internet Exchange, is that there’s also a lot of international connectivity that comes into the Washington DC area. Specifically, NSF has an international connections program with three European links—one to Stockholm, one to London, and one to Paris—that also come down into the DC area. And so this provides a very nice place for interconnects. The Global Internet Exchange is actually another discussion on interconnects that was done basically in the international community, in a group called the Internet International Engineering and Planning Group, the IEPG. It involves people from a lot of international networks: places like NorgeNet, the Australian network AARNET, the Japanese networks like WIDE. And you’ve had several members of this community on your show.
Basically we get together about once or twice a year. We have our own mailing list, of course. And we discuss what we can do to make the international network work best. It was felt that a good thing to do was to focus on solid connectivity and interconnectivity, and this is sort of the first implementation of that. The motivation for NAPs and the motivation for GIXes are the same: the best possible interconnect for the multiple providers that exist in the Internet today.
Malamud: In a particular area or within a particular scope. Now, if there’s one GIX in Washington, presumably at some point you’ll need another GIX in Asia, or in Europe. Or do you envision a single GIX?
Ford: No, I don’t envision a single GIX. That’s actually, I think— We’ve taken a lot of flak on having multiple NAPs, because, you know, if you don’t even have one yet, how can you possibly posit more than one? And I think that points to the question that you presented, which is: it’s clear you’re gonna need more than one. And I think one of the goals that we have in the NSFNET solicitation is to motivate people to build more than one, because it solves real problems. And we have that in Europe today: there’s a lot of discussion going on about how to build yet another GIX in the European area. And the issues that come together for that are, you know, where to site it, how do you do interconnects, how do you flow global routing information across the system. This essentially is stressing the technology base that we have today. And it’s very exciting, because, you know, you’re presented with real problems; we get together, we do that in the IETF, we do that in the IEPG, we work on these problems, we figure out if we need new technology or if it’s stuff that can be controlled through topological constraints, and we go off and we do it. And there are several members that participate in that. It’s very exciting work.
Malamud: How do you connect one GIX to another GIX, or one NAP to another NAP?
Ford: Well, you know, this is the new Internet, right? It used to be, when you only had a single backbone, you essentially only had a single provider; typically the government provided it. So if I was on one network, let’s say in Alabama, and I wanted to talk to somebody on a network in Oregon, you didn’t worry about inter-provider agreements and things of that sort. When you start talking about a global Internet that has thirty, forty, fifty, maybe hundreds of players and networks that are interconnected, it’s quite likely over time that we have to start thinking about intercontinental and transcontinental connectivity providers.
So for example, you could connect these things by bridging them if you wanted. The problem with that is that you essentially have a transit problem: who pays for the traffic that crosses between them? If you believe that the Internet is just one homogeneous mass where everyone pays their share, then you build that bridge network as shared infrastructure, and that’s one way to approach the problem of interconnecting them. I don’t think that’s very realistic. I think that there are competitive advantages for network providers that operate in particular areas to look for other forms of transit.
Both of them will work. You can basically interconnect GIXes, you know. I could be, oh, let’s say the NTT network, and interconnect the GIX in Stockholm with the GIX in DC, for example. And everybody who wanted to transit the network that I built between those GIXes could possibly pay that intermediate network.
Malamud: Could there be multiple—
Ford: Sure.
Malamud: —bridges between those GIXes?
Ford: There could be multiple bridges or multiple networks between them. And in a sense what you get is a competitive market for inter-GIX connectivity, if you will. And so I think that’s a very good thing. In the United States, for example, we have at least five networks that are transnational, that basically can give you coverage any place in the United States—and I’m sure five is a significant underestimate. I’ve talked to many people who’re talking about starting up similar concerns.
Malamud: In many of these interconnect arrangements, particularly the Commercial Internet Exchange but also the Global Internet Exchange, it’s based on the “I’ll take all your traffic and you take all my traffic” model. In other words, there are no settlements. Do you think we’re going to move to a world in which we start measuring the number of packets that went in one direction and the other, and then a check gets written to account for the imbalance?
Ford: I guess what I would say is I wouldn’t be surprised if that comes about. I don’t see an immediate need for it. As I said I think there’ll be markets for this. And it’s not clear where this is all going to go.
My gut-level feeling is that if you’re building infrastructure and you’re working in a transnational sense, you’re going to want to have a mechanism to make sure that you’re essentially getting revenue to help cover your costs, so that you can invest to help build your infrastructure over time. That may be done by setting a price for other providers. It doesn’t mean you have to use a Sender Keep All arrangement— Excuse me. It doesn’t mean you have to use settlements to collect that money. You can essentially just set the price. And so it doesn’t mean that we ever have to get to paying for bits.
I will point out, though, that as far as I can tell, most customers of the Internet actually do in some sense pay for the bits they push across. A tremendous number of users of the Internet actually dial into a place where they pay a charge per hour and things of that sort. And so they are paying some usage-based charge. It may not be by bits; for some people it is by bits. And so, as I said earlier, we have markets that develop, and people will set a pricing scheme, and if the customers come, great, and if they don’t, they’ll probably change their pricing schemes.
Malamud: That’s a very…market-oriented approach. This is very different from the impression a lot of people have of the Internet, that it’s essentially a government-provided service with maybe a couple of leaves on the edge of it that are commercially driven. Do you see the Internet as evolving from a government network into a true commercial marketplace? Are we there yet?
Ford: You bet. I think we’re actually— You know, if we’re not there we’re past it. You know, if you look at what we have in the US today, of course we have the NSFNET backbone and it’s been a tremendous boost for getting the technology out there, proving that it can be done, showing that you can provide a reliable service that people really really want. But there are a lot of parallel networks to the NSFNET. There’s PSI, there’s Alternet, there’s SprintLink, and in fact the provider for NSFNET, ANS, basically has a commercial arm as well.
And so, we have a competitive marketplace out there, you know. Anybody who wants to connect to the Internet has a whole list of people they can call today to provision them Internet service. Which is great, because I think the customers get you know, fair pricing for their product—they can actually shop around, and they can also buy different kinds of services. Some of these companies are significantly differentiated by the kind of services they offer. Some of them offer fairly low-cost but fairly low hand-holding type services, and others of them will do everything. They’ll come right on your site, they’ll run your routers, and in fact several of them will consider—for a price—running your local area network. So I think we’re there.
There’s still a role for the government in a lot of this, in terms of technology development, in terms of bringing new applications online, and in terms of bringing new communities that are of national interest onto the network. And I think that’s where you’ll see the government go in the future on this.
Malamud: Well thank you very much. We’ve been talking to Peter Ford, and this has been Geek of the Week.
This is Internet Talk Radio, flame of the Internet. You’ve been listening to Geek of the Week. You may copy this program to any medium and change the encoding, but may not alter the data or sell the contents. To purchase an audio cassette of this program, send mail to radio@ora.com.
Support for Geek of the Week comes from Sun Microsystems. Sun, The Network is the Computer. Support for Geek of the Week also comes from O’Reilly & Associates, publishers of the Global Network Navigator, your online hypertext magazine. For more information, send email to info@gnn.com. Network connectivity for the Internet Multicasting Service is provided by MFS DataNet and by UUNET Technologies.
Executive producer for Geek of the Week is Martin Lucas. Production Manager is James Roland. Rick Dunbar and Curtis Generous are the sysadmins. This is Carl Malamud for the Internet Multicasting Service, town crier to the global village.