Carl Malamud: Internet Talk Radio, flame of the Internet.
This is Geek of the Week and we’re talking with Christian Huitema, who’s a researcher at the Institut National de Recherche en Informatique et en Automatique. Did I do that anywhere close?
Christian Huitema: No, that’s quite correct.
Malamud: Well thank you. Welcome to Geek of the Week, Christian. You’re the chair of the SIP Working Group. That’s Steve’s IP, or Simple IP, one of the candidates for the next generation Internet protocol. I was wondering if we could talk a bit about what you think some of the requirements are for the next generation of an Internet protocol.
Huitema: Oh. I think the one reason why we want to change IP is growth. I mean, we have so many IP stations, with a growth of 10% a month. So if you [indistinct 1:12] a growth of 10% a month, there is a point in time at which we cannot allocate more networks. And that point is…well, somewhere within four or five years, depending on the counts and the technology we use. But we have to prepare for that point now. And if we want a wider Internet, we need wider addresses. As we need wider addresses, we have to redo the packet format. As we redo the packet format, we have to do it well, so that we incorporate the results of research which occurred between [A75?] and now. Like multicast, like mobility, like resource reservation. And also we have to get clean, remove functionality that we don’t need, so that we get fast. So, that’s SIP.
Malamud: Well, but that sounds like a very simple editing process. All you do is double the address size and get rid of a couple of unused fields. Do you need more than that? Do we need to change the model of what the network layer does?
Huitema: Eh, precisely the idea is to be as simple as possible because… I would compare that to the RISC philosophy: that in order to get fast, you had better get simple. And for example you have a much simpler handling of the options. You have a much simpler handling of segmentation, or you don’t do segmentation in most cases. Basically by experience we have learned what is used and what is not. And we don’t want to make a pipe dream on paper, we want to reuse what we have experimented with.
Malamud: Well how do you make a transition to this new environment with SIP? Are we gonna lose connectivity for parts of the Internet during a transition period? Are we gonna go for some kind of a changeover?
Huitema: Well that’s the challenge. And in fact that’s not something I’ve done personally, that’s something which has been designed in the IP group by Bob Hinden and Dave Crocker in the past. The idea is that the IP address will be embedded in the SIP address. And during the overlapping two or three years, during which we could still do IP, we have to prepare for the transition to 64 bits. During the period where we need only 32 bits, and before we need more, we install the 64 bits so that we can gain in better routing and easier management, and we’ll have translation tables so we can translate a 64-bit into a 32-bit address transparently to the user. In fact that is already being demonstrated. If you go to the terminal room at the IETF you can see that being demonstrated: a SIP host talking to an IPv4 host, and vice versa. It’s fully transparent.
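[A minimal sketch of the embedding idea Huitema describes, not the actual SIP/IPAE encoding, which the interview does not spell out; the prefix value and exact layout are assumptions made for illustration only.]

```python
# Illustrative only: embed an IPv4 address in the low 32 bits of a 64-bit
# SIP-style address, so a translation table can map between the two forms
# transparently. The prefix value below is invented.
import ipaddress

SITE_PREFIX = 0x00001234  # hypothetical high-order 32 bits assigned to a site

def embed_ipv4(ipv4: str, prefix: int = SITE_PREFIX) -> int:
    """Build a 64-bit address whose low 32 bits are the IPv4 address."""
    return (prefix << 32) | int(ipaddress.IPv4Address(ipv4))

def extract_ipv4(addr64: int) -> str:
    """Translate back to 32 bits by dropping the high-order half."""
    return str(ipaddress.IPv4Address(addr64 & 0xFFFFFFFF))

addr = embed_ipv4("192.0.2.7")
assert extract_ipv4(addr) == "192.0.2.7"  # transparent in both directions
```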
Malamud: You mentioned resource reservation as one of the areas that a new IP might need to support. Resource reservation sounds a bit like you’re setting up a connection and reserving those resources. Are we moving away from packet switching and going back to some form of connection-oriented network service?
Huitema: Well there are several schools there. At one extreme you have, say, the ATM school, where you will do a virtual circuit and you will specify that you want a delay of fifty milliseconds plus or minus 5%, no more, and that you want a throughput of 100 kilobits except that from time to time you will need 200, etc. You make a very precise reservation. And…well, that’s one school. And indeed you could mimic that in the Internet. You don’t necessarily need to do a virtual circuit to mimic that. You can essentially do some identification of your packets as belonging to a reservation group. And you route them as normal datagrams, but they are queued as part of a reservation group. To put it shortly, they get a better priority. They get a better priority as long as they stay within the resources. That’s what is being done by resource reservation protocols.
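[A toy sketch of the queuing idea just described, not a real protocol: packets tagged with a reservation-group ID get priority only while the group stays within its reserved rate, enforced here by a simple token bucket. All names and numbers are assumptions.]

```python
# Toy model: reserved flows are queued ahead of best-effort traffic, but only
# as long as they stay within the rate they reserved.
import time
from collections import deque
from typing import Optional

class Reservation:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate, self.burst = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, time.monotonic()

    def conforms(self, packet_bits: int) -> bool:
        """Token bucket: refill at the reserved rate, spend per packet."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

priority_q, best_effort_q = deque(), deque()
reservations = {"video-42": Reservation(rate_bps=500_000, burst_bits=64_000)}

def enqueue(packet_bits: int, group_id: Optional[str]) -> None:
    res = reservations.get(group_id)
    if res and res.conforms(packet_bits):
        priority_q.append((group_id, packet_bits))      # better priority
    else:
        best_effort_q.append((group_id, packet_bits))   # normal datagram
```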
I personally do not believe that you need very strict resource reservation to do most of the things we want to do. I mean, we have been developing video software, we have been [indistinct] video experiments, precisely because we wanted to gain first-hand experience in that. Because you read in the papers that you need megabits to do video, that you need resource reservation. And yet you observe that if you just put a video codec on the Internet, provided you have some control of the codec, well, it just works.
Malamud: What do you mean, it just works? Don’t you need more bandwidth? Don’t you need guaranteed resources? What happens if you drop packets?
Huitema: If you look at the video codecs we have today, I mean if you look at the nv interface, or if you look at the ivs interface which we have developed in my team, you will find that there is a button which says “What’s the bandwidth you require?” And you can place the button anywhere between say 2 megabits and something like 20 kilobits. You can even go at a lower speed. What it means is that you have a continuous quality arbitration: more bits means better quality, a crisper image, but more bits in the network; or you put fewer bits in the network. Indeed you have a very fine quality if you are sending 2 megabits per second of video. But if you use fewer bits per pixel, if you spend more CPU cycles on your compression, then you can have a much lower throughput.
So what that means is, if you have this button in the interface, you can do manual selection of your throughput. The next step, what we are currently researching in my team, is to have a feedback algorithm on the Internet, so that you send your packets but at the same time you probe the network, and according to what you find as available capacity, you increase or reduce your throughput. And, well…
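[The interview does not give the actual ivs control algorithm; the sketch below only shows the general shape of such a feedback loop, additive increase when the path looks clear and multiplicative decrease on reported loss. The constants and the loss-report feedback are assumptions.]

```python
# Generic rate-adaptation loop driven by receiver loss reports (assumed
# feedback mechanism); not the real ivs algorithm.
MIN_RATE, MAX_RATE = 20_000, 2_000_000   # 20 kbit/s .. 2 Mbit/s, the range quoted above
INCREASE_STEP = 10_000                   # probe for capacity slowly
DECREASE_FACTOR = 0.5                    # back off quickly under congestion
LOSS_THRESHOLD = 0.02                    # assumed tolerable loss fraction

def adapt_rate(current_rate: float, loss_fraction: float) -> float:
    """Return the coder's new target bit rate for the next period."""
    if loss_fraction > LOSS_THRESHOLD:
        new_rate = current_rate * DECREASE_FACTOR
    else:
        new_rate = current_rate + INCREASE_STEP
    return max(MIN_RATE, min(MAX_RATE, new_rate))

rate = 100_000
for loss in [0.0, 0.0, 0.05, 0.0]:       # example loss reports
    rate = adapt_rate(rate, loss)
```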
Malamud: Well you mentioned 2 million bits per second and 20,000 bits per second as two extremes. What would a 2 megabit picture look like? What’s the resolution and the frames per second and the shades of gray or color?
Huitema: Well, currently the highest throughput we have reached was something like 500 kilobits.
Malamud: Okay. What’s that look like, then?
Huitema: It looks like… It looks essentially like the precision of a video-type image.
Malamud: Okay. So is that thirty frames a second? Is that a—
Huitema: Well it’s not thirty frames a second now, although we are getting close. I mean, when we first did ivs on a Sun IPX, we were able to do 1.5 to 2 images per second. If you’re using a Sun 10, and especially if you are using a large machine, we can do four or five images. If you’re using a more powerful workstation, a top-of-the-class workstation now, you can do something like ten images per second. Which means that twenty images or forty images a second is something you will get in say one year or two.
Malamud: So the bottleneck there is the CPU and not the network?
Huitema: The bottleneck is the CPU, yes.
Malamud: Now what about 20,000 bits per second. What kind of an image do you get out of the very very low throughput like that?
Huitema: The problem you have is that in order to get to 20 kilobits per second you have to do delta encoding. So you only send the deltas, and in fact you also do data reduction. You send less-complex images. For example you send a smaller number of levels of gray, you only send the coarse frequencies. So…well, it’s not very good. I mean, if you move very fast, you’ll see the phantom of your initial position still on the screen, and it will clear after a time. But it’s usable. It’s still usable if you’re doing, say, videophone applications.
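[A toy illustration of the “send only the deltas” idea, not the actual codec: split each frame into blocks and resend only blocks that changed noticeably, which is also why a fast-moving subject leaves a temporary phantom behind. Block size and threshold are assumptions.]

```python
# Conditional-replenishment sketch: only blocks that differ from the previous
# frame by more than a threshold are transmitted.
BLOCK = 8          # pixels per block (1-D frame for simplicity)
THRESHOLD = 10     # assumed per-pixel change needed before a block is resent

def changed_blocks(prev_frame: list, new_frame: list) -> dict:
    """Return {block_index: pixels} for blocks worth resending."""
    deltas = {}
    for i in range(0, len(new_frame), BLOCK):
        old, new = prev_frame[i:i + BLOCK], new_frame[i:i + BLOCK]
        if max(abs(a - b) for a, b in zip(old, new)) > THRESHOLD:
            deltas[i // BLOCK] = new
    return deltas

prev = [0] * 64
cur = [0] * 48 + [200] * 16              # only the last two blocks changed
assert sorted(changed_blocks(prev, cur)) == [6, 7]
```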
Malamud: So talking heads, that’ll work.
Huitema: Talking head works, yes.
Malamud: Okay.
Huitema: Star Wars does not.
Malamud: You’re listening to Geek of the Week. Support for this program is provided by O’Reilly & Associates, recognized worldwide for definitive books on the Internet, Unix, the X Window System, and other technical topics.
Additional support for Geek of the Week comes from Sun Microsystems. Sun, the network is the computer.
Malamud: Now you mentioned the user could throttle the throughput up and down. What happens if every user gets this tool and naturally they’ll say well you know, “Give me the best, of course.” Is the network going to be able to handle that, or do we need something in the network to regulate the…
Huitema: I presume you heard Dave Clark speaking yesterday at the IETF plenary. And he was mentioning several possibilities. One possibility is to have a resource reservation protocol: “I have bought that many resources. I have bought 500 kilobits. I’ve paid for that. So let me use it, and I will find a [match?].” Well, that’s a fine model, provided you pay for it. Indeed, if you could have it without paying, everybody would ask for it and there won’t be enough of it.
So, what Dave was mentioning— He’s a better English speaker than I am. And he was mentioning that you want to ask for a good service without exactly specifying the numbers. And what you request is that the network exhibits some predictability. That it enables you to do feedback control and tune in on what is exactly available. That means that we have some research to do on a load-sharing algorithm in the network, so that you don’t queue a video stream and an FTP transfer exactly the same way, so that if somebody tries to steal all the bandwidth…well, he cannot, he’s constrained not to do it, etc. There is still research going on, but there’s a good chance that this will be deployed by next year or something like that.
Malamud: And is this part of a more general area of worrying about policy constraints within the routing layer? The question of, I want to pay for resources, or I want fast throughput, or low jitter, or I don’t want to cross the ANS backbone, or…
Huitema: I’m not a great fanatic of trying to express your requirements with fifteen variables, because if you try to actually fulfill the fifteen variables you’re getting a computational complexity in the routers which…is not realistic. So I don’t think that if you reserve capacity, you will be able to reserve much more than bandwidth. And policy routing is a loaded word. It means many things. Bandwidth is one way to choose a route. The normal thing you do is that if you want to pick a route in a network, you will pick the one which gives you the better bandwidth.
But you can pick a route according to other principles. For example if you have done reservation, you want to pick a route in order to use exactly the resource you reserved. For example if you reserved 500 kilobits on the link between Paris and New York, you don’t want your packets to be routed by the link between Madrid and Miami. That wouldn’t go. So you have to do some constraints— So this reservation stuff implies that you have some way to control the routing of your packets.
The other reason why you might want to control the routing of your packets has to do with economics—that you have subscribed to one provider, not the other one. And it has also to do with general policies that you don’t want to send data which is exchanged between the DOD and its partner in France by a link that goes through…I don’t know, say Israel. That would not be required. Maybe [indistinct].
Malamud: So are we going to be able to do that in the network? That’s a fairly…large set of different policies you’ve enunciated there. Are we going to be able to fulfill all of those?
Huitema: Eh… I hope we will. We had better be able to. There is a lot of work going on. I’m not exactly on top of doing this work now. You should better ask that of people like, say…[indistinct name], for example, who has produced a number of papers on that recently. Jacob [indistinct], or people like [two indistinct names]. Many people are working on that. So there’s good hope that we’ll see something being done.
Malamud: Well there’s a very obvious question which comes to mind when we look at your efforts in SIP. Why are we inventing a new network protocol when we have CLNP already out there and deployed? Is there room for multiple candidates for the next generation IP? Or should we just be focusing our efforts on an existing protocol?
Huitema: The problem of the next generation protocol is that you want to have larger addresses, but you also want to have larger throughputs. And if you want to have larger throughput, you want to have something simple, that can be programmed easily, that takes advantage of your previous knowledge and of the [indistinct] research. And the problem was that CLNP was essentially designed ten years ago. And it does not take advantage of the recent discoveries. It’s relatively complex. It’s somewhat of— The addresses are larger, okay, but we don’t necessarily need addresses which are five times larger. I mean, if with 32 bits you could encode one million addresses, which is the number we have in the Internet now, you can expect that with 64 bits, two times 32 bits, you could encode one million of millions of addresses. And that’s…well, we don’t expect to have many more hosts than that. And that’s something like ten to the power of twelve. With the same algorithm, if CLNP had the same addressing efficiency as IP, we would have the possibility to encode ten to the power of thirty addresses. Well, that’s something like one for each person in the universe. We don’t need that. It’s too much. And that will [?] so many costs. And it wouldn’t— We won’t get to that point. I mean, we’ll have the bizarre situation where we have this brand new tool in order to cope with the millions of millions of addresses, and we will not be able to use more than…a couple of millions, because people can’t afford the cost of it.
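[The rough scaling argument above, reconstructed as arithmetic under the stated assumption that addressing efficiency per bit stays the same as for today’s 32-bit IPv4; the 160-bit figure corresponds to a full-size CLNP NSAP, five times 32 bits.]

```latex
% About 10^6 hosts are actually addressed with 32 bits today. Holding the
% per-bit efficiency constant, doubling the address length squares the usable
% count and quintupling it raises the count to the fifth power:
\[
  32\ \text{bits}: 10^{6}, \qquad
  64\ \text{bits}: \left(10^{6}\right)^{2} = 10^{12}, \qquad
  160\ \text{bits}: \left(10^{6}\right)^{5} = 10^{30}.
\]
```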
Malamud: So you think simplicity is key.
Huitema: Yes.
Malamud: I’ve noticed that the Simple Network Management Protocol, the Simple Internet Protocol… “Simple” seems to be the key word. But we lose functionality when we do that. It’s not feature-rich. Is that a good tradeoff?
Huitema: In fact we don’t lose functionality with SIP, we just do it differently. The same way you don’t lose functionality with a RISC processor: instead of having a set of very complex instructions which cost a lot to program, you have a set of simple instructions that you can combine. And there is nothing that you can do with CLNP that you could not do with SIP; in fact we can do more. You can do options, you can do security, you can support mobility, you can support source routing. But simply, you do that differently. You pile up separate protocols, where each of them is very efficient. And you don’t have this combinatorial effect where you have to look for each option in every packet. And this…is easier to implement. The proof that it’s easier to implement is that it did not exist six months ago, and we have six interoperating implementations now.
Malamud: What RISC does is take a lot of the functionality and push it up the stack, if you will. It says let the microcode do whatever, let the application worry about this function rather than embedding it down in the hardware. Are we trying to move some of that functionality up into the upper parts of the protocol stack? And if so, what do we need in the top parts of the protocol stack? Do we need ASN.1, do we need full-blown presentation layers…?
Huitema: Well, I don’t think that ASN.1 has much to do with SIP. It’s another part. There is a general tendency to move functionality up the stack. And… Provided you have a simple enough network that just moves packets around, then you’re fine. The equivalent of “up the stack” for this is not so much ASN.1 or the presentation layer, it’s the policy routing. You will most probably implement policy routing or mobility support or this kind of thing as an extra program which is up the stack, that pushes more information, additional headers, or something like that, and moves information around [in?] controls.
Malamud: Give me an example of that.
Huitema: Well, a very basic example: suppose you want to do provider selection. The way to do provider selection in SIP is very simple. You have one address which is some kind of an anycast address, which is matched by any router which is within a certain prefix. So, if you do provider selection, you just send your packet to that router, any of these routers. And the first router in the provider domain will take that, peel off the first header, and go to the next header which says where it really goes. And that way you will have implemented the selection of your initial provider, without having to modify your routing tables, without having to modify your protocols, and with good efficiency, because only one router will have to peel off the header and do the work.
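[A sketch of the provider-selection mechanism just described; the header layout and prefix arithmetic are invented for illustration. The point is only that the packet carries a small stack of destination headers and the first router matching the provider’s anycast-style prefix pops the outer one.]

```python
# Invented header layout: each header is a 64-bit address, whose top 16 bits
# are treated as the routing prefix in this toy example.
PROVIDER_PREFIX = 0xAA00   # hypothetical prefix shared by the provider's routers

def make_packet(provider_anycast: int, real_destination: int, payload: bytes) -> dict:
    return {"headers": [provider_anycast, real_destination], "payload": payload}

def router_forward(router_prefix: int, packet: dict) -> int:
    """Return the address to forward on; peel the outer header if it is ours."""
    outer = packet["headers"][0]
    if (outer >> 48) == router_prefix and len(packet["headers"]) > 1:
        packet["headers"].pop(0)   # only the first provider router does this work
    return packet["headers"][0]

pkt = make_packet((PROVIDER_PREFIX << 48) | 0x1, real_destination=0xBEEF, payload=b"data")
next_hop = router_forward(PROVIDER_PREFIX, pkt)   # forwards on the real destination
```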
Malamud: This is Geek of the Week, featuring interviews with prominent members of the technical community. Geek of the Week is brought to you by O’Reilly & Associates, and by Sun Microsystems.
Malamud: Let’s get back to this top-of-the-protocol-stack issue. The TCP/IP suite has been a bit sparse up there. We have a remote procedure call mechanism incorporated by reference from Sun’s NFS. But there isn’t a really well-developed set of services. Do we need those services between the application and the lower levels of the network? Or is that just added complexity that doesn’t help?
Huitema: Oh, the lower levels of the network don’t have to know whether you are doing an RPC or that you are doing— They specifically don’t have to know whether you are using, say, Sun RPC, or [OSFTC?], or ASN.1, or whatever. And it’s good [?] It’s good that people who want to try a new RPC can do so. For the Internet side, yes, we have a problem that we don’t have one common technology that can be used to develop RPCs. If you look at the successful applications of the Internet, they are pretty much done in an ad hoc fashion. Things like SNMP, or MIME, which is [?] encodings. But, say, you don’t have one big design.
Big design is something you do in research, which is what we try to do now. But that’s a bit… If you look at it— One of the reasons why you don’t have that is that it costs a lot to develop an RPC suite. If you look at the ASN.1 compiler we developed, we have incorporated something like ten man-years [for the net?]. So it’s not quite the typical thing that you find in the Internet: a version which is done in six months by a set of volunteers and which is given to the community. You can give a six months’ effort to the community. Giving a ten man-year effort to the community is a bit more complex.
Malamud: So you wanna sell it.
Huitema: Yes, you want to sell it. And so the other thing is that, historically, the people who have done RPC have done RPC, you know, to sell it. So each provider sells its own RPC. It has its own language. What we need to have is something like a common language which comes with different compilers. You can buy your C compiler from Microsoft, from Borland, from a number of others. And ideally what you would need to have is an RPC language that you can buy from several providers. The closest thing we have to that is the ASN.1 compiler. But ASN.1 is a language which has been specified by a third party, ISO, and for which you can buy compilers from something like four or five different sources. The problem is it’s a bit on the complex side.
Malamud: But is that something we should be insisting that our applications in the Internet begin using? Do we need to start specifying that as a standard, or just make it available?
Huitema: Well, many applications already use it. SNMP does use it, for one thing. WAIS uses it. Eh…let me quote another one. LDAP also uses it, for X.500. So it’s used by a number of applications already. It might be a good idea to try to push the technology. Or specifically, what might be a good idea would be to try to put out a simple profile of ASN.1, something which is easy to implement, and get rid of much of the variation and complexity which is there just to accommodate the needs of various OSI protocols. You’d have something simple. That could be the basis for doing an application development framework.
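[To make the “simple profile” idea concrete: at bottom, ASN.1’s Basic Encoding Rules are just tag-length-value triplets. The toy encoder below handles only short lengths and small non-negative integers; a real BER library covers the many further cases that make up the complexity being discussed.]

```python
# Minimal BER-style TLV encoding, small cases only.
def ber_tlv(tag: int, value: bytes) -> bytes:
    assert len(value) < 128, "long-form lengths omitted in this sketch"
    return bytes([tag, len(value)]) + value

def ber_integer(n: int) -> bytes:
    assert 0 <= n < 128, "only small non-negative integers in this sketch"
    return ber_tlv(0x02, bytes([n]))     # universal tag 2 = INTEGER

def ber_octet_string(data: bytes) -> bytes:
    return ber_tlv(0x04, data)           # universal tag 4 = OCTET STRING

assert ber_integer(5) == bytes([0x02, 0x01, 0x05])
```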
Malamud: One of the areas you’ve done a lot of work in is naming services. You implemented X.500, you have one of the more popular implementations of that. Do you think we need to begin moving the Internet from reliance on just the Domain Name System and moving it into an X.500-based environment?
Huitema: Well X.500 and the Domain Name System are not quite covering the same needs.
Malamud: Well what are the differences between the two?
Huitema: I would say that X.500 has a model which is conceptually very simple. That’s the idea that an entry has a set of attributes, and that you just pick a couple of these attributes to form a name, and you organize the names hierarchically. But this simplicity is only apparent. What you have then is an open-ended complexity, as complex as SNMP. The same way SNMP has a very large number of MIBs, X.500 has a large number of attributes. So implementing a full X.500 server means that you have to support all these attributes, all the search rules associated with the attributes. And that can get quite large. While that’s not undoable, we could imagine that you would specialize a program.
The other thing which was difficult with X.500 is that it had to run on top of the seven-layer stack. So there has been an effort in the Internet to provide simpler specifications. The LDAP effort uses X.500 over TCP without the presentation and session layers, without even the [ROSE?] layer. That may be a way to go, so that the protocol specification is better.
You still have two problems with X.500. It uses basically the same kind of hierarchical navigation that the DNS uses. But the DNS is simple. You can put a DNS name on your screen. You can remember it; people put it on their business cards. It’s no problem. If you were to put your X.500 name on your business card, your business card would be quite a bit larger than it is now. And the X.500 model is that you don’t really have a name, you have a set of servers which enable you to do [distributed?] searches. And these searches have one big problem: you are only authorized to search according to hierarchical rules. So if you could have something like X.500 without the hierarchical rules, allowing entries to be located anywhere in the Internet, well, that might be a very powerful technology.
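[Invented example names, only to show why an X.500 distinguished name takes much more room on a business card than a DNS-style mailbox, and why a search that must follow the attribute hierarchy is so constrained.]

```python
# Hypothetical identifiers for the same person, DNS style versus X.500 style.
dns_style_address = "someone@example.inria.fr"   # short enough for a business card

x500_distinguished_name = {   # every component is a mandatory step in the hierarchy
    "C": "FR",                # country
    "O": "INRIA",             # organization
    "OU": "Sophia Antipolis", # organizational unit
    "CN": "Some One",         # common name
}
```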
Malamud: But if we did that we would need larger wallets to hold our larger business cards.
Huitema: No, you won’t need to. Basically you will need to present your address on your business card. What is on the business card already would suffice. It would be carlmalamud@…uh…what’s the name of your company, by the way?
Malamud: The Internet Multicasting Corporation.
Huitema: Oh yeah. So you would read “Carl Malamud”, and you would tap at the Internet Multicasting Corporation, at Reston, Virginia, and that would be it. So you won’t need anything else. But indeed, there will be a powerful search going on to locate who is holding information on companies in Virginia, or who is holding information on companies dealing with multicast, and then to progress in the tree and look at that. And that’s not quite what X.500 does today. So I believe that there is a possibility to do a merge between the X.500 technology and the algorithms which are developed in the whois++ group, and end up with a very powerful nameser— Well, not nameserver. I would say a white pages service.
Malamud: Now how do you do that? Do you go to the CCITT and submit some changes, or do you just grab their standards and unilaterally modify them to do something different?
Huitema: Yeah, it depends what you want to do. If you want to pretend you’re doing X.500, you have to go to CCITT. If you want to do a white pages service for the Internet, you can say that you have taken X.500 as one of the possible inputs, and then you go on defining your own operations. I think the latter is a better way to proceed.
Malamud: Is there an advantage in having that official CCITT stamp on a standard?
Huitema: Well, CCITT— There is an advantage to being CCITT-compatible, the advantage that you can speak with PTT-provided services. But apart from that there are not that many advantages.
Malamud: Would the PTTs ever start adopting Internet services?
Huitema: Well some already do. They may.
Malamud: But do you see reasons that the Internet process, for example, should either merge or get endorsed by official groups like the CCITT? Is there a benefit to the Internet in doing that?
Huitema: Oh that’s a loaded question. As an IAB—
Malamud: Of course it’s a loaded question.
Huitema: As an IAB member I don’t know if I should respond to that. I mean— Do you want the IAB official to say, “Well, we want all the world’s people to be friends and have a nice period of cooperation, indeed?”
Malamud: Well there you have it. We’ve been speaking with Christian Huitema, a member of the IAB, an official member of the IAB. Thanks for being on Geek of the Week.
Huitema: Thank you.
Malamud: This has been Geek of the Week, brought to you by Sun Microsystems and by O’Reilly & Associates. To purchase an audio cassette or audio CD of this program, send electronic mail to radio@ora.com.
Internet Talk Radio. The medium is the message.