Carl Malamud: Internet Talk Radio, flame of the Internet.


Malamud: This is Geek of the Week. We’re talking to Scott Bradner. He’s a consultant on the staff of Harvard University, he’s Area Director in the Internet Engineering Task Force for Operational Requirements, and has also been named in the special interim area looking at the question of the next-generation Internet Protocol. Welcome to Geek of the Week.

Scott O. Bradner: Well, I think I’m glad to be here.

Malamud: Well good. Everyone thinks they’re glad to be here at first. We hope they retain that hap­py impres­sion of this mem­o­rable experience. 

You are best known for the router evaluation work that you do at Harvard. Can you describe what you do in this laboratory?

Bradner: Basically what I do is to create an artificial network environment in which I abuse vendors’ products and see whether they can suffer the abuse gladly, and come up with some kind of a consistent measurement of performance in the areas of throughput and packet loss rate and latency, so that somebody might be able to compare different products using something other than the marketers’ own numbers, which…generally tend to be a little bit on the suspect side.

Malamud: And so do you do this…do you break it down by protocol? Do you exercise— Are you doing conformance testing? Are you seeing whether the BGP4 operation is being done correctly?

Bradner: Conformance is a wonderful thing. Conformance is my reading of the specs versus your reading of the specs. And seeing as you’re the one with the lawyers and the money, I’m always going to lose. So in other words I don’t do anything in the way of conformance. I only do performance. And the performance is I do a wide variety of tests, it’s about 190 tests now, on any particular product. They range from a single stream of TCP/IP or AppleTalk or IPX or VINES IP or whatever, to mixed protocols…to multiple streams, up to twenty-four or thirty-six streams of parallel data. FDDI. Gonna do some ATM testing next week. So it’s a wide variety of things. Comes up with a tremendous amount of data, which I suspect somebody might be able to figure out how to analyze. I haven’t yet.

Malamud: So you don’t come up with a metric? “This router is a 9, that router’s a 7.”

Bradner: That is something that many people have asked for, because it’s a lot easier on the market droids to be able to use just a number saying “Our router is better than theirs because it got a 3 and theirs got a 2.” And there was a great deal of discussion in a group that I worked with, the Benchmarking Methodology Working Group of the IETF, on doing exactly that. We even came up with a name for the metric—we were gonna call ’em “millstones.” [sp?] But we never got a way to actually do anything with it and to define it in a realistic way.

The prob­lem is that what you want is some­thing that’ll say how this router will work in your net­work. Well your net­work is dif­fer­ent than the guy next door’s net­work. You may have SNA and they don’t. You may have AppleTalk and they don’t. You may have a dif­fer­ent per­cent­age of this pro­to­col ver­sus that pro­to­col than they do. So there’s no con­sis­tent way. There were sug­ges­tions we could take a snap­shot of some aver­age net­work and try that out. But the aver­ages don’t work. There’s just too much vari­ety. So we have to come up with these dis­crete pieces of data, and you have to fig­ure out from pro­fil­ing your own net­work whether these pieces of data are useful.

Malamud: So you’re look­ing at through­put of dif­fer­ent pro­to­cols and dif­fer­ent pro­to­col mix­es. Are you look­ing at things like the dif­fer­ent rout­ing update pro­to­cols and how they perform?

Bradner: Well let me go back to your first point. I actually take three different measurements. The first I call packet loss rate, and that’s the input offered load against the forwarded traffic. So that if you send in 100 thousand packets a second, how many packets a second come out. And this is useful because you want to find out if indeed you overload a device, is it going to deteriorate. And there were some devices a few years ago…matter of fact there was one even last year—mostly on the token ring side for some reason or other—where if you overload them they deteriorate catastrophically. One particular [indistinct] sent it half a million packets at a rate of 14,000 packets a second and it forwarded thirty packets. This isn’t thirty packets a second, it’s thirty packets.

And it turns out what was hap­pen­ing is that it would for­ward pack­ets until it ran out of buffer space, that was about thir­ty pack­ets, then it would lose a pack­et, and it would rec­og­nize it lost a pack­et so it’d go off to account for that fact. And by the time it got back from account­ing for that fact, it lost anoth­er thou­sand pack­ets. And it would go off and account for that. And when it [record­ing gar­bled] from that, it’d lost anoth­er 10,000 packets.
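
A rough Python sketch of that first measurement, comparing offered load against forwarded traffic; all numbers here are hypothetical illustrations, not measured results:

```python
# Sketch of the packet-loss-rate measurement described above: compare
# what was offered to the device against what it actually forwarded.

def packet_loss_rate(offered: int, forwarded: int) -> float:
    """Fraction of offered packets the device failed to forward."""
    return (offered - forwarded) / offered

# A well-behaved router under moderate overload:
print(packet_loss_rate(100_000, 97_000))   # 0.03, i.e. 3% loss

# The pathological device from the anecdote: half a million packets
# offered, thirty packets forwarded in total.
print(packet_loss_rate(500_000, 30))       # ~0.99994, near-total loss
```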

Malamud: Seems like a sub-optimal strategy.

Bradner: It did not yield the cor­rect behav­ior I would expect, but it actu­al­ly might have been some­thing which would be ben­e­fi­cial in a net­work if you have an over­load con­di­tion where you’re get­ting a cat­a­stroph­ic over­load. This would have pro­tect­ed the rest of your net­work because it would’ve just died in its tracks. So maybe you could—maybe a mar­keter or two could turn this into a ben­e­fit. I would­n’t call it that.

That’s the first mea­sure­ment, this pack­et loss mea­sure­ment. And this is done for a vari­ety of pack­et sizes. It’s done for a vari­ety of pro­to­cols, and pro­to­col mixes. 

The second measurement is something called throughput. This is defined in an RFC; all of these are defined in RFC 1242. The throughput is the maximum rate at which all of the traffic offered to the device is forwarded. And this gives you an idea of the zero-loss or perfection rate of a router. This can be significantly different than the rate at which the vendor might tell you they can forward traffic. If you push packets at a router or bridge as fast as you can, most of them don’t do the catastrophic deterioration, so they will forward out packets at some rate. But they’ll be losing, let’s say, 30% of the traffic you give them.

If you put that into your network and the rate at which they’re starting to lose traffic is below the rate at which you’re going to be offering traffic, then your backoff algorithms and your protocols start taking over and actual traffic flow deteriorates significantly.
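
In practice that zero-loss throughput is found by searching over offered rates. A minimal sketch, where `trial` is a hypothetical stand-in for actually offering traffic to a device under test:

```python
# Hunt for the RFC 1242 "throughput": the highest offered rate at which
# the device forwards every packet. `trial(rate)` offers traffic at the
# given rate in packets per second and returns True if nothing was lost.

def find_throughput(trial, lo_pps: int = 0, hi_pps: int = 150_000,
                    tolerance_pps: int = 100) -> int:
    """Binary-search for the zero-loss rate between lo_pps and hi_pps."""
    while hi_pps - lo_pps > tolerance_pps:
        mid = (lo_pps + hi_pps) // 2
        if trial(mid):        # no loss at this rate: throughput >= mid
            lo_pps = mid
        else:                 # loss observed: throughput < mid
            hi_pps = mid
    return lo_pps

# Toy device that starts dropping packets above 88,000 pps:
print(find_throughput(lambda rate_pps: rate_pps <= 88_000))  # ~88,000
```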

Malamud: Are there exam­ples of machines in which the zero-loss rate is in fact the max­i­mum rate? Is there great vari­ance in the dif­fer­ence between those two?

Bradner: There are some where the zero-loss rate is very close to the maximum forwarding rate. I don’t know of any where it actually is, but it’s within a percent in some devices. Many devices fold over and then go pretty much flat…no matter how much more you offer, this is the rate that they forward out, which is within a few percent of the throughput rate. Some devices’ throughput rate is significantly below the forwarding rate. And that is because there are a number of routers where internal software gets into overload conditions or gets into software updates of some kind—whether it’s syncing the disk in a machine, a Unix-type router or something like that, or updating the clock on your management console. It loses a few packets. And some people might say that’s not very important. You lose 100 packets out of 100,000. If it happens to be your data stream that loses those hundred packets, it can make a significant impact.

The example of that is during the startup phase of the T3 NSFNET, there were times when the network was losing about half a percent of its traffic. Which might seem to be quite small, but this was causing user-level perceivable problems, such that users in NEARnet (I’m the head of the technical committee for NEARnet) were complaining that their user-level programs were much slower. And this was only with a half-percent loss.

Malamud: Why is that? Is that because all the pack­et loss­es were with­in your net­work and the oth­er net­works were doing fine? 

Bradner: No, this was across the backbone, and any traffic going across the backbone. We’re certainly not going to claim credit for losing packets in NEARnet; that’s not our job. Traffic going across the backbone was being lost, and therefore whenever you lost a packet it had to go back through retransmission and restart. And if you lost two packets in the right sequence, it pushed the transfer rate even lower because of the restart algorithms.

Malamud: So half a per­cent is no—even though we have TCP and retrans­mis­sion, a half a per­cent is enough that the user sees a difference.

Bradner: We were sur­prised. But we looked into it because of user com­plaints. So empir­i­cal evi­dence indi­cates that there’s a problem.
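
A back-of-the-envelope sketch of why such a small loss rate is visible at the user level, assuming each loss event stalls the transfer for a coarse retransmission timeout; every number here is illustrative, not from the NSFNET data:

```python
# With half-a-percent loss, even a modest transfer hits several loss
# events, and with the restart algorithms of the day each event could
# stall the whole connection noticeably.

loss_rate = 0.005            # half a percent, as described above
transfer_packets = 2_000     # a modest file transfer (hypothetical)
stall_per_loss_s = 1.0       # hypothetical coarse retransmission timeout

expected_loss_events = transfer_packets * loss_rate      # ~10 events
added_delay_s = expected_loss_events * stall_per_loss_s  # ~10 seconds
print(expected_loss_events, added_delay_s)
```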

Malamud: So you invent­ed the zero-loss rate test.

Bradner: Well actu— I didn’t invent it; that actually came from the work of the Benchmarking Methodology Working Group. A number of people worked on that. I’m not sure who specifically came up with that particular measurement, but this was something that was felt to be very important.

The third mea­sure­ment I take is laten­cy. I mea­sure laten­cy not because I think it’s a par­tic­u­lar­ly valid mea­sure­ment. Because I don’t. I mea­sure it because every­body in the world seems to want to know it. And—

Malamud: Latency being the amount of time a pack­et spends in a router.

Bradner: The length of time it takes for a router to process it. Yeah, the time it spends lying around in the buffers. In the general environment of today’s internetworking, routers do processing in the range of a couple hundred microseconds for a packet. So it’s not very much time. And in fact, it is much smaller than the time it takes to store and forward a packet if you did no processing whatsoever, because it takes a while for the packet to travel over the wire. And in that context the actual latency induced by the processing of a router tends to be a small percentage of the latency going across a network.

Malamud: So the fact that we’re going through six­teen hops to go from one coast to the oth­er isn’t real­ly intro­duc­ing a significant…amount of latency.

Bradner: It is producing a significant amount of latency, but it’s not because of routing processing time; it’s because at each one of those hops, you have to receive the whole packet before you can start sending it. So every time you have to do a full packet store-and-forward.
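
A small sketch of that store-and-forward arithmetic; the link speed, packet size, and processing time below are hypothetical examples:

```python
# At every hop the whole packet must arrive before it can be sent on,
# so each hop adds at least the packet's transmission time on the wire.

def serialization_delay_s(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto the wire."""
    return packet_bytes * 8 / link_bps

hops = 16
packet_bytes = 1500                 # a full Ethernet-size packet
t1_bps = 1.544e6                    # a T1 line

per_hop_s = serialization_delay_s(packet_bytes, t1_bps)  # ~7.8 ms
processing_s = 400e-6                                     # 400 us per router

total_s = hops * (per_hop_s + processing_s)
print(total_s)   # ~0.13 s: dominated by store-and-forward, not processing
```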

Malamud: So how long does the packet stay in a router? Just give me an order of magnitude here.

Bradner: It’s in the range of below sixty-four microseconds to about 400 microseconds…modern routers are in that range. Now, that’s quite small.

And another factor on latency is that for most protocols, particularly protocols with windowing like TCP/IP, it has very little effect on user-perceivable behavior. Because a windowed protocol has more than one packet outstanding on the network at any one time, it doesn’t care if the network is a little pseudo-longer because there’s longer latency someplace in the middle.

On protocols like old IPX, where you had to receive an acknowledgement for every piece of data sent, the latency could be very important. And getting devices with very low latencies, or even cut-through devices where the packet starts to be forwarded before it is fully received, could make a significant impact on the performance. Lotus 1-2-3 loads a lot faster with a cut-through device.
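
A quick sketch of that difference, comparing a stop-and-wait protocol (one packet per acknowledgement, like old IPX) with a windowed one; the numbers are illustrative:

```python
# With an ack required for every packet, throughput is capped at one
# packet per round trip, so latency directly limits the transfer rate.
# A windowed protocol keeps several packets in flight and hides it.

packet_bytes = 512
rtt_s = 0.010                # 10 ms round trip through the network

stop_and_wait_bps = packet_bytes * 8 / rtt_s          # ~410 kbit/s

window = 8                   # packets outstanding at once
windowed_bps = window * packet_bytes * 8 / rtt_s      # ~3.3 Mbit/s

print(stop_and_wait_bps, windowed_bps)
```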


Malamud: Scott Bradner, we’ve been look­ing at routers and the per­for­mance test­ing work that you do at Harvard University. Many routers have a few routes in their rout­ing tables. They know about a few inter­nal net­works and they send every­thing out to their ser­vice provider. But there’s maybe thir­ty, fifty, a hun­dred routers in the world that need to know about…the whole world. And those routers, for them the size of the rout­ing table begins to make a dif­fer­ence. Does the size of the rout­ing table affect the per­for­mance of a router?

Bradner: That I’m not sure. Mostly because it’s very difficult to simulate the environment which those routers reside in. It’s a lotta typing to put in all of those routes into a simulation setup. I did run a test on a vendor’s router just recently, but it was for the ability to support a large routing table rather than performance when that routing table was in place. It is something that I’ve thought about doing and in the past have not done because of the difficulty in maintaining the routing table—you have to send in updates on a periodic basis—and just processing those updates can have a potentially significant impact on the performance, even if you’re not making any changes in the routing table, just repeating it every X period of time. Though with some of the modern protocols like BGP you don’t have to continually update it. But it’s something I do hope to do in the future.

I am look­ing at ways to find out the effect of pro­duc­ing rout­ing updates on per­for­mance. If indeed you’re for­ward­ing traf­fic at some rate and then sud­den­ly get a RIP update or an OSPF update and that caus­es some per­mu­ta­tion in the rout­ing table, what effect does that have on the for­ward­ing rate.

Malamud: That’s actu­al­ly a sig­nif­i­cant effect we’ve noticed on the mul­ti­cast back­bone on occa­sion, in which your audio and video was going along just fine, and every nine­ty sec­onds an update occurs and you find a lit­tle bit of your data goes away. How does one test for these types of envi­ron­ments, and how do you fix them? Is it a tun­ing prob­lem? Is it a prob­lem in the ini­tial con­fig­u­ra­tion of the router, the design of the router?

Bradner: It’s not generally the design of the router, because the router is just transistors, or now integrated circuits and plugs. It’s…the routing protocols have an effect there. There’s always been a…certainly an interesting question about what the effect is of a “minor” routing update change to a large OSPF network which causes the entire fabric of the network to realign, the time it takes to run the algorithms on that—the Dijkstra algorithms—on the routing table, and what that impact would be on forwarding. This is something that a lot of people would like to look at, and so far it’s been something that I’ve mostly wished to do rather than do. Because it is difficult to set up the environment. Something I do plan to do in the future, though.


Malamud: One of the big issues in router design is the size of the routing tables. And we’re currently looking at routers at the leaves that have maybe 16 megabytes of memory, and in the core of the network even 64 megabytes of memory. Are we going to be able to stop the growth of the routing table, or does that even matter? Are we just gonna get more and more memory like we do on our computers?

Bradner: Well cer­tain­ly some peo­ple think that mem­o­ry is cheap and you can just keep grow­ing that way but it has been point­ed out that the size of mem­o­ry dou­bles every three years and the rout­ing table has been dou­bling every two years, so as a long-term strat­e­gy that prob­a­bly does­n’t work. 
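
That mismatch in doubling times compounds; a tiny sketch of the arithmetic, assuming for illustration that the table starts at a quarter of router memory:

```python
# Memory doubling every three years against a routing table doubling
# every two: the table's share of memory doubles every six years, so
# any fixed headroom eventually runs out.

starting_share = 0.25    # hypothetical: table uses 25% of memory today

for year in (0, 6, 12, 18):
    memory = 2 ** (year / 3)    # relative memory capacity
    table = 2 ** (year / 2)     # relative routing-table size
    print(year, starting_share * table / memory)
# year 0: 0.25, year 6: 0.5, year 12: 1.0 (full), year 18: 2.0 (impossible)
```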

The answer is as CIDR is deployed in the back­bone, which is the route aggre­ga­tion process, this growth should change sig­nif­i­cant­ly. There’s been very recent work in the CIDR deploy­ment that looks extreme­ly promis­ing. The last major pieces have fall­en into place and route aggre­ga­tion is going to start. And there is no rea­son to expect that we’re going to be faced with the kind of cri­sis which is going to require major ren­o­va­tion of the back­bone routers, as long as we can start mak­ing real progress in get­ting the exist­ing route table aggre­gat­ed. That reduces both the absolute size of the exist­ing table and sig­nif­i­cant­ly reduces the growth of the rout­ing table space.

Malamud: How does CIDR do that? Maybe you can give us a brief expla­na­tion of CIDR and why it’s going to change our rout­ing tables.

Bradner: The simplest way to put it is that in pre-CIDR days, if you had let’s say 512 Class C networks and you wanted people to know where you were, you had to advertise 512 Class C networks, which would cost 512 entries in the routing table. With CIDR this is advertised in a way which allows a single entry to be put in the routing table rather than the 512, thereby reducing the size of the advertisement considerably. And in addition to that, since you would have obtained your addressing from your network provider, and they would have provided it out of a block of addresses that they had obtained from the IANA, then all, or at least all of the routes from this provider that were CIDR-capable, could theoretically be collapsed into a single routing table entry.

Malamud: So instead of handing out addresses randomly, we’re doling ’em out by the structure of the network, essentially.

Bradner: Well we’re doling them out two ways. One is by the structure of the network, so that providers get large clumps of addresses to hand out. And second of all, those are being handed out in a logical fashion which allows the aggregation. Basically they’re being handed out in powers of two. Powers of two Class C networks. Because that way you can aggregate them and just make them look like one entry. CIDR stands for classless inter-domain routing, and the whole point is that you’re no longer treated as Class A, Class B, Class C; you’re just treated as effectively a bit mask over the address space.
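
A small sketch of that aggregation using Python’s ipaddress module: 512 contiguous Class C (/24) networks collapse into a single classless /15 entry, since 512 = 2**9 and 24 - 9 = 15. The address block here is made up for illustration:

```python
import ipaddress

# 512 contiguous /24s, e.g. 198.32.0.0/24 through 198.33.255.0/24.
class_cs = [ipaddress.ip_network(f"198.{32 + i // 256}.{i % 256}.0/24")
            for i in range(512)]

# One CIDR advertisement replaces all 512 routing-table entries.
aggregated = list(ipaddress.collapse_addresses(class_cs))
print(len(class_cs), "->", aggregated)  # 512 -> [IPv4Network('198.32.0.0/15')]
```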

Malamud: So there’s two issues. One is how we hand out address­es. The sec­ond is tak­ing advan­tage of that in the rout­ing announcements.

Bradner: That’s correct.

Malamud: And where’s that imple­ment­ed? Is CIDR a pro­to­col, or do we see that come in some­place else?

Bradner: The first of those two is how we hand out addresses. And the process of handing out addresses by the Internet providers has been using CIDR logic for the last year or two. So we have a lot of already-assigned addresses; a lot of the recent growth in the Internet has been along lines that are CIDR-capable.

CIDR itself is implemented in the Exterior Gateway Protocol, a protocol that’s used to exchange routing information between regional networks and the backbone, or between autonomous systems; in particular it’s BGP4—Border Gateway Protocol number 4, or revision 4. This is used in the backbone from the NSFNET to talk to other regional networks. Or…NSFNET’s not a regional network… Well actually, maybe it is in the global sense. But in any case, it’s the interchange of summarized routing information between providers. Whether they be NSFNET or AlterNet, or NEARnet, or the European networks, they exchange information with BGP4, and that allows them to make use of the CIDR aggregation possibilities.


Malamud: Scott Bradner, we’ve been talking a lot about the engineering of a global Internet, and routing protocols, and how we make IP traffic flow efficiently from one place to another. Yet most of the nodes in the world don’t run IP; they run IPX from Novell, or they run DECnet, or they run a variety of protocols. How are we going to support all these nodes out there? Will the Novell people have to shift over to IP? Will we shift over to IPX? Or will they both somehow coexist?

Bradner: Well it depends what you mean by support. If you mean by support that I can sit at my desk running a Macintosh running Unix running sendmail, and send email to the person two cubicles down that happens to be on a PC running on Novell…we support that now. We do that through gateways. We can do that through application-specific gateways. Email is certainly the most common one; there’s potential for other types of application-specific gateways. This is how you get to BITNET, IBM mainframe-specific kinds of networks, or AppleTalk-based networks, or Novell.

So if you mean support as the ability to communicate, particularly in a non-real-time fashion, we do that through functional gateways. And we’ll continue to do that through functional gateways. Not only because the underlying architecture is different, the protocols are different, but because some people believe this is a good way to introduce security, or additional security…some security, into the Internet structure. Because by putting in an application gateway of this sort, you only pass the kind of function that you wish to allow, and keep out the kind you don’t, i.e. the people that are trying to peer around and find your family jewels. So, we’re gonna support that kind of thing that way.

There is an additional question, though, of if you want to support the kind of thing which doesn’t go through an application gateway terribly well, or whose function is not supported on the underlying protocol—running Mosaic over IPX, for example—you could do it by building a version of Mosaic that ran over IPX that went through a gateway. You could do it by figuring out a way to encapsulate TCP/IP over IPX and put that through a gateway and strip off the TCP. And that’s what’s done in the AppleTalk gateways to Ethernet, for example. When you run TCP/IP on a Mac, at least one form of that encapsulates the IP in AppleTalk packets, and then they get stripped out and turned into regular IP in the gateway. We could do it that way. Or we could migrate the nodes to a common infrastructure, to a common protocol infrastructure. And the IPng effort in the IETF is trying to—

Malamud: IPng is IP Next Generation.

Bradner: Yes. IP Next Generation. This was done at a time when certain new science fiction shows were showing up on the TV, and the title was consciously chosen in this way.

We’re trying to consciously take into account the various ways that you could grow, and what the requirements are on a more global data networking interconnect. Instead of looking at the Internet simply as this collection of TCP/IP LANs, we’re looking at the Internet as the future data networking needs of the globe, not limited to any one protocol. But that doesn’t mean that the IPng area or its Area Directors are egotistical enough to believe that we’re going to figure out a way to convert every IBM mainframe and every PC in the world to a single protocol, because that’s not going to happen real soon now. But we want to take into account those requirements so that in cases, and in places, where it is feasible to migrate the end systems to a common underlayment in order to provide a common set of services, that can be done. The addressing will be sufficient to do it, the routing stability will be sufficient, the scalability will be sufficient. And the security will be sufficient.

Those are all things that we’re try­ing to take into con­sid­er­a­tion. We put out a call for whitepa­pers using RFC 1550, and we received a num­ber of them from a wide range of orga­ni­za­tions and indi­vid­u­als around the world telling us what they believe the require­ments are in this area in order to be able to sup­port this sort of thing.

Malamud: Now, you’re assum­ing a sin­gle glob­al Internet. And for a while there the trade press got on a lit­tle kick which said that Novell is gonna invent its own Internet, and our Internet will go away or will have to some­how com­pete with their Internet. Is Internet a net­work, or is it some­thing more fundamental?

Bradner: Well that’s why I very carefully phrased it as that we’re looking at the Internet as the data networking needs of the globe, rather than tying it specifically to any particular protocol. There is…whether this particular IPng Area Director believes that a large IPX Internet will grow up and be a viable commercial enterprise or not…and there’s certainly some pressure for some things like that…and run in parallel with an IP Internet or an IPng Internet, that does not change the picture: you still would need to cooperate between them.

Malamud: You still have to interconnect.

Bradner: You still have to inter­con­nect. You’re not going to get either side—if sides be the right term in this kind of con­ver­sa­tion. You’re just not going to get either side to admit that the oth­er has won to the extent that they’re going to con­vert all their box­es, not nec­es­sar­i­ly because they don’t think that the oth­er side has won, but some of those box­es will nev­er be con­vert­ed sim­ply because no one knows how to run them any­more. And they’ve just been run­ning because some grad­u­ate stu­dent set it up three years ago and went off into the out­er dark and who­ev­er has it does­n’t know them any­more. So there’s some envi­ron­ments that sim­ply will nev­er change.

Malamud: So we’re nev­er gonna have a sin­gle inter­net­work protocol.

Bradner: We do not have a single Internet protocol now—

Malamud: And we’re nev­er gonna con­verge on one.

Bradner: And we can’t converge on one by definition, actually. The IPng effort is defining yet another Internet protocol. It certainly is hoped that we define one that everything could use. But there will be IP version 4, the existing generation of IP, for many many years now, in real ways and for the foreseeable future. We have a great deal of inertia in the knowledge base of the market as to what they can operate, what they can do. We have another problem, though. Let’s say you were a vendor and you were going to come up with some software that ran on a server. And you get the option of implementing this server in IPng, where there are…a hundred thousand hosts. Let’s say this is a couple years down the road. Or you could implement it with IPv4, where there are…20 million hosts.

Malamud: Gee. Bigger mar­ket, small­er mar­ket. Which should we pick? [both chuck­le]

Bradner: So, there’s not only an inertia in terms of the installed base that’s there, but also in what’s in development. There’s an inertia in development that will tend to go along the same lines. Now, some of the IPng proposals have ways to mitigate this by using methodologies by which you could build a server that could deal with both types of clients—IPv4 clients and IPng clients. That won’t necessarily make the IPv4 clients go away. But it would mitigate the problem of providers providing only IPv4 services.

Malamud: So we should be pre­pared for a messy world.

Bradner: We’re in a messy world now.

Malamud: Well there you have it. This has been Geek of the Week. We’ve been talk­ing to Scott Bradner. Thanks a lot.


Malamud: This is Internet Talk Radio, flame of the Internet. You’ve been listening to Geek of the Week. You may copy this program to any medium, and change the encoding, but may not alter the data or sell the contents. To purchase an audio cassette of this program, send mail to radio@ora.com.

Support for Geek of the Week comes from Sun Microsystems. Sun, The Network is the Computer. Support for Geek of the Week also comes from O’Reilly & Associates, publishers of the Global Network Navigator, your online hypertext magazine. For more information, send mail to info@gnn.com. Network connectivity for the Internet Multicasting Service is provided by MFS DataNet and by UUNET Technologies.

Executive Producer for Geek of the Week is Martin Lucas. Production Manager is James Roland. Rick Dunbar and Curtis Generous are the sysad­mins. This is Carl Malamud for the Internet Multicasting Service, town crier to the glob­al village.