Carl Malamud: Internet Talk Radio, flame of the Internet. 


Malamud: This is Geek of the Week and we’re talking to Noel Chiappa, originally one of the architects of the Proteon router, former staff member at MIT. I assume you were a student at MIT before that?

Noel Chiappa: Yeah, I was a student for a couple of years, and then I took all the computer science courses that interested me, had no interest in taking all the other math and physics and seventeen other courses. At basically the end of my junior year I went to… I was interested in operating systems at that point, actually. I went over to the group that had done all the operating systems work at MIT in the computer science lab there, and said “Gee, I want to come,” you know, “do stuff with you guys.” And it turned out they were just in the middle of being done with operating systems and getting into networking. That was 1977.

And they were working on a prototype 1 megabit-per-second ring together with some guys at UC Irvine, Dave Farber’s people. And it was a PDP-11 interface and they needed somebody who could program PDP-11s to do interface diagnostics. And of course they were all Multicians and they all knew Honeywell 6180s and Multics and they didn’t have any PDP-11 people. So the deal was that I would get to do some operating system stuff if I wrote their network diagnostics for their PDP-11 local network interface. And somehow I got into networks and you know, however many years it is later, sixteen years later I’m still doing networks. It’s just one of those things.

Malamud: Well routing has certainly been one of the areas you’ve specialized in, routing and addresses, and the two definitely go together.

Chiappa: Right.

Malamud: We’re currently looking at a next-generation Internet Protocol, and one of the questions that we’ve been looking at is the question of address space depletion. Maybe you can tell us a little bit about what an address is and what it oughta be, because I know you’ve thought a lot about this issue.

Chiappa: Yeah, um. Addresses to most people— Addresses— The address field in an IP version 4 packet does at least three different functions. And it’s useful when thinking about potential future architectures to very carefully sort out those three different functions, because you may in fact want to split them up into different fields in the future.

The first function is what we’ve been calling the locator function, which is there’s some structure in that field, sort of like…a good example would be…an analogy would be a mail address. So my mail address is, you know, mumble mumble, such and such road, Grafton, Virginia. So there’s structure in the address which tells you where the thing is. So we call that the locator part of the functionality.

The second thing that an IP version 4 address does for you is it uniquely identifies the entity you’re talking to. So you can tell, for instance in the TCP connection, the TCP connection is identified by the source and destination IP address along with the ports.

And the third thing that it does for you is what we’re calling the selector function, which is it’s the field in the packet that the intermediate routers look at when the traffic’s passing through.

Now, other networks have made different choices here. I mean, for instance in an X.25 network, the locator is only seen in the call setup packets. And thereafter there’s a virtual circuit identifier which is in packets. And the selector is the virtual circuit identifier. And the locator is the full X.121 address that’s in the call setup packet.

So other architectures have split these functionalities apart. And although in IP version 4 we put them together, it seems that perhaps in a next-generation IP architecture there will be reasons to once again split them apart.
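
To make those three roles concrete, here is a minimal sketch in Python (purely illustrative, not any actual protocol format) of a header in which the locator, the endpoint identifier, and the selector are carried as separate fields, the way X.25 separates the X.121 address in the call setup from the virtual circuit identifier in the data packets.

```python
from dataclasses import dataclass

# Hypothetical header layout, for illustration only: the three roles that an
# IPv4 address plays today are carried in three separate fields.
@dataclass
class FlowPacketHeader:
    locator: str        # where the endpoint is: structured, possibly variable-length
    endpoint_id: bytes  # who the endpoint is: unique identity, what TCP names connections by
    selector: int       # short handle the intermediate routers switch on, like an X.25 VC number

# In IPv4 all three roles are the same 32-bit value. In an X.25-style split,
# only the setup packet carries the full locator; data packets carry just the selector.
setup = FlowPacketHeader(locator="18.26.0.36", endpoint_id=b"\x00\x01", selector=42)
data = FlowPacketHeader(locator="", endpoint_id=b"", selector=42)
```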

Malamud: One of the big questions we’ve been looking at is just how big an address should be.

Chiappa: Well…yes. Now, here’s where it gets tricky because if you in fact split those three different functions among different fields, the question is not just how big the address needs to be but you know, how big does the locator need to be, and how big does the host identifier need to be, and how big does the selector need to be.

So before you can answer the question of how big do they need to be you have to answer the question of which one of the three functions am I talking about. So let’s deal with them sequentially.

The locators, there’s some dispute about exactly how big locators need to be. Some people think that 64 bits are gonna be adequate. Other people think that something on the order of an NSAP length, i.e. 20 bytes or so, will be adequate. I personally don’t know exactly how big is enough in terms of bits, but I’m pretty sure that the following two things are true: that it has to have a variable number of levels in it, and also it’s very useful if each level is variably sized. Which means that you wind up with something that’s basically of variable length.
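
As a rough illustration of what “a variable number of levels, each level variably sized” could mean on the wire, here is a hypothetical length-prefixed encoding; the format is invented for this sketch, and it also shows why such locators end up long and relatively expensive to parse.

```python
def encode_locator(levels: list) -> bytes:
    """Encode a hierarchical locator as length-prefixed levels.
    Illustrative format: one count byte, then <len><value> per level."""
    out = bytes([len(levels)])
    for level in levels:
        out += bytes([len(level)]) + level
    return out

def decode_locator(buf: bytes) -> list:
    """Walk the buffer level by level; cost grows with the number of levels."""
    count, pos, levels = buf[0], 1, []
    for _ in range(count):
        n = buf[pos]
        levels.append(buf[pos + 1:pos + 1 + n])
        pos += 1 + n
    return levels

# A four-level locator whose levels are of different sizes:
loc = encode_locator([b"\x0a", b"\x12\x34", b"\x01", b"\xde\xad\xbe\xef"])
assert decode_locator(loc) == [b"\x0a", b"\x12\x34", b"\x01", b"\xde\xad\xbe\xef"]
```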

Now, the problem with that is that those things are very…tend to be very long so there’s a lot of overhead in the header if you carry them in every packet. And they tend to be expensive to parse.

Luckily, it seems that a lot of people now have this vision of a future internetwork which isn’t quite exactly the pure datagram network that we have now. Dave Clark had this idea back in about 1980 or so that if you look at the spectrum there are pure virtual circuit networks on one end—say an X.25 network, and there are pure datagram networks on the other—say you know, good old IP version 4. And each one has certain advantages and disadvantages. And the way you build a system that has the advantages of both and the disadvantages of neither is to build something that’s in the middle. And it’s what he’s calling “flows.”

And basically a flow network is a network that takes…rather than each datagram being an absolutely independent entity, and the intermediate switches making no relationship between previous packets and later packets so each packet is a completely independent entity, the switches basically have a certain amount of state in them about ongoing flows. And a flow is not necessarily just a TCP connection, because if I have an FTP where I have three or four TCP connections, you know, they might all be part of the same application and the same flow. But you can roughly think of a flow as something like either a TCP connection, or there are UDP-based protocols, for instance voice teleconferencing or packet—you know, video teleconferencing, where you get a sequence of packets. And even though they’re not part of a reliable end-to-end stream, they’re still obviously related. So what we basically need to do is put some state in the network, and have the switches recognize that certain packets are part of the ongoing associations that we call flows.

Now, let’s make it clear that the state in the network is not critical state, which is to say you could take any one of those switches and drop a bomb on it, and you can recover from that and keep going, invisibly to the higher-level applications at the end. So it’s not like an X.25 network in that way. The state that’s in the network is what we call “soft state,” which is to say that you know, you can destroy it at any point and it can be recreated from the endpoints.
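
A toy sketch of that soft-state idea, under the assumption that packets carry some flow identifier: the per-flow forwarding state is only a cache, so a switch that loses it can rebuild it from the traffic and the routing tables rather than breaking the conversation. The class and the timeout value are invented for illustration.

```python
import time

# Soft state: per-flow forwarding entries are a cache that can vanish (reboot,
# bomb) and be rebuilt transparently, unlike hard X.25 virtual-circuit state.
class SoftStateSwitch:
    IDLE_TIMEOUT = 30.0  # seconds; illustrative value

    def __init__(self):
        self.flows = {}  # flow_id -> (next_hop, last_seen)

    def forward(self, flow_id, locator):
        now = time.time()
        entry = self.flows.get(flow_id)
        if entry is None:
            # No cached state: take the slow path (full locator lookup),
            # then remember the result. This is the "recreate it" step.
            next_hop = self.route_lookup(locator)
        else:
            next_hop = entry[0]
        self.flows[flow_id] = (next_hop, now)  # install or refresh
        return next_hop

    def expire(self):
        now = time.time()
        self.flows = {f: (nh, t) for f, (nh, t) in self.flows.items()
                      if now - t < self.IDLE_TIMEOUT}

    def route_lookup(self, locator):
        return hash(locator) % 4  # placeholder for a real routing-table lookup
```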

Malamud: Does it require a setup function, or does it automatically set itself up?

Chiappa: Um…[sighs loudly] The… It turns out that you may— What you may wind up doing is having a state— As having— There’d be an evolutionary path where the setup is implicit in the initial deployment and explicit later on. And the reason is that…there’s an informational loss. I mean if I try and look at a stream of packets and figure out from those packets what the flow associations are of those packets, there’s almost inevitably information that’s lost.

Go back to my example with the FTP where there’s three actual TCP connections that make up that FTP stream. It’s going to be very hard for you, looking at the packets flowing through a router, to figure out that those packets in those three TCP connections all belong to the same sort of…you know, application. So, I think eventually we’re gonna have to get to a point where the application does have to say something to the effect of you know, “I’m explicitly setting up a flow here now.” I mean, there are more reasons than just that. There are a whole range of functions, including policy routing, quality of service considerations, resource reservation, where the application has to tell the internetworking layer something about the kind of service that it wants. And it’s almost inevitable that we have to have some sort of setup.
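
A hypothetical sketch of what such an explicit setup could look like from the application’s side; the request fields and the open_flow call are invented for illustration, not a real or proposed API. The point is that policy, quality-of-service, and reservation information is exactly the information a router could never recover by watching packets go by.

```python
from dataclasses import dataclass

# Invented flow-setup request: the application names the flow and states the
# service it needs, information that is otherwise lost once packets are on the wire.
@dataclass
class FlowSetupRequest:
    flow_id: int                # groups e.g. the several TCP connections of one FTP session
    destination_locator: str
    bandwidth_bps: int          # resource reservation
    max_latency_ms: int         # quality-of-service constraint
    policy: str                 # policy-routing hint

def open_flow(request: FlowSetupRequest) -> int:
    """Stand-in for a setup call into the internetworking layer; it simply
    returns the flow identifier that the data packets would then carry."""
    print(f"setting up flow {request.flow_id} -> {request.destination_locator}")
    return request.flow_id

selector = open_flow(FlowSetupRequest(
    flow_id=7, destination_locator="18.26.0.36",
    bandwidth_bps=1_500_000, max_latency_ms=100, policy="avoid-commercial-transit"))
```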

And I know that call setup tends to set people’s teeth on edge because they think…you know, we can’t do anything until this heavyweight call setup thing happens, but there’s two things to bring up. First is we’re not talking about getting rid of datagrams entirely. The network is still going to have a datagram load, there are lots of applications for which datagrams are still the right thing. For instance interrogating you know, some random DNS server you’ve never talked to before and never will talk to again.

And the other thing is that you know, there are plenty of ways to introduce something that looks like setup into the network without necessarily paying a big performance penalty. And the classic example that I’ve always used is the old ARPANET. Now, everybody thinks of the old ARPANET as a pure datagram network. Well, it turns out if you lifted the sheet and looked it really wasn’t. Before you could send a packet to a destination IMP, you had to get a reservation for a reassembly buffer in the destination IMP. When they first built the system it didn’t have this feature and they found that the IMPs were all going into lockups because they were sort of full of half-reassembled packets.

So they had to change the system so that when you sent a large packet into the IMP, it was broken up into small pieces and forwarded through the network independently and reassembled in the IMP at the far end. And they had to do a resource reservation thing where before you could send a packet to a destination IMP you had to make sure that he had a buffer for you.

But, it wasn’t necessarily any more inefficient than the old way in terms of round-trip delay, because the request to allocate the buffer was sent with the first fragment. So that if the destination IMP had the buffer available, then you got the reservation back basically with only a half round-trip time instead of a full round-trip time.
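
A much-simplified sketch of that piggybacked reservation, with the details of the real IMP protocol assumed away: the first fragment carries the buffer request, so a destination with room grants it after half a round trip instead of a separate request/grant exchange.

```python
# Rough sketch only: the reservation request rides with the first fragment,
# so the sender learns the answer with the first reply rather than paying a
# full round trip for a standalone reservation exchange.
class DestinationIMP:
    def __init__(self, buffers):
        self.buffers = buffers     # free reassembly buffers
        self.partial = {}          # msg_id -> fragments received so far

    def receive(self, msg_id, fragment, wants_reservation):
        if wants_reservation:
            if self.buffers == 0:
                return "RETRY LATER"   # no reassembly buffer free
            self.buffers -= 1          # grant rides back with this reply
        self.partial.setdefault(msg_id, []).append(fragment)
        return "RESERVED" if wants_reservation else "OK"

imp = DestinationIMP(buffers=2)
print(imp.receive(1, b"frag0", wants_reservation=True))   # RESERVED, half an RTT
print(imp.receive(1, b"frag1", wants_reservation=False))  # OK
```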

So there are ways to…if you’re intelligent about it, to make the setup phase not be onerous or painful. But I don’t think we’re going to be able to provide a lot of the features that we want to provide in the Internet of the future without some sort of setup phase where the state that the routers— I mean, you’ve got to get the state that the routers need in there somehow. And it’s gonna be hard to get it all from just looking at the traffic randomly. You’re almost gonna have to do a setup phase, I think.


Malamud: Noel Chiappa, you’ve been talking a lot and thinking a lot about the routing issues and the complexities of the routing layer. There’s a school of thought out there that says that with the advent of ATM and the large data link cloud, a lot of our problems will go away. [Chiappa laughs] We can foist those off on someone else.

Chiappa: Um. Well I’m glad you put it as “foist them off on somebody else” because problems…in the organization of large systems don’t just go away. You know, they have to be solved sooner or later.

My particular take on ATM is a sort of very unusual one, which is that I think that in some sense it is the great white hope. But I don’t think it’s going to be…the solution of the future in the way that a lot of ATM partisans think it is. There are a lot of large-scale system organization issues that the Internet community has learned in a very painful fashion. There’s a saying attributed to Ben Franklin that experience is a dear master but fools will learn at no other. And I think that pretty well describes the Internet community. We’ve had to learn the hard way about you know, things like very very large-scale routing and resource allocation in datagram networks, dah-dah dah-dah dah-dah, all this other stuff.

And if I look at what the ATM guys are doing, in a way I like it. I mean we talked about how Dave Clark has this theory that the optimal network is one that is not a pure virtual circuit nor a pure datagram. And I think that that argument, you can make a reasonable case that that argument applies at the physical link layer as well as at the internetwork layer. Which says that the ATM model, which is sort of intermediate between virtual circuit and pure datagram, is in fact very close to the right thing. So in that sense I really really like ATM. I think it’s… The way you can do bandwidth guarantees and also latency guarantees and stuff like that with the ATM model is really really the right thing.

The thing about ATM that worries me— Well, it doesn’t so much worry me, it’s something that I’m aware of. You look at ATM, and you find a bunch of people who are designing a system from the bottom up. There’s a group of people off talking about resource allocation, which they call traffic management. And there’s a group of people talking about routing, and there’s a group of people talking about…you know, various other things. But there isn’t a group of people who are sitting down saying “What is the whole system going to look like when it’s completed, and how are all the pieces gonna fit together.”

And it turns out that as you try and get a more and more advanced infrastructure, there are places where various subsystems need to interact. And the classic example I always give is once again from the old ARPANET. In the old ARPANET, the routing would route traffic around areas of congestion. And the way in which it did this was it made the congestion delay measurement part of the metric that the routing used. And it was a small enough network and they had it all tuned up just right that the routing was stable, even though it…you know, the routing system was actually routing traffic around congested portions of the network.

But, what happens is as the network gets larger and larger and larger you can’t run that as an integrated system anymore because the stabilization time becomes greater than the time over which things change within the network, so the routing will just never stabilize. And the way it looks like we’re gonna have to do that function in the future if we want to do that is have two separate subsystems—the resource management subsystem and the routing subsystem—and have the two of them interact to have traffic routed around congested areas. Which means that you have to think carefully about how those two are gonna interact at the time you design them, and I don’t see anybody in the ATM world who’s thinking heavily about how all their various subsystems are going to interact.
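
A toy illustration of that separation: a resource-management subsystem reports congestion, and the routing subsystem folds that report into its link metric, instead of one integrated computation measuring delay itself as the old ARPANET did. All of the names and numbers here are illustrative.

```python
# Two cooperating subsystems: the resource manager knows about congestion,
# the routing subsystem asks it for a penalty when computing link metrics.
class ResourceManager:
    def __init__(self):
        self.congestion = {}               # link -> load in [0.0, 1.0]

    def report(self, link, load):
        self.congestion[link] = load

    def penalty(self, link):
        return int(10 * self.congestion.get(link, 0.0))

class Routing:
    def __init__(self, static_cost, resource_mgr):
        self.static_cost = static_cost     # link -> administrative cost
        self.resource_mgr = resource_mgr

    def metric(self, link):
        return self.static_cost[link] + self.resource_mgr.penalty(link)

    def best_link(self, candidate_links):
        return min(candidate_links, key=self.metric)

rm = ResourceManager()
routing = Routing({"link-a": 5, "link-b": 6}, rm)
rm.report("link-a", 0.9)                          # link-a is congested
print(routing.best_link(["link-a", "link-b"]))    # traffic shifts to link-b
```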

The other way in which I think the ATM people are failing is that… There are certain functions which are almost by definition end-to-end. And the classic end-to-end function that people talk about is reliability. It doesn’t do any good at all to have you know, your particular physical network guarantee that its packets are always received correctly if you know, they can be dropped in the routers or something else like this. People are now starting to understand that it’s not useful to have an extremely reliable, or—not extremely— It’s not useful to have a completely reliable link-level network. Because in an internetwork, you still need end-to-end reliability and end-to-end checksums.

I reckon that a lot of the functions we’re talking about that we’re sort of discovering we need now in the Internet, such as resource allocation, and routing is the one I’m particularly familiar with, are also sort of end-to-end functions in which you want to do the whole thing on a system-wide basis. And what that says to me is there are two rational models for the future.

Rational model number one is that the ATM layer is the internetworking layer, and it includes the ability to include a wide range of physical media and things like that, because…you know let’s face it, economics and physics always say that there’s going to be a range of various transmission media and various transmission systems. So either the ATM layer is the internetworking layer and it glues all these various networking technologies together into a seamless data transport layer, and we run TCP directly on top of the ATM layer. Or, we’re going to have an internetwork layer which is doing these functions on an end-to-end basis, at which point doing those functions again at a lower layer is simply a waste of time and energy. Because the lower layer solution, you know, if I have some sort of really hairy routing or resource allocation at the ATM layer, that’s simply a replication of functionality that I have to have at a higher layer anyway, and the lower layer solution is necessarily an incomplete solution because it’s not end-to-end.

So, to me the other rational model is to say well, the world of the future is gonna look as follows. We’re going to have small pieces of ATM mesh, tied together with boxes that have the following look. The bottom of the box is an ATM switch, and the top of the box is an internetwork router. And what happens is traffic doesn’t actually come in through the ATM mesh, get reassembled into a packet, forwarded up through the IP router and back down and disassembled and sent back out—that would be silly. What’s going to happen is that the ATM virtual circuits are gonna be plugged together directly end-to-end inside the ATM switch. But the entity that manages those virtual circuits and the [indistinct] decides which circuits to route things through and sets all the ATM mesh switch fabric up, is going to be the internetwork router.

In fact you can almost separate it into two different boxes. You know, there’s an ATM switch box, and there’s an internetwork router box. Which we’ll call… I don’t know, it’s not really a router anymore, it’s more like a…you know, a flow controller or something. And it controls the setup of the ATM switch.
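
A toy version of that two-box picture, with invented names: the internetwork-level “flow controller” makes the routing decision and programs cross-connects into the ATM switch fabric, so the cells themselves are spliced straight through and never climb up to the router.

```python
# Sketch only: a flow controller (internetwork brain) drives an ATM switch (fabric).
class AtmSwitch:
    def __init__(self):
        self.cross_connects = {}  # (in_port, in_vci) -> (out_port, out_vci)

    def splice(self, in_port, in_vci, out_port, out_vci):
        self.cross_connects[(in_port, in_vci)] = (out_port, out_vci)

    def cell_arrives(self, in_port, in_vci):
        return self.cross_connects[(in_port, in_vci)]  # pure fabric forwarding

class FlowController:
    def __init__(self, switch):
        self.switch = switch
        self.next_vci = 100

    def set_up_flow(self, in_port, in_vci, dest_locator):
        out_port = self.route(dest_locator)       # internetwork-level routing decision
        out_vci = self.next_vci
        self.next_vci += 1
        self.switch.splice(in_port, in_vci, out_port, out_vci)
        return out_port, out_vci

    def route(self, dest_locator):
        return hash(dest_locator) % 8             # placeholder for real policy/QoS routing

sw = AtmSwitch()
fc = FlowController(sw)
fc.set_up_flow(in_port=1, in_vci=33, dest_locator="18.26.0.36")
print(sw.cell_arrives(1, 33))   # cells follow the spliced circuit, not the router
```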

So I think those are two rational models of the future. And I don’t think the former’s gonna happen because I don’t think the ATM guys, except for a few very brave individuals, are really willing to step up and say well ATM’s just going to be the internetwork and we’re gonna design all the mechanisms necessary to make it the internetwork.

So…you know, if that’s not the rational future then the only other rational future is the second one.

Malamud: We’re looking at a couple variants of that rational future. We’re looking at what the next-generation IP is going to look like. And currently there appear to be two large camps. There used to be a lot of small factions but now there is the Simple IP solution, or Steve’s IP Solution—

Chiappa: SIP, yeah. 

Malamud: —depending on how you want to reverse engineer the acronym. And there’s TUBA, TCP and UDP with Big Addresses, but that really is the OSI Connectionless Network Protocol. Can you comment on those two solutions to the next generation IP?

Chiappa: Well. It’s important to realize that there is actually a third faction, which is “none of the above.” And I’m sort of one of the major…loud noises in the “none of the above” faction. You know, I originally actually did— When I was Inter— I was the Area Director for Internet on the IESG for some years. And when I first took the position I in fact did believe in the school that said we needed a new packet format real soon now. And…there was an IAB architecture retreat at San Diego where Van Jacobson put forward an argument that changed my mind on that. And what he said was… He said you know, we really don’t know what the network infrastructure’s gonna look like ten years out. You know, what it’s all going to need, everything is just simply…it’s just going to be very very different.

And his basic case was you know, we should put off the day of adopting a new packet protocol as long as possible. And I basically believe in that argument. I mean, you know, it’s clear that there’s a whole bunch of areas including security, resource allocation, dah dah dah-dah, where we’re still feeling our way. And exactly what we need in the new internetwork layer, I don’t really think we know yet.

Now, if IP version 4 were gonna run out of gas in four years, that would be one thing. I think we would have to sort of panic and get on with something new. But…you know, if you make the assumption that IP version 4 has more than…some minimal number of years of useful life left in it, then I think you can make a pretty good case that you’re better off leaving the design of a new protocol as late as you can. Because the later you leave it the more technical knowledge you’ll have about large-scale internetworking, and you know, the compounding of our knowledge over the years has just astounded me, and I can’t believe how much more we know now than we knew in 1977. It’s just…it’s astonishing to me that IP version 4 has—we’ve managed to sort of tweak it to work as well as it has and scale as well as it has. So I actually very much believe that we should you know, put off picking a new packet format as long as we can.

I mean, the other thing I’m gonna say is that… You know, if I look forward at the network of the future, it’s one of these flow-based networks. And what we’ve got here in SIP and TUBA are you know, two more internetworking—two more datagram protocols. And you know, sorry…you know, datagram protocols are not the wave of the future, I don’t think.


Malamud: Well isn’t the sky falling? I mean are we— What happens if Nintendo decides the next version of their game machine is TCP/IP, or AppleTalk converts, or Windows NT decides to emphasize that? Won’t we run out of addresses soon?

Chiappa: Well, there’s a comp— There’s a complicated series of answers to that. Um. Let’s just first consider the scenario where you know, we don’t try to put everybody’s television on the internetwork. The best bet that we have at this point— You know, now that this Classless Inter-Domain Routing, CIDR, is coming in and we’re allocating address space in smaller increments, the best bet that we have is that at the projections from the current rate of use, we’ve got at least ten years of life left in it.

If you look at the Internet address space by percentages, I think…27% or something is currently allocated? But a large chunk of that is in the form of a very small number of Class A network numbers. And the smaller network numbers are being allocated— You know, in terms of the percentage of the total address space, the smaller network numbers are actually not using very much of the address space at all. Basically we find the smaller the chunks of the address space we allocate it in, the more efficiently it’s used. So for instance you know, you look at a typical Class C network that’s been assigned and maybe it’s got fifteen hosts on it so that’s…you know, what, 7% utilization. And you look at a Class A network where it’s got two to the twenty-fourth possible host addresses and it— And you know, look, sorry. Nobody has four million hosts on a Class A network. Nobody even has… You know, I’d be surprised if anybody even has forty thousand hosts on a Class A network. So you know, you’re down by an order of magnitude in utilization there. So getting rid of Class A allocations and going to CIDR is really gonna help.
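
The arithmetic behind those utilization figures, using the fifteen-host Class C and the hypothetical forty-thousand-host Class A from the example above:

```python
# Utilization comparison: the smaller the allocated block, the larger the
# fraction of it that tends to get used.
class_c_hosts = 2**8 - 2    # 254 usable addresses in a Class C
class_a_hosts = 2**24 - 2   # roughly 16.8 million usable addresses in a Class A

print(f"Class C with 15 hosts:     {15 / class_c_hosts:.1%}")      # about 6%, the "7%" in the interview
print(f"Class A with 40,000 hosts: {40_000 / class_a_hosts:.2%}")  # about 0.24%, far lower utilization
```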

Now, there’s two additional steps above and beyond that. The first step above and beyond that is to— Remember we talked about how the Internet address performs three different functions—there’s a locator, and a host identifier, and a selector. I personally think that the optimal evolution strategy’s— I mean everybody agrees that we need a new routing and addressing architecture—and I’m using addresses in the locator sense here, i.e. some sort of structured name that tells you where the thing is. Everybody agrees we need a new one. I don’t think there’s any dispute there. But rather than deploy a whole new packet format, and a whole new routing and addressing architecture at the same time, what I’d like to try and do is deploy a new routing and addressing format—and I obviously have one under consideration—and sort of use that as sort of a common ground between an old packet format and a new packet format. So to say, let’s get the new routing and addressing architecture deployed, and then we’ll design a new packet format that uses those new-style locators. And so, this new routing and addressing architecture will be deployed as an adjunct to the current internetwork layer. And then that sort of forms the common piece between the current internetwork layer and a new internetwork layer. So it’s more of an evolution rather than a well you know, we’re gonna take a total step here from system A to system B.


Malamud: How big is that network going to be in ten or twenty years? Do you have any thoughts?

Chiappa: Twenty years? In twenty years, there is not going to be a phone network. There is not going to be a television distribution network. There is not going to be any kind of separate communication network. There’s going to be one giant network which handles everything. And the paradigm— You know, it’s all going to be traveling in packets inside an internetwork system. I think you’ll find pretty broad agreement on that among most of the people here.

I’m not sure that all the people in the phone companies and the cable TV companies have bought into this vision yet. I mean, everybody believes in an integrated system I think, you know. That’s why you know, who is it, Bell Atlantic just bought up that cable TV company. I think everybody believes in the integrated system. It’s just that not everybody understands what the integrated system is gonna look like. But you go around and talk to all the people around here and they know exactly what it’s gonna be, and it’s gonna be a giant Internet with resource reservation.

Malamud: So if you go to an IETF, that’s what people think. If you go to a telephone company they think it’s gonna be the integrated broadband ISDN world. If you go to the cable people, everything’s going to be your cable box. You see all of these converging, and it really is going to be an internetwork?

Chiappa: Oh, I think so but I mean, you know, if you go to the cable TV guys and say you know, “Tell us about…” You know, they tend to give you the lower layers. I mean, they don’t tell you how… You know, you see all these wonderful diagrams with boxes and wires. But what they don’t tell you is you know, what’s the structure that’s gonna tie all this together. You know, it’s the system-level thinking that I find is missing there. You know, broadband ISDN… I mean how is broadband ISDN different from ATM anyway? It seems to me that all the things that I said about how the ATM guys haven’t really got a clear view of the future that’s deployable and practical probably apply to the telephone company guys as well—and I may get shot for saying that.

But, I don’t think there’s a realistic view for how to build a global communication network other than pretty much at this point the Internet one. And I think the Internet guys are the ones who’re— You know, maybe we’re all suffering from delusions of grandeur. But we’re at least thinking about what the whole system’s gonna look like as a system. And trying to design a system from the top down that has those capabilities.

Malamud: And how big is this network gonna be? How many nodes is the Internet gonna have?

Chiappa: Oh, twenty years from now? I don’t know, take the number of people on the planet and multiply by ten—I don’t know, something like that. I’m fully believing that I will live to see an Internet… You know…the problem is at that point you start getting into all sorts of other variables like you know, are we gonna have like, large-scale regional wars which reduce large-scale regions of the globe to poverty. I mean, if things like that happen clearly… This high-technology Internet can only spread through places that can support that kind of infrastructure. And you know, say the coastal regions of China today look like a good bet for being Internet-live in five to ten years. But if something happens inside China and they fall back into massive disarray and confusion and you know, the famines could come back, there could be civil wars. And you know, the Internet’s not gonna spread there if that happens.

So, I think… You know, if you can answer me the question of what areas of the globe are going to be technologically advanced and economically functional in 2010, I can tell you where the Internet’s going to be in 2010 and how big it’s gonna be.

Malamud: This has been Geek of the Week and we’ve been talking to Noel Chiappa.

Chiappa: Thanks a lot.


Malamud: This is Internet Talk Radio, flame of the Internet. You’ve been listening to Geek of the Week. You may copy this program to any medium and change the encoding, but may not alter the data or sell the contents. To purchase an audio cassette of this program, send mail to radio@ora.com.

Support for Geek of the Week comes from Sun Microsystems. Sun, The Network is the Computer. Support for Geek of the Week also comes from O’Reilly & Associates, publishers of the Global Network Navigator, your online hypertext magazine. For more information, send email to info@gnn.com. Network connectivity for the Internet Multicasting Service is provided by MFS DataNet and by UUNET Technologies.

Executive producer for Geek of the Week is Martin Lucas. Production Manager is James Roland. Rick Dunbar and Curtis Generous are the sysadmins. This is Carl Malamud for the Internet Multicasting Service, town crier to the global village.