Carl Malamud: Internet Talk Radio, flame of the Internet.
Malamud: This is Geek of the Week. We’re talking to Craig Partridge. He’s a senior scientist at BBN and he’s the author of a new book by Addison-Wesley called Gigabit Networking. Welcome to Geek of the Week, Craig.
Craig Partridge: Thank you, Carl. It’s fun to be here.
Malamud: So this book, Gigabit Networking… Is it any good?
Partridge: Boy. Huh. How does one answer that question? Uh…it sold out its first printing in five weeks. So…either people—
Malamud: But why gigabit networking? Why not megabit networking or terabit networking? I mean why is this a significant thing to write a book about?
Partridge: Well, I think the real reason is that that’s the speed that’s comin’ round the corner at us. I mean megabit networking we’ve been doing for a long long time. We know how to do—we’ve been doing it since ’73. I mean we can sort of do it in our sleep. And terabit—true terabit networking I mean, in which a single host gets you know, a terabit per second, is still…I would guess three to four or five years away…
Malamud: So when we say “gigabit” we mean…a host swallowing a gigabit per second.
Partridge: You mean a single host can swallow or you know, send a gigabit per second. There are people who have a different definition in which they sort of say well you know, I’ve got a hundred hosts, each sending at 10 megabits over the same wire, this equals a gigabit network. But since no one host can get more than 10 meg I sort of say no, it’s not a gigabit. I mean we’ve always talked about bandwidth in terms of what can a single node attached get. And I don’t believe we should change those rules now.
Malamud: So what’s the challenge of going from a 10 megabit connection, which seems to be pretty standard, up to a gigabit? How does that change the way we do networking support on a computer?
Partridge: Well…it changes things in weird ways. The first thing it does is it makes the bit shorter. Everyone thinks a gigabit means you go faster, you know. It’s like an airplane goes faster or a car goes faster. It’s not like that. What it really means is that we’ve managed to pack more bits per unit distance of fiber. So it’s sort of like we make the highways wider, or the plane bigger, and it doesn’t cost any more to fly from here to there but we can carry you know, a thousand times as much stuff.
And so it changes all the dynamics. I mean it used to be, even on a 10 megabit network, a single bit is a very long thing in a cable. It takes a very long time to get from here to there. Ethernet works on that. Ethernet, the CSMA/CD collision detection, is based on the notion that you know, when you’re sending a bit, the other guys hear it before you finish sending it because there’s so many electrons you have to put into the cable. It’s not quite like that, they actually allow something like 100 bits to be in flight. But the point is there’s a limit.
Malamud: So the minimum packet length means that the data stays on the network long enough that other people can hear it on the net and therefore not send—avoid their collision.
Partridge: Absolutely. And if you try to scale it up to gigabit speeds, you suddenly start finding that the packets become these monstergrams. The minimum packet size is just huge. It’s measured in thousands of bytes, which is just crazy for a minimum packet size. Imagine every click at your window suddenly translates into this Moby packet over your sort of gigabit Ethernet because that’s the smallest packet size you can send.
Malamud: So collision detect, multiple access, the way we do Ethernet’s not going to work in a gigabit world?
Partridge: It’s not going to work in a gigabit world. If you see what they’ve done even to get to a hundred megabits, what they did was you’ve got two choices. You can either increase the minimum packet size or you can shorten the cable length. And for hundred-megabit Ethernet what they did was shorten the cable length to sort of what 10Base‑T required. If you shorten it again you’re starting to talk about cable runs of about you know, ten, fifteen feet. I mean it’s just not interesting at gigabit speed.
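The collision-detection arithmetic Partridge is describing can be sketched numerically. This is a back-of-the-envelope illustration; the cable diameter, propagation velocity, and the `min_frame_bits` helper are rough assumptions, not figures from the standard:

```python
# CSMA/CD round-trip rule: a sender must still be transmitting when a
# collision at the far end propagates back to it, so
#   min_frame_bits >= 2 * (diameter / velocity) * bit_rate.
# All numbers here are rough illustrative assumptions.

C = 3.0e8              # speed of light in vacuum, m/s
VELOCITY = 0.65 * C    # approximate signal speed in coax, m/s

def min_frame_bits(bit_rate_bps, diameter_m):
    """Smallest frame that guarantees the sender hears a collision."""
    round_trip_s = 2 * diameter_m / VELOCITY
    return round_trip_s * bit_rate_bps

# Classic 10 Mb/s Ethernet over a ~2.5 km collision domain: a few hundred
# bits, the same order as the real 512-bit slot time (which also budgets
# for repeater delays).
print(min_frame_bits(10e6, 2500))

# Same cable at a gigabit: the minimum packet becomes a "monstergram"
# measured in thousands of bytes.
print(min_frame_bits(1e9, 2500) / 8)

# Keeping a 512-bit minimum at a gigabit instead shrinks the network
# diameter to tens of meters -- a peripheral bus, not a LAN.
print(512 / 1e9 * VELOCITY / 2)
```

The trade-off is exactly the one described: hold the cable length and the minimum frame balloons; hold the minimum frame and the cable shrinks until it is no longer a network.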
Malamud: That sounds like a peripheral bus, not a network.
Partridge: Well, yeah… Of course you know, a lot of people believe that gigabit networking should be an extended peripheral bus. I mean, there are lots of people out there who are running the two types of technologies together. HIPPI’s one, SCI’s another, fiber channel’s a third. It’s a little weird.
Malamud: Why don’t you explain those three technologies? HIPPI, SCI, and fiber channel. Those are three of the main contenders for [crosstalk] the gigabit network of—
Partridge: Yeah. They’re three of the main contenders particularly for sort of your local area gigabit network. HIPPI is the high-performance parallel interface. And the main thing that makes HIPPI interesting is it’s here, it’s now, and it’s really really really simple. It’s 800 megabits per second in its standard form, and there’s a double-wide HIPPI that people are now moving to, which is also in the standard but no one was using initially, which is sixteen-hundred megabits, so 1.6 gigabits.
Malamud: Now, HIPPI is a circuit-switched— As I understand it, many HIPPI switches are actually physical connections that close when you say “I wanna talk to that host.”
Partridge: Absolutely. What you do is you say to the HIPPI switch “I want to talk to that outbound port,” and it makes a physical connection, says “I’m ready to talk.” Then you blast a packet through, then you tear down the connection. This is what makes it a LAN technology, because you don’t want to wait across, you know, miles upon miles of dead fiber, basically. You’re not doing anything while you’re waiting for this request to go through to link up the switches. But within a— Well, HIPPI was originally designed to work within a computer room. And it’s been extended with something called serial HIPPI so that it can run a few kilometers around a switch. Within that distance you can do very well. Cray shows 700+ megabits per second over an 800 megabit HIPPI channel between two Crays using…you know, full connection setup each packet, tearing it down for each packet. And it you know, uses 802 framing and it’s all pretty straightforward.
Now, you know, the weird thing about it is that at the same time it also supports IPI-3 disk instructions in parallel with your 802 stuff. So you’ve got this network that is both an 802 network, and it’s also this sort of…disk access controlling…pipe.
Malamud: So IPI, Intelligent Peripheral Interface.
Partridge: Yeah.
Malamud: And so I can basically put a disk drive far away from a computer?
Partridge: Yeah. Through a HIPPI switch. And it will talk IPI to the disk farm. And it will talk 802 to other networking devices. And provided you know what you’re doing, that all works.
Malamud: Now this is real. There are [indistinct; crosstalk] switches or are computers using ’em…?
Partridge: This is— Absolutely real. You can go out, you can buy most of the pieces of technology for— You can buy HIPPI interfaces for on the order of a couple thousand bucks for most major hosts. Almost every major vendor has a HIPPI interface, you can buy HIPPI switches from lots of people that’re you know, second and third parties providing stuff for HIPPI. It’s been around for a couple years, it’s mature, it’s here, it’s now.
Malamud: Okay, what about SCI and fiber channel, then.
Partridge: Well, let’s see. SCI is a new proposal that I think is still trying to figure out where its head’s at. But the notion is sort of…um, you take a bus protocol and you try to extend it so that it does everything.
Malamud: Is this long-distance SCSI? Is that what that is? I mean the initials are the same.
Partridge: Yeah, well it has some of that feel but I’ve never been able to get a strai— I mean, I’ve talked to the guys who’re writing the standards and we talk about SCI and they sort of go “Well you know, it can slice and dice and it’s a ginsu knife protocol combining networking and buses.”
And you sort of go, “How does it work?”
And they say, “Well it’s a ginsu knife protocol that does—”
And you say, “Well how does it work?”
And they say “Well, it’s a ginsu knife protocol—” and you start to get a little bored.
Malamud: Reminds me of the Generic Ultimate Protocol—GUP—proposal.
Partridge: Right.
Malamud: Okay, what about fiber channel, then?
Partridge: Fiber channel… The simplest way to explain fiber channel is fiber channel is HIPPI spelled “IBM.” It’s basically a more complicated version of HIPPI, same data rates, um…you know, supports a few more remote access protocols—I think it does SCSI as well as IPI‑3 and 802. It is very close to being real. You can go buy fiber channel switches now, but if you buy fiber channel switches from two vendors they won’t necessarily interoperate with each other, which—
Malamud: Do HIPPI switches interoperate?
Partridge: HIPPI switches interoperate just fine. But you buy two fiber channel switches and they won’t. And there are debates about what is the proper way to read different pieces of the spec. And the ink isn’t dry on all the specs yet. They’re supposed to dry…you know, sometime in late 1994.
Malamud: Okay. And how fast does fiber channel run?
Partridge: Same as HIPPI, 800 megabits per second. So that’s why people say it’s HIPPI spelled IBM. It’s more sophisticated, it’s got some more things, IBM’s the big proponent. Um, but in terms of technology for networking that you get when you buy fiber channel you’re not buying a faster data rate or anything that you’re not getting basically with HIPPI.
Malamud: Craig Partridge, we’ve talked about HIPPI, we’ve talked about SCI and fiber channel. You haven’t mentioned ATM. Is ATM a candidate for the gigabit LAN?
Partridge: Well certainly people think it is. And I think that’s probably fair. One of— I mean, ATM’s basic technology is it’s a switch technology. It’s a switch technology like HIPPI’s a switch technology. So if you think HIPPI’s a candidate, ATM’s obviously a candidate, too. The major merit of ATM is that it scales beautifully over a range of speeds. So if you want to plug in one host at say 55 megabits per second and another one at 155 which is SONET OC-3, if you want another one at 2.4 gigabits, you can plug ’em all in, they can all talk ATM. You just pull one— You know, it all looks— You can even make the interface look the same to the host. You just plug in one at a higher speed and a little better [indistinct] and off you go. And the theory is that the ATM switch in the middle will be able to connect up at all these different speeds and will work just beautifully.
And…that’s a theory. In reality right now you can go out and you can buy ATM switches. Much like fiber channel they won’t interoperate quite yet, though they are closer to interoperating I think than are fiber channel switches, since it’s at this point a small matter of software not hardware issues that are holding them up.
And right now the fastest you can go on any one of them is 155 megabits per second. So it’s not a gigabit yet. So if you wanted to buy a gigabit today you’re going to have to look at HIPPI or fiber channel. But, another three, four years you’ll find yourself probably looking at high-speed ATM interfaces at a gigabit.
Malamud: Now what about wide-area networks? Are we even thinking about wide-area gigabit networks now?
Partridge: Oh sure. Thinking about ’em a lot. Um…limited choices. The basic answer is SONET. And that’s it. SONET is the Synchronous Optical Network. It is a telephony standard for how you multiplex bits over a wire, okay. And essentially that’s what everyone’s using because everyone wants to basically be providing you know, SONET connectivity for the phone company.
Now while it’s called “Synchronous Optical Network” it’s actually part of a broader suite of protocols called the Synchronous Digital Hierarchy. And…if you really want to talk about details, there are a few slight differences between SDH, broad umbrella, and SONET as the particular instantiation of the SDH. But for practical purposes you can basically think of them as the same.
And so you can even think in terms of microwave SONET. There’s a 2.4 gigabit microwave link in northern New Jersey that AT&T’s been testing over about a 40 kilometer distance.
Malamud: So SONET is the interface to that very fast fiber, just like DS3 might be the interface to a T3 line—
Partridge: Absolutely. Same deal, same purpose. SONET contains all the stuff to keep everything plesiochronous and all that messy stuff that we have to worry about— You know. I mean there’s this basic problem which is that fiber, or any media in the phone network, if exposed to warm temperatures gets longer. And you’re clocking bits in at one end, you’re clocking bits out at the other end, but the fiber got longer and so the clocking can get out of sync. And so you need some protocol that restores clocking and deals with the skew. That’s what SONET basically does and it allows you to add and drop lines at various speeds.
Malamud: Would a computer talk SONET, or would a computer talk some other protocol which would talk SONET?
Partridge: Computer would talk SONET as the bit-level framing…bit-level protocol—you know, the thing that just marks the bits, okay. And actually SONET provides a little framing around the bits, too. SONET is a framed-bit protocol.
And then what you have to do, though, to do almost anything useful is you have to put something on top of SONET. Now, right now there are only like three serious proposals for things put on top of SONET. Two of which I sus— Well, at least one of which probably will go away over time. One is HIPPI over SONET. There are rules for putting HIPPI frames in SONET frames and shipping them over the wire. And this is largely because if you’ve got a bunch of HIPPI networks and you want to connect them up over long distances the only thing you can do is lease a SONET line, so you’ve gotta put HIPPI over SONET.
Malamud: And is that being done? [crosstalk] People are doing that.
Partridge: It’s being done. Yeah, you can go buy HIPPI over SONET adapters now. I think there are like two or three different proposals for it. So you may get a different one from different vendors. But you can do it.
Another thing is a proposal for PPP over SONET, which is really sort of a wild idea. But basically, you know, you do the equivalent of dialing up the SONET link…
Malamud: What about SLIP?
Partridge: Uh, no one has proposed SLIP over SONET. The only thing we’ve seen so far is PPP over SONET. There is a proposal, there’s apparently some group fabbing a chip to see how actually it would turn out.
And then the third thing, which almost everybody does, is ATM over SONET. And you put your ATM cells in your SONET frame. And by the way, SONET’s what makes ATM scalable. Everyone says “ATM, gorgeously scalable.” Basically what happened was…the ATM folks said well here’s a way to put ATM into SONET. And they’d already solved all the scaling issues for SONET. So once you say ATM, SONET, and a speed for the SONET line, you know how to do it. It’s easy, it’s trivial.
Malamud: Why does SONET scale?
Partridge: SONET scales because what they did was they framed everything. They said that you know, basically what SONET does is it sends out groups of bits in chunks called “frames.” And basically one frame comes every 125 microseconds. Surprise. Okay. And…at OC‑1, the data rate equals 55 megabits per second. So you can sort of figure out how big the frame is.
Now, what you can do is you can define a rule that says there are these things called…they’re actually called “frames” again which is sort of confusing, let’s call them superframes—in which basically what you do is you say okay, we’re still sending frames at 125 microseconds but we’re gonna send you a multiple of the number of frames we send you at say, 55 megabits, so at OC‑1.
OC‑3, which is what everyone does for 155 megabits, is an OC‑1 frame (55 megabits) multiplied three times and sent in 125 microseconds. So you send three frames every 125 microseconds. And that gives you the higher speed. You go to OC-192, I’m sending you 192 SONET frames in a 125-microsecond shot.
And there are two ways to handle those frames. One is to handle them as a bunch of separate 55-megabit frames, okay. The other one is to concatenate them. So they’re treated as sort of one superframe of the three frames or 192 frames smashed together. And that’s called Concatenated SONET. And that’s why you hear people talking about OC‑3, which is simply three 55-megabit channels, or OC-3c, okay, which is the concatenated 155.
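The frame arithmetic behind this scales cleanly. A small sketch, using the standard 9-row-by-90-column OC-1 frame; note the exact OC-1 rate works out to 51.84 Mb/s, which the conversation rounds to 55:

```python
# SONET scaling: every level emits one frame per 125 microsecond tick;
# OC-n just packs n OC-1 payloads into that same tick. An OC-1 frame is
# 9 rows x 90 columns = 810 bytes.

FRAME_INTERVAL_S = 125e-6
OC1_FRAME_BYTES = 9 * 90   # 810 bytes per frame

def oc_rate_bps(n):
    """Line rate of OC-n (or OC-nc): n OC-1 frames every 125 us."""
    return n * OC1_FRAME_BYTES * 8 / FRAME_INTERVAL_S

print(oc_rate_bps(1) / 1e6)     # OC-1:   51.84 Mb/s
print(oc_rate_bps(3) / 1e6)     # OC-3:  155.52 Mb/s
print(oc_rate_bps(192) / 1e9)   # OC-192: ~9.95 Gb/s
```

Multiplexing and demultiplexing reduce to shuffling whole frames within the fixed 125-microsecond tick, which is why no bit-stuffing games are needed.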
Malamud: So that scales between any one point and any other point. What is it about SONET that would make it scale as a global high-speed communication system?
Partridge: What makes it scale as a global high-speed communication system is again this framing, which allows you to multiplex lower-speed lines into larger-speed lines and add/drop particular lines cleanly. And you look inside the phone network technology today, it’s extremely tricky. Those multiplexers they have are very messy, tricky beasts that actually multiplex lines and remultiplex them again. And SONET said “We’re not going to play those fancy bit-stuffing games and all this wild stuff. We’re going to think of everything as frames and we can demultiplex and multiplex in frames at any SONET muxing system you want.” So if you want an OC-3c line, and I want OC-24, okay, there’s a way to take my OC-24 line and your OC‑3 line and multiplex them into a larger SONET line trivially through a multiplexer. And even…you know, if you decide next week you want more speed, turn up the speed of the link we’re sharing, and it’ll all multiplex together fine, okay. But it’s not a packet protocol, it’s a multiplexing protocol. So you can do anything you could do with a multiplexer.
Malamud: Craig Partridge, um…I had a rule of thumb when I was doing consulting. If I had a 1-MIP machine, I needed at least a megabyte of RAM for a one-megabit wide-area connection. That was the old VAX, if you would think about it. And then workstations came in and we had 10-megabit links and you needed at least an order of magnitude more RAM, 16 megabytes. And it was a multi-MIP machine, let’s say 10 MIPS. Now we’re looking at a gigabit-per-second network. Am I gonna need a gigabyte of RAM? Am I gonna need a megaflop machine in order to be able to keep up with all that data coming in?
Partridge: Well yes and no. Let’s step back for a moment. The basic rule you’re describing is Amdahl’s old rule of thumb, which said that for every instruction you need a bit of I/O and a byte of memory. Okay, well…that rule’s held up pretty well. And sure, in the near future you’re going to have a machine with…one BIP, a 1 billion instruction per second processor, and you’re gonna need one gigabyte of main memory, sure. And then we’re gonna plug into a 1 gigabit per second network link. And I don’t think that that’s a terribly…scary thing to think about. I mean you know, we’re already seeing people with 300 megahertz processors coming out, and the speeds are goin’ up. And 300 megahertz, if you take the usual processor scaling rate…we’re gonna be 1 BIP very shortly. 1997 is not a crazy target date to think of.
And by that time a gigabyte of memory won’t look so crazy. I mean, memory prices also scaled up reasonably nicely. Memory speeds haven’t, and that’s a pain. Trying to get stuff in and out of your memory keeps getting harder.
And a gigabit network interface, I don’t find that particularly scary, either. A gigabit network interface says that you’re moving data around at the interface, in silicon. On 32 bit wide paths, that’s a little over 30 megahertz. Well, I mean 30 megahertz boards are not exactly rare in this world already. In fact, if you take a look at some fairly standard processors today and you look at the amount of data that’s going through them per second, it’s well over a gigabit, in your workstation and probably in you know, your portable laptop. It’s actually probably moving at the core right around the CPU, a gigabit per second through the data paths already. And so I don’t find it at all very scary to think about those numbers.
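Amdahl’s rule of thumb and the interface-clock point both lend themselves to a quick sanity check. A minimal sketch; the `balanced_system` helper is just a restatement of the rule as quoted above:

```python
# Amdahl's balanced-system rule of thumb: for every instruction per
# second, about one bit per second of I/O and one byte of main memory.

def balanced_system(instructions_per_sec):
    """I/O and memory a 'balanced' machine wants, per Amdahl's rule."""
    return {
        "io_bps": instructions_per_sec,        # 1 bit/s of I/O
        "memory_bytes": instructions_per_sec,  # 1 byte of memory
    }

bip = balanced_system(1e9)  # the 1 BIP workstation discussed above
print(bip["io_bps"] / 1e9)          # 1.0 -- a gigabit of I/O
print(bip["memory_bytes"] / 2**30)  # roughly a gigabyte of RAM

# The silicon side: a gigabit pushed over a 32-bit-wide data path
# needs only about a 31 MHz clock.
print(1e9 / 32 / 1e6)  # 31.25 MHz
```

So the 1 BIP processor, the gigabyte of memory, and the gigabit link arrive as a matched set rather than as three separate miracles.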
Now, people do. I mean, when I started teaching courses on gigabit networking in 1990 I had people send in reviews that said “You’re a crackpot, every week talking about a 1 BIP workstation.” Now they sorta look at me and say “When?” And you know, another few years they’re going to say “How much?”
Malamud: Or at least say let’s put you under non-disclosure.
Partridge: That’s right, yeah. It’s just not…
Malamud: So what’re we gonna do on these machines? Is this just going to mean more…is it just people like me that’re causing the need for this, people that’re spitting out larger and larger amounts of data? Or will we do something fundamentally different on our machines?
Partridge: Well you’re a prime time generator. And sure I mean, people like you will be part of it. But of course…I mean, that’s important. I mean I’m hardly gonna argue that we don’t want to be able to put more multimedia-type stuff on the network. And multimedia costs. I mean a single HDTV connection is a 20-megabit connection. Okay, well you know, you’re doing a video conference with three or four sites and you want all the data on your screen, and…well you know, you just blew 100 meg right there on the video. And you know, the audio you can forget. I mean the audio’s not such a big deal.
But easily you can chew it up in video. If you want to do virtual reality it gets pretty scary. Those guys are talking millions and millions or hundreds of millions of polygons updated per second. And you know, every polygon requires so many bytes of description being sent over the network. Well I mean, I’ve heard estimates of up to nearly 1 billion polygons per second to give you a real high-fidelity virtual reality environment, particularly some of the complicated ones. And you know, I don’t see any way that 1 billion polygons is going to be any less than several billion bits per second. So several gigabits per second, just to do virtual reality.
Now, you may say well you know, is virtual reality the thing that’s going to drive it? Lots of things are going to drive it. I mean, other things to think about is your poor workstation. It’s sitting there. It’s you know, got this tremendously powerful processor doing one BIP. And it page faults. Okay. Well, we’ve known for a long time that if you’ve got a really fast box and you want to make it really fast you better make sure that the data path all the way to the file server is fast enough to feed you the data back for your page fault or whatever. Or you’re never gonna see it. I mean it’s why people a few years ago used to all log into the file servers. Remember that era when your system manager would come down and say, “Stop logging into the file server!” because everyone was logging in. Why? Because the network path wasn’t optimized, and so as a result the data coming off the disk was available to you much much faster on the file server than it was on the remote client. And you said well I don’t want to sit here going slow, and so you’d log into the file server. And that would drive everyone else’s performance down because they’re competing with you, the local pig on the file server, against their clients, and so they’d all log in and everyone would log in, and you’d get terrible performance on the file server. Well you know, it’s because the data paths weren’t optimized. And if you do the data path optimization, it’s pretty clear you’re gonna need a gigabit link between your file servers and your clients.
Malamud: We’ve talked data links and things like SONET. We’ve talked applications, things like file servers and virtual reality. Is there something in the middle that’s gonna have to change? Is TCP/IP gonna work or are we gonna have to all move towards an OSI platform to do gigabit networking? Is there something in the guts of the network stack that is gonna have to adjust at those speeds?
Partridge: A whole lot and very little at the same time. So let me see if I can sort of answer the two parts.
Things that don’t have to change. IP doesn’t have to change very much. If you just want to move data at a gigabit per second IP works just fine today.
Malamud: Do we need a bigger packet size or something?
Partridge: No, not particularly. And you don’t need a fixed packet size, either. Lots of people say well you know, we’re gonna have to make the IP packet size really narrowly-bounded to do isochronous traffic, isochronous stuff being you know, data with strict timing requirements. Not true. And in fact, one of the things that’s true is as you get to higher speeds all these timing problems get so easy because the network’s moving the data so fast. You know, you want to synchronize to a millisecond, sure that’s a snap. Synchronizing to the millisecond’s very easy when you’re moving data this fast; a millisecond’s a megabit. A megabit’s a whole lotta data. We’re not going to send megabit packets around. So you know, you don’t have to worry about interference as much because there are lots of chances for each packet to meet your millisecond requirement.
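The “a millisecond’s a megabit” arithmetic can be made concrete. A small sketch; the 1500-byte packet size is an illustrative assumption:

```python
# At 1 Gb/s, one millisecond of link time carries a megabit, so a 1 ms
# isochronous deadline leaves room for many packet-sized chances.

def packets_per_deadline(link_bps, packet_bytes, deadline_s):
    """How many back-to-back packets fit inside the deadline window."""
    packet_time_s = packet_bytes * 8 / link_bps
    return int(deadline_s / packet_time_s)

print(1e9 * 1e-3 / 1e6)                        # 1.0 megabit per millisecond
print(packets_per_deadline(1e9, 1500, 1e-3))   # dozens of chances at a gigabit
print(packets_per_deadline(10e6, 1500, 1e-3))  # not even one at 10 Mb/s
```

At 10 megabits a single full-size packet already eats more than the whole millisecond, which is why tight timing felt hard at the old speeds and easy at the new ones.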
We will have to do a little bit to manage our queues a bit. Because in gigabit networks necessarily the queues scale up like the memory scales up and everything else. So we’ll have to do a little more fancy queuing if we want to be allowed to for example deliver Internet Talk Radio in real-time over large parts of the gigabit Internet. But that all oughta come along pretty easily, and in fact there are people building experimental gigabit routers today. Bell Labs has one they wrote up about a year ago.
TCP’s a trickier problem. We know TCP can go at a gigabit. If you buy Cray’s TCP right now, plug it in, you will go at a gigabit per second. This is not hard. The trickiness is…scaling problems. And let me see if I can explain that very very briefly.
In the Internet today what we assume when we start up a TCP connection is that we have no clue about how much bandwidth is available to us and that for example a connection between you and me, you may have you know, a 100 gigabit per second link but I may be at the tail end of a 9.6 kilobit link. And so your TCP when it starts sending is very conservative and sends one packet, sees how long it takes for that to come back, then says well gee, it got through, I’ll send two packets. And so it scales up until it starts to feel like it’s overdriving the link, and then you fall back and you try again. And this is called “slow start.”
Well the problem is that in a gigabit Internet, there are still going to be people at the end of 9.6 kilobit links. And so there’s this dual problem which is that starting up with sending one packet over a gigabit link, waiting for the ACK to come back then sending two packets, and then say sending four and so on takes a huge amount of time to scale up—several seconds or more, right. On the other hand if you assumed it was a gigabit link to start with and launched all the data, fwoom, okay, then I’m going to be waiting several minutes or hours at the end of my 9.6 kilobit link to have it clear out because you just dumped far more data into the path than I can cope with.
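The two horns of that dilemma can be sketched with a toy simulation. The 70 ms round-trip time and 1500-byte segment size below are illustrative assumptions, not figures from the conversation:

```python
# Slow start doubles the window each round trip, so filling a long fat
# pipe takes many RTTs -- but blasting a gigabit-sized window at a
# 9.6 kb/s tail link would take hours to drain.

def rtts_to_fill(link_bps, rtt_s, segment_bytes=1500):
    """Round trips for window doubling (from 1 segment) to reach the
    bandwidth-delay product of the path."""
    bdp_segments = link_bps * rtt_s / (segment_bytes * 8)
    rtts, window = 0, 1
    while window < bdp_segments:
        window *= 2
        rtts += 1
    return rtts

RTT = 0.07  # assumed ~70 ms cross-country round trip
rounds = rtts_to_fill(1e9, RTT)
print(rounds, "round trips, about", round(rounds * RTT, 2), "s to ramp up")

# The other horn: launch a full gigabit bandwidth-delay product of data
# at a host behind a 9.6 kb/s link.
window_bits = 1e9 * RTT
print(window_bits / 9600 / 3600, "hours to drain")
```

The routing-assisted scheme Partridge goes on to describe would, in effect, let the sender pick its starting window from the path’s known bottleneck rate instead of always starting from one segment.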
And the best guess is the way we’re going to solve this problem is we’re going to find ways to give you more information in the routing protocol. So you as a host, when you’re starting up will say to your nearby router, “Hey, psst. Could you tell me how much bandwidth there is between me and Craig?”
And it will say, “Well, we know the slowest link is a 9.6 kilobit link.”
And you go, “Okay, right. We’ll do the nice simple feed.” But you say, “Psst. What’s the data rate between say me and Barry Shine[sp?]?”
And it comes back and says, “Oh. Yeah. No problem. 10 gigabits all the way straight through.” Then you’ll fire up your TCP and your TCP window will start big and launch lots of data in to start. And you’ll sort of accommodate yourself around a 10 gigabit path.
Malamud: Craig Partridge, we spend a lot of time worrying about making a user be able to use a billion bits per second. Should we be worrying more about getting a billion users on the network? Should we be worrying less about feeding a few supercomputers and more about things like universal access? Or are the two not incompatible?
Partridge: I don’t think they’re incompatible. Uh… Let’s put it this way. The cost of fiber, okay, is dropping dramatically. It is now getting to the point at which if you were thinking of cabling up an office building, the cost of fiber and the cost of say category 5 twisted pair is about the same, as is the installation cost. Okay. What that says is that if you wire up a building, you’re actually gonna fiber it up. And that means that everybody’s desktop gets a gigabit-capable link basically for free. You know, here is this link that’s completely prepared to send at one gigabit per second, all you gotta do is put the electronics at both ends.
Well, the electronics is almost here. I mean, HIPPI interfaces don’t cost very much, and other higher-speed interfaces will come along shortly. So every office will shortly have a gigabit, and it’s just gonna come for free. So…yes we want to wire up more people. I think that’s important in terms of the…sort of global village that everyone is sort of now dreaming of and which…god help me, when the Internet was [chuckles] thirty nets was not quite what was envisioned.
Malamud: It’s grown a little bigger.
Partridge: Yeah. Well, it’s very strange to someone who basically got on at the moment of the ARPANET/MILNET split and then to watch it all boom. But, at any rate, yes we want it to boom. But as we give it to more people we have to upgrade the backbone. So backbone speeds have to soar. I mean you know, to take the telephone network where everyone gets no better than a 64 kilobit link, the backbones of most major phone networks these days are gigabit links. The Internet’s going to need the same. So in the backbone we clearly need it.
And the problem is at the edges we’re also delivering gigabits to people now, or will be very shortly. It’s just the costs are coming down so sharply everyone’s gonna have gigabits in their office. They might not have it in their home yet. I would guess you won’t get it in your home until after the year 2000. But you’ll have it in your office by late 1997, 1998 at the latest.
Malamud: With computers a funny thing happened. As they got more powerful, they didn’t get harder to use, they got easier to use. Will networks get easier to use as we get more bandwidth?
Partridge: Um… That’s an interesting question. I think in fact that the problem…and maybe this just shows that I’m myopic. I think the real answer is that we’re gonna get to throw a little more software at making it easier to use a network from the user’s point of view. But the network itself doesn’t care, it just moves these packets around and you know, the user never really sees that. And I mean, your question really just gave me this vision of sort of a nice friendly packet flying by and I’m not quite sure [laughs] how you’d make a packet friendlier. But it’s clear that we can spend more time making our software packages more network-friendly. And there’s no inconsistency there. I mean people say well but you know, how do we make ’em friendly and fast. Well, most of our systems software right now for networking is very poorly-tuned and very slow, and we can speed it up dramatically just by going in and tuning it a bit, and that gives us some extra compute cycles back to make it more friendly and more easy to use.
Malamud: You’ve been listening to Geek of the Week, a production of the Internet Multicasting Service. To purchase an audio cassette of this program, send mail to audio@ora.com. You may copy this file and change the encoding format, but may not resell the content or make a derivative work.
Support for Geek of the Week comes from Sun Microsystems. Sun, makers of open system solutions for open minds. Support for Geek of the Week also comes from O’Reilly & Associates. O’Reilly & Associates, publishers of the Global Network Navigator. Send mail to info@gnn.com for more information. Additional support is provided by HarperCollins and Pearsall. Network connectivity for the Internet Multicasting Service is provided by UUNET Technologies, and MFS DataNet.
Geek of the Week is produced by Martin Lucas, and features Tungsten Macaque, our house band. This is Carl Malamud for the Internet Multicasting Service, flame of the Internet.