Carl Malamud: Internet Talk Radio, flame of the Internet.
Malamud: This is Geek of the Week. We’re talking to Geoff Baehr, who is the Chief Technical Officer for Networking. He also has the amazing title of Director of Networking and Commerce, I believe, at Sun Microsystems. Welcome to Geek of the Week, Geoff.
Geoff Baehr: Thanks, Carl. It’s a pleasure to be here.
Malamud: You are well-known for advocating the use of ATM as a local area network. Why would I want to use this very complex telephone company-developed technology instead of let’s say, Ethernet?
Baehr: Well actually precisely because it is a telephone technology. And unlike Ethernet in its early days, one can take advantage of the fact that dozens if not hundreds of telephony research labs and telcos around the world have been developing this technology since the mid ’80s. And one can dive in and recognize that the ability to take this technology, apply it to the local area net, and leverage off of all of the work that was done could bring about some substantial changes in what’s going on in networking.
Malamud: Well, using that strategy I would think that you’d use that same logic for let’s say ISDN. Why aren’t we using ISDN as a LAN?
Baehr: Well actually when we looked— A couple years ago several people were looking around at what the next technology might be. And the criteria were to have something that was scalable, and also something where the fundamental technology in the local area net and the wide area net was not different. And ISDN didn’t meet the criteria.
Malamud: Why is “not different” a goal? Why does it matter? I mean, do we want a single uniform data link for all things?
Baehr: Well that’s right. What we refer to here is we’ve dropped the W or the L of wide area or local area net—WAN and LAN—we just call it AN, A Network. And the goal here is to use the same technology across the wide area as well as in the local area, and have the wide area net end at your desktop. Because we can then preserve the characteristics of transmission speed and latency and such across any link. Right now, I don’t know about you but I’m painfully aware when I cross that T1 router off into my Ethernet or FDDI network. I’m painfully aware of the fact that I’m doing that cross. And I don’t want that anymore.
Malamud: That seems like a matter not of the interface but of simple raw bandwidth.
Baehr: Well it’s also concerned much more, and it will be concerned more in the future, with latency control and signaling transactions to determine the amount of bandwidth one can reserve and to determine the end-to-end delay and some of the other characteristics that future apps will want to have.
Malamud: Tell me a little more about that. Why would I want to control for example the latency of my link, and how would I do that?
Baehr: Well actually this is one of the areas that is being researched, is how does one determine the end-to-end latency. Moreover, how do you ensure that latency doesn’t fluctuate? That there’s very little jitter in the delivery. This would be important for delivering things such as video. But I mean like, video is kind of the no-brainer, if you will. I much prefer to concentrate on real-world apps such as multicasting, and applications that use multicasting, I should say.
And we’ve thought about several. What if one were able to multicast database updates to database servers? And do this within a time-bounded, latency-guaranteed mechanism. This would change the way that people run parallel databases. And you could do things such as—think of it in the stock market setting, where those who are able to multicast database updates regarding stock prices within 100 milliseconds would be substantially ahead of those folks who were waiting for a standard transmission to occur in a best-effort approach. Arbitrage against time.
Malamud: I’ve seen people already multicasting Usenet news. People have been multicasting images from NASA as JPEG data. And so basically you tell your grab tool on the MBone that “Hey, I want the next photo that comes by.” Do we have to fundamentally change that current paradigm in order to be able to do what you’re talking about—relational databases and transactional, time-bounded operations?
Baehr: Well actually what we have to change is the way that people enter and leave multicast groups. Because right now multicast groups are static, and the copying and retransmission of multicast data is also done pretty much in a static sense, in that the inbound to outbound mapping is set up at the particular configuration time that the machine is set up. And we would very much like to be able to change that and say any person can join or leave any multicast group, depending upon their particular interest or their needs.
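[What joining and leaving a multicast group dynamically looks like at the socket level today — a minimal sketch assuming IPv4 and Python’s standard socket API; the group address and port are illustrative only:]

```python
import socket
import struct

GROUP = "239.1.1.1"   # illustrative administratively-scoped group address
PORT = 5004           # illustrative port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group: the kernel reports membership upstream (IGMP), and the
# routers start copying that group's traffic toward this host.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(65535)   # receive one multicast datagram

# Leave the group the moment the interest or the need goes away.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
```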
Malamud: And their security, of course.
Baehr: Yeah, of course. And there’s always the leftover of security.
Malamud: That will be determined in a future document.
Baehr: [laughs] That’s right. Actually it will be determined in a document which will be classified and kept secret.
Malamud: Well, the nice thing about security is obscurity. When you’re talking about multicasting, the current multicast backbone shares bandwidth with the rest of the Internet. And if someone starts to do a large file transfer, for example, it can take away bits that the multicast environment had. How are we going to guarantee bandwidth? Or are we?
Baehr: Well let me— And this is one of the problems that is kind of fundamental that remains to be solved with ATM. People seem to believe that these problems have already been fully thought out and the solutions are at hand, and they’re not. And the problem is more precisely, how do I guarantee if I open up the fifth application on my workstation that requires multicasting, not that I’ve damaged myself, but that someone on the other side of the world doesn’t have their application drop off the face of their workstation because I’m congesting their switch? Or that their switch is being called on to do the copying of yet another multicast stream, and it exceeds the capability of that switch.
And there are various schemes that people are researching right now for actually reserving bandwidth, and that’s what it really comes down to. How do you make a request for bandwidth, and then enforce it? And then if you don’t subscribe, or you don’t follow the request that you made, what is the penalty? And the penalty usually is your packets get dropped, or your cells get dropped. How do we enforce that on a network-wide basis, and make that apply to the new apps?
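[One way to picture the enforcement side — reserve a rate, forward what stays inside it, drop what exceeds it. A minimal token-bucket sketch; the rate, bucket depth, and cell size are illustrative assumptions, not anything mandated by ATM:]

```python
import time

class TokenBucket:
    """Police traffic against a reserved rate; excess is dropped."""
    def __init__(self, rate_bytes_per_sec, depth_bytes):
        self.rate = rate_bytes_per_sec
        self.depth = depth_bytes
        self.tokens = depth_bytes
        self.last = time.monotonic()

    def allow(self, size_bytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed, never beyond the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True       # within the reservation: forward
        return False          # over the reservation: this is the penalty, drop it

# Illustrative use: a 1 Mbit/s reservation policed on 53-byte ATM cells.
policer = TokenBucket(rate_bytes_per_sec=125_000, depth_bytes=10_000)
if not policer.allow(53):
    pass  # the cell would be discarded here
```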
Malamud: You’re talking about the wide area and the local area network are the same, presumably ATM. Why are we doing multicasting at the Internet layer instead of building that into our ATM switches?
Baehr: Well the question really is where is the interface, and where is the translation going to occur? I shouldn’t say the translation as much as, how is one going to treat ATM? Just as a fatter pipe? Which is probably what’s going to happen, with permanent virtual circuits in the beginning, as opposed to SVCs. If you treat it as a fatter pipe, then you’re running the current suite of Internet protocols on top of it. Therefore you can’t guarantee that each one of the switches along the way, in an intervening network, is going to support all the capabilities that you want. For this reason, as the fatter pipes evolve, people bridge or route their current traffic over this fatter pipe, and we’re stuck with the current model.
Malamud: Well again, if we’re looking at where do we build a function in, we could be doing multicasting at the IP layer. If that’s the case, why are we using ATM at all? Why don’t we just build IP on top of let’s say the Synchronous Optical Network, SONET, and just use raw bandwidth?
Baehr: Alright, this is one of the fundamental problems. People don’t seem to realize that the inherent multiplexing capability should be at the ATM layer, where one is able to push the problem off of copying and such into the switches and let the hardware do the copying for you before you have to deal with it at the network layer. And this is I believe the right way to go. However, people always take the path of least resistance, and they’ll go and do IP multicasting at the network layer because that’s what’s here right now. No one knows how to do ATM copying, cell copying, and multicast group admission and make that work in a very large network. So people will go with what they’ve got.
Malamud: Geoff Baehr, you’re working for Sun Microsystems and it seems to me that you have some fundamental conflicts. For example your products have to be secure, yet they have to be easy to use. How does an application like let’s say Mosaic and the World Wide Web interact with security when you’re thinking about what your network products look like?
Baehr: Well maybe I could give you the viewpoint from how we run our network here at Sun, which is total paranoia about our connection to the outside world. And the security now is largely dealt with—entirely dealt with—by application-level gateways, these essentially store-and-forward relays that take application data from one side and hand it to the other, with IP forwarding turned off in the kernel. That’s how people like to have their security right now, because there isn’t any other solution.
Malamud: So my mail message hits your outer gateway and your outer gateway hands it to your inner gateway and sends it on in.
Baehr: That’s correct.
Malamud: How is that secure? How is that…solving anything that—
Baehr: This is the usual moat defense, which by putting in as many barriers as possible without forwarding packets, with having only specific services listening on the router machines and on the gateway machines, we hope to have a little security. In reality, what should be required and what should be installed is what I kind of call the ultimate firewall, which is really a packet-tracing utility that has some heuristics built in. And if that packet-tracing utility sees—and I will leave it to the listener as an exercise to determine what “bad packets” are—but if this tracing utility sees “bad packets,” it should do something. And the something is either eradicate the incoming packet or never let an ACK to a SYN out onto the network so no one sees that there is any possibility of having a connection, that there’s a server sitting there.
And that’s the ultimate approach. However unfortunately, no one’s built one of those yet.
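[A rough sketch of the shape such a utility might take. The “bad packet” tests are deliberately placeholders, since deciding what they are remains the listener’s exercise; the packet fields and rule values are simplified assumptions:]

```python
# Trace every inbound packet, apply heuristics, and either eradicate it or
# swallow the SYN so no SYN/ACK ever reveals that a server is listening.

BLOCKED_SOURCES = set()      # e.g. addresses seen probing earlier (assumption)
ALLOWED_SERVICES = {25}      # e.g. only the mail relay may be reached (assumption)

def looks_bad(packet):
    # Placeholder heuristic -- what a "bad packet" is stays an exercise.
    return packet["src"] in BLOCKED_SOURCES

def handle_inbound(packet, deliver):
    if looks_bad(packet):
        return                    # eradicate: never forward, never answer
    if packet["flags"] == "SYN" and packet["dport"] not in ALLOWED_SERVICES:
        return                    # no SYN/ACK leaves, so nothing appears to exist
    deliver(packet)               # otherwise hand it to the inside gateway
```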
Malamud: It would seem that in addition there’s one other very useful thing. If we had strong authentication on the network and we knew that it was Geoff Baehr coming from some other place, we would let Geoff in, and his packets and let him do whatever he wants. And if it’s some random student, maybe that student would go off to some public archive instead. Is strong authentication something that will help solve some of these firewall problems?
Baehr: Yes, however it should be recognized that the strong authentication… You’re really referring to two different models. One is authenticating the host coming in. But since IP addresses aren’t tied to any notion of a particular host location or who’s using that host, there also has to be authentication of the individual users. And this gets directly into the issue of what type of identification do you carry as a user to identify you to a machine, and more importantly how do you identify a machine end-to-end to a firewall or to a gateway?
And the question that we’ve been looking at is, how do you reduce the necessary information down into something that people will accept? Is it something that you want embedded in the machine? Probably not. Is it something that the people carry with them, like a smart card? Maybe. Is it a very long password or some type of RSA key that they carry around? That’s also possible.
But the question comes down to is it going to be people or machines, and the answer is it’s going to be both that are authenticated. And the machines themselves will have to have some type of mechanism to bind their IP address with a particular certificate that says “Yes, I indeed am the correct machine at this address. I’m not spoofing you.” But secondarily, the people who use the machine are what you really want to authenticate.
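[One way to picture that binding — the machine signs a claim about its own address with a key whose public half the gateway already trusts. A minimal sketch using Ed25519 from the third-party cryptography package; treating a bare public key as the “certificate” is a simplifying assumption:]

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

host_key = Ed25519PrivateKey.generate()    # private key lives on the machine
host_cert = host_key.public_key()          # what the gateway trusts (simplified "certificate")

# The machine asserts "I indeed am the correct machine at this address",
# with a timestamp so old claims can't simply be replayed.
claim = b"host 192.0.2.17 at 1994-03-01T12:00Z"
signature = host_key.sign(claim)

# The gateway checks the claim against the certificate it holds for that address.
try:
    host_cert.verify(signature, claim)
    authenticated = True
except InvalidSignature:
    authenticated = False                  # spoofed or altered claim
```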
Malamud: So for the machines it seems pretty simple, you just buy yourself a Clipper chip and put it on your machine, right?
Baehr: Well, uh…the Clipper chip uh, to put it mildly I don’t think is the right answer. And we and a number of other people have been very strong in our opposition to this. And—
Malamud: What’s wrong with the clipper chip?
Baehr: If you’d like to have the government spy on you from now until the end of time, go ahead and use Clipper chips. Because the problem with Clipper I don’t believe is related to the fact that they can read your information as much as the fact that they want to analyze your traffic. Because traffic analysis is much more important than the actual information contained, in many instances.
Also, I don’t trust the government to hold— Nah, I don’t really care about the keys as much as the information that’s derived from using the keys. I don’t believe that within the way the government works that there’s any security in the data that the government is going to collect. And unless that data is kept classified—and by the way, I have no desire to increase the amount of classified data around the world—I have absolutely no faith that this information will be kept secret. And I pose the question, if Clipper had authenticated the conversations between Tonya Harding and Gillooly, how long do we think that those conversations would remain private?
Malamud: So it sounds like you have two worries. One is that the key escrow mechanism is not a reliable one. That we can’t keep that key secret.
Baehr: That’s correct.
Malamud: But also it sounds like you were hinting that even if we couldn’t get the key we might be able to break that encrypted information?
Baehr: Well. I actually— And no, I think that even if the keys were kept secret and were being used by a law enforcement agency, I have no faith that the information that was derived will in itself be kept secret, and will be kept private. In other words, the conversations that you and I have over the telephone during the day, I have no faith that that’ll be kept secret until the end of time. And more importantly I don’t like the fact that the government holds the mechanism to read my private data and my private voice traffic. And if that data has been recorded, the government has the ability to replay that or attack that information from now until whenever.
Malamud: Is there a better solution for authenticating hosts? Is it just a matter of they shouldn’t have the backdoor key, or is it a matter of there shouldn’t be a universal standard and… I’m trying to understand what it’s gonna take to authenticate hosts and at the same time preserve the individual privacy and freedom.
Baehr: Well I think the first problem is that an algorithm should be used which is understood, and has a reasonable chance of not having a trapdoor or backdoor built into it. Secondarily, I think that the algorithm should employ keys that are maintained by the user. And this should be a stronger form of authentication than what the government is employing. Of course we’re trading off the ability to authenticate and also to encrypt those conversations that are private with those that might damage the security of the country. Where do you call the individual— Where do you draw the line as to what should be broken and what shouldn’t?
To get back to the question, what do you want for an algorithm? You want something that’s public, you want something that’s verifiable, and you want something where the keys can be freely exchanged and can be updated. And also bad keys can be declared to be bad so that people don’t use keys which have been invalidated.
Malamud: Is this an area where the government should be issuing the keys to people? Is this something where each individual can go ahead and use whatever they want? Do we need a standard type of certificate?
Baehr: We need a standard so that people can write programs which use authentication and expect a particular key size or a particular key mechanism to be used. Regarding how the keys are generated, there also has to be a standard to ensure that the keys are sufficiently random and that there are certain characteristics of the key generation algorithms which in themselves are not weakened purely by programming error and by lack of knowledge. But the actual key themselves should be kept private. People should be able to retain their own key and do with it as they wish.
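[A small sketch of the two requirements just mentioned — keys drawn from a cryptographically strong random source, and a way to refuse keys that have been declared bad. The key size and the revocation list are illustrative assumptions:]

```python
import secrets

KEY_BYTES = 16            # illustrative size; a real standard would fix this
revoked = set()           # keys (by hex value) that have been declared bad

def new_key():
    # secrets draws from the operating system's CSPRNG, not a guessable PRNG.
    return secrets.token_bytes(KEY_BYTES)

def usable(key):
    return key.hex() not in revoked    # refuse any key that has been invalidated

key = new_key()
assert usable(key)
revoked.add(key.hex())    # the key is declared bad...
assert not usable(key)    # ...and must no longer be used
```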
Malamud: How do we preserve all those things like making sure they’re random and safe and good, and still let any user go ahead and generate their own? Are there laws that say “Thou shalt not generate a bad certificate?”
Baehr: Well actually they’re not— Yes, they’re called mathematical laws, and with the algorithms, if they are employed, one can determine using various tests whether the algorithms used to create the keys are good.
The question here is not so much that the keys themselves are impregnable, but the fact that one should be able to change the key upon demand. You as a user should be able to do what you wish in terms of encryption. And I’d add at this point that it’s ludicrous for the US government to believe that by mandating a scheme that’s applicable in the US, that the rest of the world won’t go and do exactly what they want, which is implement a scheme which they find to be flexible and strong. And to this end, there are several dozen schemes which are floating around the net, outside the US, for both authentication and encryption. And people are using these right now. And do we have any belief that mandating a standard in the US, or mandating a government approach in the US will cause this to change? I don’t think so.
Malamud: So it sounds like we’re really not going to be able to legislate individual behavior, and when we’re talking about how you secure your virtual person, it’s like having the government say you shouldn’t spit on the sidewalk. You can pass laws like that, but there’s a limit as to how much people will actually listen.
Baehr: I’d also say that the government will certainly find out in short order that the people who are bad will not be using their approved encryption mechanism. And even if they do, they’ll probably encrypt the data before they even apply it to a Clipper chip.
Malamud: So you do your first encryption, and then you hand it off to Clipper and say, [crosstalk] “Here, do whatever you want.”
Baehr: “Here, I have— Thank you very much, have an excellent time.” And there are mechanisms that are known to search for encrypted data inside data streams, but you actually have to break the data stream first to find out that it’s encrypted.
Malamud: Well the theory behind Clipper of course is that because the federal government is gonna buy lots of these, everyone in industry will follow. This seems to remind me of another government-led standard called GOSIP.
Baehr: I was just going to mention OSI. And I see that they’re waving the white flag now. The only problem here is the war isn’t something which is…intrinsically interesting only to computer science and networking people. This is something which is much more pervasive and affects the entire society. And I hate to have the government go through all this just to find out that the entire system has been negated by either someone revealing the algorithm and a trapdoor, someone breaking it, someone using their own algorithm, what have you. There’s so many avenues to negate this that the government should actually concentrate on things which are much more important.
Malamud: So Clipper might end up being the GOSIP of the 1990s if we’re not careful.
Baehr: And actually that’s probably a good way of stating it.
Malamud: Geoff Baehr, you’ve been active representing Sun in a variety of groups. And as a computer vendor you’re the ones that actually have to make the stuff. And I guess I’m wondering which groups matter today? Do you listen to the IETF? Do you listen— There are so many groups out there. ATM forums, and SMDS forums, and you know, Interop shows, just to name a few. What part of that feedback actually is useful to you in designing a product?
Baehr: Well I’d say first of all, standards are good. That’s the general attitude of the industry these days. And one therefore must have a sufficient number of people on every standards body to indicate that you too support standards.
Actually what we found out over time is that the groups that are most effective are ones that come together as ad hoc coalitions. And particularly those such as the IETF and the IAB, and groups that are led by people who actually have a product to ship or who have a standard which affects people making money, either indirectly or directly. Those groups seem to have a goal, as opposed to some of the open-ended groups which are working on a standard for its own good. And this leads to thrashing. And we’ve seen that before. We mentioned one of the previous protocol suites that has had endless revisions.
The efforts which Sun is making right now in standards are in attempting to unify several different flavors of Unix and such, because it affects the way we make money. And also with networking and data communications, we implement those standards and we participate in those standards bodies which are driving ahead and are making progress. We participate in many different bodies, but kind of the criteria that we apply are when is the output going to be visible? And what does it affect? And does it affect real-world applications that people want? And if the answer is yes, we should go ahead and push real hard.
Malamud: Well what about the other groups doing virtual standards that are based on imaginary products? Do those groups matter? Do you have to send people there in self-defense or do you just ignore them?
Baehr: Well this is really the question of if you don’t go people can accuse you of not going. So, do you send people who don’t matter to you from your company, who are probably not the top rank? And the question is no, because then what happens is you’re not able to leave the group. So it’s a doubly bad position. But we try to send people to most of the groups that have country-wide effect, and this includes some of the groups in Europe and some of the groups around the world which are mandating standards for their entire country or for an entire industry. You don’t want to be left out. That’s the problem. And, at the same time people use standards as a weapon right now. And it’s just like anything else. In the beginning it started for the good of all people. Now it’s turned around to a weapon which people can use to either sell or not sell machines. Even if the standard is only a check box or tick box item.
Malamud: Many of these groups seem to have two purposes. One is “we fight among ourselves to agree on what the standard is going to be,” the other is to somehow promote the industry. And networking seems to be getting big enough and strong enough that there’s a need to do things like that. I’ve noticed some groups in Washington that are attempting to influence our national information infrastructure. There’s a cross-industry working group, there’s been gigabit jamborees. Are we going to be able to influence the shape of this so-called information superhighway, or is this going to happen led by the telephone companies and the network TV and the cable TV— Are we gonna play a part in the NII or are we just gonna furnish MIPS?
Baehr: Well… It’s… I belong to one of these groups. And I believe that what will happen is that while the government is inherently slow, and the government’s regulatory processes are designed for the Communications Act of the ’30s, the government will indicate a proposal of what they would like to do when they can apply money. And the money causes people to jump up and to pay attention. But, like anything else in networking, no matter how tightly you think you’ve written the specifications, they’re still sufficiently vague that the proposals and the results can swing all over the map.
What this means is that the people who have money and who are going to try to make money off of these things are the ones who are going to guide or to push their results around. And be it the venture capital community, or be it the access providers, or be it the carriers, these are the people who are going to make money off of these things. So it’s their desires that we’ll see reflected in the implementations.
And there’s nothing to say that an implementation necessarily has to meet the goal of the government. So the argument that I would make is that the commercial implementations indeed will be this information superhighway or what have you. And that the commercial implementations will lead. Because public policy is necessarily slow. Because the deliberative process requires such a substantial amount of time.
Malamud: But what is this information superhighway? Is it just the Internet grown bigger? Is it you know, Super Mosaic? Or is it something totally different? Are we talking cable TV with maybe the ability to order a movie?
Baehr: Well my belief is that it’s done, and it’s called the Internet. And sooner or later someone should stand up and say this information superhighway’s here, and it’s a question of degree. Whether you’re gonna run DS3 lines around or whether you’re gonna run T1s into every school, fine, that’s up to you folks to pay for. But in reality, it’s here. The mechanisms that the cable TV folks are putting together, those are mechanisms for them to make money, to be able to keep their businesses going to a closed user base. Talk to the cable television people and you’ll find out they have no goal to interconnect cable systems together. Internetworking has never been the goal of any of these people.
Malamud: So they’re building LANs. LANs for the delivery of video data.
Baehr: They’re building MANs, for the delivery of video data and shopping services, what have you, for their subscriber base. Yet when you look at the interconnectedness and the range of services, and also the subscriber base out there on the Internet, gee, the people who figure out how to conduct commerce over the Internet are the folks who will realize the fact that this highway is here—I hate calling it “highway,” by the way. They’ll figure out that this thing is here.
Malamud: So is the home user gonna have several different ways to get out to the world? They’ll use their TV and their set-top box to get a movie, they’ll use their modem to go out to the Internet—
Baehr: Yes.
Malamud: —or is there gonna be the single magic set-top box?
Baehr: No, I don’t believe there’ll be a single magic set-top box, for a single reason, and that is if you’ve ever tried to get between a three-year-old and a television set to go surf the Internet, I can tell you who’s going to win that battle. And it’s not going to be surfing the Internet, it’s going to be the three-year-old.
And the question I have is, if you bring all these services to people, how much is going to actually be used versus the cost of making all this infrastructure go? I mean we’re talking about two antithetical ideas here. One is I sell services to a closed user base. Yet on the other hand I’m going to hook in the Internet, to richly interconnect all these people together? It doesn’t quite make sense to me.
Malamud: Do you think people are going to want both, or they just want to get their movies? Do people care about universal Internet access?
Baehr: Uh. Well it’s interesting. I think that the people who do care will go and buy the appropriate gadget to hook them in. Be that a PC or some specialized gadget. The folks who want to see movies, it’s a very compelling argument that you pay only $2 to go down to Blockbuster to rent a movie. The folks—
Malamud: Or $2 for pay-per-view.
Baehr: The pay-per-view, what have you. People are used to doing that. Changing things fundamentally, and this is— I dislike saying “paradigm shift,” but changing the usage model for the public takes a long time, and people have gotta be prepared to stick it out for the long course on any of these changes. Take for example VCRs. How long was it before VCRs became popular, before people were not scared and were able to go and put a tape into a VCR and use it? It was a ten-year program.
Malamud: Let alone program it.
Baehr: That’s correct. And with programming, if anyone out there knows of the appropriate universal remote control let me know because we’ve tried them all here. And can’t find any of them that seem to be particularly good.
Malamud: This has been Geek of the Week. We’ve been talking to Geoff Baehr from Sun Microsystems. Thanks a lot, Geoff.
Baehr: Thanks, Carl.
Malamud: You’ve been listening to Geek of the Week, a production of the Internet Multicasting Service. To purchase an audio cassette of this program, send mail to audio@ora.com. You may copy this file and change the encoding format, but may not resell the content or make a derivative work.
Support for Geek of the Week comes from Sun Microsystems. Sun, makers of open system solutions for open minds. Support for Geek of the Week also comes from O’Reilly & Associates. O’Reilly & Associates, publishers of the Global Network Navigator. Send mail to info@gnn.com for more information. Additional support is provided by HarperCollins and Pearsall. Network connectivity for the Internet Multicasting Service is provided by UUNET Technologies, and MFS DataNet.
Geek of the Week is produced by Martin Lucas, and features Tungsten Macaque, our house band. This is Carl Malamud for the Internet Multicasting Service, flame of the Internet.