Carl Malamud: Internet Talk Radio, flame of the Internet.
This is Geek of the Week. We have with us Milo Medin, who is deputy project manager of the NASA Science Internet office. He also heads up engineering for NASA’s NREN program. Those are a lot of titles, but basically when we think of NASA Science Internet we think of Milo’s net. Milo’s also an active participant in the Internet Engineering Task Force. He’s worked a lot on OSPF and other standards. Milo, welcome to Geek of the Week.
Milo Medin: Ah, thank you Carl. Quite an interesting opportunity to participate in this new extension of multimedia capability on the Internet.
Malamud: That’s right, an adventure in broadcasting.
Medin: That’s right, that’s right.
Malamud: I’ve got a couple questions about the NASA Science Internet and its relationship to the rest of the Internet. You’re both a part of and apart from the Internet—
Medin: Right.
Malamud: You’re a mission-oriented network.
Medin: Right.
Malamud: And you’re also the transit network for much of the Pacific Rim.
Medin: Right. We have cooperative relationships with a number of organizations. In general, we try and accomplish our job in such a way that we can benefit the overall infrastructure of the Internet as a whole. Because we believe that if we— There’s no way we can go off and run point-to-point links to every place in the network. That’s sort of counter to the Internet philosophy anyway. By going in and working with the people in a nation who are working on that nation’s infrastructure, we can achieve higher levels of performance and connectivity at lower cost, and make it in general better for our scientists, and for scientists overall, to have access.
Malamud: Do you find the goals of the country might conflict with your goals as a mission-oriented network?
Medin: In some cases that’s there, and in other cases we… You know, our requirements are such that sometimes we have to meet them via dedicated facilities. There’s just no way— If we have a flight project, for example, that is pushing data at near-real-time rates, the Internet today, Internet technology in general, does not allow you to sort of have dedicated resources or bandwidth allotments. Those are being worked on in R&D on DARPA’s DARTNet, and several organizations are working on things like that. But in general, it’s sort of a free-for-all. Now, sometimes that works out quite well and it’s compatible with certain requirements, and sometimes it’s not. But we, I think, and the federal government as a whole, try to take the approach that we want to build up the infrastructure overall, and in the end that will be better for all of our needs, mission and non-mission requirements as well.
Malamud: Do you expect some of these clients of yours, countries for example, to migrate over to alternative carriers to commercial nets in the future? Are you a transition path for them?
Medin: Well, I’m not the carrier for countries per se. For example in PACOM, we’re part of that consortium. And we get things out of that, other people get things out of that. We act as their sort of gateway to the Internet, to the general-purpose Internet. As a whole it’s true. But that’s not different than the role that NSF or DOE play in other arenas.
I think if the Internet really becomes a public data network in the true sense of the word, something that equals Telenet or surpasses it in capability, and it’s being used that way, then I think in general the mission agency networks will become much more private, and won’t have to go off and pull capability to places. The overall structure of the public data network will be at a level where our own networks won’t have to be as large, and we can concentrate on those particular requirements which need special links.
And I think if everything is successful, then that will greatly decrease in the future. On the other hand, there are a lot of issues which will make it difficult for the Internet to expand into that global sort of public data network vision, and we have a job to do. So the question is how to meet that job, how to meet the requirements that we have in the most cost-effective ways possible, and to try to do it in a way which is as cooperative and as beneficial as possible to the science and research community overall. We’re not in the business of trying to compete with Sprint or anybody else. That’s not the role of the federal government. But we have a lot of expertise, and we have certain requirements that are basically very difficult to meet right now in the general-purpose structure.
Malamud: I’ve noticed that the National Science Foundation places certain restrictions on the type of data that goes over their network, an appropriate use policy.
Medin: Right.
Malamud: Energy Sciences has their own AUP policy—
Medin: Right
Malamud: —as does NASA Science Internet.
Medin: Right.
Malamud: Are those policies consistent with each other, and if not how do we handle that?
Medin: In general they’re consistent with each other. There’s an agreement between the agencies to sort of pass traffic for each other on sort of a quid pro quo basis. Most of those interactions occur at the FIXes, the two FIX points where the federal agencies interconnect to each other. But in general, for example, NSF can use our link to the Antarctic—in fact that’s a link that was put in place with the polar programs people at NSF—in exchange for us using their connectivity to Cornell University, for example. And so the federal government is… This is one of the rather unique examples of tight cooperation in the federal government. Most of the time you’ll end up with agencies duplicating things quite a bit, and in general I think we’ve had a remarkable amount of cooperation. Mostly due, I think, to the personalities and the sort of common viewpoints that people at the headquarters level share.
Malamud: It’s been cooperation among agencies and also cooperation among the agencies towards a common vision.
Medin: Right.
Malamud: Is this an example of an industrial policy? Has this been one of the few examples, maybe, in the US government of that happening?
Medin: As a civil service employee, I certainly couldn’t comment on a policy like that.
Malamud: Well there you have it. [both laugh]
You mentioned the FIX, the Federal Internet Exchange—
Medin: Right.
Malamud: Currently there’s a movement for something called the Global Internet Exchange, a neutral place that anybody can connect to and basically peer with any other network.
Medin: Right.
Malamud: Can you explain maybe the difference between a FIX and a GIX, and…
Medin: Well, it’s hard— It’s… You know, the FIXes— First off you know, the FIXes are connection points between the major federal R&D networks. Those are relatively few in number, okay. The architecture of the FIX, where everybody peers with each other directly, is not an architecture that can scale. That is to say you can’t have twenty people all peering with everybody else and doing that kind of thing effectively. It becomes a management nightmare to configure and make sense of all that.
The GIXes I think are trying to do things in a somewhat more distributed fashion and allow greater numbers of carriers and organizations to connect to each other, in an environment where they can sort of hand traffic off to each other, primarily of an international flavor. But you could look at them as sort of NAPs on a global scale, the NSF…
Malamud: No, I still don’t understand why a GIX wouldn’t have the same scaling problem that a FIX would have.
Medin: I think that some people have talked about putting route servers, or people not necessarily peering with each other if— You typically have a situation where international networks are more interested in getting data that’s in the US, and not necessarily from each other. If you look at the overall traffic figures, for example, of how much say Thailand talks to Australia, compared to how much Thailand talks to the US and Australia talks to the US, I think you’ll find that the overwhelming amount of traffic is going to a certain set of autonomous systems, if you wanna look at the NSF backbone or something like that.
If you’re in the federal environment I think the sort of load mix is much more diverse. So you can optimize in the GIX case by reducing the amount of peering you have, and depending on sort of third-party peering or route servers or something like that to minimize that. I think there are significant management issues which are going to end up having to be dealt with in GIXes, and it’s not clear to me that those things are going to be easy to solve. But we’ll see, you know. It’s one of those things.
Malamud: Now you mentioned one other concept, which is a NAP, a Network Access Point.
Medin: Right.
Malamud: How does that compare to GIXes and FIXes?
Medin: Well I think the concept of a NAP, as defined by sort of NSF’s vision of where they’re going, at least in my mind, is that it’s sort of an interconnection point for networks that’s basically AUP-free, but there’s something called a route server there which acts as sort of the brain, the way the NSFNET routers do today. But this route server isn’t actually a network. It doesn’t move data from one point to the other. Everybody sends it routes, it digests them, and sends back a consistent set of routing information to the people who are peering with it. And…
Malamud: That sounds just like a GIX, though, doesn’t it?
Medin: Well, I think the concept of the GIX is that people actually peer with each other. And that there’s no central sort of routing-policy source on a GIX. Whereas on a NAP the idea is that there’s somebody there that everybody can talk to, and so the relationships are not n squared. Everybody’s peering with one, and that in turn redistributes the information around. It’s the same sort of thing we do in routing algorithms, right. If you’re on an Ethernet and you’re running a link-state routing protocol, you could model that Ethernet as a series of point-to-point connections between all the routers there. Or you can sort of abstract it by talking to a centralized routing agent which redistributes things. And those things are called designated routers in the IS-IS and OSPF routing protocols. So it’s the same sort of thing. You deal with it by trying to consolidate the routing interchanges so you don’t have that kind of random, or widespread, routing interaction.
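The scaling contrast Medin draws here, full-mesh peering versus peering through a single designated router or route server, comes down to simple counting. A rough sketch (the function names and example sizes are invented for illustration):

```python
# Full-mesh peering grows quadratically with the number of routers,
# while peering through one hub (a designated router or route server)
# grows linearly. That difference is the management burden at issue.

def full_mesh_adjacencies(n: int) -> int:
    """Every router peers directly with every other router."""
    return n * (n - 1) // 2

def hub_adjacencies(n: int) -> int:
    """Every router peers only with the single designated hub."""
    return n - 1

for n in (5, 20, 100):
    print(f"{n} routers: full mesh {full_mesh_adjacencies(n)}, "
          f"hub model {hub_adjacencies(n)}")
```

With the twenty peers mentioned above, a full mesh needs 190 pairwise relationships while the hub model needs 19, which is exactly the consolidation a designated router or route server buys.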
Malamud: You’re listening to Geek of the Week. Support for this program is provided by O’Reilly and Associates, recognized worldwide for definitive books on the Internet, Unix, the X Windows system, and other technical topics. Additional support for Geek of the Week comes from Sun Microsystems. Sun, the network is the computer.
Don’t touch that mouse. Internet Talk Radio will be right back.
[Ask Dr. SNMP segment omitted]
Malamud: You mentioned OSPF and IS-IS, which are two internal routing protocols. Of course we started with the famous RIP as an internal routing protocol. Then OSPF and IS-IS. People are now using Border Gateway Protocol as an internal protocol. We have RIP2 coming down the pike.
Medin: Right.
Malamud: Do we have too many routing protocols? Should we say no, or is this good to have so many?
Medin: You know, I don’t know. I find RIP actually is a great router discovery protocol. You can use it—basically routers send out RIP default, and most hosts that are shipped from vendors are equipped with a routed daemon that they can run in ‑q mode (basically quiet mode), and they just pick up routes from routers. That way hosts can figure out where their default gateway is, and then they can be ICMP-redirected—
Malamud: So it’s good for a workstation.
Medin: So I think it’s good for certain sets of things there. It’s also the case that in a lot of situations, if you have three or four Ethernets, you know, do you really need something more complicated? Especially if you’ve got low-cost routers. People are building Ethernet routers, PC-based platforms that route Ethernet to Ethernet very cheaply. You have a lot of companies getting into the low-end router business these days. And so there’s a lot of topology that I think static routing works fine in, okay. And so RIP is just an easier way of doing it than static routing.
But once— I think we’ve learned in the past that networks tend to get very complicated. They grow. They’re not a static thing. And as networks get larger and larger you need to move up to a more capable protocol. OSPF I think has been proven in the marketplace as a good protocol. It’s being used all over the world. It’s being used in some very large networks. And one of the things that we put into it when it was being designed was this notion of classless routing. Everything is sort of a route plus a mask. There’s no A, B, or C that’s hard-coded into it. That has proven, with CIDR and some of the future activities that people are trying to put in place now for sort of restructuring the way routing in the Internet works, to be a very good decision. And so I think you may use OSPF in places where the complexity isn’t driving you but just the way you want to do your subnetting or sort of route masking and matching works.
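The "route plus a mask" idea Medin credits OSPF with can be illustrated as a toy longest-prefix lookup; this sketch uses Python’s `ipaddress` module, and the table entries are made up for the example:

```python
# Classless forwarding: each table entry is a prefix (route + mask),
# and a destination matches the longest prefix that contains it.
# No class A/B/C boundary is assumed anywhere.
import ipaddress

routing_table = {
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",
    ipaddress.ip_network("192.0.2.0/24"): "eth0",
    ipaddress.ip_network("192.0.2.128/25"): "eth1",
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Longest-prefix match: the most specific containing network wins.
    best = max((net for net in routing_table if addr in net),
               key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("192.0.2.200"))   # eth1 (the /25 beats the /24)
print(lookup("198.51.100.7"))  # default-gw
```

Because the mask travels with every route, aggregation schemes like CIDR fall out naturally: a /25, a /24, and a default route coexist in one table with no classful assumptions.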
Malamud: Now I know some networks…EBONE, the European Backbone for example, instead of using OSPF internally they’re going to be using BGP internally—IBGP.
Medin: Right.
Malamud: Why would you want to use IBGP rather than OSPF? Is there room for both of them as internal routing protocols?
Medin: Um, I don’t— You know, if you’re basically in the transit net business, you don’t really have very many internal routes, okay. If you look at the NSF backbone, for example, how many routers and how many router-to-router interconnects are there? There are not that many. Most of the routes that are passed are external routes. You can pass external routes by importing them into an internal gateway protocol like OSPF and then reexporting them at the other end. Or you can pass them, in essence, via an external protocol that’s running through your internal network. That’s the difference. OSPF can carry external information, and it can carry a certain amount of information in a tag field that goes with it. And that allows you to make policy determinations when you import it and export it. You can do the same thing with IBGP.
I think the way that IBGP works, with its primarily sort of point-to-point orientation— In certain circumstances that works quite well. And it avoids having all that information be passed in your [insides?]. If you’ve got a router, you’ve got a very simple network topology internally, and you’re carrying a lot of external information, then why bother the internal protocols with carrying all kinds of information where they’re just basically acting as sort of a transport for it? It’s just passing it from one place to the other; it’s not really acting on it internally, effectively.
On the other hand, if you’ve got a complex internal structure like NSI does, or the way a lot of regional networks do, there are some definite advantages by importing that and especially if you’ve got a very wide range of interfaces. You may not be able to effectively do point-to-point IBGP connectivity.
Malamud: What about IS-IS? When would I want to run that?
Medin: Well, um… You know, I think right now there are just not enough IS-IS implementations out there that people really want to use it. I think if you’re in an IP-only environment, you’re probably still better off using OSPF, which was really designed and optimized to deal with things the way IP does them. IS-IS is a very capable protocol, but I think most of the proponents of IS-IS argue that you should use it in sort of this integrated routing model, where you have both IP and OSI being carried in a single routing protocol. Or other protocols as well. And we just don’t have enough vendors who are doing that, or enough customers who are actually doing that right now, to say whether it’s successful or not. I just… All the successful multi-protocol router vendors so far have implemented multiple protocols using multiple stacks. They have not done it the integrated way.
In certain situations, there’s advantages to doing it in an integrated way. In other situations it’s not. The marketplace is basically going to decide that, you know. That’s the great thing about the Internet, right, you can actually validate your design principle by whether or not it’s successful at meeting real-world problems that real-world people have. And it’s not just theory, it’s not just—
Malamud: Well, gross revenue speaks a world, doesn’t it?
Medin: Yeah, it certainly does. It certainly does.
Malamud: I’ve heard a view, and I’m not sure how many people share it, which is that all this work on complex scalable routing protocols is a waste of time, and the reason is because we’re moving to a global ATM cloud around the world. And you don’t really need routing protocols, everyone’ll be able to speak to everyone else directly.
Medin: Well, you know— That’s an interesting view, and it’s held by a lot of people who I have a lot of respect for. The problem— You know, the problem is that you’re always going to have conventional LANs out. There’s Ethernets that are out there… Ethernet is not going to go away. Not when you can get adapters for you know fifty, sixty dollars and that’s just a part…you know, you put them in and they’re built into workstations. 10BASE‑T and all of that, operating at 10 megabits. There’s a lot of conventional networking technology that’s out there.
And there are low-speed lines. You know, I’ve heard some people talk about putting ATM over T1 or even something like 56Kbit but really you know, you pay a lot of overhead when you’re trying to run on top of a small line that you’re trying to get the maximum amount of data across—
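The overhead Medin is pointing at is the ATM "cell tax": each 53-byte cell carries only 48 bytes of payload. A back-of-the-envelope calculation (nominal line rates, ignoring AAL framing and padding, which only make it worse):

```python
# ATM cell format: 5-byte header + 48-byte payload = 53 bytes per cell,
# so the header alone costs 5/53, roughly 9.4% of the line rate.
CELL_BYTES = 53
PAYLOAD_BYTES = 48

def usable_bps(line_bps: float) -> float:
    """Payload bandwidth left after the per-cell header overhead."""
    return line_bps * PAYLOAD_BYTES / CELL_BYTES

overhead_pct = 100 * (1 - PAYLOAD_BYTES / CELL_BYTES)
for name, bps in (("56 kbit line", 56_000), ("T1", 1_544_000)):
    print(f"{name}: {usable_bps(bps):,.0f} bps usable "
          f"({overhead_pct:.1f}% lost to cell headers)")
```

On a fat pipe that tax is tolerable; on a 56 kbit or T1 line, where every bit matters, losing roughly one bit in eleven before any data moves is exactly the kind of cost the exchange below alludes to.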
Malamud: As we’ve learned with TCP over dialup lines.
Medin: For example! For example. And so—
Malamud: And header compression for ATM I guess is not being considered yet? [laughs]
Medin: Well I mean, there are different ways you could compress. You know, there are pluses and minuses of that. I think you know, the question is really… You know ATM is not a perfect technology. And I don’t think any of its proponents are arguing that. It solves a certain set of problems. And it’s a standard that a lot of people have embraced. And it gives you certain functionality that’s very good to get that way. It uses resource control, and some of the other capabilities are great; isochronous transport.
But as long as you have disparate networks, you need an Internet Protocol, okay. Now, if you’re going Ethernet-to-ATM, how’re you going to do that? Are you going to do that by pushing some MAC-layer stuff and then, what application stack? You know, TCP/IP… You’ve got TCP as transport, but that’s designed to run on top of IP. You can’t run TCP on top of 802.2 right now. I don’t know anybody who’s proposing doing that kind of thing. And given that, you’ve got a large software base, a lot of existing host interfaces. I think there’s a large argument that can be made for running TCP and IP directly on top of ATM. Even if you had a ubiquitous ATM structure, you could just run that and have compatibility with all your applications and all these other facilities that are out there right now. And it’s not clear to me what you’re trying to optimize if you get rid of IP. There are people who will argue that TCP is not going to be the protocol that takes us into the gigabits. I think a lot of people have said that TCP/IP was not the protocol that would work well on Ethernet. There are people who said TCP/IP is not a protocol that would work well on FDDI. And now people are saying that TCP/IP is not a protocol that will work well at very high data rates.
Malamud: Well I think the benchmarks and work done by David Borman at Cray kind of proves that it may not be the optimal protocol for gigabits but it will certainly work [crosstalk] at that speed.
Medin: Right. And the other thing too is you know, we’re not talking about a static target. You have options…header extension options and other kinda things which are in place for TCP, and people are also working on things. So it’s not the same TCP that we had back in 1980 that’s running at gigabits. You’ve got bigger window sizes and things like that in there. And so there’s a capability there for evolving the standard, which we do all the time in the Internet. And so I have a… You know, again the question is what do you gain? If you look at where the cycles are spent it’s typically not in TCP and IP. You’ve got a lot of other problems to deal with when you’re operating at a gigabit speed.
So you know, the question there is what kind of…why would you want to do that? Maybe multimedia conferencing, etc., some of these applications may not be well-suited for running on top of IP. Maybe better-suited to running on top of an ATM network directly.
Malamud: Oh sure. Audio for example could never run over the Internet.
Medin: No, it could never run on top of something like IP. And so you know, there’s a lot of different ways that you could solve some of these problems. I think one of my concerns as someone who’s looking toward actually building an ATM wide area system and using it, is that I think a lot of people are overhyping ATM. Especially some of the LAN vendors who are purporting that ATM will cure baldness and you know, it’s a dessert topping, it’s a floor wax, it does everything. And I guess my concern is you know, as with all network technology, that it be used where it makes sense and that it not be sold into environments where it’s not the right answer.
Malamud: Does it make sense as a LAN interconnect for high-speed workstations?
Medin: I think it does. I think there’s a lot of cases where it makes sense for that. Now, there’s a lot of environments where Ethernet’s a perfectly good solution, and you don’t really need very high-speed performance requirements, because workstations’ I/O architectures can’t support good I/O at very high data rates anyway. And if you’re looking at a physical plant that’s limited to 10BASE‑T type of wiring, Category 5 wiring plant or something, you might get 155 out of it. But you know, you’ve got FDDI and other things there. I think it’s good at doing a certain set of things, but we need to look at it from the point— I mean, there’s a paradigm shift involved when you’re talking about ATM. You’re talking about a switching rather than a LAN technology. And for a LAN or a network to be useful, you have to have operations and management capabilities as well. And with ATM you just can’t put a LAN analyzer on the LAN and see all the traffic that’s going across it, right. By definition it’s being switched—there’s not a central spot where you can just tap into it like you can with FDDI or Ethernet.
So, it’s important that…you know, there’s a— That’s a negative argument against using that type of technology in LANs, because how many times, when you are having a problem, do you resort to just putting something out on the network in promiscuous mode and looking at all the packets? If you didn’t have the tools available to you…you’ve got yourself potentially some problems.
Now, that has to be weighed against the advantages that ATM switching gives you. And I think a potential advantage is an integrated local area/wide area environment. And so again you have to look at the cost/benefit tradeoffs. But it’s not a simple calculation. And I think, in general, a lot of decisions that sort of market droids try and push are decisions that the customers really have to take a good solid look at. Not just in terms of acquisition cost. Not just in terms of raw performance. But in terms of operations, and maintenance, and the ability to support the thing, and adapters that work at high performance. We’re only now starting to see adapters for things like workstations that actually operate at reasonable data rates, because before, they were doing the segmentation and reassembly in software. And you’re burning your workstation’s CPU just trying to put the cells together and pull them apart.
Malamud: You’re listening to Geek of the Week. Support for this program is provided by Sun Microsystems. Sun Microsystems, open systems for open minds. Additional support for Geek of the Week comes from O’Reilly and Associates, publishers of books that help people get more out of computers.
This is Internet Talk Radio. You may copy these files and change the encoding format, but may not alter the content or resell the programs. You can send us mail to mail@radio.com.
Internet Talk Radio, same-day service in a nanosecond world.
Malamud: You talked about migration of the TCP protocols over time to handle faster speeds. We’re looking at a migration effort now at the IP level, to handle address space exhaustion, to look at routing table explosion, to support things like policy routing. Where do you see that going? What are the requirements we should be looking at for routing protocols?
Medin: Well you know, it’s an interesting environment. I think the…you know… My personal opinion is that— Well we have a saying in NASA called “failure to achieve orbit,” right. You’ve launched a rocket, but it didn’t quite have enough delta‑v to make orbit. And I think a lot of the proposals out there are going to end up not really making orbit. They’re not gonna have a lot of implementation. And I think part of the issue there is you have to look at not just architecture but a transition plan of how you get from where you are now to where you want to be.
And we have to learn something from the ISO experience. People are not gonna convert to some new protocol unless there’s a significant functional advantage to doing so. Okay?
Malamud: Well that’s interesting, because many of the ISO protocols were, um…feature-rich, one might say. X.400 was touted that way. Many of the lower layers.
Medin: Right. But I can run X.4—
Malamud: What happened there.
Medin: But I can run X.400 and X.500 on top of IP.
Malamud: Mkay.
Medin: So, I mean just because I want the ISO application, regardless of whether or not the application is good or bad, if it’s a good application, my—inside NASA there’s a major effort to put in place nameservers using the X.500 protocols running on top of an IP Internet. So I don’t need the ISO lower layers to get ISO applications.
Malamud: Are there advantages at the lower layers that ISO had? For example, ES-IS as a discovery protocol.
Medin: I think there’s a lot of advantages in piecework, but the question is, you know, you’ve got a large installed TCP/IP base. It is not adequate just to have slight advantages to get people to convert, okay. How many vendors… Here’s an interesting question. How many vendors ship ISO software as their default network software that their operating systems use, okay, for things like file service, remote logon, remote procedure call mechanisms—ship it by default in their workstations? Not as an extra-cost option. I’m talking about something that’s bundled in because it’s so key to the software in the distributed computing environment that it has to be there. How many vendors do that?
Malamud: Well it’s a good thing we can’t name vendors on this program.
Medin: [laughs] Right. So it’s…you know, so there’s a question there. And I think the question with an IPv7 follow-on is what advantages does it give you on top of the existing IP? If the only advantage it gives you is that the system scales better, that’s a global advantage but has very little effect on a local decision, right. Some university is not going to go off and renumber…I mean, go off and reimplement all the host software, or buy brand-new host software and network diagnostic tools and routers and all that other stuff…just so that the Internet can grow, right. There’s no advantage to them to do that. So, from the point of view of an incentive, what incentive do you provide people to change, okay?
Malamud: What would you like to see in a new routing protocol? What are some of the things missing in IP that you’d like to see there?
Medin: Well, you mean in an Internet protocol.
Malamud: Yeah.
Medin: I think the point— Well, there’s obviously some issues with scaling. Fixing that would be useful. But I think, better— We’re now getting, with ATM and that type of environment, communication subnets which have the ability to do resource control and some better performance-assuring capabilities. And you’d like to have the IP protocol have some capability for doing flow IDs or something like that, which would allow the routers to get information they could use to signal down to the subnet, in terms of "this flow needs such and such capability," so the ATM network can provide that. The TCP layer knows that it needs that; the question is how you get that information down.
Malamud: So that’s coordinating. For example if we have isochronous data running across multiple ATM clouds in a complex Internet, a flow ID would let us identify that flow of data through the network.
Medin: Yeah, it’s…it’s an oversimplification, but yeah. Basically what we’d like— You know, think about how much better it would be if you had an MBone, a multicast backbone, where you could actually devote bandwidth on demand, so when somebody’s running a video conference they can get that bandwidth and not be stomped on by two Crays starting to do a file transfer with each other. With an ATM substrate you can actually do that kind of thing. But it’s very difficult, when you’re actually trying to use that in an environment where you’ve got TCP and IP and routers and the whole nine yards, to make that work. So I think there would be some advantages there.
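The bandwidth-on-demand idea can be caricatured as simple admission control: a flow asks the subnet to set capacity aside, and is refused if granting it would overcommit the link. Everything in this sketch (the class, names, and numbers) is invented for illustration, not taken from any protocol in the interview:

```python
# Toy admission control: reservations are tracked per flow ID, and a new
# flow is admitted only while the sum of reservations fits the link.
class Link:
    def __init__(self, capacity_bps: int):
        self.capacity_bps = capacity_bps
        self.reservations: dict[str, int] = {}

    def reserve(self, flow_id: str, bps: int) -> bool:
        """Admit the flow if capacity remains; otherwise refuse it."""
        committed = sum(self.reservations.values())
        if committed + bps > self.capacity_bps:
            return False
        self.reservations[flow_id] = bps
        return True

link = Link(capacity_bps=1_544_000)          # a T1's worth of capacity
print(link.reserve("video-conf", 768_000))   # True: bandwidth set aside
print(link.reserve("cray-ftp", 1_000_000))   # False: would overcommit
```

The video conference keeps its reserved bandwidth no matter how hard the bulk transfer pushes, which is the property best-effort IP of the day could not offer.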
I think another area would be something that would help you better provide security, and securing access to hosts. So this is primarily a problem with the complex operating systems we have. There’s potential at least for certain things that could be done there. I think that’s— Unless something is done about being able to secure communications, not just…[indistinct phrase] mostly encryption, but make sure that people can’t do bad things to you across the network and that your network’s not subject to denial of service attacks, etc. The Internet’s not going to become a global village public data network. Businesses are not going to pick a common carrier-based IP solution as opposed to a frame relay or an ATM or SMDS type of solution, because of the potential issues.
You’ve gotta— Take your average workstation. You look at how many lines of code there are in there for the various services that’re available over the network. That’s a very complex beast, and it really—a lot of this was not designed with security in mind. And now if you’ve got corporate data that’s your livelihood, and you’re attaching it to a network where anybody, you know, from the Antarctic to Kuwait can get on to the thing, I mean…this is quite an interesting thing. You need to think about that.
Malamud: Interesting’s an understatement. [both laugh]
There you have it. This is Geek of the Week. We’ve been talking to Milo Medin. Thanks a lot Milo.
Medin: Thank you.
Malamud: This has been Geek of the Week, brought to you by Sun Microsystems and by O’Reilly and Associates. To purchase an audio cassette or audio CD of this program, send electronic mail to radio@ora.com.
Internet Talk Radio, the medium is the message.