Carl Malamud: Internet Talk Radio, flame of the Internet.
Malamud: This is Geek of the Week, and we’re talking to Stuart Vance, who’s vice president of engineering at TGV. Welcome to Geek of the Week, Stuart.
L. Stuart Vance: Thank you Carl.
Malamud: Your company makes TCP/IP and NFS software for VMS operating system-based machines. I’ve always been curious, what happens if DEC starts doing the same thing, bundles it into their operating system. Does that just wipe you out of that market? Are you gone?
Vance: Well, DEC in fact does have a TCP/IP product for VMS that they sell. “DEC TCP/IP Services for OpenVMS,” kind of a mouthful. And we have been a bit worried about whether they might actually bundle it in with their operating system. They have not as of yet. And while we’re concerned about it I wouldn’t say that we’re overly so. We have a very good product. It’s been around for a long time. And the history of MultiNet goes back to ’82. In fact MultiNet was one of the first TCP/IP stacks on the Internet, period. Back then I guess it was actually just the ARPANET.
DEC’s product is much newer. It’s a lot… It hasn’t really stood the test of time. And I think that DEC is discovering with their product that it might be easy to play in a small environment, but playing in the Internet itself and providing all of the requisite services that people…well, require for the Internet is a lot harder than it looks perhaps on paper.
Malamud: The reason I ask is because as you know there are many giants in this industry. There’s Microsoft, and there are many small companies competing with Microsoft. Do you think small companies like yours are able to compete with the giants by being quicker, faster, better?
Vance: It certainly helps to be nimble. One of the reasons why we’ve been able to be successful is when we’ve seen an opportunity we’ve been able to jump on it quickly. Whereas with companies that’re…the billion-dollar companies out there… They have big feet, and it takes a lot longer for them to get them moving. Of course, the one danger of that for a small company is that companies with big feet tend to leave a lot of flattened toes. So…
Malamud: No doubt about it.
Vance: But I think it is clear that if you look at in particular the PC companies, certainly Microsoft could’ve come in a long time ago and done their own TCP/IP or bought one of the smaller companies and started selling their product. But I think that in particular in the case of Microsoft, they’ve recognized the advantage of having these small innovative companies out there providing really good software. So, I actually rather applaud that in certain companies.
Malamud: You spend a lot of your time when you’re not actually doing your job going over to Interop and helping set up the show network there. In fact you’re part of the core team of this fanatical band of volunteers that puts this huge network together. Why do you do that?
Vance: That’s a really good question, and I’m not sure that I can give you a particularly coherent answer on it. I’ve been doing it myself since ’89. Which I guess was the second or third Interop. And we’ve had a couple other people at TGV who’re working on the current show, in fact, and who’ve been working on Interop since then.
I guess what keeps bringing me back is the collection of people. It’s not so much that we’re doing it out of the goodness of our heart, but it is fun working with technology that…for example I personally don’t get the chance to work with much here at TGV.
Malamud: Is this a particularly sophisticated network that you’re putting together, or particularly large-scale, is that why you do it?
Vance: I guess that’s partly it. I used to work at a large land grant institution, as we used to call it, a big university in Texas. Which had a very large network. Seventy buildings on a broadband network. But the thing that’s most impressive about Interop is the fact that the network is staged over the course of two to three weeks, and then when we actually move into the facility and put it up, it takes about twelve hours. And we end up mobilizing anywhere from a hundred to a hundred and fifty people, with a core group of about twelve to fifteen people. And basically there’s this incredibly monumental effort to put together a network that’s used basically for three days. I guess there’s some sort of…a sort of morbid fascination at this. It’s always somewhat disheartening when you actually sit down and realize that you just put a significant portion of your life…well, perhaps in terms of energy output, to put together something that three days later you’re going to rip out of the ceiling and…perhaps not throw away, but spool up and store away for later. There’s something kinda strange about it.
But I guess what keeps bringing me back, and the people at TGV, is the collection of people that Interop has put together for their NOC team, which is what they call this core group. They’re people from a variety of different companies, from large system manufacturers, from consultants, from software companies like ourselves. And the collection of people is…is truly amazing. There are software engineers, there are hardware engineers. There are cable pullers. And it’s a really great group of people to work with.
Malamud: It sounds like fun, as I’ve learned just doing this a couple times just on the weekends.
I want to turn back to software, and protocols, and things on the Internet. You work in a company that uses VMS a lot, and as you know most of the Internet is somewhat Unix-centric. Do you see that Unix-centric approach hurting the Internet? Is there something that can be done to change that? Are the protocols…too Unix‑y?
Vance: Well Unix is clearly the most predominant operating system in use on the Internet, so it’s hard to say one way or the other whether it’s hurting the Internet. It certainly causes us grief every now and then, because there’s a lot of public domain software written that can grok Unix quite well, but when it comes to anything else the translation just doesn’t work so well. We certainly find ourselves in the minority in terms of supporting VMS. At times we find ourselves… I don’t want to say we’re the subject of abuse, but that’s what it borders on. VMS is looked at at times with scorn by many Unix advocates out there. But our whole goal is to be able to integrate VMS in with, in particular, Unix, to provide all the normal services and functionality that will allow the two to communicate bidirectionally, such that a person on a Unix system would never even be able to tell that he’s talking with a VMS system.
Malamud: And you’re able to do that.
Vance: We get pretty close.
Malamud: Pretty close. And do you think someone with Windows NT, or OS/2 or some of these other operating systems—the Macintosh—is gonna be the same thing?
Vance: I don’t think so. There are a number of common concepts between VMS and Unix that you don’t really get with things like NT and the Macintosh. Yeah, they all pretty much have hierarchical filesystems, but in terms of a multitasking operating system, in terms of the concept of what a user is, what a file is, in the Macintosh and Windows NT world, it’s very different. It’s… In fact I think people are going to discover as NT becomes more and more popular, and people in particular start looking at NT as a server platform, they’re going to realize that it’s going to take a new style of thinking to adapt servers to the NT environment. Because it doesn’t have the same concept of a user that you get with VMS and Unix.
Malamud: Your software is often used for an application which has puzzled me a bit. And that is creating DECnet tunnels. You encapsulate DECnet packets inside of TCP/IP packets, move ’em over a TCP/IP network to some other place, decapsulate ’em, turn ’em back into a DECnet packet. Um, why would anybody do this? Why don’t you just run multiprotocol router, for example, or do away with one of the networks?
Vance: Well, it is kind of interesting. We first did DECnet over IP largely on…I don’t know if I’d necessarily say it was a dare, but someone told Ken Adelman, one of our founders, that it couldn’t be done. That you could not do DECnet over IP tunneling. Ken of course…really really hates being told that he can’t do something. So he set out to prove the guy wrong. He developed the software— As it turns out it’s been very useful in the Internet, in particular by NASA. NASA runs a very large DECnet network, formerly called SPAN, the Space Physics Analysis Network, currently I believe called NSI/DECnet.
And basically these guys need to be able to provide DECnet to some of the far reaches of the Internet. Well, it’s not really…practical in many cases to establish a dedicated DECnet line from one of the NASA hub sites to a small college somewhere, or to a research facility. And when you look at how these folks are connected anyway, they’ve already got an Internet link. But the Internet itself of course doesn’t run DECnet. So being able to tunnel DECnet over the Internet backbone to be able to reach into some of these smaller sites saves you a tremendous amount of money, and extra line costs in terms of potentially having to swap out one router vendor for another if the existing router vendor doesn’t support DECnet. There are quite a number of economies of scale that you can achieve by using this existing infrastructure. And NASA has definitely been one of our largest supporters in that area.
Now as it turns out, one of the other big uses that we’ve seen for running DECnet over IP are in certain campus environments, where they do not allow anything else on their campus backbone aside from IP.
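The encapsulation Vance describes can be sketched in a few lines of Python. The length-prefixed framing below is invented purely for illustration; MultiNet’s actual DECnet-over-IP wire format is not specified in this conversation.

```python
import struct

def encapsulate(decnet_frame: bytes) -> bytes:
    # Length-prefix the raw DECnet frame; the result rides as the payload
    # of an ordinary IP datagram between the two tunnel end systems.
    return struct.pack("!H", len(decnet_frame)) + decnet_frame

def decapsulate(tunnel_payload: bytes) -> bytes:
    # The far end strips the prefix and hands the frame back to DECnet.
    (length,) = struct.unpack("!H", tunnel_payload[:2])
    return tunnel_payload[2:2 + length]

frame = b"\x02\x00\x04\x01hello"  # stand-in bytes, not a real DECnet frame
assert decapsulate(encapsulate(frame)) == frame
```

The IP backbone only ever sees ordinary IP datagrams; the DECnet frame travels through as opaque payload and is reinjected into DECnet on the far side.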
Malamud: You’re listening to Geek of the Week. Support for this program is provided by Sun Microsystems. Sun Microsystems, Open Systems for Open Minds. Additional support for Geek of the Week comes from O’Reilly & Associates, publishers of books that help people get more out of computers.
Malamud: Now, let me ask you this. What if I had multiprotocol routers on the entire path between these two endpoints. Would it still make sense to do a tunnel, or should I just activate the multiprotocol routing capability?
Vance: It doesn’t really make much sense at that point. You do end up taking a hit in performance, because of course you have to have these two end systems running MultiNet to be able to do the encapsulation and decapsulation. And if you’ve got a dedicated system that you can provide for that, performance isn’t really so much of an issue. But it’s going to reduce the bandwidth you’re going to have, it’s going to increase the latency. So in that particular case, you’re really better off just enabling DECnet on the multiprotocol routers. The issue is then that you have to worry about managing two protocols on that platform instead of just one. But since you know you’re gonna have to manage DECnet on the VMS system anyway…yeah, adding an extra circuit in usually isn’t a big deal.
Malamud: So running the TCP/IP-only backbone might actually make sense in some environments as far as the network management load and things of that sort.
Vance: In some environments, yes. As I mentioned, there are a couple of universities I’ve run into that do not allow anything on their campus backbone aside from IP, because they feel like they can manage IP. They understand it. And they actually tie together all of their DECnet hosts by having one DECnet host in each department running MultiNet, with a DECnet over IP link going into a central system at the main computer facility. At least the last time that I talked to them, it was running something on the order of sixteen different DECnet over IP tunnels to these departments.
Malamud: Tunneling and encapsulation is used when services are not deployed through that entire net. It’s a way of jumping over an infrastructure. That’s currently being used to support new technologies such as multicasting. Is this a generic technique we should be using in designing our protocols?
Vance: I fully think so. The nice thing about encapsulation like this, tunneling, is that you can employ a new protocol, or deploy a new protocol, in an area where you don’t have the native network support for it. For example if you look at some of the work that’s going on with the IP version 7, there’s one particular proposal called SIP, the Simple Internet Protocol. There is testing going on on the Internet today between end system implementations of SIP, even though there is no end-to-end SIP path between the two testing sites. They’re using a scheme known as IPAE, IP Address Encapsulation, that allows you to establish a SIP connection, “connection,” between two SIP hosts, over an IP backbone.
So the nice thing about this is it allows you to do rapid prototyping because you don’t have to worry about replacing your entire network infrastructure. And it can end up saving you a lot of time and money in terms of development.
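The idea of carrying a new protocol’s packets inside ordinary IPv4, so the existing backbone can route them, can be sketched as follows. The outer header fields follow the real IPv4 layout, but the protocol number and the inner packet bytes are placeholders, not the actual IPAE encoding.

```python
import struct

PROTO_TUNNEL = 99  # hypothetical protocol number for the tunneled payload

def ipv4_header(src: bytes, dst: bytes, proto: int, payload_len: int) -> bytes:
    # Minimal 20-byte IPv4 header; checksum left zero for brevity.
    ver_ihl = (4 << 4) | 5
    total_len = 20 + payload_len
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, total_len, 0, 0, 64, proto, 0, src, dst)

# Stand-in bytes for a SIP packet (contents invented for illustration).
sip_packet = b"\x60" + b"\x00" * 15

outer = ipv4_header(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]),
                    PROTO_TUNNEL, len(sip_packet)) + sip_packet

# The backbone routes on the outer header; the inner packet rides along intact.
assert outer[9] == PROTO_TUNNEL
assert outer[20:] == sip_packet
```

Only the two testing sites need to understand the inner protocol; every router in between forwards on the outer IPv4 header as usual.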
Malamud: You think this is how the Internet is gonna do its transition to the to the next IP, is by transitioning parts of the net and then using tunnels on the parts that haven’t made it? Or is there a more sophisticated scheme we can use?
Vance: Well it’s probably going to depend somewhat on what proposal is chosen. If you look at the two current popular proposals, one is SIP, which is pretty much like straight IP only with a 64-bit address—a fair number of other changes, but it’s not a radical departure.
The other, however, is TUBA, which is TCP and UDP over the OSI Connectionless Networking Protocol stack. That’s a more radical departure. And while there is quite a bit of OSI work, OSI traffic flowing around on the Internet over a number of multiprotocol backbones, when it comes to transitioning, you really are looking at a transition. It’s not a… There’s not really as much coexistence potentially possible as you have with say SIP and IPAE.
But I think in both cases you are looking at a situation where you’re gonna do a lot of encapsulation, a lot of tunneling, because you are going to have certain areas where you’ve got two hosts that want to communicate using CLNP, and there’s only an IP backbone between them. And certainly as is the case with SIP, it’s planned all along that you’ll have SIP tunneled over IP, and IP tunneled over SIP, and they’re looking at even more general things like tunneling IP over IP.
Malamud: Why would you do that?
Vance: It has a number of interesting benefits, one of which is in terms of supporting mobile hosts. You can have your host assigned a unique IP address, which you keep with you no matter where you go. And if you ever have to stop and plug in somewhere, you get on a local network with a local IP address, but you then establish an IP over IP link into your home facility, your central facility, giving you effectively a constant address no matter where you go.
The other nice thing it provides you is a situation where you might have two different subnets of an IP network number separated by a different network number. IP in general doesn’t deal well with partitioned subnets, but if you can establish an IP over IP link to tie the two subnets together, you’ve basically healed your subnet gap. And it gives you an interesting and very flexible way of handling multiple-path routing, where…let’s say you’ve got two paths between two sites, but one of the paths is a different network number. Well, if the primary path goes down, you can fail over using the backup path, as long as you have this IP over IP tunnel to be able to maintain connectivity between the two sides.
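A toy illustration of the failover case Vance describes: prefer the direct path, and shift traffic onto an IP-over-IP tunnel when the primary link fails. The route entries and network numbers here are invented.

```python
# Two candidate paths between the same pair of sites; the backup crosses a
# different network number, so it only works as an IP-over-IP tunnel.
routes = {
    "primary": {"up": True, "via": "direct link on 192.168.1.0/24"},
    "backup":  {"up": True, "via": "IP-over-IP tunnel across 10.0.0.0/8"},
}

def pick_path() -> str:
    """Forwarding decision: use the primary path while it is up."""
    if routes["primary"]["up"]:
        return routes["primary"]["via"]
    return routes["backup"]["via"]

assert pick_path().startswith("direct")
routes["primary"]["up"] = False   # the primary path fails...
assert "tunnel" in pick_path()    # ...and traffic shifts onto the tunnel
```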
Malamud: Is this an approach that’s gonna scale? What’s gonna happen when every car is a mobile host and the car starts talking to traffic control and there’s a million of those?
Vance: I don’t think anybody really has a good concept of what’s going to happen when every car, and every toaster, and every television has its own IP address, or its own network address. It’s sort of one of those things that’s not just an order of magnitude problem, it’s a several orders of magnitude problem over what people are used to dealing with now. It’s certainly the case that the existing Internet infrastructure could not even begin to handle that big of a jump. Either in terms of address allocation or in terms of handling the routing. The Internet right now is getting precariously close to an edge from which it’s going to have a hard time backing off. Because there are too many network numbers in use, and the existing routing infrastructure is having a hard time dealing with it. I really do think it’s going to take a couple of leaps in technology before we can look at what a number of people have dubbed “ToasterNet,” which is you know, pretty much every electromechanical device having its own addressable network…basically being an addressable network entity.
Malamud: Will that be the Internet that we know today, evolved of course, or will it be a different thing? Will the Internet just go away and become history.
Vance: That’s another really tough one. I honestly cannot tell you. There are times when I think the Internet will get its act together and will have a long and fruitful history, and there are other times when I think that the Internet is teetering precariously on the brink of disaster. I think that if any of the common carriers ever really get their act together and can come up with a low-cost internetwork service, the Internet as we know it today may indeed disappear. I’m not convinced that the common carriers quite comprehend how to do something like that. Sprint certainly has gotten into the business now. MCI has gotten into it to a certain extent. But none of them have really taken a big plunge…in comparison to providing even, say, X.25 service, which a number of the carriers do.
So I— I really don’t know. It’s going to be interesting to see what happens in the near future because I think that within the next two to three years we’ll get a good handle over where the Internet is going to be going for the next ten to fifteen years.
[Here the recording inserts a clipped copy of the earlier Ken Adelman anecdote, which has been omitted as an error. But it’s unclear if there may have been other audio that should have appeared instead.]
Malamud: This is Geek of the Week, featuring interviews with prominent members of the technical community. Geek of the Week is brought to you by O’Reilly & Associates, and by Sun Microsystems.
Malamud: You’ve been working on an interesting little research project with some other folks known as IP over E‑mail. Can you tell me about this effort?
Vance: Well, unfortunately it’s never quite gotten as far as we’ve wanted. We’ve actually got the protocol designed. But it sort of came out of getting some email from somebody in the Soviet Union wanting information about our product. And I started talking with some friends of mine down at a company called Innosoft International. They make a popular VMS email gateway package called PMDF. And we were thinking gee, it’s certainly nice that these guys have email but gosh, I wonder how they could actually ever end up getting directly on the Internet. Well we figured you know, if they’ve got email access, what we could do is come up with a framing scheme whereby you could layer IP packets into an SMTP mail message. I mean if you think about it, electronic mail really is just a datagram protocol. The datagrams tend to be a little bit larger, and they can be of variable size, but you can do pretty much anything with them that you want. And if you consider how email functions in its store-and-forward traversal of the Internet, it really is a lot like an IP datagram flowing through the Internet. Although the nice thing is that in general you’ve got a little bit more likelihood that your SMTP mail message is going to get all the way through than you do of, say, an IP datagram getting through.
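The framing idea can be sketched with Python’s standard email machinery: encode an IP datagram as text, carry it in an SMTP-style message body, and decode it at the far end. The gateway address is hypothetical, and this is not the actual protocol TGV and Innosoft designed.

```python
import base64
from email.message import EmailMessage
from email import message_from_string

def packet_to_mail(packet: bytes) -> str:
    """Frame one IP datagram as the body of an SMTP-deliverable message."""
    msg = EmailMessage()
    msg["To"] = "ip-tunnel@example.su"      # hypothetical gateway mailbox
    msg["Subject"] = "IP-over-Email datagram"
    msg.set_content(base64.b64encode(packet).decode("ascii"))
    return msg.as_string()

def mail_to_packet(raw: str) -> bytes:
    """Recover the IP datagram from a received message."""
    msg = message_from_string(raw)
    return base64.b64decode(msg.get_payload())

packet = b"\x45\x00\x00\x1c" + b"payload bytes"  # stand-in IP datagram
assert mail_to_packet(packet_to_mail(packet)) == packet
```

Each message then travels hop by hop through ordinary mail gateways, exactly the store-and-forward behavior Vance compares to datagram routing.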
The real challenge, though, is that if you’re going to support something like IP over email, you’ve gotta make sure that say, your round-trip estimation, the RTT estimation, can handle delays of say…a day. Two days, or three days. I—
Malamud: So telnet as an application may not be as snappy as you’re used to it.
Vance: No, telnet definitely is going to be a bit slower. I mean if you think about how telnet normally works, you type a few characters and in general you end up getting one character per packet. Well if you think about that in terms of one character per email message…all of a sudden you start realizing that the headers of your email message start dwarfing the actual data inside by oh, I’d say factors of maybe five or six hundred in the worst cases. Sort of depends on how many email gateways you end up having to hop through. But yeah, the round-trip estimation, I don’t know if many Berkeley-derived Unix systems could actually handle round-trip time characterizations of two or three days.
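The overhead Vance estimates can be made concrete with some back-of-the-envelope numbers. The sizes below are invented but plausible; only the ratio matters.

```python
# Rough per-message header cost for a one-keystroke telnet "packet" in email.
base_headers = 300    # From:, To:, Subject:, Date:, Message-ID:, and friends
received_line = 120   # one Received: header added per mail gateway hop
data_bytes = 1        # a single telnet keystroke

for hops in (1, 4, 8):
    overhead = (base_headers + hops * received_line) // data_bytes
    print(f"{hops} gateway(s): headers outweigh the keystroke by ~{overhead}x")
```

Even with modest assumptions the header-to-data ratio lands in the hundreds, which is the ballpark Vance quotes.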
The other thing you have to worry about is that most Berkeley-based Unix systems have a TCP connection timeout of thirty-seven and a half seconds. All of those would certainly have to be fixed. I mean if you imagine having to send a TCP SYN in an email message, wait for the SYN-ACK coming back, and then sending the ACK to finally open the connection, it may take anywhere from a few minutes to several hours just to open a TCP connection. I don’t know of many TCPs that are really gonna deal with that particularly well either. We actually looked at the kernel hacks that would be necessary to do that in MultiNet, and we’ve talked to Innosoft about handling it. And it’s one of those things where, as soon as we get in a particularly silly mood…we’ll definitely go ahead and add it in. It’s actually something that wouldn’t really be all that difficult. It’s just that I guess we’ve… Well, I guess we’ve sort of blown ourselves out on fun and frivolous activities for a while. As you know, we’ve done a number of other entertaining things, and—
Malamud: You should describe some of those.
Vance: Well back about three years ago, I met a gentleman by the name of Simon Hackett, who at the time was working at the University of Adelaide down in Adelaide, South Australia. Simon and I met at the Australian Network Shop, which was a meeting of academic and research types from all over Australia, to try and put together a national network down there. You know, these guys got started on this process in ’89, which allowed them to do something that unfortunately we in the US did not have the luxury of doing. It allowed them to learn from the mistakes made by…
Malamud: They were able to do it right.
Vance: Exactly, to be able to plan on how to do it right from the start. And AARNET, the Australian Academic Research Network, has been to my mind a shining example of how to do a national backbone.
But Simon and I ended up meeting. He was doing VMS and networking support down at the University of Adelaide. And he and I met at this conference and ended up chatting quite a bit. And I discovered that Simon has quite a knack for electronic tinkering, and interfacing strange things to computers. For example he worked with some friends of his down there on doing a computer-controlled…in fact an Apple II-controlled multimedia system. You control the CD player, a slide projector. I believe also a film projector, so that he was actually running this multimedia presentation all from an Apple II which he had programmed himself.
And he and I got to discussing how you could make things like that network-controlled. And what sort of followed from that was the first SNMP-controlled stereo system. We used standard Pioneer stereo equipment, which had this nice little remote control jack in the back. The initial pass at the controller was a custom little 68000 controller box, some of which we still have upstairs somewhere around here. It had 64K of memory in it, so we had the fun of relearning how to program without the benefit of virtual memory. We wrote a simple TCP/IP/UDP stack, which actually fit in about 6K, added an SNMP agent, and added a simple command parser. And we ended up building something that was an SNMP controller for a stereo system. You could control the volume, you could select what radio station you wanted to listen to, you could change the input to the CD player. You could play a CD. You could cause the CD disc pack to be ejected. You could pop a new one in. It was really a pretty sophisticated little box.
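The controller ran a real SNMP agent; the sketch below only captures the flavor of the idea, mapping managed variables to stereo controls. The variable names and values are invented, not the actual MIB the TGV box used.

```python
# Invented variable names standing in for MIB objects on the stereo agent.
stereo = {"volume": 20, "source": "tuner", "station": 88.5}

def snmp_set(var: str, value):
    """Handle a SET request: refuse unknown variables, apply known ones."""
    if var not in stereo:
        raise KeyError(f"noSuchName: {var}")
    stereo[var] = value
    return stereo[var]

def snmp_get(var: str):
    """Handle a GET request for a managed variable."""
    return stereo[var]

assert snmp_set("volume", 35) == 35     # turn it up
assert snmp_set("source", "cd") == "cd" # switch input to the CD player
assert snmp_get("station") == 88.5
```

In the real box these requests arrived as SNMP PDUs over the 6K UDP/IP stack and were translated into pulses on the stereo’s remote-control jack.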
Malamud: And it wasn’t a general-purpose operating system. All it ran was the network stack and some upper layer—
Vance: It was strictly the network stack and a few applications. It was…not multi-tasking, it was single-threaded. But…
Malamud: But it was small and cheap.
Vance: It was small and cheap, and if you look at how much time we put into it, most of the work for this was done in about a month to two months before Interop in ’90. It was a pretty amazing little project. It managed to garner both TGV and Simon quite a bit of both notice and I suppose notoriety at that particular Interop.
Now as it turns out the following year, we did a slightly different spin on the network audio stuff. Instead of an SNMP-controlled stereo system, we actually started working on doing audio over the network. Simon developed a protocol called MMDS, the MultiMedia Data Stream. MultiMedia Data Switch, excuse me. And in fact that year we actually had one receiver with a PC controller. Simon in particular decided that the custom controller concept was a little expensive and rather unwieldy to deal with. So we decided that perhaps a PC with a general-purpose operating system running some custom software might be a little bit easier to deal with, and certainly an easier development platform to work with. Using a PC as a frontend, some 8‑bit audio cards, and just a standard Ethernet card with packet driver support, he was actually able to develop the hardware and software necessary to take an audio source and just pump it out over the Ethernet using UDP datagrams.
We in turn developed a switch that would allow you to be able to act as effectively a phone switch for people being able to call each other up using these PCs. Or be able to distribute— Neither of us really were worried about supporting multicast at the time, so if you wanted to have multiple people listening to a given audio source—a radio station, for example, you had to have the audio streams redirected through the switch to multiple listeners. And I guess the crowning achievement of that was being able to actually play radio stations from Australia on the Interop show floor.
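The streaming-and-fan-out arrangement can be sketched as follows. The chunk size and sequence-number header are invented for illustration and are not MMDS’s actual framing; the fan-out function stands in for the switch copying one stream to several unicast listeners in the absence of multicast.

```python
import struct

CHUNK = 512  # bytes of 8-bit audio per datagram; size chosen arbitrarily here

def to_datagrams(audio: bytes):
    """Split an audio buffer into sequence-numbered UDP-sized payloads."""
    for seq, off in enumerate(range(0, len(audio), CHUNK)):
        yield struct.pack("!I", seq) + audio[off:off + CHUNK]

def fan_out(datagram: bytes, listeners):
    # Without multicast, the switch copies each datagram to every listener.
    return {addr: datagram for addr in listeners}

audio = bytes(range(256)) * 5             # 1280 bytes of fake 8-bit samples
grams = list(to_datagrams(audio))
assert len(grams) == 3                    # 512 + 512 + 256 bytes
assert b"".join(g[4:] for g in grams) == audio
copies = fan_out(grams[0], ["listener-a", "listener-b"])
assert set(copies) == {"listener-a", "listener-b"}
```

With IP multicast, which Vance notes they did not use, the per-listener copying would move from the switch into the network itself.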
And I guess the most amazing thing about the whole process was when we actually tuned in an Australian radio station and we discovered that they listen to the same songs that we do, they have many of the same format radio stations that we do. And the most amazing thing is that their DJs are just as obnoxious. They just talk with Australian accents. I mean you could easily imagine an Australian DJ doing the old obnoxious— Well, I’m from the South, so we used to have monster truck shows down there. And I could just easily imagine one of the Australian DJs saying, [adopts a thick Australian accent] “Friday! Saturday! Sunday! Come down to the big monster truck stomp!”
They had quite a few commercials that were in exactly that vein.
Malamud: Would it make sense to take radio stations from around the world and put them all on the Internet so we could tune in? Would the Internet be able to handle that? Do we have the infrastructure for something like that?
Vance: I think we will soon. Certainly over the NSFNET backbone, if you look at the bandwidth they have nowadays. They’re looking at T3 now—or they have T3 now, excuse me. They’re looking at gigabit speeds in the near future. It’s certainly possible. I think the real issue is… Well, one is sort of the acceptable use issue. Is this really an acceptable or even a good use of the Internet? Certainly it would be an interesting cultural experiment to provide access to radio stations. But…I mean gee, where do you draw the limit? The interesting thing about the Internet is that since there’s no centralized management per se, there’s nobody saying when to stop doing something. And we certainly know about doing too much of a good thing… If you have eighty or ninety radio sources scattered around the Internet, eventually you’re going to end up sucking down the entire backbone, potentially, doing strictly audio stuff. And most people, when they’re paying for their Internet access, are paying to actually be able to do scientific and research work.
I think however in a controlled fashion it could certainly be an interesting cultural experiment. And I think—at least in my mind that’s also one of the one of the purposes of the Internet. There are a number of technological issues that need to be dealt with. You really do want to have widespread multicast support. Because you don’t really want to have to house sixteen or seventeen or twenty or thirty different streams running in parallel to different destinations for every input stream. Yeah, I think as multicast IP becomes a little bit more widespread that won’t be so much of a problem.
But yeah, I would actually find it pretty fascinating. I would love to be able to tune into the BBC, or to be able to dial around and pick an Australian radio station to listen to at times. Or even, on perhaps a more exotic note, listen to some stations in Japan. Or Italy. I think there are a number of interesting possibilities.
Malamud: Well, we’ve been talking to Stuart Vance. This is Carl Malamud and you’ve been listening to Geek of the Week.
Malamud: This has been Geek of the Week, brought to you by Sun Microsystems, and by O’Reilly & Associates. To purchase an audio cassette or audio CD of this program, send electronic mail to radio@ora.com.
Internet Talk Radio, the medium is the message.