Carl Malamud: Internet Talk Radio, flame of the Internet. 


Malamud: This is Geek of the Week, and we’re talking to Stephen Casner, who’s a project leader for multimedia conferencing at USC’s Information Sciences Institute, ISI. Welcome to Geek of the Week, Steve.

Steve Casner: Welcome, Carl.

Malamud: You’re one of the architects of the multicast backbone. Maybe you could give us a brief introduction to multicasting, and then tell us why we need an MBone, a multicast backbone.

Casner: Well, multicasting is the distribution of a traffic source to a number of destinations, with replication at branch points in the network, rather than having to make the traffic source send a separate copy to each destination, which would require more bandwidth than is usually available. So, the multicast protocol was actually defined a few years ago and hasn’t really been deployed very quickly. It’s taken a while to get people to implement it in end systems, and even longer to get it put into routers so that we can achieve this multicast, multi-destination delivery. The advent of the IETF multicasts of audio and video has sort of served as the impetus for multicast to be implemented and deployed.
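
To put rough numbers on the saving Casner describes, here is a back-of-the-envelope sketch in Python; the stream rate and audience size are illustrative assumptions, not figures from the interview.

    # Back-of-the-envelope comparison; rate and audience size are assumptions.
    stream_kbps = 64        # one PCM-quality audio stream
    receivers = 500

    unicast_load = stream_kbps * receivers   # source must send one copy each
    multicast_load = stream_kbps             # one copy; branch points replicate

    print(f"unicast:   {unicast_load} kbit/s on the source's first link")
    print(f"multicast: {multicast_load} kbit/s on the source's first link")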

Malamud: So if we don’t have multicasting in the routers—so this is a variant of IP that supports multicasting. If we don’t have that, tell us how the multicast backbone overcomes that problem, how it builds multicasting on top of the—

Casner: Ah. Essentially what we’ve done is built a virtual network on top of the physical network, and the links of that virtual network are called tunnels, because of the notion that the multicast packets are tunneling through a sequence of IP routers—IP unicast routers. The mechanism for doing that tunneling is simply to take the multicast packet and stick it in a unicast IP packet that’s addressed to the other end of the tunnel, and essentially then we’re using IP as a link-layer protocol for the higher-layer multicast transmissions.

The nodes of that network are typically workstations, although there’s now an implementation in one commercial router. The nodes are workstations that receive transmissions either from a local Ethernet or from a link from another router through a tunnel, and then replicate the packets on each of the outgoing tunnels toward the destinations of the tree.
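
As a sketch of the encapsulation Casner describes, assuming IP-in-IP encapsulation (IP protocol number 4) and leaving out checksums, options, and fragmentation; real MBone tunnels were managed by multicast routing daemons, and this toy shows only the wrapping step.

    import struct

    def encapsulate(multicast_packet: bytes, src: bytes, dst: bytes) -> bytes:
        """Wrap a complete multicast IP packet in a unicast IPv4 header.

        src and dst are the 4-byte addresses of the tunnel endpoints. The
        header checksum is left zero for brevity; a real sender computes it.
        """
        version_ihl = (4 << 4) | 5            # IPv4, 20-byte header, no options
        total_length = 20 + len(multicast_packet)
        outer = struct.pack(
            "!BBHHHBBH4s4s",
            version_ihl, 0, total_length,
            0, 0,                             # identification, flags/fragment
            64,                               # TTL of the outer unicast packet
            4,                                # protocol 4: IP-in-IP
            0,                                # checksum placeholder
            src, dst,
        )
        # IP acting as a link layer: the whole multicast packet is the payload.
        return outer + multicast_packet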

Malamud: Now isn’t all this encapsulation and tunneling inefficient?

Casner: It does cost the length of an IP header. But in fact most of the cost that we experience in this forwarding is the cost of processing a packet, not really so much the number of bytes in the packet. There is an efficiency question in that audio packets in particular tend to be small, so that they represent a small amount of time, and that means that adding on the length of an IP header is more than a trivial addition. Still, it’s not so much that it makes this impractical.
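
A quick worked example of that overhead, with an assumed codec and packetization interval:

    # Illustrative numbers: a 20 ms packet of 64 kbit/s PCM audio.
    payload_bytes = 64_000 // 8 * 20 // 1000      # 160 bytes of audio
    tunnel_header = 20                            # the extra unicast IP header

    overhead = tunnel_header / (payload_bytes + tunnel_header)
    print(f"{payload_bytes} B of audio, {overhead:.0%} added by the tunnel header")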

Malamud: So the typical application of this MBone has been audio and video.

Casner: Primarily, but not exclusively. There are also still images, which are distinct from video in that they’re sent in a way where retransmission is possible, to construct a full image if some packets are lost, and then produce the whole image once you have a complete set. There has also been some real-time visualization data. The Jason project had an undersea vehicle crawling around, and you could run a visualization program getting real-time information from the tracking of that vehicle, and it would draw a picture on a map showing where it was going. It didn’t go very fast, because it was under the ocean.

Malamud: So the basic technology is multicasting, and then we have applications sitting on top of that like video and audio. When we think of the Internet, we think of a datagram service with no guarantees. How are we able to run an isochronous service like audio over a packet network like the Internet?

Casner: For one thing, the applications…we’re really not generating an isochronous service. We’re not really delivering an isochronous service over the network. The transmission of the audio and video depends on using good-quality networks, or networks that are lightly loaded. In the case where there is congestion, the audio and video are going to be disturbed, at this point in the implementation. But rather than assume that the end nodes must get every bit of data at exactly the right time to play it, we’re also making the end applications more adaptable. In the VAT audio program, for example, produced by Lawrence Berkeley Laboratory—Van Jacobson—the adaptation algorithm is fairly complex, to be able to adjust to the delay variances that are seen and still play the packets out continuously.

Malamud: What are some of the techniques used to adjust for this variance and this delay in the packets?

Casner: Essentially the key is to build up just the right amount of delay, so that you accommodate in that buffering delay however much variance you have in arrival times. It’s easy to put in a plenty large delay, and then all of the packets that aren’t actually dropped will arrive within that large delay. But taking the simple approach of just making the delay large is not sufficient, because for interactive conversation, as you know, it’s very important to keep the end-to-end delay as low as possible.
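
A minimal sketch of one common adaptive-playout approach; this is not VAT’s actual algorithm, just the textbook idea of tracking a smoothed delay and its deviation and padding the playout point accordingly.

    class PlayoutEstimator:
        """Toy adaptive playout delay (a sketch, not VAT's algorithm).

        delay tracks the smoothed transit time, jitter its mean deviation;
        the playout point is padded by a few deviations so nearly all
        packets arrive before they are due to be played.
        """
        def __init__(self, gain=0.125, pad=4.0):
            self.gain, self.pad = gain, pad
            self.delay = 0.0
            self.jitter = 0.0

        def playout_time(self, send_ts, recv_ts):
            transit = recv_ts - send_ts          # relative clocks suffice
            self.delay += self.gain * (transit - self.delay)
            self.jitter += self.gain * (abs(transit - self.delay) - self.jitter)
            return send_ts + self.delay + self.pad * self.jitter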

Malamud: Is the delay low enough that we can have a comfortable phone conversation over the Internet?

Casner: Sure, over reasonable parts of the Internet. Across DARTnet, for example, between California and Boston, we have round-trip delays, when you measure with pings, of seventy-five milliseconds or so. And the additional delay that’s added on for reconstitution of the audio and video is another forty milliseconds or so. So we may have 100, 150, maybe 200 milliseconds for typical paths across good portions of the network. That is, portions where you’re not seeing a lot of congestion delay.

Malamud: Now, the MBone goes to places where there is congestion delay, where there are highly loaded links or very low bandwidth. How do we deal with the fact that there is a core of high connectivity and a periphery that also wants to participate?

Casner: There are a couple of methods that are used. I mean, there’s nothing we can do to reduce the delay if there’s a path that has a lot of congestion, and in order to manage that congestion there’s a lot of delay that has to be inserted at the receiver. You have to be able to tolerate that in your conversation. In fact there are some paths of the MBone that are over satellite links, and that’s going to put in another 250 milliseconds just for propagation delay. People have become fairly tolerant of that in learning to speak over satellite telephone links. And since there’s no charge for this, I guess maybe that increases their level of tolerance.


Casner: The means to handle the variation in capabilities of end systems is to use different coding rates. For example, there are some sites that are connected by links of fairly low bandwidth, and we can use more compression of the audio or video data so that it will fit within the available bandwidth on that link. Bandwidth and delay are somewhat independent, but that is a method to get to places which are somewhat disadvantaged.
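
A small illustration of that trade-off; the codec names and rates below are assumptions, roughly approximating audio codecs of the period, not a list taken from the interview.

    # Hypothetical table; rates roughly match audio codecs of the era (kbit/s).
    CODECS = [("PCM", 64), ("ADPCM", 32), ("GSM", 13), ("LPC", 5)]

    def pick_codec(link_kbps):
        """Return the highest-rate (best-quality) codec that fits the link."""
        for name, rate in CODECS:                # ordered best quality first
            if rate <= link_kbps:
                return name, rate
        return None

    print(pick_codec(56))     # ('ADPCM', 32): fits a 56 kbit/s line
    print(pick_codec(14.4))   # ('GSM', 13): fits a slow dial-up link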

Malamud: And I believe you use TTLs as a way of saying what goes where?

Casner: That’s right. The—

Malamud: Could you explain what a TTL is and how that applies in this particular…

Casner: Okay. TTL is Time To Live, although the way it’s usually used in the Internet, and in IP at least, it really refers more to hops. That is, in normal unicast IP delivery, Time To Live is how many routers you can go through. If the number of routers that you need to go through to reach a destination exceeds the Time To Live, then the packet is discarded.

Malamud: And the idea was to prevent routing loops.

Casner: Exactly. In the case of multicast, we don’t need it to prevent routing loops, because there are other mechanisms in the multicast routing that accomplish that. It’s used instead as a scoping mechanism: we can intentionally limit distribution by establishing thresholds at different points in the network, and an arriving packet must have a TTL greater than that threshold to pass that point. You may want to use these scoping limits for administrative control, that is, to define communities, and also for bandwidth management. It’s really a somewhat limited mechanism, though, to try to use for all of these purposes, and a better method for dealing with the bandwidth constraints is simply to prune traffic that would exceed your capabilities across a link and to allow through only certain channels of traffic which meet the bandwidth constraints.
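
A minimal sketch of the threshold check described here; the threshold values are illustrative conventions, not the ones actually configured on the MBone.

    # Toy scoping check; the threshold values here are only illustrative.
    THRESHOLDS = {"subnet": 1, "site": 32, "region": 64, "world": 128}

    def passes(ttl, threshold):
        """A multicast packet crosses a tunnel only if TTL exceeds its threshold."""
        return ttl > threshold

    print(passes(16, THRESHOLDS["site"]))     # False: confined to the site
    print(passes(127, THRESHOLDS["region"]))  # True: crosses regional tunnels
    print(passes(127, THRESHOLDS["world"]))   # False: kept off worldwide links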

Malamud: And the channels are defined by TTLs, or by some other mechanism?

Casner: The channels are defined by different multicast addresses. So we actually set up different groups of participants and sources that would operate at the lower bandwidth. The pruning mechanism has not been in the…in fact is not widely released in the MBone so far; however, over the summer it was developed at Xerox PARC and is in beta release now. It will be deployed soon, we expect. It’s been part of the multicast design all along but simply hadn’t been implemented.


Malamud: At the November IETF you were running a TV station. But you’ve also been working in previous IETFs to reengineer the multicast backbone to make it work well in these situations. Can you tell us a bit about how you choose where your tunnels go, and how you engineer an MBone to sit on top of the Internet?

Casner: Well, the engineering is a somewhat approximate process. It’s not as exact as engineering in some of the other phases of our business, but basically we try to match the topology of this virtual network to a reasonable subset of the physical network. So where there is a T1 line, for example, we try to have at most one tunnel cross that T1 line, so there’s at most one copy of the packets being distributed in the multicast tree. On the T3 backbone there’s enough capacity that we can afford to have more than one, and we may need to, in fact, because we don’t have nodes at all the possible branching spots, so we have to in some cases have multiple tunnels running across a given T3 link. But still, the goal is to put nodes wherever we need to have branch points and to have a single copy go across a link.
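
For flavor, a hypothetical tunnel line in the style of an mrouted configuration; the addresses and numbers are invented. The metric shapes the topology, the threshold is the TTL scoping discussed above, and the rate limit keeps the tunnel within its share of the physical line.

    # Hypothetical mrouted.conf-style tunnel entry; addresses are invented.
    tunnel 128.9.160.4 192.5.25.1 metric 1 threshold 64 rate_limit 500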

Malamud: How big was this multicast backbone at the November ’93 IETF?

Casner: The last check I did was about 640 subnets participating in the multicast backbone.

Malamud: Any idea how many people or computers were actually watching the IETF proceedings?

Casner: I haven’t taken a count yet; I intend to do that at the end of the IETF. The one previous, in Amsterdam, was 518. And for the first time that exceeded the number of physical attendees, the local attendees, which was 490-something. I don’t know that we will… I don’t know what the relative numbers will be at this meeting. There probably was an extra draw at the Amsterdam meeting, because many of the MBone nodes are in the United States, and that meeting was out of the United States for the first time.

Malamud: You run a TV station at the IETF. Tell us a little bit about IETF TV and how it works.

Casner: Oh, okay. First I should make a little correction there, because the TV station is now primarily run by the local host site, or has been for both the Amsterdam and Houston IETFs, and to a large extent for Columbus preceding that as well. I was largely involved in the first two or three of them, arranging for the video and audio equipment and workstations to actually generate the data and send it.

The process of setting up a multicast from the IETF really involves a number of components. There’s setting up the video production capability, which is perhaps one of the harder parts, given that the people who are involved are mostly computer geeks who know about the computer part and have access to resources for that part, but don’t know so much about the audio and video. And there’s coupling in with the audio system of the hotel. So there’s some learning that has to be done by the people who are running it, and also getting cooperation from the hotel folks.

But then we have to have workstations that can be wheeled around to the rooms where the broadcast will occur, and set up the cameras and everything, hook the audio and video into the workstation—sitting on a cart, typically—and have a network connection in that room that heads back off to the Internet. Since the IETF meetings have now grown a fairly substantial terminal-room appendage at each location, there’s usually networking available that we can tap into for the multicast.

Malamud: Are you finding that you had to learn things about TV production in order to do this?

Casner: Well, actually in college I did have an opportunity to do a little bit of student-level TV production. We have observed that certain kinds of cameras work better than others, and in fact it’s often the case that the more consumer-oriented cameras work better than fancy professional ones.

Malamud: Why is that?

Casner: I’m not really sure. It may be that we’ve just had bad luck in the instances where we had the professional cameras. It is a good idea to have cameras which have manual aperture and gain controls, so that when you have people moving into and out of the field of the overhead projector, for example, it doesn’t cause the whole image to increase and decrease in brightness, which is a problem for the video coding algorithm.

Malamud: What about audio? Do you mic the rooms yourself, or do you just feed off the hotel audio?

Casner: Generally we just use the hotel audio. The system that would be put in that room to provide amplification for the local participants is sufficient. We pick up the signal from their microphone and tap it into the computer for transmission, and similarly bring the signal in to the computer from the remote sites and then couple that into the amplifier which goes to the PA system in the room. It is a problem that we have more stringent requirements for micing the room than is necessary for just helping the local participants. And we do have trouble getting the participants in these working-group meetings to be conscientious about going to a microphone, given that usually all we have is one floor-stand microphone, and a lavalier microphone, say, for the person leading the talk.

Malamud: Are people able to effectively participate in the meetings from remote locations?

Casner: Yes, they are. And in fact we have a few interesting examples. John Curran was telling me that in one working-group meeting at the November IETF there was key participation from a remote participant who was not able to attend, and they managed to reach consensus because the multicast was there. That was the first one I’ve heard of at that level, but in previous meetings of my working group, the Audio/Video Transport Working Group, we’ve had good interactions where we have a person ask a question, get an answer, ask a new question. Genuinely interactive participation.

There have also usually been one or two questions from the field at the plenaries. We do have trouble encouraging people out at remote sites to participate. It is somewhat inhibiting to be at the end of a long wire, and that’s a problem. But I believe the potential is there for interactive participation.

Malamud: Is this the model for the next-century IETF meetings, in which we don’t get a hotel anymore and we just all sit in front of our computers?

Casner: Certainly that’s a possibility, once we have the necessary infrastructure. You know, it’s always nice to meet with your friends and shake hands. I don’t think it’ll be entirely replaced.

Malamud: Oh yeah, absolutely. I suppose we could all crack a beer at each of our locations after the—

Casner: Either that or develop the Beer Transfer Protocol.

Malamud: That’s right. [chuckles] You were talking about your work on audio/video transport protocols.

Casner: Right.

Malamud: Currently a lot of the transport work is done over UDP, or TCP, or existing transport protocols. What would a new transport protocol do for us?

Casner: Well, what we’re trying to provide is the sequencing and loss-measurement functions which are the task of TCP for reliable communication, say, but in a way that does not use retransmission like TCP does to achieve ultimate reliability. That is, we don’t need all of the bits to get through; it’s more important for them to get through in a timely manner. And if we just use UDP, it doesn’t provide the sequencing mechanisms, and it doesn’t provide, through sequencing, the means to detect losses of packets. So you need some additional pieces. The Real-time Transport Protocol that’s been developed by the Audio/Video Transport Working Group provides some of those functions in a way that we believe will be useful in common to a variety of applications for audio and video and other areas.
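
A toy sketch of that idea, not the actual RTP packet format: tag each packet with a sequence number and a media timestamp, and let the receiver detect losses from sequence gaps instead of asking for retransmission.

    import struct

    HEADER = struct.Struct("!HI")    # toy header: 16-bit seq, 32-bit timestamp

    def packetize(seq, timestamp, payload):
        return HEADER.pack(seq & 0xFFFF, timestamp & 0xFFFFFFFF) + payload

    class Receiver:
        """Detects loss from sequence gaps; never asks for retransmission."""
        def __init__(self):
            self.expected = None
            self.lost = 0

        def receive(self, packet):
            seq, ts = HEADER.unpack_from(packet)
            if self.expected is not None:
                self.lost += (seq - self.expected) & 0xFFFF   # packets skipped
            self.expected = (seq + 1) & 0xFFFF
            return ts, packet[HEADER.size:]      # timestamp drives playout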

Malamud: Steve Casner, if we go on MBone audio or MBone video, it’s essentially a room in which you can walk in and anybody can speak. And that’s a very useful model for people wanting to collaborate and talk to each other. For other functions, let’s say the President of the United States gets on and does an Internet town hall, you need more control. Can you tell us a little bit about some of the efforts going on to add a level of control on top of some of the audio and video conferencing tools that are out there?

Casner: So, part of what you’re talking about is security, say, to ensure that… Well, I guess that’s really [crosstalk] a different piece…

Malamud: Well, there’s security, there’s group formation, there’s a variety of issues that I know you and your colleague Eve Schooler have been dealing with down at ISI.

Casner: Right. The session-control issues address more the needs of private meetings, as opposed to the open broadcast—multicast is a better term—of IETF meetings, for example. But there’s another issue that you were driving at. With the President’s town hall, you may need to prevent people from talking back and interfering. Certainly one mechanism you can use is control at the point where you’re playing out. That’s easy: you can avoid playing back to the President whatever is out there on the network. But there’s the problem of potentially having people interfere with others’ reception of the intended signal [crosstalk] coming from the President.

Malamud: That’s right. So you mute everybody out there, but on the other hand you’re only muting ’em at your site—

Casner: Right.

Malamud: —and if somebody is out there talking, everyone else in the room is hearing them.

Casner: Right. Now, each receiver has the opportunity to mute any sources that they don’t want. There’s the danger that someone could overload your system by throwing data at it, so that you wouldn’t have enough processing power left to process the information that you wanted. But that’s really only a particular case of a much more general problem. The same is true in that I could pummel your computer with UDP packets having nothing to do with audio and video and prevent you from doing your file transfers, for example. So I don’t know that we have any specific mechanisms for avoiding that problem, but we do want to work on mechanisms for getting guarantees or assurances of good-quality transmission for the audio and video. That might involve reserving bandwidth, or verifying that your packets flow along a path where… Well, it may be important in some circumstances that your packets only flow through a network that you have control over, for example, though that’s a more unusual case.

The question of reserving resources for audio and video is an important one, and there’s a lot of activity currently underway in the IETF and in the research community to define methods to manage resources in the network. The main purpose of it for audio and video is to achieve low-delay transmission, and really to answer the question that you asked earlier about how we can provide this isochronous service through the network.

Malamud: And what are some of the strategies that people are looking at as possible solutions?

Casner: Well, I guess all of the strategies will involve having a function in the network nodes to give preference to some packets over others. That’s the basic idea. And so the variation in strategy is in how you install in those network nodes the necessary state to tell them how to distinguish which packets are which, and to tell them what their policies, what their priorities should be. One method for doing that, which has been around for some time, is the ST protocol, with a more rigid, fixed or hard-state mechanism. And a recent development is the RSVP protocol, which has a more soft-state, refreshed mechanism.
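
The common thread, as a toy sketch: installed state (from ST or RSVP) tells a node which flows get preference, and the scheduler serves those first. Classifying packets by a flow identifier here is a simplifying assumption.

    from collections import deque

    class Node:
        """Toy forwarding node: installed state decides which packets win."""
        def __init__(self):
            self.reserved = set()     # flow state installed by ST or RSVP
            self.realtime = deque()
            self.best_effort = deque()

        def enqueue(self, flow_id, packet):
            queue = self.realtime if flow_id in self.reserved else self.best_effort
            queue.append(packet)

        def dequeue(self):
            # Reserved real-time traffic is always served first.
            if self.realtime:
                return self.realtime.popleft()
            return self.best_effort.popleft() if self.best_effort else None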

Malamud: And how’s that work? Tell us a little more about the RSVP protocol.

Casner: In the RSVP protocol, there are path messages that flow from the source along the multicast tree to receivers who have joined that multicast tree, and then reservation messages that flow from the receivers back up along that tree to cause resources to be reserved. The messages flowing back up from the receivers only have to flow as far as the point where they join into the tree and merge into reservations that are already there. That’s the mechanism that allows this to scale to a large number of receivers in a practical manner.
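
A simplified sketch of that receiver-initiated merging, with the multicast tree reduced to a parent map; real RSVP messages, flowspecs, and soft-state timers are omitted.

    def reserve(parent, reservations, receiver, kbps):
        """Propagate a reservation up the tree until it merges.

        parent maps node -> upstream node (None at the source);
        reservations maps node -> bandwidth already reserved there.
        """
        node = receiver
        while node is not None:
            if reservations.get(node, 0) >= kbps:
                break                        # merges into an existing reservation
            reservations[node] = kbps        # install/refresh soft state here
            node = parent[node]

    # Two receivers under one branch share the reservation above the merge point.
    parent = {"rcv1": "r2", "rcv2": "r2", "r2": "r1", "r1": None}
    resv = {}
    reserve(parent, resv, "rcv1", 128)
    reserve(parent, resv, "rcv2", 128)   # stops at r2, already reserved upstream
    print(resv)   # {'rcv1': 128, 'r2': 128, 'r1': 128, 'rcv2': 128}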


Malamud: Multicasting has been around for a while, and as you said, it’s taken a while to get that technology out there. How far away are we from having multicasting fully deployed in the Internet? Or is that even a desirable goal?

Casner: Well, I think it’s a desirable goal, but it’s somewhat hard to predict how long it will be. There certainly is interest from the routers— Excuse me, the router manufacturers are interested in this problem and are paying attention to it more than they used to.

The reason it’s taken a long time is that there was no good reason to do it. And like many problems, it’s a cyclical problem: if multicast existed, there are a lot of potential applications that would use it, but they can’t use it until it exists. That’s the key of these IETF multicasts: they seem to have provided that kick. People said, hey, yes, there is something we can multicast for. We can really send audio and video over the Internet if we have multicast, and if we develop the reservation mechanisms and deploy them. People have seen the light, and it’s beginning to happen.

Malamud: At Internet Talk Radio we get a lot of phone calls that are really meant for you. And those phone calls basically go, “I’m doing a wonderful thing and I want to broadcast it live over the Internet.” Now, obviously “broadcast” is not the right word, and “live” in many of these conferences is probably not the right word either, because some are pretty boring. But we’re also beginning to hear about, you know, live concerts and live movies and live this. Is the Internet going to be able to handle that level of traffic?

Casner: Clearly the amount of bandwidth in the Internet is much less than is installed in the existing telephone network at large, and the Internet is not prepared to take over the job of providing telephone service to all of the people who use the Internet. But it may well be that Internet technology is the right way to integrate together all of these services at some point in the future. Quite a ways away, I would guess.

Malamud: Is it simply bandwidth that we’re missing?

Casner: Well, we need to build up the infrastructure of services as well, real-time service, which is not there yet, but I think we will do that. Beyond that, then yes, it’s a question of bandwidth.

Malamud: Thank you very much. We’ve been talking to Stephen Casner from the Information Sciences Institute. And thanks for being a Geek of the Week.


This is Internet Talk Radio, flame of the Internet. You’ve been listening to Geek of the Week. You may copy this program to any medium and change the encoding, but may not alter the data or sell the contents. To purchase an audio cassette of this program, send mail to radio@ora.com.

Support for Geek of the Week comes from Sun Microsystems. Sun, The Network is the Computer. Support for Geek of the Week also comes from O’Reilly & Associates, publishers of the Global Network Navigator, your online hypertext magazine. For more information, send email to info@gnn.com. Network connectivity for the Internet Multicasting Service is provided by MFS DataNet and by UUNET Technologies.

Executive producer for Geek of the Week is Martin Lucas. Production Manager is James Roland. Rick Dunbar and Curtis Generous are the sysadmins. This is Carl Malamud for the Internet Multicasting Service, town crier to the global village.