Carl Malamud: Internet Talk Radio, flame of the Internet. This is Geek of the Week and we’re talking to Dr. Clifford Lynch, who’s Director of Library Automation at the University of California. Welcome to Geek of the Week, Cliff.
Clifford Lynch: Glad to be here.
Malamud: You got your doctorate in databases from Dr. Michael Stonebraker no less, the guru of databases. And yet you’re working in the library community. Do computers and libraries come together? Is there a coalescence there?
Lynch: Well, let me answer that a lot of different ways. Certainly this I think is going to be the decade when information technology really moves into the public side of libraries on a very large-scale basis, far more than we’ve seen so far. So from a library perspective yes, information technology is invading, and we can say a lot more about that in a few minutes.
From the computer science side, though, I think it’s interesting to note that computer science has paid relatively little attention, I think, to some of the problems that come up with very large-scale library automation and public access to information. I think that these are hard problems and also fruitful problems from a computer science point of view. So much of my work in computer science and interest in databases has sort of been motivated from the application back to what we need in the technology to create or facilitate those applications.
Malamud: And what are some of those technical issues, some of the technical requirements that we need out of networks in order to support large library archives?
Lynch: Well, it goes the whole gamut, of course, from databases to networks. The database side has a lot of very specific problems with large textual databases. From the network side, you’ve really got a full gamut of questions from technical to things that really go beyond technical to almost intellectual issues. Technically many of the key issues really revolve around establishing standards and standards that work. Interchange standards for various forms of information—text, multimedia, things like that. Um—
Malamud: But don’t we have those already—MIME messaging for example, isn’t that an interchange standard for multimedia messages?
Lynch: To an extent it is. Although it’s important to note that that’s quite new, too. One of the things people I think often overlook when you think about library-type problems is the simple issue of scale. It’s easy enough to introduce multimedia in the sense that you upgrade somebody’s mailer to do multimedia. And the next thing you know, mail is coming out with some pasted-in bitmapped images or a bit of voiceover. When you start thinking about, well, I have a database of, you know, umpteen million volumes of stuff that I need to convert, format changes and things like that happen quite slowly. And right now, I think a lot of not just the library community but the information community more broadly is sitting on a huge mass of content and is sort of on the verge of moving this to digital forms in a big way.
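[A minimal sketch, in Python, of the kind of multipart MIME message being discussed: one text part plus a pasted-in bitmapped image, built with the standard library’s email package. The addresses, filename, and image bytes are placeholders.]

```python
# Build a small multipart MIME message: a text body plus an attached bitmap.
# Addresses, filename, and image bytes below are placeholders.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

msg = MIMEMultipart("mixed")
msg["Subject"] = "Scanned page from the archive"
msg["From"] = "curator@example.org"
msg["To"] = "scholar@example.edu"

# The textual part of the message.
msg.attach(MIMEText("Attached is a bitmapped image of page 1.", "plain"))

# A pasted-in bitmapped image; these bytes stand in for real PNG data.
fake_png = b"\x89PNG\r\n\x1a\n...not a real image..."
image = MIMEImage(fake_png, _subtype="png")
image.add_header("Content-Disposition", "attachment", filename="page1.png")
msg.attach(image)

# Printing the head of the serialized message shows the multipart structure
# (the boundary and per-part Content-Type headers) that makes it interchangeable.
print(msg.as_string()[:400])
```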
Malamud: Does that mean scanning in the books, or does it mean retyping, or waiting for new books to be produced?
Lynch: Well, I’m thinking here of the sort of existing base of information—and don’t just think books, think also sound recordings, and movies, and all of these types of material archives as well. We have massive archival collections of manuscripts and things that have been relatively inaccessible. The scholar had to actually physically go someplace and, you know, spend years in a dusty room mining this stuff. Once we start moving this into digital form, suddenly these archives will be accessible nationally and internationally, and I think this will make a huge difference to scholarship. But one of the issues right now is that you’re only going to want to do those conversions once. They’re ferociously expensive. So people are very nervous about making sure that there are sensible standards in place before they invest in those conversions.
There’s another interesting phenomenon, too, which is the more you capture intellectual content as opposed to surface form, the more expensive it gets. We talk about converting most old print materials by basically scanning and creating bitmapped images. Even OCR at its present level of quality is largely considered out of the question because of the error rate except as a supplement to support searching on the material in some cases. There have been people who have taken certain collections of material and put them in SGML markup, for example. There’s a company called Chadwyck-Healey that has got SGML markup of some of the key works in late ancient and early Medieval Latin. I’m told they’ve spent several million dollars creating that database.
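[To make the “intellectual content versus surface form” distinction concrete, a small Python illustration: the same passage as flat OCR-style text and with SGML-style structural markup. The tag names are generic examples, not Chadwyck-Healey’s actual markup.]

```python
# The same passage captured two ways: as flat text (roughly what OCR of a page
# scan yields) and with SGML-style structural markup recording the intellectual
# structure. Tag names are generic illustrations, not any real DTD.
import re

flat_text = "CONFESSIONES LIBER PRIMUS Magnus es, domine, et laudabilis valde."

marked_up = """
<work author="Augustinus" title="Confessiones">
  <book n="1">
    <paragraph n="1">Magnus es, domine, et laudabilis valde.</paragraph>
  </book>
</work>
"""

# With markup, a program can answer structural questions the flat text cannot,
# e.g. which book and paragraph a sentence sits in, or who the author is.
book = re.search(r'<book n="(\d+)">', marked_up).group(1)
para = re.search(r'<paragraph n="(\d+)">', marked_up).group(1)
author = re.search(r'author="([^"]+)"', marked_up).group(1)
print(f"{author}: book {book}, paragraph {para}")
```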
Malamud: Well putting early Latin works into SGML is somehow appropriate.
Lynch: Yes. It is.
Malamud: Is SGML going to be the language of the future? Are we gonna expect at least future books to be coded in SGML and posted on the net in that language?
Lynch: I think you will see some use of SGML. Now whether they’re posted on the net in that form is a very interesting question. If you talk to for instance many of the large scientific and technical publishers, companies like Elsevier or Springer-Verlag, their editorial processes now are being converted and upgraded in many cases to create SGML material as part of the production of the journals. However, it’s not at all clear that they are going to market material in SGML form. In some ways they’re thinking of this as an internal database out of which they can spin multiple products.
One of the things that perhaps I shouldn’t be surprised at but I’ve found a bit surprising in talking to some publishers about the transition to the age of electronic information is that they are very concerned about presentation integrity of their material. These are print publishers. And they are quite horrified at the thought of distributing SGML and having consumers or repackagers of that information do say typesetting on the fly or other types of reformatting to adapt it to different display environments. They feel that’s a loss of control of their information that could threaten the perception of the quality of their publications, and they’re very nervous about it. In that sense, one finds them a lot more sanguine about distributing bitmapped images.
Malamud: You’re listening to Geek of the Week. Support for this program is provided by O’Reilly & Associates, recognized worldwide for definitive books on the Internet, Unix, the X Window System, and other technical topics.
Additional support for Geek of the Week comes from Sun Microsystems. Sun, the network is the computer.
We’re talking to Cliff Lynch, Director of Library Automation at the University of California. Cliff, we’ve been looking at the question of bitmap images versus SGML as a way of moving data out onto the network. Do you think publishers will ever put their data out on the network in revisable form?
Lynch: Well, it’s clear of course that they’re converting to revisable forms and SGML seems to be a popular one inside their own processes. I know that some of the large scientific and technical publishers, people like Elsevier and Springer-Verlag, are investing heavily in such conversions at this point. However it’s not clear they’re really going to market this outside of their companies. They may use this as a database to spin off a series of products. One of the things that I’ve found in talking to publishers, particularly publishers of costly scholarly journals, is that they have a great concern about the presentation integrity of their material. They come from a print world where they invest a lot of money in nice typesetting, attractive artwork, quality paper and printing, and they’re very concerned about moving into an electronic world where people are retypesetting their material on the fly, leaving out pictures, or otherwise presenting a poor image of that material. So, they feel I think to some extent more comfortable with bitmaps as a way of controlling the integrity of the presentation of their material.
Malamud: I know as a programmer, when I’m implementing programs and I’m looking at network standards, which are a library of documents, I want the revisable form. I want to be able to plop out the definition of an object and stick it into my code. Is there a way to balance the presentation integrity with the need for a revisable form on the network?
Lynch: Well, first let me say I absolutely agree with you on the need for revisable form. And it’s quite interesting to see what’s happening inside the scholarly community. For example, there’s an activity called the Text Encoding Initiative, which involves primarily scholars in computing and the humanities; about twenty-two professional associations, I think, are involved in it. And what they’re doing is defining a set of SGML tags to essentially support deep markup that would be suitable for computer-driven linguistic analysis, things like deep text analysis of variant editions of classic texts, things such—
Malamud: What does that mean? What is deep text analysis?
Lynch: Well, for example, think of Shakespeare. Now, many of his plays exist in multiple versions, and scholars are very concerned with the variation between versions. They’re concerned with the way words are used throughout the versions. They’re concerned with allusions or stories that follow on from the words. Shakespeare’s plays derive to some extent from other plays, like say the Spanish revenge tragedies. So there’s a thought that they will be able to develop corpora of things like a Shakespeare play in all its versions, and you’d have a very intelligent viewer which would allow you to say things like, “I’d like to see all the versions intercut here,” or, “I’d like to see this as it was in the First Folio,” and then, “No, I’d like to see it as it was in the Second Folio.” They’re starting to do real interesting things like that. Now this is not an area where I’m expert, particularly, but I do find it interesting that there is so much energy being devoted to coming up with schemes to really capture this kind of intellectual content and make it widely available for the scholarly community.
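[A rough sketch of the kind of markup the Text Encoding Initiative defines for variant readings. The <app>/<rdg> elements follow TEI’s critical-apparatus conventions, but this fragment, the readings, and the witness labels are invented for illustration.]

```python
# A tiny invented fragment in the style of TEI critical-apparatus markup: one
# spot where two witnesses (labeled F1 and F2 here for the First and Second
# Folio) differ. An "intelligent viewer" renders whichever witness is asked for.
import xml.etree.ElementTree as ET

fragment = """
<line>
  To be, or not to be, that is the
  <app>
    <rdg wit="F1">Question</rdg>
    <rdg wit="F2">question</rdg>
  </app>
</line>
"""

def render(xml_text: str, witness: str) -> str:
    """Flatten the line, choosing the reading from the requested witness."""
    root = ET.fromstring(xml_text)
    parts = [root.text or ""]
    for app in root:                      # each point of variation
        for rdg in app:                   # candidate readings
            if rdg.get("wit") == witness:
                parts.append(rdg.text or "")
        parts.append(app.tail or "")
    return " ".join(" ".join(parts).split())

print(render(fragment, "F1"))   # the line as witness F1 has it
print(render(fragment, "F2"))   # ...and as witness F2 has it
```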
Certainly this sort of thing underscores the need for material in revisable form. Now, I think we can hope that as publishers become more comfortable with the networked environment, we will see them becoming more comfortable with distributing revisable-form material. Because it’s not just the presentation integrity issue; they’re also worried that in some sense the revisable form is easier to steal than a bitmapped image and is more valuable in some sense than a bitmapped image. And certainly publishers have many, many concerns about their ability to control their intellectual property in a networked environment.
Malamud: Is it possible that the current generation of publishers, the firms like Prentice Hall and Addison-Wesley, just won’t survive the transition and it’s gonna take a new kind of publisher, a new type of firm, to be able to handle publishing in the next twenty or thirty years?
Lynch: Um, I think that one useful way to think about this is a set of analogies that I first heard from Peter Lyman and Paul Peters, which is a way of thinking about the introduction of new technologies as going through a stage of modernization, where you essentially take what you’re doing already and do it more efficiently by applying technology; then proceeding from there to innovation, which is where you use the technology to do fundamentally new things that you couldn’t do prior to that technology; and then ultimately up to a transformational stage, where you’ve kind of absorbed the technology into your processes and the things you do, and it starts fundamentally changing those.
Now, I think we can view a lot of what’s happening right now with the relationship between the traditional print publishers and the networked information world as modernization. For example they’re thinking basically about things where the user interface of choice is still paper. And we may talk about putting it on the network, storing it, and transporting it through the network. But the presumption at least right now is for anything other than pretty casual browsing much of this will be printed back very close to the end user onto paper. That in my view is really just sort of a modernization activity.
We’re starting to see innovative things that begin to explore the sort of indigenous new capabilities of the electronic media, and those range from the sort of thing you’re doing with Internet Talk Radio through some of the multimedia things. Much of the multimedia stuff that’s most interesting, I think, up till now has been on standalone workstations, often using CDs or various kinds of video disks. I think we’re going to see that change pretty quickly now that the network is getting faster and the standards are coming along, and multimedia will become much more of a networked commonplace. As that happens, I think people will start exploring more of those possibilities.
Even in the sort of text-constrained network that we’ve been accustomed to, we have seen a number of creative people do interesting things to look at what you can do with the traditional journal as a point of departure moving into the networked environment: things like very heavily-linked citations backwards and forwards from article to article, or the ability to gather up readers’ comments and reactions and attach them to a primary article. Those sorts of things are only the beginning of the innovation we’ll see.
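[A small Python sketch of the data structure behind the features just described: articles linked to what they cite and what cites them, with readers’ comments attached to a primary article. The identifiers and comments are invented.]

```python
# Sketch of a heavily-linked journal corpus: each article records what it
# cites (backward links), what cites it (forward links), and attached reader
# comments. All identifiers and text are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class Article:
    article_id: str
    title: str
    cites: list = field(default_factory=list)      # backward links
    cited_by: list = field(default_factory=list)   # forward links
    comments: list = field(default_factory=list)   # readers' reactions

corpus = {
    "A1": Article("A1", "Original result"),
    "A2": Article("A2", "Follow-up study", cites=["A1"]),
}

# Derive the forward links from the backward ones.
for art in corpus.values():
    for target in art.cites:
        corpus[target].cited_by.append(art.article_id)

# Attach a reader's reaction to the primary article.
corpus["A1"].comments.append("Has anyone reproduced the second experiment?")

print(corpus["A1"].cited_by)   # ['A2']  -- navigate forward from A1
print(corpus["A2"].cites)      # ['A1']  -- navigate backward from A2
print(corpus["A1"].comments)
```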
Now, going back to the question about publishers, it’s clear, just given the body of rights that these publishers control and their brand-name recognition if you will (and certainly in scholarly areas certain journals carry very strong brand-name recognition as prestigious places to publish), that they will clearly move forward and modernize into the networked environment. And I think they’ll be with us for a long time. How many of those publishers are prepared to take leadership positions in exploring really innovative uses of the network I think is an open question. And my guess is that we’ll see a whole new set of publishing and information creation industries coming up alongside the old ones on the net. A few of those will be traditional publishers, or information producers more generally, sort of reinventing themselves; others will be new upstart firms.
And I wouldn’t focus just on the publishers. There’s been a lot of buying and selling of things like vaults from large movie houses and those sorts of things, which could be quite interesting in a multimedia world.
Malamud: Do you think an individual can be a publisher? A.J. Liebling said that freedom of the press belongs to those who own one. Do you think we’re entering an era where the individual can be a publisher, or are there professional skills that you have to learn before you can be one?
Lynch: Well, I think we are already at the point where an individual on the network can very casually become a publisher and many do. You don’t have to be a rocket scientist or to invest very much to set up a modest FTP archive. Or in the most minimal case, you can think of people simply setting up mailing reflectors and sending mail into them as being publishers in a certain sense. So, clearly the network has moved forward the democratization of publishing.
Now, we should point out it’s not that hard to be a publisher in print anymore, either, given that there’s a copy place on every corner and they’re not really that expensive.
Malamud: It’s easy to be a bad publisher.
Lynch: Um, it’s especially easy to be a bad publisher. And we see a lot of badly-published things in print. I think it’s very challenging in the network environment because we don’t know exactly what it means to be a good publisher, nor do we have as many exemplars as we do in the print world. Certainly some of the integrity issues about being a good publisher clearly extend across all media, but there are some issues I think that are perhaps more specific to the networked environment which we’re still understanding.
Malamud: You’re listening to Geek of the Week. Support for this program is provided by Sun Microsystems. Sun Microsystems, open systems for open minds.
Additional support for Geek of the Week comes from O’Reilly & Associates, publishers of books that help people get more out of computers.
Cliff Lynch, you’ve been active in the Coalition for Networked Information, a body that brings together librarians, and academic computer center managers, and architects, and a wide variety of groups. What’s the purpose of the Coalition?
Lynch: Well, the stated purpose of the Coalition in its charter, in the short form, is basically to advance scholarship and intellectual productivity through the use of information technology and specifically by exploiting the promise of networks.
Malamud: Could you translate that into action items? What do they do?
Lynch: Well, this is tricky to translate into action items because some of the action items I think tend to be short-range, some tend to be long-range, but I think it’s important to have that general context.
Now, to understand a little about the Coalition, it was formed back in 1989, towards the end of ’89, by CAUSE, Educom, and the Association of Research Libraries, which is a group of about the 110 biggest research libraries in North America. Now, if you think back to that time, we were in one of the cycles with the Gore bill, and we were…many of us in the higher education community were very hopeful that it was going to make it through Congress. Now I guess it ultimately made it through Congress on the next round, not on that round, but certainly by ’89 I think that was the second or third incarnation of the Gore bill, and the higher education community had been on board for a year or two. I believe the idea of an NREN really started about ’87, and it was originally conceived mostly as a national research network. The higher education community got into it in ’87, ’88 and underscored the role of education. The library community began to wake up to it about ’88, ’89 and started asking questions about what the appropriate role for libraries is in all this.
Backing off from that one step, there was this sort of empty feeling in certain people’s stomachs as it occurred to them that they might actually get this NREN concept moved ahead, create this research and education network, they’d get all the scientists and scholars on it who would have a wonderful week sending electronic mail to each other, and then ask, “Well, where’s the world’s literature? Where are the information resources? What can I really do with this thing besides send electronic mail to my colleagues and maybe use a supercomputer which I might or might not be interested in?” So there was a lot of emphasis on getting a focused group together which included library people, information technologists, and also people like publishers, to start talking about what do we need to do to really increase the amount of content accessible through the network, and to provide tools to allow people to locate content, navigate from resource to resource, and to really use these electronic resources.
So that was a lot of the sort of themes that were floating around at the time the Coalition was formed. And of course it wasn’t just scholarly information, the Coalition is very interested in improving access to government information at all levels as well, just to take one more example.
Now, the Coalition is pursuing a wide range of activities. These range from sort of policy-related activities, looking at some of the initiatives involved in things like the GPO WINDO bill, and in some cases providing testimony to Congress on these sorts of things, helping the parent organizations and the institutions to formulate policy positions on these.
At the other end of the spectrum, there are some substantially more technical things that the Coalition’s been involved in. For example, there’s a working group on directories which has been doing something called the Top Node Project, an attempt to start understanding what sort of data elements you want to describe networked information resources. I lead a group that does architectures and standards, and one of the main things that we’ve been concerned with is interoperability issues. Much as I think the IETF has always had a strong theme of interoperability, libraries and information providers have started to realize that in order to do information access in a distributed environment, interoperability is going to be absolutely critical if this is gonna work, if markets are going to be created, and if people are going to be able to have access to these resources.
So, my group has been looking a lot at interoperability issues in a protocol called Z39.50 that you may have bumped into at some point.
Malamud: You’re listening to Geek of the Week. Support for this program is provided by Sun Microsystems. Sun Microsystems, open systems for open minds.
Additional support for Geek of the Week comes from O’Reilly & Associates, publishers of books that help people get more out of computers.
Z39.50 is a library automation protocol. It’s often referred to that way, and that’s one of those nebulous sets of phrases strung together that describe absolutely nothing. Can you give us a better description of what Z39.50 does?
Lynch: Yeah. I mean it’s actually a real tragedy and also a real irony that it has been characterized to the extent it has as a library automation protocol. There’s a lot of funny history with Z39.50. I guess before we go into that I should just explain a little bit about what it is. Z39.50 is an application protocol which deals with information access and retrieval. Now, I want to differentiate that fairly carefully from things like distributed databases, in the sense that a Z39.50 client-server interaction is really talking about information in terms of semantic meaning, not data layout. Whereas in a database application you might ask for Column X of a relational table, in Z39.50 you speak about things, for example in a bibliographic context, like keywords in a title or authors with this last name; things that are in terms of the intellectual content of the information rather than the specifics of how a given site chose to store it and lay it out in a database.
Malamud: So it’s a way of saying “I have some keywords I’m interested in, give me back all the bibliographic records that you maintain that match those keywords.”
Lynch: Yes. It allows you to say things like that although of course with much greater precision because you can, and typically do in large databases, restrict those keywords to certain fields. It’s very important to have this degree of abstraction because when you look at how complex large textual or bibliographic information bases can be, it’s really impractical on an interoperability basis to start doing distributed database access. As a client you have to know far far too much about a very complicated structure on the server. This moves us up one level of abstraction and keeps us out of a lot of those potential rat holes of details of servers.
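[A rough sketch of the level of abstraction being described: the client states a query as an access point plus a term (“author with this last name”, “keyword in title”), and each server maps that onto its own storage layout. The use-attribute numbers follow the commonly documented Bib-1 attribute set (4 = title, 1003 = author); the field names, records, and server class are invented.]

```python
# Sketch of Z39.50-style abstraction: the client states queries in terms of
# intellectual access points; each server maps those onto its own schema.
# The Bib-1 use-attribute numbers (4 = title, 1003 = author) are the commonly
# documented ones; everything else here (field names, records) is invented.

BIB1_USE = {"title": 4, "author": 1003, "any": 1016}

def semantic_query(access_point: str, term: str) -> dict:
    """Build an abstract query: no column names, no table layout."""
    return {"use_attribute": BIB1_USE[access_point], "term": term}

class LibraryServer:
    """One hypothetical server; its internal layout is its own business."""
    def __init__(self, records, field_map):
        self.records = records          # however this site stores things
        self.field_map = field_map      # use attribute -> local field name

    def search(self, query: dict) -> list:
        field = self.field_map[query["use_attribute"]]
        term = query["term"].lower()
        return [r for r in self.records if term in r[field].lower()]

server = LibraryServer(
    records=[{"ti": "Networked Information Retrieval", "au_last": "Lynch"}],
    field_map={4: "ti", 1003: "au_last"},
)

print(server.search(semantic_query("author", "lynch")))
print(server.search(semantic_query("title", "networked")))
```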
The sort of vision that we have with Z39.50 is that a Z39.50 client should be able to access a wide variety of information resources through a consistent user interface that might run on a workstation, might run on a time-shared host. But it would give the user a common view of multiple information resources around the network. That’s sort of step one. Step two is, since it gives you a common protocol interface to information resources, I believe it’s going to enable the development of all sorts of intelligent client technology, since you’ve suddenly got a clean interface to retrieve information from multiple sources and you get structured records back, rather than a program trying to interpret a screen in a terminal emulation, which as we know doesn’t work well. You can start thinking about programs that correlate information from multiple sources on behalf of the user, do periodic searching and build personal databases on one’s workstation, all sorts of things. One of the problems right now is that there are a lot of overlapping information sources around, and as human beings trying to search comprehensively we do a lot of deduping intellectually, which should be turned over where feasible to computer programs.
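[A minimal sketch of the deduplication that, as Lynch says, should be turned over to programs: structured records returned from several overlapping sources are merged on a normalized key. The records and the normalization rule are placeholders.]

```python
# Sketch: a client queries several overlapping sources, gets structured
# records back (not screen-scraped text), and dedupes them on a normalized
# key before showing the user one merged result list. Data is invented.

def normalize(record: dict) -> tuple:
    """A crude match key: lowercased title plus year."""
    return (record["title"].strip().lower(), record.get("year"))

def merge_results(*result_sets):
    seen, merged = set(), []
    for results in result_sets:
        for record in results:
            key = normalize(record)
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged

catalog_a = [{"title": "Paradise Lost", "year": 1667, "source": "A"}]
catalog_b = [{"title": "paradise lost ", "year": 1667, "source": "B"},
             {"title": "Areopagitica", "year": 1644, "source": "B"}]

for rec in merge_results(catalog_a, catalog_b):
    print(rec["title"].strip(), rec["year"], "via", rec["source"])
# Two records survive: the duplicate Paradise Lost entry is collapsed.
```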
So, that’s a little bit of the picture we have in our minds for Z39.50. Now, Z39.50 is, as I said, an applications-layer protocol that was developed under the auspices of NISO, the National Information Standards Organization. That’s an ANSI standards-writing body that serves the publishing, library, and information services community. Z39.50 has some unfortunate heritage. It’s written as an OSI application-layer protocol. It has, incidentally, a parallel international protocol, ISO 10162 and 10163, which is sort of a subset of Z39.50 as done in the US, and which is also of course in the OSI framework.
Very few implementers, not surprisingly, are using it in the OSI framework. The Library of Congress is doing something with OSI, and the Florida State Center for Library Automation is doing something with OSI. Several of the vendors, the library automation vendors, have indicated they’re going to do OSI as well as TCP-based stacks because they believe that’s going to be necessary to market in Europe. But within the US, certainly the main action is on the Internet, running this over TCP/IP.
Malamud: So Z39.50 over OSI is one of the statements of direction of political correctness? Or are there actually implementations out there that do that?
Lynch: Well, there certainly is a political correctness issue here. And some of it too is a market response where I think you have to be a little careful about how to interpret it. Particularly in Europe, many libraries, particularly national libraries and things like that, still are writing RFPs that say “We have to have the OSI.” So, many of the vendors trying to position themselves to be responsive to those are saying they will do or intend to do or are working on OSI. There’re not too many I think delivered and running because there’re not too many OSI things delivered and running generally.
Malamud: OSI, tomorrow is the future.
Lynch: Right. Now, I think that in some ways the OSI heritage of Z39.50 has been unfortunate, because the implementor community spent a lot of time back in the late 80s and early 90s struggling with what to do with this OSI baggage and how it fit into the networked environment that at least many of us viewed as reality, which was the Internet. There were people who believed that the right thing to do was take all the OSI things from transport up and run those over TCP, as has been done in systems like ISODE. There were others of us who really felt that that was kind of an ugly solution, particularly since Z39.50 has the interesting attribute that it tries to really make use of the presentation layer, and in order to work it requires features of the presentation layer that do not appear to be implemented in any known OSI implementation. Certainly we know they’re not in ISODE, they’re not in IBM’s OSI/CS product; things like presentation context alteration on the fly. You can see where you need that when you come into a server that’s got 400 databases: you really don’t want to list a transfer syntax for records in each of those databases up front when you open the connection.
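[A sketch of why negotiating on the fly matters for a server with hundreds of databases: declare a record transfer syntax lazily, for a database only when it is first searched, rather than enumerating one for every database when the connection opens. The class, database names, and syntax identifier are invented placeholders.]

```python
# Sketch of the alternative Lynch wants: instead of listing a transfer syntax
# for every one of a server's databases up front at connection time, the
# client declares ("negotiates") one only when it first touches a database.
# Database names and the syntax identifier are invented placeholders.

class Connection:
    def __init__(self, databases):
        self.databases = databases
        self.negotiated = {}            # database -> record syntax in use

    def declare_syntax(self, database, syntax):
        print(f"negotiating {syntax} for {database}")
        self.negotiated[database] = syntax

    def search(self, database, query, preferred_syntax="USMARC"):
        # Lazy: negotiate only for the database actually being searched.
        if database not in self.negotiated:
            self.declare_syntax(database, preferred_syntax)
        return f"results of {query!r} from {database} as {self.negotiated[database]}"

# A server with hundreds of databases; we only ever pay for the ones we use.
conn = Connection(databases=[f"db{n:03d}" for n in range(400)])
print(conn.search("db007", "title=milton"))
print(conn.search("db007", "title=dryden"))   # second search: no renegotiation
```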
Malamud: Or reestablish the connection each time you switch the type of media.
Lynch: Precisely.
Malamud: Or then say, “Well, I’d like to look at microfiches now,” and they’ll say, “Well, call us back.”
Lynch: Mm hm. Yeah. This sort of thing is clearly a non-starter. So we dissipated a lot of time worrying about what to do about this and there were many camps. What we ultimately chose to do was to throw out all the lower-level OSI stuff, run Z39.50 directly on top of TCP, and get on with it. And in the last year we’ve seen at least seven or eight interoperable implementations that were independently developed up and running on the net and talking to each other. It’s really quite gratifying after all these years of talk.
Malamud: This is Geek of the Week, featuring interviews with prominent members of the technical community. Geek of the Week is brought to you by O’Reilly & Associates and by Sun Microsystems.
This is Internet Talk Radio. You may copy these files and change the encoding format, but may not alter the data or sell the programs. You can send us mail at mail@radio.com.
Internet Talk Radio, same-day service in a nanosecond world.
Lynch: I would like to believe that some of the standards developers are getting more realistic. Some of the formal standards developers, the NISOs and the OSIs of the world. While they’re not ready to say, “Well, OSI maybe isn’t going to quite work,” at least some of the applications protocols are starting I believe to think a little bit in terms of running over multiple protocol stacks. Certainly as we’ve done the work on drafting the new version of Z39.50, which we hope to take to ballot within the next eighteen months or so, we’ve put a few things into the protocol that I think will make it a lot easier to run in a dual-stack environment with minimal changes to the application’s code if people really do need to do that.
Malamud: WAIS, the Wide Area Information Server that was developed by Brewster Kahle and Thinking Machines and is now fairly widely available in the public domain, also comes out of a Z39.50 heritage. Are those implementations interoperable with the Z39.50 that you’re talking about for libraries?
Lynch: As of right now they are not. When Brewster developed WAIS he used something called Z39.50–1988, or Z39.50 version 1. That was the first version and the standard came out in ’88, and really was not implemented much by the library community. Furthermore, Brewster had to do quite a bit of…what shall we say, carpentry, on that version of the standard in order to get a working application out of it. So, he sort of took ’88 and extended it as he needed to build WAIS. Meanwhile the library community was working on what ultimately became version 2 of the standard, or Z39.50–92, and implementers had been working off drafts of those standards. Unfortunately Brewster didn’t quite get in cont— Brewster and his folks really didn’t get together with the library community till about ’91, at which point Brewster already was pretty far down the path.
Now, while technically these things won’t interoperate, the amount of work necessary to upgrade WAIS is not great. It’s my understanding that Brewster’s new company, WAIS Incorporated, will do a Z39.50–92-compliant version of WAIS. In addition, there is work going on at the Clearinghouse for Networked Information Discovery and Retrieval—it’s CNIDR—down at North Carolina—George Brett, Jim Fulton, and those folks—on taking the existing WAIS system, doing a lot of upgrading to it, and putting it on Z39.50–92. That work is fairly well down the pike as I understand it, so I expect in 1993 to see probably several interoperable WAIS implementations. This is going to produce some kind of weird things, because you should be able to take your standard WAIS client and point it at your library catalog if you want, if that speaks Z39.50, or to log on to one of the online catalogs on the Internet and run that interface against WAIS databases. There’s going to be a lot more mixing and matching of interfaces as this gets down the pike.
Malamud: You stand at the cusp between the library and the networking communities. Currently there’s a national debate about a National Research and Education Network asking whether that network is a few very fast pipes for leading-edge researchers, or whether the National Research and Education Network means getting networking connectivity out to everyone. Are those two goals fundamentally opposed to each other? Can we have one NREN that solves both the researchers and the kindergarten kids?
Lynch: Well, um, I think technically you can have one national or international network that serves that range of constituencies. Um, cer—
Malamud: Can we afford it?
Lynch: Certainly, just before we get off the technical thing, the key to getting away with that is the whole concept of internetworking. Logically it may look like one network, and you may be able to do applications across all components of it, but it may be multiple constituent networks serving different constituencies.
Now, can we afford it? What are our public policy priorities here? That’s a real good question. If you look at the NREN legislation—what is it, Public Law 102–194, I guess. That’s pretty clear about the NREN being a place that’s hospitable to libraries, to K through 12, to state and local government. That calls out lots and lots of groups that are welcome on the NREN. It’s a lot vaguer about whether it’s going to fund any of them to get on the NREN. I mean it’s kind of a curious thing because it says, “Well, if you can find your way on we’re glad to see you here. And we’ll send out the welcome wagon for you.” But it doesn’t say that it’s gonna fund it.
Now, I think there are some things that need to be underscored there. If you look at many of those communities, libraries and K through 12 particularly, which are both big communities, relatively little of their funding comes from federal sources. These have traditionally been funded more at the state and local level than at the federal level. It’s not clear that it’s necessary or that it’s going to be politically acceptable, particularly in the current budget climate, for the federal government to take on the responsibility of networking these communities. And certainly in some states I would say the states are stepping up to the challenge pretty aggressively—in Texas, for example, in the K through 12 world.
So I think that you may see a lot of these people getting on through state and local initiatives. If we follow the money certainly most of the NREN money so far, with the exception of some connectivity grants out of NSF, has been aligned along the high-performance computing and communications axis and really has been about let’s connect a few high-end, fast, advanced applications.
Malamud: But you think that local governments and state governments should begin taking responsibility for networking their communities and their states?
Lynch: I think they have to.
Malamud: This has been Clifford Lynch on Geek of the Week. Thanks.
Lynch: My pleasure.
Malamud: This has been Geek of the Week. Brought to you by Sun Microsystems and by O’Reilly & Associates. To purchase an audio cassette or audio CD of this program, send electronic mail to radio@ora.com.
Internet Talk Radio. The medium is the message.