Carl Malamud: Internet Talk Radio, flame of the Internet.
Malamud: This is Geek of the Week, and we’re talking with Steve Kille, who’s President and CEO of the ISODE Consortium, which is—ISODE of course is the ISO Development Environment. Welcome to Geek of the Week, Steve.
Steve Kille: Thank you Carl.
Malamud: Why don’t you start and tell us what the ISODE Consortium is?
Kille: Well perhaps the first thing I should do is to talk a little bit about ISODE and what that is, and where it came from, and then I’ll talk about the consortium and what exactly it’s doing with ISODE.
ISODE is a package of software which is targeted particularly at OSI applications, X.400, X.500, and was initially a research community activity to examine OSI and to see if it was viable. There were contributions from a wide range of environments. From Marshall Rose in particular, who did a lot of the FTAM and the stacks associated with that. There were various groups in the UK—my own group at University College London, and a group at Nottingham University who produced the X.400 and the X.500. And it became very much the most widespread OSI implementation. It's been used a lot in the research community. And it was becoming successful on the research side, and there was also quite a bit of interest on the commercial side. There were firms that were interested in using this technology as a basis for products. In fact several of them did.
But in general, being a research project and a research activity, it really wasn't positioned right for that sort of development. And so the real motivation behind the consortium was to take this research development and put it in a position where products and services based on this technology could appear. And there was a real frustration among several of the people doing this that we had some technology that was exciting and interesting and in many senses better than anything that was available commercially, but yet we weren't really able to sell it strongly in the commercial environment.
Malamud: So the code was there but it wasn’t bulletproofed, it wasn’t production quality code. Was that the problem?
Kille: The bulletproofing and issues of production quality were initially, I think, an issue to an extent… There's always an image of research code. And in fact ISODE in many aspects was very good indeed, although there are some aspects that could do with a lot of improvement. I think the more significant thing is that if you're going to build a product, you want to have something that's going to be there in five years' time, in ten years' time. You don't want to be tying yourself to a piece of technology that's a research project that's going to vanish. With a research project you won't have an organization there that's going to be able to back it, provide the support you need as a development organization. But also an organization that's going to be there, that's going to add in the functionality to track the standards, and to put in all the new things that as a vendor of OSI products you want to have.
Malamud: So did you take ISODE out of the public domain and you’re selling it now?
Kille: We… The technology that we're taking is no longer public domain. But our release is available under license to commercial organizations. The mechanism for doing that is that you join the consortium to buy access to the technology. And then as an OSI vendor you would pay an additional royalty to the consortium—not a large royalty, but sufficient to give the consortium a continued existence and so that we can put in the resources that are needed to make this technology happen.
The reason that we chose the consortium approach rather than the more conventional commercial structure was that I think ISODE has become quite a special implementation in the research world. And it would have been unfortunate, I think, to do a straight commercialization and in some sense let the public domain tree rot, because there are a lot of people in the research community that are using ISODE and running services on it. And in very many ways it's that usage in the research community that's ISODE's real strength. Rather than just being a stack that's been tested in the lab and has been benchmarked against various conformance tests and has lots of good ticks associated with it, it's something which has been demonstrated in the real world, it's been field tested, and people are using it for their day-to-day work. And so from a commercial point of view that's important, because you can use this as a sort of marketing publicity: you get field testing, and something which none of the other OSI vendors—the Retixes and the [Marvins?] and so on—can really offer. It gives it viability, it gives a user orientation for the technology.
But I think also on the research side, you know, we have the research heritage. And I think it's important that we have, as the consortium, a higher-level mission in terms of promoting this technology and making it available. And for that reason I think that the linkage with the research community is important. So we allow research organizations to join our consortium, and membership is at a somewhat lower cost than it would be for a commercial organization that's going to base products on the technology.
We're also going to provide a zero-cost distribution to research organizations. And we'll be doing an online distribution that'll be encrypted source. And all that a university or a research institution needs to do is send us a fax signing that they will take good care of the code, and then we'll return a key and they'll be able to use the technology from the ISODE Consortium.
Malamud: Do they have to give you any enhancements they make to the code so you can use them for your distribution?
Kille: No, there’s no such requirement. There are conditions about what is done with the code. We in particular wanted to protect against a situation where a company makes use of a university to do its software development. I don’t think that’s an appropriate thing to do. But in general the companies or organizations are encouraged to do what they like to do with the code. They can keep things for their own internal usage.
Universities in particular are also able to put things they do relative to the consortium release into the public domain. And I think that for advanced development that's a more appropriate way to handle them than trying to put the technology back into the consortium. We're not trying to squirrel things back into the consortium, we're trying to facilitate research activities and advanced developments.
I think in very many ways the consortium is a technology transfer organization. We’re taking this base technology for the X.400 and the X.500 in particular from the research side and we’re putting it over as an appropriate product base. And we’ll continue to take a flow of technology. I mean in particular the X.509 security activities that’re going on from various research initiatives at the moment, we’ll likely take from those and then we’ll make them available commercially. And I see that as an ongoing model for the working of the ISODE consortium.
Malamud: You mentioned people using ISODE in their day-to-day work. Can you give me some examples of that? Do you use it, for example?
Kille: Yes, absolutely.
Malamud: Do you have an X.400 address when I send you mail?
Kille: I do not use X.400 addressing. I use X.500 to look up information. I use a mailer which has X.400 support. And so I use an RFC 822 mailbox for my day-to-day work. But I use the X.400 gatewaying capabilities of our MTA to access certain X.400 services, and that means commercial people who're working with X.400, and it means researchers, in particular in Europe, who're starting to move from an RFC 822 backbone onto an X.400 backbone service.
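[Editor's aside: the gatewaying Kille describes hinges on translating between the two address forms. The sketch below is purely illustrative of the idea behind mappings like RFC 1327, not the actual mapping rules; the attribute values and gateway domain are hypothetical.]

```python
# Illustrative sketch: an X.400 O/R address is a set of typed attributes,
# while an RFC 822 address is a flat local-part@domain string. A gateway
# must map between the two. This toy mapping is NOT the RFC 1327 rules.

def or_address_to_string(attrs):
    """Render O/R address attributes in the common '/key=value' notation."""
    order = ["G", "S", "O", "OU", "P", "A", "C"]  # given name, surname, org, ...
    return "".join(f"/{k}={attrs[k]}" for k in order if k in attrs)

def or_address_to_rfc822(attrs, gateway_domain):
    """Naive gateway mapping: encode personal-name attributes in the local part."""
    local = ".".join(attrs[k] for k in ("G", "S") if k in attrs)
    return f"{local}@{gateway_domain}"

attrs = {"G": "Steve", "S": "Kille", "O": "ISODE", "P": "isode", "C": "gb"}
print(or_address_to_string(attrs))         # /G=Steve/S=Kille/O=ISODE/P=isode/C=gb
print(or_address_to_rfc822(attrs, "isode.com"))  # Steve.Kille@isode.com
```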
Malamud: Is it fair to say that the vast majority of the uses of ISODE are X.400 and X.500? Are there many FTAM users, for example?
Kille: The majority of usage is X.400 and X.500. There are cases where the FTAM is used. I think particularly some of the European countries have found that the FTAM provides access into certain archives which they can't get at using FTP. But that's more an issue of the lower-level connectivity than the particular services offered by the FTAM.
Malamud: ISODE started out as an experiment. And basically the idea was "Let's take the upper layers of OSI, with all that rich functionality, and build it on the lower layers of TCP/IP." And so we've kind of ripped out the lower layers of OSI, and we've got a couple of applications on the top, but we've got this fairly complex, fairly large set of code that's in ISODE. Is all that overhead worth it? Is this an efficient way to be doing these applications of directories and messaging?
Kille: That's an interesting question. And I think that there are cases where the answer is yes and cases where the answer is no. I think particularly for the X.400, the performance overhead of the layers is not an issue. You're establishing a connection, and you can implement the code very straightforwardly and efficiently. And rather than argue about how we could tune it, how we could do something a little bit simpler that was doing the same job, it really isn't that significant an issue. When we have the standards and the specifications there, and there's an agreement, we might as well use them.
The X.500, I think there are some environments where it is fine. I think particularly for DSA/DSA interconnection where DSAs will very often maintain permanently-open connections, the overheads are acceptable. I think for DUA to DSA communication, particularly where DUAs are going to be running on small machines, or are going to be integrated into other applications to use the X.500 support, the overheads of the full stack X.500 are too high. And and I think the model which the consortium is promoting for that, and which is gaining increasing commercial acceptance is using a protocol called LDAP, the Lightweight Directory Access Protocol, which provides a much simpler protocol implementation but gives access to almost all of the X.500 services. And so you can build a simple user agent on a Mac or on a PC, and you can concentrate on providing sort of a good graphical and interaction, rather than building a protocol engine.
Similarly, if you want to integrate X.500 into an application so you can do some name lookup and some basic searches in the context of another application you can link in a relatively small LDAP library. And that will give you access into an X.500 DSA, and then beyond that the X.500 infrastructure will work in the wide area to give you the services that you need. And you have the consensus of the international standards for access outside of the research community.
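[Editor's aside: the split Kille describes, where a thin client asks simple questions of a server that holds the full directory, can be mocked in a few lines. Everything below is a toy: the entries, attribute names, and the in-process "server" are hypothetical, and real LDAP encodes its requests in BER over TCP rather than as function calls.]

```python
# A toy sketch of the LDAP idea: a lightweight client issues simple
# search requests, and the DSA-side machinery does the heavy lifting.

directory = {
    "cn=Steve Kille,o=ISODE Consortium,c=GB": {
        "cn": "Steve Kille", "mail": "S.Kille@isode.com", "title": "CEO",
    },
    "cn=Carl Malamud,o=Internet Multicasting Service,c=US": {
        "cn": "Carl Malamud", "mail": "carl@malamud.com",
    },
}

def ldap_search(base, attr, value):
    """Return (dn, entry) pairs under `base` whose `attr` equals `value`."""
    return [(dn, e) for dn, e in directory.items()
            if dn.endswith(base) and e.get(attr) == value]

hits = ldap_search("c=GB", "cn", "Steve Kille")
print(hits[0][0])  # the matching distinguished name
```

The point of the design is that the client never needs a full OSI stack or X.500 protocol engine; it just sends small, simple requests.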
Malamud: You talk a lot about X.500. And X.500, if you look at the international standards, is the directory. It's the global directory, starting with a root which is actually run at your former home, University College London, and it goes down from there to the rest of the world. Some people that look at resource discovery—Professor Michael Schwartz at the University of Colorado, for example—argue that it's not a scalable solution and X.500 just won't work in the long run. What do you think about the role of X.500? Is it going to be the directory?
Kille: I believe that X.500 and a framework based on X.500 will be the directory. There's an interesting distinction, though, between being the directory and being the only technique for doing resource discovery. I think Professor Schwartz is doing some very interesting work at Colorado. And I see it as very much a complement to X.500 rather than an alternative.
In many ways, if you're looking for things and you're trying to do resource discovery, you need to index and access things via a very wide and complex set of mechanisms. And there isn't going to be one right way or one technique to do that. You'll be using techniques—databases, indexing, and approaches which are going to be tailored to the problem that you're trying to solve—that are going to give access to databases or to information sources which are going to help you to solve your problems.
I see the X.500 infrastructure as something rather more low-level than that. But despite being lower-level, it's probably a more important part of the basic infrastructure for building network services. The reason why I think you'll want to have a single global directory is exactly the same reason that you want to have a single telephone service. I mean, if you imagined trying to find a telephone number via resource discovery, where you had to somehow know which telephone service somebody's using as a prerequisite to being able to call them, it would be quite hopeless. You basically need to have a scheme where there's effectively a single global number space which defines all the telephones in the world, so that you can pick up a telephone with a little bit of information on how you get the local access, but beyond that you can basically take somebody's local telephone number and you can call them.
And I think you need exactly the same basic naming framework for computer resources, so that a person or a computer can be labeled in a global framework. I mean, the Internet community does similar things with the DNS scheme. It's used in the research and other aspects of the Internet world for labeling resources, but I think that X.500, because of its positioning and because of the additional functionality, is going to be the correct directory service for the broader commercial world.
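[Editor's aside: the two global naming frameworks Kille compares here turn out to compose. A DNS name can be mapped mechanically into an X.500-style distinguished name using domain-component attributes, an approach later standardized in RFC 2247. A minimal illustration, using Kille's old department's domain as the example:]

```python
def domain_to_dn(domain):
    """Map a DNS name into an X.500-style distinguished name using
    domain-component (dc=) attributes. Most-specific label comes first,
    mirroring how a DN is read from leaf toward the root."""
    return ",".join(f"dc={label}" for label in domain.split("."))

print(domain_to_dn("cs.ucl.ac.uk"))  # dc=cs,dc=ucl,dc=ac,dc=uk
```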
Malamud: Will X.500 replace the Domain Name System?
Kille: I believe in the long term, yes, it will. I think that there will be a long coexistence, and I think that there are certain aspects of the DNS, particularly in the short term and particularly without the lightweight access protocols, where it would not be appropriate to use X.500 as a plug replacement. But I think in the long term, as the size of the world grows and you try to encompass a large number of people, the framework and the political acceptance of X.500 will cause it to succeed. I think you can think of the DNS as sort of analogous to a company phone system, which has got nice short numbers because the company is small and it's convenient to use. But in the long term you're going to move to a nationally-defined telephone service.
Malamud: How long’s it gonna take to get that global infrastructure in place, where we can actually use X.500 as a way of finding names, or public keys, or things of that sort? Is this a five-year rollout?
Kille: It's probably longer than that. But…it isn't a technology problem. On the technology side, we understand how to build directory servers today. I think it will be exactly the same sorts of problems as building the initial telephone system, and in many ways much worse: getting the collaborations and connecting things together. There are a lot of very serious problems with building a directory service, and one is the issue of multiple service providers—it isn't possible for the directory to be run by one organization. You've got to allow competing organizations to share in the provision of those services. And so they have to be competitive, in the sense that they can compete with each other and bring in different customers, but there also has to be a measure of collaboration so that overall, as with the telephone service, you have a sort of single number space which can be referred to.
Then there's perhaps a much stronger problem: you have to define naming structures, and you have to have registration services where organizations can get names. That will either be through new allocation mechanisms or, as is being proposed by the North American Directory Forum, by basically leveraging off a civil infrastructure to get name allocation from names that have been assigned already. And then you have to plug in: you have to take data which is in whatever existing sources and coerce it into this information framework that's provided by X.500. And the information framework is one of the strengths and one of the weaknesses of X.500. In fact it's going to be a problem that's going to occur with any reasonably rigidly-defined directory service. The strength is that when you have it in place, a user of the directory service has a very clear model of the data that's in there, so it can provide quality, uniform, and very sophisticated access, because the information is typed: it can access different types of information, it can present them in different ways.
But the flip side of that is you have to take your existing information and coerce it into this format. And as I've said, real information is very much a mess. It's not neatly laid out and structured. It's all over the place. And to take it and to bring it into this format is initially a lot of work. And then it has to be there in a way that it will be maintained and updated. It's not going to work as a sort of ad hoc effort, where somebody does it one evening and it stays there. It's something that's got to become a part of the process of operating the directory services and operating the business, so that when you make changes—somebody arrives on the staff or somebody moves offices—as a part of the management procedure the information gets changed and the directory gets changed as a consequence of that.
Malamud: Well, what are the chances of that actually happening on a consistent basis? Of people really maintaining their data, and of this global set of cooperating directories actually being put in place? It’s obviously a desirable goal, but is it something we can realize?
Kille: You will realize it, because as the usage of it grows, and as more data gets put into the directory, the benefits of maintaining it and keeping it up to date become very much stronger. And you've seen this on a smaller scale at the university where I was. When I arrived in the department initially, the mail system was very much an ad hoc approach, and there were some people who were in there and some people who weren't. And the data was basically maintained by somebody independent of the department. And over that period we've moved to a much more systematic approach. When somebody arrives they get put into the database, and that as a consequence creates an account for them and creates their entry in the mail system. So it basically becomes a part of the way the system operates. And that's important, because the mail infrastructure is a part of the way people do business. You can't actually work effectively in the department without having mail.
Malamud: But you also had an institution and people that were willing to go through that process. Is it fair to think that other institutions will be similarly conscientious in keeping their data up to speed in the database?
Kille: I think… The analogy, and the important one, is that it happens when there is sufficient benefit to be gained from it. If the only reason that you try to keep the data up to date is to maintain the X.500 directory, it's not going to happen. That in itself is not a goal. But when it becomes part of the way of doing business; when you find that the directory infrastructure is being used for routing and delivering mail, which rely on it; when it's used for the faxes; when it's part of the management infrastructure; then it becomes an essential part of the business operation that anybody in the department is correctly registered in the directory. Because it's not just an extra lookup of people in the department, it's part of the way that the department, the organization, does business. And at that stage it's not that people want to do it, it just becomes part of the way of operation. And that's the way that the directory will operate effectively.
Malamud: I’ve heard two theories. One is that the reason X.400 isn’t more widely deployed is because we’re waiting on X.500 to be there as a support basis. And what I just heard you say is that X.500 will be more widely deployed when X.400 is used more. Is that a bit of a circular argument there?
Kille: I don't think that I ever introduced X.400 in there. I said mail services and infrastructure. I used mail as an example of the reason why this sort of technology is going to become important and to permeate. But the real driving thing behind the X.500 is not mail per se but things that need a directory infrastructure in order to operate. And that's going to be an increasing number of office applications.
Malamud: Public key cryptography, for example, is based on certificates being stored in some form of a directory. Is that an example of a basic service that needs X.500?
Kille: That’s a very good example, yes.
Malamud: And do you think maybe public key cryptography and Privacy-Enhanced Mail will be the thing that will spark X.500 usage? Because obviously certificates do you no good unless you have someplace to get them from.
Kille: I think that Privacy-Enhanced Mail will not be deployable until X.500 has been deployed. The mechanisms they've defined at the moment use all the X.509 certificate infrastructure and the way that X.500 says you do security, but they shy away from using the full X.500 directory. That means that you can do the mechanisms of PEM without X.500. But in fact to deploy it, if you want to find someone's certificate in a systematic way, I think there are going to be a lot of problems. And so, you know, it's a good example of an application that would be facilitated by the presence of an X.500 directory.
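[Editor's aside: the deployment problem Kille points at is concrete. To send PEM-protected mail you must first obtain the recipient's certificate, and with an X.500 directory the certificate is just another attribute of the recipient's entry. The sketch below shows that lookup step with a toy in-memory directory; the entry, DN, and placeholder certificate bytes are hypothetical.]

```python
# Why PEM leans on a directory: finding a recipient's certificate
# systematically, rather than exchanging it out of band.

entries = {
    "cn=Steve Kille,o=ISODE Consortium,c=GB": {
        "mail": "S.Kille@isode.com",
        "userCertificate": b"<DER-encoded X.509 certificate would go here>",
    },
}

def fetch_certificate(dn):
    """Look up the userCertificate attribute of a directory entry."""
    entry = entries.get(dn)
    if entry is None or "userCertificate" not in entry:
        raise LookupError(f"no certificate published for {dn}")
    return entry["userCertificate"]

cert = fetch_certificate("cn=Steve Kille,o=ISODE Consortium,c=GB")
```

Without a shared directory, every sender needs some ad hoc channel to acquire that certificate, which is exactly the systematic-lookup gap Kille describes.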
I guess I don't believe, unlike some people, that there is some single cause that's going to mean that X.500 will be deployed. The real benefit of X.500 is that it's a very general and flexible directory service, and it can be used to solve a broad range of problems. And it's going to be the sum rather than the individual applications that really is the strength in the long run. It's been commented that if you just want to do white pages lookup, you wouldn't use X.500, because you could do something targeted at that particular problem much, much simpler. But if you build an infrastructure with X.500, you solve this, and then suddenly you have a framework whereby you can solve an increasing range of different problems.
And at the OSI DS working group on Monday this week, we were looking at the use of X.500 for solving a whole range of registration and management problems. And I think if you had something which had been targeted at white pages, you wouldn't be able to talk about that, simply because you wouldn't have a framework that was well-enough defined and extensible to be able to go down that path. And I think as we start to build X.500 infrastructure, we're going to be able to leverage off a whole range of additional services.
Malamud: You carefully separated X.500 from X.400. And you very carefully talked about the global directory. Are we gonna have the global messaging system based on X.400, or are we gonna see multiple messaging protocols, connected with gateways?
Kille: I think that we’re going to see both. Gateways—
Malamud: Global and local.
Kille: Gateways are a real fact of messaging life. And… As I've seen messaging evolve, people run wide ranges of different protocols. And the only thing I can see that's going to militate against that is that whenever you run a gateway you get problems. You get management issues with the gateway. You get loss of functionality across the gateway. And so there is a lot of operational benefit from working in a homogeneous rather than a heterogeneous environment. Although the realities of doing that are very difficult at the moment.
I think that X.400 is definitely going to become the global messaging backbone. I think the reason for that is that it's politically positioned right. It is a rich service, and by offering a backbone service which has got rich features, you're going to allow anything else to connect onto it effectively. It has an addressing structure which, although awkward, deals quite effectively with the problems of multiple service providers. And in a commercial world, we're going to move away from a model where you basically have a network infrastructure and the mail just runs over it without any real mail service support. That model works fine for research organizations and for people that are technically aware; I think it works much less well for organizations which are not so technically aware. I think that the benefits of doing your remote interconnect through a mail service provider, and having somebody looking after your mail services rather than just operating SMTP over the network, are quite substantial. And for that, you have to have mail service providers and you have to have competition and multiple service providers.
Malamud: There have been extensions to the SMTP base. There’s an SMTP extension for binary transfer. There’s the MIME definition of different body parts and different character sets. Doesn’t that address many of the issues of the feature-rich aspect of X.400?
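[Editor's aside: the MIME extensions Malamud mentions let an RFC 822 message carry typed body parts and non-ASCII character sets. For a modern reader, Python's standard `email` library can construct such a multipart message in a few lines; the addresses below are hypothetical.]

```python
# Building a MIME message with a text part and a binary attachment.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "carl@ora.com"
msg["To"] = "S.Kille@isode.com"
msg["Subject"] = "Geek of the Week"
msg.set_content("Plain-text part of the message.")        # text/plain part
msg.add_attachment(b"\x00\x01\x02", maintype="application",
                   subtype="octet-stream", filename="clip.bin")

# Adding the attachment promotes the message to multipart/mixed.
print(msg.get_content_type())  # multipart/mixed
```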
Kille: In terms of provision of local service, absolutely it does. And I think somebody who goes and says you should use X.400 because it has feature X is onto a loser, because if you're operating some other protocol, be it Microsoft Mail, be it SMTP, you could go and add that feature. And because you have a much tighter environment, you could probably do it in a way that's cleaner and simpler than the way it's done with X.400.
The strength of X.400 is that it is feature-rich, and that means that whatever you're doing, there's a way you can gateway it through. So you'll be able to make use of the backbone. If you had a low-feature backbone, you would not be able to do that.
And the second is the political positioning of it. It is a standard that is recognized, and will give a framework that the operators and the international service providers will recognize as providing the backbone. And I'm just not convinced that in an international framework organizations are really going to want to recognize SMTP as the way of working.
Malamud: Why is that? The standards coming out of the IETF—why are those standards not being recognized politically as being appropriate or fair, or somehow international and endorsed?
Kille: It will depend where you sit. I think if you are a government organization in Europe, it's going to come down to issues of change control, who the organizations are, what they're representing. And that then reflects onto other organizations. I mean, if I was a large chemical company in Europe, I'm not in the business of doing research or working with the research community; I want commercial service provision, and I want to look to the standards that are going to be adopted by the commercial service providers. And that's why—
Malamud: But doesn’t that mean we end up with a compromise standard. Instead of getting the best we can, we’re trying to bring things down to a lowest common denominator?
Kille: I think there is definitely a measure of that. In any standards process, be it an IETF one or an ISO one, there is always compromise. There is a need, within whatever your constituency is, to achieve a compromise between the participants. Research community standards have exactly the same problem. It's just that their constituency is usually somewhat smaller, so the measure of compromise is not as large as usually has to be gone through for international standards.
So, I don't think it's an issue of perfection. No standards are perfect. They're going to be the result of compromise, and it's a question more of adequacy. Has the standard got fundamental technical reasons why it's unacceptable? Or is it something which is basically there and able to do the job? And I think both X.400 and X.500 amply satisfy the criterion of meeting what's needed to do the job. Sure, you could go and specify something that would do the job better. You could do improvements on it. You could change details here, clean out some of these features, to make something that was more focused and would target better onto the problems you're solving. But you wouldn't come in and say there are such fundamental deficiencies here that we really can't use this stuff. There are very clear demonstrations that both X.400 and X.500 are perfectly viable and manageable and usable solutions.
Malamud: So by accepting those compromises, you’re seeing the ability to scale this out to a bigger network.
Kille: Exactly.
Malamud: You see a tradeoff there between the two.
Kille: Exactly. And over the last year I'm seeing a lot of movement in commercial X.400. I think a lot of firms are talking very seriously about X.400 solutions. They find that it's things such as cc:Mail and the PC solutions, rather than the Internet-based solutions, that are at the moment the most serious commercial competition to X.400.
The Internet has, by and large, much less penetration into organizations. They see the political need for X.400, and we're starting to see products that are of acceptable quality in terms of deployment within organizations. Most organizations moving down that path are accepting that the X.400 and X.500 products are not yet of the quality that they are seeing from their PC vendors, but they see the benefits of strategically setting that path and then pushing on their suppliers to provide the X.400 and the X.500 solutions.
Malamud: Is the Internet going to be the underlying infrastructure that this X.400 backbone runs on, or are we going to see another network of networks form?
Kille: I think that there will be a range of underlying technologies. But it's my belief that Internet TCP/IP is going to be the dominant interconnect technology. I don't think it's going to be dominant in the way that the Internet researchers have wanted and hoped, where the picture is this single connected IP Internet with everybody in the world linked onto it. I think that it just isn't like that, and most commercial organizations have far too much paranoia to allow something at the packet level to penetrate into the organization. So we're going to see commercial organizations running TCP/IP networks but without external connectivity. The TCP/IP is an internal solution, with applications that will gateway to the external world. So internally you would have X.400 and X.500 operated over TCP/IP, and then connected out over public providers.
Malamud: So you agree with Dave Clark's vision that without adequate security and other improvements in the Internet we end up with a world of application-level gateways, and you see those being X.400-based?
Kille: I see those as being X.400-based, yes.
Malamud: Okay.
Kille: I also think that X.500 will, for some organizations at least, be an acceptable entry point, because they will use it basically as a means of presenting their vision of the world. It won't be lists of all their employees, but it will be the people they would want to publish in a telephone book, that they would allow people outside the organization to see, basically as a means of facilitating communication with the organization, not as a means of publishing the entire structure and all the people in the organization.
I think the security thing at the network level is not a matter of detail. It's actually much more fundamental. Something as low-level as a packet or an arbitrary connection penetrating from somewhere outside the organization onto some workstation, without the boundary of the organization having very tight control over what is going on, is just unacceptable. But mail is a much more constrained thing. It's basically like a packet arriving at the door of the organization, and the organization accepts the contents of the packet. There's no complex interaction or penetration of facilities. So that really is a much more acceptable means of operation for most organizations.
Malamud: Thank you very much. We’ve been talking to Steve Kille, and this has been Geek of the Week.
Malamud: This has been Geek of the Week, brought to you by Sun Microsystems, and by O’Reilly & Associates. To purchase an audio cassette or audio CD of this program, send electronic mail to radio@ora.com.
Executive producer for Geek of the Week is Martin Lucas. Our system administrators are Curtis Generous and Rick Dunbar. Our production manager is James Roland.
Geek of the Week is made possible through the generous support of our sponsors, including Sun Microsystems and O’Reilly & Associates. Network connectivity for Geek of the Week is furnished by UUNET Technologies. This is Carl Malamud for the Internet Multicasting Service.
Internet Talk Radio, flame of the Internet.