Items

Chronicle and slashdot on LOCKSS

slashdot is running this story on the Highwire Press-led LOCKSS project for distributed long-term journal storage. The /. blurb references the months-old Chronicle piece on the topic. The LOCKSS page hasn't changed in a while... hope we see some source code and reports soon.

LITA OSSIG@ALA meeting notes

as seen at the osdls site: "The Open Source Systems for Libraries Interest Group of LITA held its initial business meeting at ALA Midwinter in San Antonio, Texas, on Jan. 15 from 5:30 PM to 6:30 PM. 24 people were in attendance..." More info, including how to join the OSSIG listserv, is available here. Great turnout, but I guess Jeremy didn't bring his camera this time... :(

now playing: www.oss4lib.org

It's true; click on over and check out www.oss4lib.org. Everything should be working ok but let me know if anything seems broken.

Ahhh... :)

your ideas sought for oss4lib articles

There is now a place for you to post articles, whitepapers, opinion pieces, etc. related to oss4lib and Open Source projects for libraries. Anyone can post comments on your article, too. Click -articles- above, check it out.

docster: instant document delivery

docster: instant document delivery

(c) April 2000 by Daniel Chudnov
You may reproduce this article in any format and for any purpose, but only in its entirety, including this statement.
Background: the attack of napster

Have you seen napster yet? If not, take a look. Napster is two things: one part distributed filesystem and one part global music copying tool. It works incredibly efficiently and is very easy for users.

A typical session with napster might go like this:

  1. sit at a fairly new computer with a decent net connection
  2. think of a song
  3. install a napster client (easy to find and do)
  4. connect and search for your song
  5. download your song (probably a 3-5MB .mp3 file)
  6. install an mp3 player (easy to find and do)
  7. play your song

That's it. Do not go to the record store. Do not respond to the monthly selection. Look for what you want to hear, click, download, listen. And everybody's doing it. So many people are using napster, in fact, that several college campus network administrators are cutting out all napster traffic because the traffic is flooding their internet pipes.

Why is napster so successful? Because it's simple. Behind the scenes, it works simply, too. You have mp3 files on your machine, and your machine is on the net. When you connect (usually by just starting your client application), the napster server learns what files are on your machine (if you tell it where to look). And the napster server knows what files are on the machines of the other two or three thousand people logged in at the same time. There's a song you want to hear? Search the napster server... it knows who has it, and your client fetches some other bloke's copy of that song straight from his machine.
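To make the mechanics concrete, here is a toy sketch in Python of the central-index idea (all names invented; the real napster protocol is a binary wire protocol and far more involved): the server only maps song titles to the users holding copies, and the transfer itself happens peer to peer.

    # Toy sketch of napster's central-index model; names are invented.
    class IndexServer:
        def __init__(self):
            self.index = {}  # song title -> set of usernames holding a copy

        def connect(self, user, shared_titles):
            # On login, the client reports which files it will share.
            for title in shared_titles:
                self.index.setdefault(title, set()).add(user)

        def disconnect(self, user):
            for holders in self.index.values():
                holders.discard(user)

        def search(self, query):
            # Brute-force substring matching -- no stop words, no record
            # editing, just like the (non-)indexing described below.
            q = query.lower()
            return {t: h for t, h in self.index.items() if q in t.lower()}

    # The server never touches a file; after searching, the requesting
    # client contacts a listed peer directly and copies the file from it.
    server = IndexServer()
    server.connect("hongkongfooey", ["La Vida Loca.mp3"])
    print(server.search("vida"))  # {'La Vida Loca.mp3': {'hongkongfooey'}}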

When I connect to napster (usually late at night on weekends; the university I'm at doesn't allow napster traffic during business hours, a reasonable restriction), there are normally about 2,500 users logged in and over 600,000 songs (files) available. Probably 80% of these files are duplicated at least once, 20% of them probably account for 80% of the traffic, and so on, but I've found some fairly obscure, groovy stuff. The key thing is that if I can think of a song, I can probably find it, even though napster applies no organizational scheme to its catalog of connected songs.

What of it?

The questions any librarian would ask at this point are obvious: "what about copyright?" and "doesn't it need to be organized?" The simple answer to question one is that while the napster folks state that they seek "to comply with applicable laws and regulations governing copyright," napster is widely used for making copies of songs in blatant violation of copyright. I know... I've done it. This doesn't seem to stop thousands of folks from using it; evidently folks aren't losing sleep over it. Napster is being sued, but the service hasn't been slowed at all yet. To put it simply, we all know it's wrong, but this is still too new for many people to dismiss as morally corrupt, so plenty of users remain.

(Note (2000-4-14): today the news broke about Metallica suing napster and my employer. Maybe it's time to post that Kirk Hammett pick I got at that 1989-7-4 Pine Knob show on ebay. :)

As for organization, it doesn't seem to matter. Nobody's organized a bit of it. Catalogers should turn red when they see how poor (read: absent) the indexing is: 100% brute force, with no stop-word removal and no record editing to speak of. Applying a few simple techniques to this problem would make searching for songs more reliable, but not really any easier, because people are already mostly finding what they want to find, and that's adequate for most. If you don't believe this, ask your nearest university network administrator.

And napster isn't just about music. As is well stated in this cnet article (and in this week's Nature, 13 April 2000, "Music software to come to genome aid?" by Declan Butler), this is a groundbreaking model of information delivery. It changes things. It's a killer app if ever there was one. The napster model shows that it's simple to share movies, music, anything that can live in a file on a connected box. All you need is a simple protocol and some clients that can speak that protocol. And fast net connections and big, cheap hard drives aren't going away anytime soon.

Some might wonder how napster is different from what the web already provides. The difference might seem minor, but it cuts backwards through everything librarians know about giving people access to information. While anybody can put anything up on the web and share it with friends, few people can provide the necessary overhead. Even if you can run a web server, for instance, a certain amount of centralized description or searching (directory sites, search engines, etc.) is still necessary before anyone can find your files.

With napster, however, you only have to be connected and willing to share your files. Napster does the rest by keeping track of what you've got so others can find it. You don't need to do anything to let thousands of other people copy files from your machine via napster.

So put aside security and copyright and organization concerns for a moment and consider... does this remind you of anything? Hmmm... I've got a song/movie/file that I've enjoyed and other people might like it too. Maybe other people have things I would enjoy. I wouldn't mind letting other people have my song/movie/file if I could also use theirs in return. This kind of cooperation could work. But how can I be sure that such cooperation would continue? And how would we organize it all?

Ever hear of a lending library?

Paper shall set you free

Funny how napster doesn't care about Dublin Core or MARC. It doesn't need a circulation module. It doesn't even matter what kind of computer you have, as long as you have a working client and decent bandwidth. Think of the implications of applying this model in our libraries. With all the advances in standardization of e-print archives and such (see the Open Archives initiative), we already have high hopes about the future of online publishing. With that solved, maybe the napster model could help us deal with our favorite legacy format: bound journals.

Have you ever worked in or near a busy InterLibrary Loan office? Do you know that sometimes we libraries photocopy and use Ariel to send the same document ten times in a month for patrons in other libraries? It seems terribly wrong that we've got to do this work over and over when we could just keep a copy on a hard drive, but we know well that the legal precedent today prevents us from creating such centralized storage. Heck, often we can't even fill an ILL request out of our already-digital ejournal collections because we sign restrictive licenses. So we have to go back to the stacks and photocopy and scan it through again instead of clickclickclicking to a lovely pdf for which we've paid so heftily.

But looking at napster, there's a key thing to consider about how its file sharing model might apply to document delivery: in napster, there is no centralized storage. In napster clients (gnapster, at least), search results are listings of other users' copies of the song you want. You click to download. In the background, napster echoes to its status line "requesting La Vida Loca from user hongkongfooey", and sometimes hongkongfooey says no... which is okay, too, because you can probably ask somebody else for it. When somebody (more precisely, their napster client, which doesn't wait for human approval if the right options are set in their preferences) okays a file transfer, they are giving you a copy, not napster.
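That approval step is purely client-side. Here is a minimal sketch of the decision; the preference names are invented for illustration:

    # Sketch of a client deciding whether to okay a transfer request.
    # The preference names ("auto_share", "max_uploads") are invented.
    def approve_transfer(filename, shared_files, prefs, active_uploads):
        if filename not in shared_files:
            return False  # we never offered this file
        if active_uploads >= prefs.get("max_uploads", 3):
            return False  # busy; the requestor can ask another peer
        return prefs.get("auto_share", True)  # no human in the loop if set

    print(approve_transfer("La Vida Loca.mp3", {"La Vida Loca.mp3"},
                           {"auto_share": True, "max_uploads": 3}, 1))  # True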

In walks docster

Imagine all the researchers you know using a new bibliographic management tool that combines file storage with a napster-like communications protocol -- docster. Instead of just citations, docster also stores the files themselves and retains a connection between the citation metadata and each corresponding file. Somewhere in the ether is a docster server to which those researchers connect. They're reading one of their articles, and they find a new reference they want to pull up. What to do? Just query docster for it. Docster figures out who else among those connected has a copy of that article and, if it's found, requests and saves a copy for our friendly researcher.
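A minimal sketch of what a docster client's store might hold, the point being only that citation metadata and file stay attached to one another (field names are invented):

    # Sketch of a docster record: citation metadata kept attached to the
    # corresponding file. Field names are invented for illustration.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DocsterRecord:
        title: str
        authors: List[str]
        journal: str
        year: int
        path: str  # where the local copy of the article lives

    store = [
        DocsterRecord("Molecular Structure of Nucleic Acids",
                      ["Watson JD", "Crick FHC"], "Nature", 1953,
                      "~/docster/watson-crick-1953.pdf"),
    ]

    def local_find(fragment):
        # Check local records first; a real client would forward misses
        # to its docster server as a query.
        f = fragment.lower()
        return [r for r in store if f in r.title.lower()]

    print(local_find("nucleic"))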

Of course, we cannot do this. Libraries depend too much on copyright to attack the system so directly. But what if we focused instead on altering the napster model enough to make it explicitly copyright-compliant? After all, many cases of one researcher giving another a copy of an article are a fair use of that article. Fair use provides us with this possibility and it's not a giant leap to argue that perhaps coordinated copying through such a centralized server could constitute fair use, especially if docster didn't compete with commercial interests.

Well, it's still a big leap, but think of the benefits. Say there's an article from 1973 that's suddenly all the rage. It doesn't exist online yet, so a patron request comes to you from some other library, and you've got the journal, so you fill the request. But forty-eight other researchers want that article too. If that first patron uses docster, any of those other folks also using docster can just grab the file from the first requestor. If others don't use docster, they can request a copy from their local libraries, who -- I hope -- do use docster. Nobody has to go scan that article again, and suddenly there is redundant digital storage (see also LOCKSS).

Let the librarians librarize

Still, though, we're not doing enough to enforce copyright. Currently, a research library filling tens of thousands of document requests for its patrons per year makes copyright payments to the CCC when its fills demand. This system keeps publishers happy but keeps librarians chasing our collective tails. And even though systems like EFTS have automated the copyright payment transfers themselves, we're still continuing our massively parallel, redundant (wasteful) copying and scanning efforts.

But because we're so good at making sure we make payments, we could leverage that structure within docster. We could federate the docster servers at the institutional (or consortial) level. For the several hundred or thousand researchers in a given field with a departmental library at a big institution, their docster requests go through their library. The requests, that is, not the files. It might look like this:

  1. Researcher A at institution X needs an article
  2. his docster client is configured to use the X University Library docster server
  3. X University Library docster server searches for his article at friendly other University docster servers
  4. Y University Library knows that local Researcher B has it
  5. Y University Library claims the request
  6. Y University Library tells Researcher B's docster client to send a copy to Researcher A
  7. Researcher A reads the article, applies it to his research, wins the Nobel, donates winnings to X University.
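A rough sketch of that routing, under the assumptions above (all names invented, with EFTS clearance reduced to a print statement): requests pass between library servers at the institutional level, each server keeps its own patron's identity to itself, and the file goes peer to peer, never through either library.

    # Sketch of federated docster request routing; all names are invented.
    class LibraryServer:
        def __init__(self, name):
            self.name = name
            self.holdings = {}  # article id -> local researcher with a copy
            self.peers = []     # friendly other library docster servers

        def register(self, researcher, article_id):
            self.holdings[article_id] = researcher

        def request(self, researcher, article_id):
            # Opaque delivery handle: Researcher A's name never leaves X.
            handle = f"drop-{abs(hash(researcher)) % 10000}@{self.name}"
            for peer in self.peers:
                if peer.claim(article_id, deliver_to=handle):
                    # Pass payment through for clearance (e.g. EFTS), then
                    # charge the researcher's grant number.
                    print(f"{self.name}: clearing payment for {article_id}")
                    return True
            return False  # fall back to traditional ILL channels

        def claim(self, article_id, deliver_to):
            holder = self.holdings.get(article_id)
            if holder is None:
                return False
            # Tell the local holder's client to send a copy directly;
            # Researcher B's identity never leaves Y.
            print(f"{self.name}: asking {holder}'s client to send "
                  f"{article_id} to {deliver_to}")
            return True

    x = LibraryServer("X University Library")
    y = LibraryServer("Y University Library")
    x.peers = [y]
    y.register("Researcher B", "jrnl-1973-0042")
    x.request("Researcher A", "jrnl-1973-0042")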

In this transaction (which might take only a few seconds for queries and download time) both libraries know about the article being requested, but X Library can keep A's identity private. Likewise Y Library can keep B's identity private. Thus the transaction might consist of identification at the institutional level, ensuring the privacy of both parties. But if a copyright payment needs to be made, X Library can pass that through to EFTS for clearance and then charge Researcher A's grant number (assuming, of course, that Researcher A knowingly signed up for the service). Y Library didn't have to pull anything from the stacks, and Researcher B might have been cooking dinner through the whole thing. Neither library ever stored or transmitted a copy directly; rather they only determined who had a copy (Researcher B), and had a copy sent to the requestor (Researcher A).

And the paper publisher gets paid. Everybody's happy.

Concrete steps and benefits

The necessary infrastructure for making this work is mostly in place. There are variants of the napster protocol under development (see the cnet piece now if you didn't read it before ;), and none would require significant modification to support this model. Some sort of federation protocol would need to be established, but it wouldn't be any more complicated than the routing cell structure (target library priority lists) implemented for Docline. Docster client software would need to be integrated with bibliographic metadata, but that's an easy hack too.

To address security concerns, it might be necessary to carefully define the protocol so that it would not compromise any individual user's machine. Additionally some sort of basic certification authority might need to be used to verify the identities of source and target institutions. While these are not trivial tasks, there are well-known approaches to each.

Think of the time we would save, and the speed at which articles would move around. For any article that had ever been filled into the docster environment (and that still lives on a connected machine), there would be no more placing a request, verifying the cite, placing the order, claiming the order, pulling from stacks, copying, Arieling, Prosperoing, emailing, etc., not to mention all the logging and tracking we rekey when moving requests from one system to the next if we don't have an ILL automation system (and even if we do). Your happy researcher would just type in a search and -- hopefully -- download what she needs. If not, you or your neighborly peer library make the copy and send it. Once. And nobody else ever has to again.

Indeed it is easy to imagine building some local search functions into docster clients to avoid even making a request whenever possible. A local fulltext holdings database might be queried first (through the help of something like jake), then a local OPAC for print holdings. If these steps fail, a request could be automatically broadcast, and any request that bounces due to bad data or lacking files could be corrected, rebroadcast, or sent through existing systems like OCLC, RLIN, or Docline, and then Ariel'd back and delivered through the local docster server. The next such request would hit the now docster-available file (if the first requestor keeps his machine online).
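As a sketch, that lookup chain might read as follows; jake_lookup, opac_lookup, and broadcast_request are hypothetical stand-ins for the services named above:

    # Hypothetical fallback chain for a docster client. Each helper is a
    # stub standing in for a real service (jake, the OPAC, the network).
    def jake_lookup(citation):
        return None  # stand-in: no licensed online copy found

    def opac_lookup(citation):
        return None  # stand-in: no local print holdings found

    def broadcast_request(citation):
        return f"broadcast request for {citation!r} to the docster network"

    def fetch_article(citation):
        url = jake_lookup(citation)        # 1. local fulltext holdings first
        if url:
            return f"download from {url}"
        call_no = opac_lookup(citation)    # 2. then local print holdings
        if call_no:
            return f"request in-house copy at {call_no}"
        # 3. otherwise broadcast; bounced requests get corrected and resent,
        #    or routed through OCLC, RLIN, or Docline as before.
        return broadcast_request(citation)

    print(fetch_article("Smith 1973, J Obscure Chem 12:345"))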

Why wouldn't researchers create workarounds and bypass the libraries altogether? Well, technically, they already can use napster-like services for this, and obviously there's nothing to stop them from doing so. But libraries would play several vital roles in this equation: first, our patrons trust us with private information because we've safeguarded their privacy carefully and reliably for years; second, libraries can provide many value-added services such as integrating docster searches with additional functions as described above; third, the clientele we serve (at least where I work, a medical library) includes researchers working on deadlines and clinical staff saving patients' lives. Many of these patrons are fortunate enough to not have to care about costs; they wouldn't think twice about paying a reasonable charge for immediate, accurate delivery of crucial information.

Beyond the fact that building EFTS payments into this model would help our accounting procedures become even more automated, consider the copyright question one more time. As docster grows, more and more articles would be fed into the system. Some of those articles will be old. Some will even be old enough to qualify as public domain. As each year passes, a slew of articles will pass into such exalted status. For these public domain articles, nobody has to do any accounting at all.

Put this together with today's increasingly online publishing, and a window begins to close -- the window between today's e-prints (which will increasingly follow the open archives specifications and be accordingly easy to access) and yesterday's older print archives (which will increasingly be public domain). In between is a growing pool of documents available through docster, instantly accessible in complete compliance with (read: payment for) copyright.

It certainly wouldn't take very long to construct and conduct a limited trial. If we approach docster from day one as a good faith effort to comply with copyright while creating efficiencies, we might be challenged by publishers, but we'll at least have a good case that we're not taking any money away. And best of all, we'd certainly have our patrons on our side. In all likelihood, publishers' revenues would increase significantly: any librarian will tell you that the minute access gets easier, more people want access.

Think it through. I'll say it again: if you still don't believe this can happen, ask your nearest network administrator about napster traffic.

Acknowledgements

I am very grateful to KB, AB, MW, RKM, RMS, MG, AO, and KP for their insight and feedback on early drafts, and in particular to JS for allowing me to test this idea out in public and on an unsuspecting crowd. -dc

Open Source Library Systems: Getting Started

Open Source Library Systems: Getting Started

(c) 1999 by Dan Chudnov

note: a significantly edited version of this article appeared in the August 1, 1999 issue of Library Journal. You might prefer their edited version to this version. You are free to reproduce the text of this version for any purpose and in any format, provided that you reproduce it in its entirety (including this notice) and refer to the url from which it is available: http://oss4lib.org/readings/oss4lib-getting-started.php

Introduction

The biggest news in the software industry in recent months is open source. Every week in the technology news we can read about IBM or Oracle or Netscape or Corel announcing plans to release flagship products as open source or a version of these products that runs on an open source operating system such as Linux. In its defense against the Department of Justice, Microsoft has pointed to Linux and its growing market share as evidence that Microsoft cannot exert unfair monopoly power over the software industry. Dozens of new open source products along with regular news of upgrades, bug fixes, and innovative new features for these products are announced every day at web sites followed by thousands.

The vibe these related events and activities send out is one of fundamental change in the software industry, change that alters the rules of how to make software--and how to make money selling software. What is all the noise about, and what does it mean for libraries?

Open Source: What it is and Why it Works

If you've ever used the internet, you've used open source software. Many of the servers and applications running on machines throughout the wired world rely on software created using the open source process. Examples of such software are Apache, the most widely used web server in the world, and sendmail, "the backbone of the Internet's email server hardware." [TOR] Open source means several things:

  • Open source software is typically created and maintained by developers crossing institutional and national boundaries, collaborating by using internet-based communications and development tools;
  • Products are typically a certain kind of "free", often through a license that specifies that applications and source code (the programming instructions written to create the applications) are free to use, modify, and redistribute as long as all uses, modifications, and redistributions are similarly licensed; [GPL]
  • Successful applications tend to be developed more quickly and with better responsiveness to the needs of users who can readily use and evaluate open source applications because they are free;
  • Quality, not profit, drives open source developers who take personal pride in seeing their working solutions adopted;
  • Intellectual property rights to open source software belong to everyone who helps build it or simply uses it, not just the vendor or institution who created or sold the software.

More succinctly, from the definition at www.opensource.org:

"Open source promotes software reliability and quality by supporting independent peer review and rapid evolution of source code. To be certified as open source, the license of a program must guarantee the right to read, redistribute, modify, and use it freely." [OSS]

Software peer review is much like the peer review process in research. Peer review bestows a degree of validity upon the quality of research, and publications with a high "trust factor" contribute the ideas in their published works to the knowledge base of the entire communities they serve.

It is the same for software. As described in the seminal open source work, "The Cathedral and the Bazaar" by Eric Raymond, author of the popular email program fetchmail, the debugging process can move faster when more individuals have both access to code and an environment in which constructive criticism is roundly welcomed. [ER] This leads to extremely rapid improvements in software and a growing sense of community ownership of an open source application. The feeling of community ownership strengthens over time because each new participant in the evolution of a particular application--as a programmer, tester, or user--adds their own sense of ownership to the growing community pool; they are truly owners of the software. This community effect seems similar to the network effect seen across the internet, whereby each additional internet user adds value for all the other users (simply because each new user means there are more people with whom everyone else might communicate). For open source products which grow to be viable alternatives to closed-source vendor offerings, this growing community ownership begins to exert pressure on the vendors to join in. [NYT]

This tendency shares a striking similarity to the economic value of libraries. A library gives any individual member of the community it serves access to a far richer range of materials than that individual might gather alone. At an extremely low marginal cost to each citizen, expensive reference works, new hardcover texts, old journals, historical documents, and even meeting rooms might be available through a local library. The library building, its collections, and its staff are infrastructure. This infrastructure serves as a kind of community monopoly in a local market for the provision of information. Instead of reaping monopoly profits for financial gain, however, a library returns the benefits of its monopoly to individual users. The costs of maintaining this monopoly are borne by the very community which holds it. To the extent that this model works in a given community, a library is a natural yet amenable monopolistic force. If this sounds mistaken, consider whether your community's libraries compete or cooperate.

Library Software Today

No software is perfect. Office suites and image editors are pretty good; missile defense systems are, for all we know, appropriately effective; search engines could use improvement but usually get the job done. While there is constant innovation in library software, for many of us online catalog systems mean a clunky old text interface that often is less effective than browsing stacks. Often, this is due to the obstacles we face in managing legacy systems; new systems might be vastly improved, but we are slow to upgrade when we consider the costs of migrating data, staff retraining, systems support, and on and on. Sometimes, new versions of systems we currently use are just not good enough to warrant making a switch.

This is not surprising. The library community is largely made up of not-for-profit, publicly funded agencies which hardly command a major voice in today's high tech information industry. As such, there is not an enormous market niche for software vendors to fill our small demand for systems. Indeed the 1997 estimated library systems revenue was only $470 million, with the largest vendor earning $60 million. [BBP] Because even the most successful vendors are very small relative to the Microsofts of this world (and because libraries cannot compete against industry salary levels), there are relatively few software developers available to build library applications, and therefore a relatively small community pool of software talent.

What are we left with? Some good systems, some bad. Few systems truly serve the access needs of all of our users, failing to meet a goal--access for everyone--that most public libraries strive to achieve at more fundamental levels of service. Because libraries are community resources, we tend to be quite liberal about intellectual and physical access issues, including support of freedom of speech and ADA-related physical plant modifications. At the same time, librarians are very conservative about collections and data (remember the difficult issues when you last weeded?). Is it not odd, then, that market forces lead us to be extremely conservative about online systems software? After all, online systems are no less about access to information than having an auto-open front door or an elevator in a library building.

We read of exciting technological innovations in library-related systems. Innovations in advanced user interfaces, metadata-enabled retrieval environments, and other areas have the potential to make online access more and more seamless and easy to use. Our systems, though, are too old--or not standardized enough, or too familiar to change--to take advantage of these advances. And creative ideas from exciting research seem not to make headway in real systems.

Libraries, if they indeed hold the kind of community monopoly described above, might do well to enhance their services by leveraging community-owned information systems--which open source seems to promise.

Open Source and Libraries

How could open source improve library services? First, open source systems, when licensed in the typical general-public-license manner, cost nothing (or next to nothing) to use--whether they have one or one thousand users. Although the costs of implementing and supporting the systems on which software runs might not change, imagine removing the purchase price of a new search interface (or ILL tool, or circulation module, etc.) from your budget for next year. Rather than spending thousands on systems, such funds might be reallocated for training, hiring, or support needs, areas where libraries tend toward chronic shortfalls.

Second, open source product support is not locked in to a single vendor. The community of developers around a particular open source product tends to be a powerful support structure--as it is for Linux and other products--because of the pride in ownership described above. Also, anyone can go into business providing support for software whose very source code is freely available. Thus even if a library buys an open source system from one vendor, it might choose down the road to buy technical support from another company--or arrange for third-party technical support at the time of purchase. On top of this flexibility, any library with technical staff capable of understanding source code might find that its own staff can provide better internal support, because they can see exactly how the systems work.

Third, the entire library community might share the responsibility of solving information systems accessibility issues. Few systems vendors make a profit by focusing their products on serving the needs of users who cannot operate in the windows/icons/menus/pointer world. If developers building systems for the vision impaired and other user groups requiring alternative access environments were to cooperate on creating a shared base of user interfaces, these shared solutions might be freely built into systems around the world far more rapidly and successfully than ever before.

A Three-Step Process

If you are still reading, you probably suspect something here might be a good idea. You might even want to help make ideas discussed above happen. Where to begin?

Understand the Phenomenon

Axiomatic business notions have shown weaknesses throughout the information age; the utility of the internet for knowledge sharing demanded rethinking of what constitutes an information product. If nothing else, it is important for the international community of librarians to understand the open source phenomenon as part of the technology-driven shift in our understanding of the nature of information. Because the ethos and style of the open source initiative is so akin to the traditions of librarianship we hold at the core of our professionalism, we should find within open source the appropriate points of entry for the similar service and resource-sharing objectives we choose to achieve every day.

The seminal works on open source are mostly technical, but they provide an invigorating view of the current state of software engineering. All are available on the internet, and they form a core of knowledge that might one day be fundamental to our discipline. "The Cathedral and the Bazaar," by Eric Raymond [ER], is widely cited as the pivotal tome describing the technical and social processes open source entails. "The Open-Source Revolution," by Tim O'Reilly [TOR], founder of O'Reilly and Associates, Inc., a highly respected publisher of pragmatic computer-related titles, gives a broader view of the social phenomenon, in particular relating open source software development to the scientific method. Finally, www.opensource.org is a central point of focus for the Open Source Initiative. It is led in part by Mr. Raymond and appeals to both the technical and non-technical sides of the community.

To foster communication regarding open source systems in libraries, we have created a web site, www.med.yale.edu/library/oss4lib, and a listserv, oss4lib@biomed.med.yale.edu. They are intended as forums for announcement, discussion, and sharing of broad information; look for instructions on how to join the list, along with a list of current open source projects for libraries, at the oss4lib site.

Use Open Source Systems Where You Are

Armed with understanding, we can find opportunities to leverage existing open source systems in our own institutions. The Linux operating system [LINUX], Apache web server [APACHE], and MySQL database [MYSQL] form a powerful, free platform for building online systems. Consider the value of these and other open source systems when making design and purchase decisions at your institution; you might find tremendous savings and increased product performance at the same time.

Beyond merely using open source products, however, we must create them. Are you already working on any new applications at your institution? Perhaps you've put a year or two into a homegrown search interface, or an online reference services tool, or a data model and retrieval code for an image archive. Is there a good reason why you wouldn't want to share that work? For those of you who realize that someone else might benefit from what you've done--and that you might benefit from the ability to share in the work of others--consider thoroughly the implications of releasing your code under an open source license. [FH] If the benefits outweigh the negatives, get started sanitizing and documenting your code as well as you can, and set it free.

Another ideal opportunity at this stage is for library and information science researchers to open their projects up for the entire community to review and develop as appropriate. Grant-funded systems builders might find an afterlife for their work by releasing their source. Faculty might design courses around building a retrieval system or improving an existing open source tool. Indeed this model is already widely used by computer science professors--at Yale, for instance, undergraduate students might work on aspects of the Linux kernel in their Operating Systems course.

Grow the Phenomenon

As the library community moves in this direction, there will be many roles for individuals in our profession to fill. Most visible is application development; there is a major need for software engineering resources to be devoted to creating community-owned library systems. This does not in any way marginalize those of us who are not programmers or database administrators. In the open source community there exists a tremendous need for exactly the skills librarians have always used in making information resources truly useful. In particular, systems testing, evaluation, and feedback to open source designers are welcome and even sought after; documentation for open source systems always needs improvement; instructional materials for open source products are often lacking. These are all areas in which librarians excel. For the more technically minded among us, www.freshmeat.net provides constant updates and announcements of general open source projects, replete with contact information for those wishing to participate. For all of us, the oss4lib listserv and website will highlight additional library-specific opportunities as they come around.

Playing a role in the larger open source community will strengthen our ability as professionals and service providers to understand how best to shape our own systems. Additionally, it might make significant inroads in demonstrating that the ethics and practice of librarianship are more vital to the movement of information than ever before. As the software industry shifts to appropriately incorporate open source models, systems in other industries might even grow to utilize products the library community creates.

Conclusion

An argument I have already heard against these ideas is based on experience: "We tried building our own OPAC in the eighties--it was an impossible project and we gave it up after a few years because it just cost too much." In 1999, however, we know that the internet has changed the landscape. Because it is so very easy to share ideas and software and code using the internet, software developers have already found that the old way of doing things--particularly building monolithic homegrown systems in our own institutions--makes no sense anymore. As the open source vision and culture continue to mature, librarians would be remiss not to find our profession playing a major role in that culture. For all we have done so far, our online systems are not good enough yet. We can do better.

References

[APACHE] Apache Server Project.
[BBP] Barry J, Bilal D, and Penniman WD. "The Competitive Struggle," Library Journal, April 1, 1998, p. 43.
[ER] Raymond ES. "The Cathedral and the Bazaar."
[FH] Hecker F. "Setting up Shop: The Business of Open-Source Software," Aug 3, 1998.
[GPL] GNU General Public License.
[LINUX] Linux Online.
[MS] Microsoft analysis of open source.
[MYSQL] MySQL Home Page.
[NYT] Harmon A and Markoff J. "Internal Memo Shows Microsoft Executives' Concern Over Free Software," New York Times, November 3, 1998, Sect. C, p. 8, col. 1.
[OSS] Open Source Initiative Home.