Chapter 1 | History
To read the emerging literature on hypertext, hypermedia and cyberspace textuality and culture is to discover that these media are tremendously popular. Not only are online media becoming more obvious in mainstream or popular culture, wherein businesses and organizations sport their World Wide Web addresses in television and magazine advertisements, on billboards and the sides of delivery trucks, but the effects of such media are increasingly profound in intellectual culture, affecting its epistemological methods--its ways of knowing--as much, perhaps, as its means of production--its ways of doing. One might readily point to higher education in the United States as well as in many other countries and say, with a certain degree of conviction, that the Internet and its graphical offspring, the Web, are rapidly altering the manner in which people involved in education or scholarship read, write and do research.(1)
That these fundamental means of doing are, at least to empiricist philosophers, also ways of knowing is not an unlikely revelation, but against the backdrop of a consumer culture that tends to privilege production over knowledge, it is likely in need of emphasis. Not to belabor the point, but the epistemological shift cannot be emphasized enough in higher education, where the consumer culture ideas of "student as customer" and "education as product" have taken hold.
Much of the emphasis of the literature on hypermedia and cyberspace textuality has been on production, on how to write hypertext code, on how to put information online, while disproportionately fewer works have pursued very seriously or critically the ramifications of such production. In a discipline such as the literary humanities, the critical works are slowly amassing, but with the notable exception of Virtual Realities and Their Discontents (1996), a collection of essays compiled and edited by Robert Markley, a professor of literature at West Virginia University, the majority of these critiques embrace the new media and are inclined to locate themselves hyperly--over, beyond and above what English professors Paul Delany and George Landow have called the "linear, bounded and fixed" (3) attributes of book culture. It is a strange phenomenon, to say the least, as these champions of hypertext--Espen Aarseth, Jay Bolter, Silvio Gaggi, George Landow and Janet Murray, to name a few--contribute generously, egregiously, to the book culture by writing and publishing books on hypertext. Of course, the inclination to move beyond books while also writing them is, however apparently mendacious, perfectly logical, expected even, especially in light of the compelling need of these spirited proponents of hypermedia to share, in the academic arena, their thoughts and ideas.
The resounding answer to the question George Landow posed to his colleagues in his introduction to Hyper/Text/Theory (1994)--the now familiar entreaty: "What is a critic to do?" (36)--has been, as Landow himself predicted, to write criticism, but a criticism that features the essential characteristics of hypermedia and cyberspace textuality, especially when the criticism takes on as its subject hypermedia and cyberspace textuality. Landow's critical resolve, however, is worth noting for its tentative language: "Hypertext theory," he writes, "finally, might have to be written in the electronic docuverse" (36, emphasis mine). The somewhat wily juxtaposition of the words "finally" and "might have to be" is business as usual for the hypercritic who, dedicated to poststructuralist tenets of nonlinearity and multivocality, cannot help but speak with a "forked tongue"; in virtual reality, after all, "the end" might precede "a beginning" that is not yet and may never be conceived. What Landow knows, even as he promotes the production and study of hypermedia, is that the digital word commits itself to no particular form or tradition and can come spilling out over the Internet in a chat forum as readily as it might be printed up and circulated as a hard-bound monograph.
In Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology (1997), Landow is clearly experimenting with the idea of going hyper in the print medium, of infusing the restrictive, old technology of the book with the highly adaptable, newer technologies of hypermedia. A quick glance at the book's title and cover suggests a good deal. First, Hypertext 2.0 is not a reprint of Landow's seminal Hypertext: The Convergence of Contemporary Critical Theory and Technology (1992) but is, instead, that which is printed on the book's title page: a "revised, amplified" version of the original Hypertext. Landow and his publisher, the Johns Hopkins University Press, are evoking the software industry idea of a "release," a reproduction that is new and improved, more sophisticated yet easier to operate. The implication that the hard code of software and the hard code of texts are beginning to overlap is further reinforced by Glen Burris's book cover designs, which have the combined look and feel of software packaging and Wired magazine. Burris's design for Hypertext 2.0 is very similar to that of Hypertext; both feature bright yellow backgrounds with bold lettering, smatterings of purples and pinks, and iconic representations of hypertext links and nodes.
The similar designs--like the similarity in packaging for programs such as Windows 95 and Windows 98--imply a similar product line but with revisions/upgrades and augmentations/extensions.(2) Curiously, though, Hypertext 2.0 comes without the now familiar uniform resource locator (URL) or Web address that is generally stamped on the covers or title pages of software books. And the half-expected readable (if not rewritable) CD-ROM version of the text that should be glued to the book's back flap and ensconced in its own plastic envelope is also, notably, absent. Whether Landow is paying some homage to the book culture of his recent past by leaving Hypertext 2.0 offline or whether he is vying for future returns by selling as many hard copies of the book as he can, he is all the while signaling the interminable, ever present nature of hypermedia.(3)
Whether published online, on CD-ROM or in journals and books, the vast majority of hypertext criticism dwells unabashedly in the realm of the here and now; its attitude is invariably carpe diem, in part because instructional technology is currently all the rage in higher education but in part because hypertext criticism is facilitated by an ethos that tends to understand its past (and I would conjecture its future) in terms that are often too narrow and often too strictly associative. That hypertext and hypermedia studies are fast becoming "trendy" is not so much a concern. In fact, the recognition of hypertext scholarship and hypermedia curricula is, in my estimation, somewhat overdue, and especially so in the arena of the arts and humanities. Of concern is the trend in hypertext criticism (in fact, in any criticism) wherein the criticism turns blindly from itself.(4)
The danger is that, in its undaunted quest for insight, in its desire to master the flurry of technologies that affect computer-mediated communications, in its aspiration to be rigorous and progress-oriented like science, hypertext criticism can forget to be socially and politically aware and may grow increasingly content to ask the engineer's "question of how" rather than the philosopher's "question of why." The matter of "why" this particular line of critical inquiry is so inclined to focus on "how," in fact, has everything to do with engineering.
As almost any volume on hypertext or cyberspace will disclose, hypertext
and cyberspace are decidedly the inventions of engineers; they are
concepts that owe their initial and continued existence (albeit
virtual existence) entirely to technology. Despite this fact, however,
the role of engineering and technology in books on hypertext criticism
is generally relegated to a few paragraphs and annotations, abiding
in relation to the story proper--the story of hypertext theory and
practice--as an isolated node of information, as a past that, while
often linked to hypertext, serves little in the way of an integral
function. The irony is that the very mechanism Vannevar Bush(5)
envisioned in 1945 as operating like the human mind--as operating
via association in order to preserve human memory--may be having
the opposite effect and inducing, instead, an ever-so-human, ever-so-fleeting
memory.
Bush's concept of the "memex"--which led to Theodor Nelson's theory of "Xanadu" and, later, to Tim Berners-Lee's "World Wide Web"--reflects a shift in epistemology, one that externalizes and makes accessible the wealth of human knowledge to such a degree that human beings may eventually demonstrate a dependency on collective, mechanical memories, to the extent that their own thinking, growing ever more attuned and accustomed to the paths of reasoning or the associations of others, will be inclined to neglect or dismiss alternate ways, if not more independent ways, of knowing.(6)
One of the great neglected or dismissed details of the hypertext saga is certainly the story of Vannevar Bush. "Almost forgotten today," writes his biographer, G. Pascal Zachary, "he essentially invented the world as we know it: not so much the things in it, of course, but the way we think about innovation, what it means, and why it happens" (Wired 152). In more than a few volumes about hypertext, Bush's now famous 1945 Atlantic Monthly essay, "As We May Think" is invariably cited but invariably glossed without context. To go back in time to an original copy of the July 1945 issue of the Atlantic Monthly is to find Bush's essay tucked away in the back of the magazine (on page 101), following far behind the month's lead reports on World War II--the "European Front" (page 3) and "The Pacific War" (page 8)--as well as the ensuing essays by war heroes and refugees: "Come You Home a Hero" by Lt. Laurence Critchell and "Should Jews Return to Germany?" by Paul W. Massing and Maxwell Miller (pages 71 and 87, respectively). Interspersed in the issue are a series of full-page advertisements connecting the military-industrial complex with innovations for everyday civilian life. A General Electric advertisement features a sailor eating ice cream kept frozen by its refrigeration equipment, standard "on battlewagons and most combat ships" (see Figure 1-6). DuMont Precision Electronics and Television hawks its wares with the slogan: "DuMont's war-born advancements will soon be yours" (see Figure 1-7). And Westinghouse valorizes and commingles the professions of army tank gunner with engineer and scientist with bombardier, explaining that "wartime developments will be turned to peaceful uses," but that at present "Westinghouse is producing vital war equipment and weapons, many of which must remain secret until after final Victory."
Bush, however, at the time of publishing "As We May Think," keeps the
biggest secret of all, one that allows him to thank American scientists
for a job well done even as the American people are told that "Japan
has the productive capacity to wage a long war" ("The Pacific War"
8). In July 1945 the opening paragraph to "As We May Think" is blithely
adjourning and, eerily, past tense:
What happens next, of course, is the defining of an age: that August week in history which decimates the Japanese populations of Hiroshima and Nagasaki and invites the specter of atomic warfare into the homes of people worldwide.
That Bush, identified by the editor of the Atlantic Monthly as the Director of the Office of Scientific Research and Development (OSRD), also initiated and oversaw the Manhattan Project is a fact worth noting. That during the years of trenchant military secrecy (from 1939 to 1945) Bush was all the while tinkering with the idea of a tool that would extend to civilians the power to access and to distribute information freely is another dark irony.(7)
Even as he took on the role of research and development coordinator for mass destruction, he saw in his mind's eye "a new profession" that would "find delight in the task of establishing useful trails through the enormous mass of common record"; as many of the great libraries and museums of Europe were consumed by the fires of warfare, he envisioned an intellectual community wherein the "inheritance from the master becomes, not only his additions to the world's record, but for his disciples the entire scaffolding by which they were erected" (108).
While one might argue that the necessity of secrecy during the war led Bush to a passion for its opposite, it is more likely that Bush's memex idea was but another in a series of mechanisms (theorized or realized) that would fulfill the engineer's desire for useful and efficient machines. When on 16 July 1945 Bush viewed the results of his work at the Trinity site in the New Mexico desert, he was characteristically pragmatic, unlike the Manhattan Project's chief physicist, Robert Oppenheimer.
For Bush, the success of the atom bomb illustrated the success of American science and engineering. But it also illustrated the necessity of serious, continued government investment and involvement in American research and development, for in this new age, the edge in science and engineering would mean the edge in advanced weaponry; an arms race was certain and a cold war was already brewing. It was during this time that Bush "played his hand" at a bit of social engineering and won. His vision of "a technologically advanced America governed by the masters of science and technology" was finding, incrementally, a place in the real world. "If this scientific elite could not actually fill the seats of power," Bush was determined that "it could at least advise those who did" (Zachary 224).
The advice he gave to the American government for the postwar period would encourage a technology-driven American economy and alter the American system of higher education in a number of profound ways. His carefully orchestrated volume, Science--The Endless Frontier, was released on the heels of "As We May Think" on 19 July 1945, the same day that Truman, Churchill and Stalin were meeting in Potsdam to decide the future of Europe. To compete with the news of the world, Bush distributed "copies of the report to Washington insiders" and took to "wooing...newspaper and radio columnists" (Zachary 257). In a matter of days, his introduction to Science--The Endless Frontier would become the manifesto to elevate and enfranchise the role of science and engineering in higher education in the United States. Bush knew how to seize the day and was "eager to trade on the public's high regard--even awe--for science and engineering while it lasted" (Zachary 224). In response to President Roosevelt's (and, later, President Truman's) questions about how the sciences might be transformed in times of peace, Bush selected a committee of eighteen members who would report on scientific and technological progress in a manner that would satisfy his own, as the Harvard University biologist R.C. Lewontin argues, "manifestly self-serving" (15) agenda. That Bush's "endless frontier" committee was composed of the "leaders of elite universities, corporations and foundations, many of whom had been recipients of OSRD contracts," and featured no "women, minorities, consumers--in short, ordinary people" and no one "who was not a corporate executive, a lawyer, a professor or an administrator of a university, research institute or foundation" (Zachary 221) did not seem to matter very much to an American public that, while pensive over the bomb, was nonetheless happy to see a quick end to the war.
America, like the American military, was primed for the announcements that would allow Bush to elevate and enfranchise the disciplines of science and engineering well beyond everyone's wildest dreams. His arguments that "[n]ew products, new industries and more jobs require continuous additions to knowledge of the laws of nature" and that "our defense against aggression demands new knowledge" which "can be obtained only through basic scientific research" won the day. And his belief that the "most important ways in which the Government can promote industrial research are to increase the flow of new scientific knowledge through support of basic research and to aid in the development of scientific talent" would be pivotal, eventually taking shape with the establishment of the National Science Foundation (NSF), an agency that would "promote research through contracts or grants to organizations outside the Federal Government" as well as promise to support "basic research in public and private colleges, universities and research institutes" while leaving "the internal control of policy, personnel, and the method and scope of the research to the institutions themselves" (Bush "Summary of the Report"). Were it not for Bush, the "deck" could never have been so prominently "stacked" in favor of college and university research, and the frontier of generous and unfettered government science funding--making research professors free agents who "no longer worked for universities, but in universities" (Lewontin 29)--might well have been short-lived instead of endless.
Even though Bush quit the advisory board of the NSF and left Washington to retire to his native Massachusetts in 1955, he had laid the groundwork, in engineering as well as in politics, for the development of the Internet. In fact, he may have had a direct hand in preserving the life of Richard Bolt, a young Massachusetts Institute of Technology (MIT) engineer and reserve officer in the Army who was, in 1941, about to be drafted. That Bush used his clout to end Bolt's military career was quite possibly the touch of good luck behind Bolt Beranek and Newman or BBN, the small Cambridge-based research firm (begun by Bolt in 1948) that would, in the late sixties and early seventies, develop the nationwide computer network for the Pentagon's Advanced Research Projects Agency (ARPA): ARPANET, the Internet in its infancy.(8)
Though Bush had little interest in digital computing,(9) his memex, which he presumed would be a work station that utilized microfilm and dry photography primarily, nonetheless anticipated the digital computer as well as the Internet-connected desktop, the one device that has allowed people to rise above their "ineptitude in getting at the record," a problem Bush felt was "caused by the artificiality of systems of indexing" (106). For Bush one of the great quandaries of research (scientific or otherwise) stemmed from the fact that people had not been able to perform research fluidly, as they may think:
The memex would be a hypertext system "in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility" (106-107). In Bush's imagination, it is, above all else, a personal device, an "intimate supplement to his memory" that might also be shared or, in more modern terms, networked:
Most importantly, though, the memex "affords an immediate step...to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another" (107). Remarkably sage, Bush anticipates a "code word" association to enable these "trails" (107), a concept not too far from the Hypertext Markup Language (HTML), which renders hyperlinks and creates the effect of reading that Bush describes: "It is exactly as though the physical items have been gathered together from widely separated sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails" (107).
While the idea of the memex was well received by Americans and within the American scientific community, the memex itself was never assembled. Despite Bush's hard work and advocacy, the significant achievements of science and engineering were beginning to fade in the country's collective imagination. Life was getting back to normal in the United States when on 4 October 1957 Sputnik entered the scene, heightening American fears that Russia could launch nuclear missiles with such incredible range and precision that entire cities might be devastated instantly. In a special section of the 21 October 1957 issue of Newsweek, entitled "Satellites and Our Safety," Bush, in an interview with the magazine's associate editor, William Brink, seized, again, the opportunity to reinforce his vision of centralized, government-funded technology research, especially in light of the "communist threat." "I'm damn glad the Russians shot their satellite," he tells Brink. "We are altogether too smug in this country." Reiterating many of the arguments he presented in his introduction to Science--The Endless Frontier, he contends: "We do not have unified military planning.... We put great big projects into being at great expense without anyone reviewing them to determine whether they fit into a unified plan" (30).
A month later Neil McElroy, President Eisenhower's newly appointed
Secretary of Defense, went to Capitol Hill to testify on the country's
ballistic missile programs and to announce that "he had decided
to set up a new 'single manager' for all defense research" (Hafner
and Lyon 19). By January 1958 Eisenhower had encouraged Congress
to allocate the funds that would establish the Advanced Research
Projects Agency (ARPA), an agency that would be housed in the Pentagon
but civilian in its make-up and operations. Much to the dismay of
military officials, ARPA would also have total contracting authority
and a presidential guarantee that no limits would define the scope
of its research endeavors.
Shortly after the inauguration of the National Aeronautics and Space Administration (NASA), ARPA's mission became even more free-wheeling. With the burden of the "space race" and missile research consigned to NASA, ARPA became decidedly less military and even more academic. Under President Kennedy, ARPA's second director, a military man, Brigadier General Austin Betts, resigned and was replaced by Jack Ruina,(10) who, like Bush, was an electrical engineer and a university professor and, like Bush, believed wholeheartedly in the generous funding of science for science's sake. Ruina ushered in the "golden era for ARPA," effecting a "relaxed management style and decentralized structure" (Hafner and Lyon 23). Most importantly, however, Ruina hired J.C.R. Licklider to oversee ARPA's Command and Control Division as well as its Behavioral Sciences Division. Licklider, who held degrees in psychology, mathematics and physics and had a wealth of research experience at Harvard, MIT and BBN, became ARPA's first computer guru. He held the optimistic worldview that computers could amplify human thinking and would extend and encourage a more democratic government. Almost single-handedly, he would change the agency's thinking about computers, for Licklider believed that computing would become much more than advanced computation; computing, he felt, would come to mean communication.
In his article "Man-Computer Symbiosis" (1960), Licklider articulated his vision that computing would help people in their thinking. "It seems entirely possible," he wrote, "that, in due course, electronic or chemical 'machines' will outdo the human brain in most of the functions we now consider exclusively within its province" (2). In his characteristically optimistic fashion, he saw human beings and computers coming together to produce better thinking, observing that
He concluded that there was "great value" to "integrating the positive characteristics of men and computers," that such "symbiotic cooperation" would have everything to do with the future (6). It wasn't long before Licklider recommended that his division at ARPA, the Office of Command and Control Research, change its name to the Information Processing Techniques Office or IPTO (Hafner and Lyon 39). The kind of thinking that computing had begun to facilitate, after all, would develop a strategy to sustain research and development, wherein "commanding" and "controlling" became less effective than "communicating."
In the 1960s IPTO was where the computer experimentation at ARPA was housed and where the research of Licklider met that of Robert Taylor, the third director of IPTO (after Ivan Sutherland) and the first to suggest that ARPA build a computer network. The purpose of the network was, as Taylor argued, to save ARPA time and money. He believed that the network would allow ARPA to cut costs on computers (as the majority of its contractors, mostly research universities, had computer requests in their budgets), but, even more significantly, the network would mean that "researchers doing similar work in different parts of the country could share resources and results more easily" (Hafner and Lyon 39). The dream that Licklider and Taylor would write about jointly in "The Computer as a Communication Device" (1968) was beginning to take shape. Soon better computer-enabled thinking would evolve into better computer-enabled communication. "In a few years," Licklider and Taylor wrote, "men will be able to communicate more effectively through a machine than face to face" (21).
Like many other large, expensive government-funded technology projects, however, the network project that Taylor proposed would need more than a gentle nudge to see it completed. Because there was, in the early 1960s, no conceivable for-profit industry application for such a network, the work loomed in the minds of its scientists more so than in the corridors of the Pentagon. Like Bush before him, a young electrical engineer by the name of Paul Baran saw the scientific and political connection between technology and military defense. The arms race that Bush had, in some respects, invented had become a nuclear aftermath problem for Baran, who, while working for the RAND Corporation, became interested in the "survivability of communications systems under nuclear attack" (Hafner and Lyon 54). Significantly, the arms race had become a defense race. The question of superior weaponry was somewhat moot in light of the nuclear arsenals of the United States and the Soviet Union. And the question that was most on the minds of military strategists was how to survive in order to retaliate, and how to retaliate effectively--that is, without blowing up the rest of the planet.
Baran believed that the possibility of a nuclear holocaust was real but that a de-centralized, redundant system of communications might not only maximize the possibility of retaliation but minimize the risk of cultural annihilation.(11) After a series of long discussions with Warren McCulloch, a psychiatrist at MIT's Research Laboratory of Electronics, Baran developed a networking scheme that would pattern itself after the human brain, a highly efficient and durable system capable of bypassing damage as it regenerates its neural nets. Such thinking led to Baran's publication of "On Distributed Communications Networks" (1964) and to a United States Air Force commitment that the network would be built. It wasn't until 1967, however, that Baran met with ARPA's Larry Roberts, an IPTO employee who had been drafted from MIT's Lincoln Labs to build the ARPA network. And it wasn't until 1968 that Roberts awarded the ARPANET contract to BBN to begin network construction in earnest.
Up until Tim Berners-Lee's invention of the World Wide Web in 1992, the remainder
of Internet history constitutes network expansion and a litany of
technical enhancements. Throughout the seventies and eighties, a
number of new networks came online, such as USENET and BITNET, and,
as the number of users grew, the operational management of the Internet
was continuously restructured. Protocols of Internet use were also
refined and standardized, and in 1982 the InterNetworking Working
Group (INWG) established the Transmission Control Protocol (TCP)
and Internet Protocol (IP), the now familiar TCP/IP standard which
allows for a network of networks or an internet. In the mid-eighties,
a Domain Name Server (DNS) system was devised so that host servers
could be recognized by names instead of by numbers, but it wasn't
really until the 1990s, when the commercial electronic mail carrier
(MCI Mail) was born, that networked computing became truly "humanized."
1990 was the year that Internet culture began to creep out of the
shadows and into the mainstream; it was also a significant year
in that ARPANET, from which MILNET (the Defense Data Network) had been split off in 1983, ceased to exist. While the NSF would, in
collaboration with a number of other agencies, continue to operate
the Internet, it would come to provide a virtual space for nonscientists,
for people in general, as well.
Although the Internet would feature, with the release of Berners-Lee's World Wide Web, a global hypertext system, the idea of such a system
was (and still is) the long beautiful dream of Theodor Nelson. Nelson,
who earned his B.A. in philosophy at Swarthmore College, became
interested in computers while he was a master's student in sociology
at Harvard. Coining the term "hypertext" in the 1960s to mean "non-sequential
writing--text that branches and allows choices to the reader" (0/2),
Nelson conceived of a software system that would effect the "magic
place of literary memory" (1/30).(12)
He called it Xanadu, aptly named for the imaginary place
in Coleridge's poem, "Kubla Khan":
The name of the software is appropriate, as it turns out, because Xanadu is as fictional as Coleridge's poem, as "measureless to man" as the unimplemented ideas of poetry.
In his 1995 Wired article on Project Xanadu, Gary Wolf calls the "global hypertext publishing system...the longest-running vaporware story in the history of the computer industry" ("The Curse of Xanadu") and chronicles, in seventeen chapters, Nelson's failure to develop and release the software. The tough criticism is likely deserved, as Nelson himself kept promising the product he could not deliver:
Unlike Vannevar Bush's successes, Theodor Nelson's failures are notably
absent from the discussions emerging in critiques of hypertext,
hypermedia and cyberspace textuality and culture. Nelson is generally
cited as the man who coined the term hypertext and, as in Silvio
Gaggi's From Text to Hypertext (1997), his Project Xanadu
is typically summarized. Because of Nelson's insistence that Xanadu
will, someday, be completed, many critics tend to treat his body
of work with a kind of inexplicit caution, if not circumspect brevity.(13)
Gaggi, for instance, gives Nelson the benefit of the doubt by discussing his yet unreleased
software in the present tense, explaining to his readers that Xanadu
"operates as a utility that provides instantaneous access to nearly
unlimited electronic documents (including traditional documents
entered electronically into the network)" (109). Landow has been
patiently philosophical, relying on and interpreting Nelson's work
more as literary theory than as software engineering. Janet Murray,
who was a systems programmer for IBM before she became an English
professor at MIT, limits her discussion of Nelson in Hamlet on
the Holodeck: the Future of Narrative in Cyberspace (1997) to
a paragraph; he is simply the man who "has spent most of his life
in the effort to create the perfect hypertext system" (91).
For Wolf, however, Nelson requires more critical attention, and not just because "Nelson's writing and presentations inspired some of the most visionary computer programmers, managers, and executives--including Autodesk Inc. Founder John Walker--to pour millions of dollars and years of effort into the project" ("The Curse of Xanadu"). Nelson is culturally important: he is the utopian visionary of hypertext while also the embodiment of hypertext's potential hazards. Unlike Bush and Baran, Nelson envisioned a global communications network and data repository that would defend against the human predilection for self-destruction more than any threat of destruction from abroad. "By putting all information within reach of all people," writes Wolf, "Xanadu was meant to eliminate scientific ignorance and cure political misunderstandings." The Nelson assumption was that "global catastrophes are caused by ignorance, stupidity, and communication failures," so "Xanadu was supposed to save the world" ("The Curse of Xanadu").
Saving the world is no small task, and Nelson's rich conception of hypertext, as fleshed out in Dream Machines (1974) and Literary Machines (1987), is fraught with the kind of bold, ingenuous optimism that makes the world seem worth saving. While he borrowed a good deal from Vannevar Bush's conception of the memex,(14) Nelson helped extend the theory of hypertext, making it popular and, thus, bringing it closer to reality. He conceived of various "hyperlink" categories, including links to connect, expand and compare (on a split computer screen) texts; he also anticipated (correctly) several distinct hypertext structures: the hypertext book, a text that is "fresh" or original in its textual makeup; the "anthological" hypertext book, a text that links together a variety of different works; and the vast, "grand" hypertext, consisting of "everything written about the subject, or vaguely relevant to it, tied together by editors (and NOT by 'programmers,' dammit), in which you may read in all the directions you wish to pursue" (Dream Machines 45).
In other areas, however, Nelson's ideas have been somewhat less successful and might be construed as counterproductive to saving the world. Unlike Berners-Lee's conception of an open, free system of media sharing on the World Wide Web, Nelson's Xanadu plans have been, for all his high-spirited rhetoric of romance, less than wholly generous: they have always included a for-profit publishing-and-royalties scheme. "Our copyright solution," writes Nelson of Xanadu, "has always been for the user to pay a small royalty to the media publisher for every byte sent. Instead of buying whole documents, this allows the user continually to pick and choose on line." That is, people who use Xanadu would pay for each and every byte of information they might download. "This is necessary in a hypermedia world, since the user will typically not buy whole publications, but move and ramble unpredictably. The royalty is paid automatically at the instant of delivery, only and exactly for the portion sent" ("Xanadu Australia").
All of this unpredictable moving and rambling on the part of the readers who might be Xanadu subscribers could translate into lucrative returns for media publishers. Additional profits might also be earned through a process Nelson describes as "transclusion," a kind of quotation system in Xanadu wherein referenced materials are not directly quoted or included in the text itself but represented virtually: "any publisher in our system may quote any other publisher freely since the 'quotation' is a pointer to the original publication. The user automatically sends for the quotation from the original publisher, paying the royalty to the original publisher." For Nelson, this "pay-per-view" method of publishing and republishing hypermedia is not only fundamental to Project Xanadu but also the answer to many of the copyright problems that digital publication and communication present:
As Jay Bolter has pointed out, Xanadu would have "writers and readers throughout the world working in the same conceptual space" (102); despite the utopian appeal, however, the shared space is one of the problems with Xanadu. The formation of a centralized, for-profit network acquiring, storing and disseminating the literature of the ages could potentially put the world Nelson wants to save at greater risk. First, the problem of a centralized system is, essentially, the problem Paul Baran addressed in the 1960s. Baran's theory that a decentralized, redundant computer system would offer the most resilience in the event of an attack on the U.S. military's communications network is now a widely accepted premise for safeguarding almost any communications network. Second, a for-profit network would certainly disenfranchise a sizeable portion of the world from Xanadu, making it difficult to eliminate the ignorance that presumably renders human beings so self-destructive. Third, the per-byte method of disseminating publications would do little to help distinguish between quality and quantity in the Xanadu database. A digital world wherein all bytes are equal would likely encourage the proliferation of substandard online materials while reducing the incentive to produce online materials of real quality.
Even more troubling, however, is Nelson himself, the aspiring owner and manager of Xanadu. "Nelson's life is so full of unfinished projects," observes Wolf, "that it might fairly be said to be built from them, much as lace is built from holes or Philip Johnson's glass house from windows." In a number of ways, Nelson is a living example of practice undermining theory. For all of hypertext's potential, there are drawbacks, and Nelson embodies many of the precarious side effects of a hypertext culture: surface knowledge, fleeting memory, fragmented awareness. Of his interview with Nelson, Wolf writes:
When George Landow calls Nelson "one of Bush's most prominent disciples" (Hypertext 2.0 7), he is making a fairly narrow association, as little else but the concept of hypertext links the two men together. Vannevar Bush, the highly productive, remarkably well-respected scientist and engineer, might be uneasy about the likes of Nelson, who, for all of his interest in technology, is neither scientist nor engineer. "I have a terrific math problem," Nelson confesses in the Wolf interview. "I still can't add up a checkbook: I can add a column of figures five times, get four different answers, and none of them will be right. I'm very accident-prone and extremely impatient. I can't work my Macintosh--I have three that are completely unfunctional and one is marginally functional" ("The Curse of Xanadu").
What the history of the Xanadu Project reveals is a Nelson who fails at administering and implementing his own ideas. More the philosopher than the engineer, he is the Johnny Appleseed of inventors: moving through life, tossing out his ideas in books, to friends, at university computer labs. Like a character out of Neuromancer, the science fiction novel in which William Gibson coined the term "cyberspace," Nelson lives in a present overwhelmed by the future; making sense of the ever-emerging, ever-changing technologies and the enormous flood of information that these technologies facilitate has locked him into the "consensual hallucination" (Gibson 51), the virtual world, a world wherein nothing is real and everything is invariably disorderly and unfinished.(15)
This idea is encapsulated (as well as the conventions of the print medium
will allow) in Nelson's 1987 revision of Literary Machines,
which includes the following subtitle: "The Report On, And Of, Project
Xanadu Concerning World Processing, Electronic Publishing, Hypertext,
Thinkertoys, Tomorrow's Intellectual Revolution, and Certain Other
Topics Including Knowledge, Education and Freedom." The book features
one Chapter Zero, seven Chapter Ones, one Chapter Two and seven
Chapter Threes. Nelson's directions to the reader in the Introduction
of Literary Machines are fairly open-ended, suggesting that
readers read one of the Chapter Ones, then Chapter Two, then one
of the Chapter Threes, circling back (always through Chapter Two),
choosing different chapters to read at will.
While George Landow's Hypertext 2.0 is decidedly more traditional than Nelson's Literary Machines (Hypertext 2.0 features eight chapters--numbered one through eight--as well as an introduction and a conclusion), traces of Nelson can be found throughout Landow's work. In fact, Nelson worked on a hypertext project at Brown University long before Andries van Dam, William Shipp and Norman Meyrowitz founded the Institute for Research in Information and Scholarship (IRIS), the academic organization at Brown that developed the hypertext software Intermedia, which was first used by Landow in one of his English classes in 1987. In 1968 Nelson, along with Andries van Dam and a few clever students, developed the Hypertext Editing System (HES). While Nelson lost interest in the project and moved on to develop Xanadu, hypertext had taken hold at Brown; IRIS would become a key player in the research and development of hypertext technologies, especially in the humanities (Keep, et al "Intermedia").
More the plodding academic than Nelson, van Dam stayed on at Brown and
developed yet another hypertext system, the File Retrieval and Editing
System (FRESS), which would eventually run on the university's mainframe.
The system, which included two types of links ("tags" and "jumps"),
was promising enough that the National Endowment for the Humanities
(NEH) funded an experiment that utilized FRESS in an undergraduate
course on poetry.(16) In
FRESS, a "tag" constituted a uni-directional link which would appear
in its own window against the backdrop of the document proper; this
feature was appropriate for footnotes and other types of annotations.
A "jump" constituted a bi-directional link to another document,
so readers could move back and forth between documents, keeping
up to seven document displays open at once.
Out of the HES and FRESS experiments, however, Intermedia was born, a hypertext system that came equipped with virtually all of the features Vannevar Bush had envisioned in 1945 (and more). The software included a suite of applications that operated together in "an event-driven windowing interface," a graphical environment that facilitated the simultaneous use of multiple applications. The applications that made up Intermedia included InterText, a text editor; InterDraw, a graphics editor; InterPix, an image viewer; InterVal, a timeline editor; InterSpect, a three-dimensional model viewer; InterPlay, an animation editor; and InterVideo, a video editor (Keep, et al "Intermedia"). One of the more useful features of Intermedia was a navigation tool called WebView, which allowed the user to view a kind of map or index of the hyperspace environment and so quickly dispel the disorientation that is common to the hypermedia experience. Other later additions to the suite included InterLex, which linked Intermedia users to the American Heritage Dictionary and allowed for full-text searching, and InterNote, a shared work space that allowed for collaborative writing (Landow "Intermedia: An Introduction").
Ultimately, though, Intermedia was limited to version 4.0. In 1990, suffering from a lack of funding and technical support, Intermedia died the ignoble death of elegant software. But living on, even in its death, was the notion that "the book as technology" (Hypertext 2.0 25) would survive, if not prevail. For the hypertext works that "died" with Intermedia would live again in (or as) another application. Eventually, Brown adopted the commercial product Storyspace, put out by Eastgate Systems. According to Landow, "the simple fact that Storyspace at first ran on any Macintosh--and later on IBM and IBM clones--created novel portability for all the webs originally created for Intermedia" (Landow "The Death of Intermedia...."). Landow's work had found a home again, and Eastgate had a new client. Software development and a growing interest in hypertext and hypermedia at Brown spawned a number of projects and organizations, the most important of which is the Computing in the Humanities Users Group (CHUG), founded in 1986. The group, which disseminates many of its publications via the Internet, has held hundreds of talks and meets regularly to discuss computing techniques and applications in the humanities ("Computing in the Humanities Users Group").
While CHUG chugged along at Brown, however, corporations and universities
with large grants and endowments would develop a number of new products
that could also wrangle the digital word. Xerox PARC's NoteCards,
OWL's Guide and Apple Computer's HyperCard would bring
hypertext authoring capabilities to people who had never before
heard the word "hypertext." Of particular note is Apple's HyperCard,
which, released in 1987, became a mainstay in university education
and humanities programs for several years. The software was central
to Larry Friedlander's Shakespeare Project at Stanford University.
"During a conversation in spring 1987 with Bill Atkinson of Apple,"
Friedlander comments, "I realized that with the new HyperCard
program, under development at the time, I could do most of what
I wanted quickly and inexpensively" (260). The Shakespeare Project
utilized a videodisc player that was connected to a Macintosh computer
running HyperCard. Friedlander, who "wanted to teach the
full process of theater in an environment that would not reduce
its richness" as well as encourage students to "develop their own
standards of judgment" and "learn to watch theater with their whole
selves--with mind, heart, and their deepest feeling for beauty"
(260), designed an intriguing system:
The HyperCard feature of the Shakespeare Project allowed students to take and store notes, create multimedia and browse Shakespearean texts.
At Simon Fraser University, Paul Delany and John Gilbert would develop the Joseph Andrews Project with HyperCard. Like the Shakespeare Project, the Joseph Andrews Project would supply students with much needed contextual information. Delany and Gilbert saw that HyperCard could be used to "present an unfamiliar universe of reference for a literary text. Students reading a modern novel in which an automobile is mentioned," they explain, "will effortlessly invoke a mass of relevant knowledge. But in Joseph Andrews they probably will not know the logistics of travel on horseback, the difference between a stage-coach and a chaise, and so on; and such matters figure prominently in the novel's various journeys between London and Somerset" (295).
While hypertext project development has occurred throughout North America and the world, an enormous amount of wealth supporting hypertext and hypermedia research and development would eventually locate itself at MIT, Vannevar Bush's alma mater. That in 1975, a year after Bush's death, MIT was ranked number one among universities that received federal research and development funds was hardly an accident of fate. In 1990 that ranking dropped to number three (edged out by Johns Hopkins University and Stanford University), but MIT is still, according to Lewontin, "deeply dependent on the federal government" (25) for its income. Home of the Media Lab and the World Wide Web Consortium and the work place of Nicholas Negroponte, the Media Lab's founder and director, and Tim Berners-Lee, the World Wide Web's creator, MIT has become a computer guru's paradise.
In 1985, the same year Brown University's IRIS began development on
Intermedia, Negroponte, who had helped organize the Media
Lab in 1980, was busy moving the concept (and its high-tech equipment)
into the new I.M. Pei facility called the Wiesner Building, named for
Negroponte's friend and colleague Jerome Wiesner. The concept for
the Media Lab grew out of the work and deliberations of the Architecture
Machine Group (AMG) as well as the endeavors of a number of MIT
faculty ("MIT Media Laboratory: Information"). Like other basic
research projects at MIT and selected universities around the United
States, much of the AMG's funding was military. And like Bush, much
of Negroponte's initial support came from the Office of Naval Research
(ONR) and the Bush-inspired organization ARPA. The work was all
too familiar. "In the mid-1970s," Negroponte explains in Being
Digital (1995), "ARPA launched a major research initiative in
teleconferencing in order to address an important aspect of national
security" (121). Like Paul Baran's work in the late 1960s, Negroponte's
charge was to apply the powers of computing to the realm of communications:
The above scenario that led to the development of lifelike computer emulations, which Negroponte found "[a]dmittedly bizarre" (122), facilitated the Media Lab's leadership in the development of digital video and multimedia.
The "original idea for the Media Lab was very simple," Negroponte says
in an interview:
Although the original proposals for the Media Lab neglected the Internet, much of its research and development would become applicable when the digital environment of the Internet became a hypermedia environment. "The Internet for us was like air," says Negroponte, who began using ARPANET for email in 1972. "It was there all the time--you wouldn't notice it existed unless it was missing. But the Internet as a major social phenomenon didn't enter our radar until the advent of the World Wide Web, which was developed in Europe at CERN, beginning in 1989, by a team of physicists that included an alumnus of the Media Lab" (qtd. in Bass "Being Nicholas").
Also housed at MIT, in a "smallish, barren office" in which "his nonprofit group, the World Wide Web Consortium" helps to set "technical standards for the Web, guarding its coherence against the potentially deranging forces of the market" (Wright 64), is Tim Berners-Lee. It was in 1980, while working at the European particle physics laboratory CERN, that Berners-Lee developed his first hypertext program, Enquire, an information system he built as a "personal memory substitute" (66) to organize and comprehend the growing wealth of information necessary to his work. In 1989 Berners-Lee was back at CERN and, this time, confounded by the communications end of his research: "I thought, look, it would be so much easier if everybody asking me questions all the time could just read my database, and it would be so much nicer if I could find out what these guys are doing by just jumping into a similar database of information for them" (qtd. in Wright 66). And so the three "technical keystones" of the World Wide Web were born: the Hypertext Markup Language (HTML), the language for writing Web documents; the Hypertext Transfer Protocol (HTTP), the protocol for transmitting documents across the Internet; and the Uniform Resource Locator (URL), the system for addressing hypertext documents (Wright 66).
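The three keystones work together in even the simplest Web page: a document written in HTML is delivered by a server speaking HTTP, and it points to other documents by their URLs. A minimal sketch (the address used here is a hypothetical example):

```html
<!-- A minimal HTML document. The linked address is hypothetical. -->
<html>
  <head>
    <title>A Minimal Web Page</title>
  </head>
  <body>
    <p>This page is marked up in HTML and served over HTTP.
       The anchor below links to another document by its URL:
       <a href="http://www.example.org/another-page.html">a hypertext link</a>.</p>
  </body>
</html>
```

The `href` attribute holds the URL; the `http://` prefix names the protocol; and the tags themselves are the markup language, which is why all three standards had to be invented together.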
Berners-Lee wrote the first Web server software, and on top of his desk at CERN, his NeXT computer became the first Web-content server. He also wrote the first graphical user interface (GUI) Web browser, though it was Marc Andreessen, the co-founder of Netscape, who developed the technology (Mosaic, Navigator, Communicator) that would allow graphics and text to appear in the same window. With the World Wide Web, Theodor Nelson's long beautiful dream would be, to some extent, realized. And, perhaps, with luck, this universe of documents will be longer lived and even more beautiful under the auspices of the hypertext system devised by Berners-Lee. Unlike Nelson's "docuverse" vision, Berners-Lee's Web is decentralized, with servers operating as nodes in a distributed network of databases. Unlike Xanadu, the Web is non-profit; information is free, making the experience of Web use more library-like than bookstore-like. While Nelson has called the Web "a trivial simplification" of his hypertext ideas, "though cleverly implemented" (qtd. in Wolf "The Curse of Xanadu"), it is, for its brain-like complexity and redundancy, remarkably resilient.
Perhaps more philosophically akin to Bush and Landow than to Nelson and Andreessen, Berners-Lee conceived of a hypertext system that was primarily academic, a system that would facilitate the free flow and sharing of information. His original hypertext design was fundamentally interactive, rendering access to and dissemination of information seamless and easy. Unfortunately, the market forces of production and consumption have altered his vision somewhat. "The Web," laments Berners-Lee, "is this thing where you click around to read"; however, if you want to produce instead of consume, write instead of read, "you have to go through this procedure" (qtd. in Wright 68). Money, as Nelson anticipated and Andreessen discovered, is made not from people who want to produce and share but from people who want to consume. The commercial browsers, Netscape's Navigator and Microsoft's Explorer, were (and still are) primarily passive technologies through which consumers view and, increasingly, buy a variety of products--including information.
As life becomes more Internet-based, it is important to recognize how and why it got that way. For knowledge of the Internet and the World Wide Web is the best protection against its military and commercial origins. As a college or university education becomes increasingly Internet-based and especially reliant upon the World Wide Web for content and interaction, understanding the media will become an important part of understanding the message. Otherwise the future of education seems destined to become yet another feature of consumer culture wherein the product, an education, is efficiently dispensed to the consumer, the student. Unless that product is viewed and reviewed critically, education may begin to look and feel more and more like watching television.
1. Virginia Tech now requires that students submit their theses and dissertations in digital form. The institution argues "that if enough institutions adopt the idea, the improved access should lead to greater use of the hundreds of thousands of theses and dissertations completed each year, establishing them as a 'new genre' of widely distributed research. Electronic submissions also would...include sound and video clips, interactive simulations, and more color illustrations than are possible on paper" (Young, "Requiring Theses in Digital Form" A29).
2. Esther Dyson's book on the Internet and digital culture is entitled Release 2.0: A Design for Living in the Digital Age (New York: Broadway Books, 1997). Like Hypertext and Hypertext 2.0, it has a bright yellow cover jacket, featuring a profile picture of Esther Dyson on the front cover and the "@" symbol merged with a picture of Earth on the back. She addresses "the release" concept directly: "The very title of this book embodies the concept of flexibility and learning from errors: In the software business, 'Release 1.0' is the first commercial version of a new product, following test versions usually called 0.5, 0.8, 0.9, 0.91, 0.92.... If the product succeeds, the vendor launches Release 2.0 a year or so later. Release 2.0 is a total rewrite, hammered out by older, wiser programmers with feedback from thousands of tough-minded, skeptical users.... I have no illusions that there won't be need for Release 2.1, the paperback edition, and ultimately a Release 3.0 somewhere down the road" (5).
3. A version of Hypertext (called Hypertext in Hypertext) created for a course Landow teaches regularly at Brown University, "Hypertext and Literary Theory," can be accessed online at http://www.stg.brown.edu/projects/hypertext/landow/ht/HTinHT2.html.
4. Paul de Man observed the inclination among critics and warned of the pitfalls of insight in literary analysis, making the argument that "insight" is "gained" when critics are "in the grip" of a "peculiar blindness" (105). See Blindness and Insight (New York: Oxford UP, 1971).
6. Studies on children and instructional technology are beginning to reflect a similar concern. At a conference held at the Teachers College of Columbia University in December 1997 called "The Computer in Education: Seeking the Human Essentials," a number of educators were decidedly critical of computer technologies in the classroom. Many expressed the concern that the trend in encouraging computer use early in life may be at the "expense of budding capacities in areas that are least machine-like, such as imagination, creativity, intuitive thinking, and the contribution of emotions to cognition." Some felt strongly that the "potential for computers to do more harm than good if introduced too early in childhood" was great and that use might be restricted "until the age of 12, if not later" (A25). As Esther Dyson says of multimedia and education, "I think it's bad for our minds. Images may sell, but they don't enlighten. We're in danger of getting a society where people don't bother to think or assess consequences" (93). See Colleen Cordes's "As Educators Rush to Embrace Technology, a Coterie of Skeptics Seeks to Be Heard" in The Chronicle of Higher Education (January 16, 1998): A25-A26 and Dyson's Release 2.0 (New York: Broadway Books, 1997).
7. In his biography of Bush, Endless Frontier (1997), Zachary notes that "Bush was elated by the reaction to 'As We May Think' if for no other reason than that the piece had been a long time in birthing. He had tried to publish an earlier version of the essay in December 1939. The precursor to 'As We May Think' was the culmination of nearly a decade of thinking by Bush about the organization of data and how human beings might devise better tools to manage their own ideas. Bush first revealed the direction of his thinking in Technology Review, which in January 1933 published his speculations about a futuristic machine that supplied information on demand. By punching a few keys, a user of this imaginary device would receive the exact page--of notes or printed matter--that he desired" (268).
8. Bolt describes the auspicious phone call from Bush in Zachary's biography, Endless Frontier: There "was a long silence, and then he [Bush] said, 'Bolt, it is my considered judgment that you would serve the country better by staying where you are.' Now I had never heard the expression 'considered judgment' before. Boy, was I impressed."
9. Bush's rather inauspicious refusal to fund the landmark ENIAC project when he was chair of the National Defense Research Committee in 1940 exemplified his limited understanding of things digital (Zachary 265-268).
10. The first director of ARPA was Roy Johnson, who was neither a military man nor a scientist; Johnson was a vice president of General Electric, and his appointment is symbolic of the corporate, academic and military identities that have characterized ARPA.
11. Eugene Kashpureff's 1997 "hack" of Network Solutions, Inc. (NSI), a company that registers the majority of Internet domain names under the auspices of the National Science Foundation, demonstrates the vulnerability of centralized or non-distributed systems. For "five to six days in July the Internet had been 'repossessed' as users everywhere, heading for NSI's InterNIC domain name registry, ended up at Kashpureff's AlterNIC instead" (172). The big question, of course, is not how to but who will (other than the U.S. Government) distribute the registry. See David Diamond's "Whose Internet Is It, Anyway?" in Wired (April 1998): 172-177, 187, 195.
12. The term "hypertext" first appeared in "A File Structure for the Complex, the Changing and the Indeterminate," a paper Nelson delivered to and published in the proceedings of the Association for Computing Machinery in 1965.
15. The term cyberspace is first mentioned in Neuromancer in the context of the main character's longing for it: "A year here and he still dreamed of cyberspace.... All the speed he took, all the turns he'd taken and the corners he'd cut in Night City, and still he'd see the matrix in his sleep, bright lattices of logic unfolding across that colorless void..." (4-5).