The Origin of the Alphabet: Part 1

We all swim in the alphabet like fish in water or birds in air, so it is hard to appreciate what an astounding communications technology it still is even after thousands of years of use. So imagine what this new flexible technology must have seemed like when humanity first discovered it around 1500 BCE.  The easy literacy the alphabet enabled must have been at least as powerful and transformative in its time as the printing press, the telephone, the atom bomb, or the computer. These inventions produced rapid, breathtaking transformations of culture, shifts in power and wealth, disruptions of society, and creation of new ways for humans to relate to the universe and to each other.


Hyperversity Apocalypse: A 1989 Prophecy

We know the word “apocalypse” to mean the end of the world. But it originally comes from the Greek for “revelation from above.” Apocalypses were a set of prophecies about things to come, many about how the world would end. The most intriguing of the apocalypses are the so-called “Intertestamental Apocalypses,” written sometime between the last work to be included in the canon of the Hebrew Bible (the Book of Daniel, possibly second century BCE) and the canonical works of the New Testament (possibly letters by Paul, 1 Thessalonians circa 50 CE).

A common literary device of these Intertestamental Apocalypses was anachronism. The author would pretend he had found a manuscript written by an earlier figure from the Bible, say Abraham, or might even claim to be that author. “Abraham” established his credentials by accurately “predicting” events that had already happened and were well-known before proceeding to tell of events to come.

The following would be a nice bit of pretexting like these apocalypses, except I found it this week in a file that I recovered from an old external disk drive, into which I had dumped an old Zip disk, into which I had dumped files from old floppy disks and obsolete Macs. (I bought my first Mac in 1984.)


TO: Jim Meindl, Provost; Bill Jennings, Chair,

Committee on Computing at Rensselaer

FROM: David Porush, LL&C                         DATE: February 1, 1989

RE: Some input into your deliberations about the future of computing at Rensselaer

Attached you’ll find a long memorandum regarding a speculative design for what I call the “hyperversity.” Although the design I provide is couched in rather hyperbolic terms and implies a total overhauling of the university system, in fact it would be quite reasonable to implement the sort of hyperlibrary and hypertext-based courses I suggest without significantly disrupting the shape and function of the university as it presently stands. Indeed, it’s probably healthier to look at the hyperlibrary as an enlightened complement to all pedagogical activities rather than as a substitute.

Toward the HyperVersity: A Design for the University in 2000 and After

Disciplinary and bureaucratic functions have slowly increased their dominion over the modern university, obscuring some of the more basic values in the education of young citizens. Rensselaer is extremely well-positioned to capitalize on its native talents (among faculty and support personnel) and its favorable disposition to new computing technologies to correct this imbalance by building and implementing a new method of university education, flexible enough to encounter and adapt to any challenge, but conservative of the important basic pedagogical ideals and functions of the university as they are presently constituted. This new university would be founded on hypertext techniques in the archiving and accessing of knowledge – essentially functions of the traditional library – to build a hyper-library at the practical and theoretical center of university life. I call such a model for the postmodern university the HYPERVERSITY.

This memo offers

  • a theory of pedagogy underlying my design for the hyperversity;
  • an argument for why the goals of this theory now can and should be set as a central university function with the aid of state-of-the-art computing machinery, some programming, and the collaboration of the faculty;
  • a practical design of the hyperlibrary;
  • a portrait of how education will typically function in the hyperversity;
  • the redefined roles for professors, administrators, librarians, students, and researchers entailed in such a plan.

The interdisciplinarity of knowledge: Multiplying contexts for understanding

Whenever I look at the bank of steel file cabinets lining one wall of my office, I’m reminded of the modern university. Each file cabinet represents a different school, each school contains a different drawer or department, each drawer contains several sub-sections of faculty specialties or files, and each file, finally, contains the specialized and applied knowledge of individual people: professors, researchers, theorists, students. The rigid organization of the file cabinets is frequently overcome in the faculty/staff dining hall, in social arenas, in public events, in increasing administrative urging to create bridges between disciplines, in the formulation of new, interdisciplinary research programmes around new problems in industry, technology, or theory, in interdisciplinary- and team-teaching experiments, and in the larger university culture of shared loyalties, histories, and expectations.

But more often, on a day to day basis, the university is a deeply conservative organizational system that tends to preserve its own cabinetry and filing systems, implicitly discouraging too much cross-talk or file-hopping.

The motif everywhere in this conventional university is specialization: institutional rewards, the allocation of resources and departmental accounting procedures, even the physical arrangement of offices make it difficult, if not impossible, to operate across the disciplines. Sometimes a new department will form, often around a new interdisciplinary interest (like computing or artificial intelligence; chaos theory is on the horizon), and an old one will lapse for lack of relevance, but even these changes tend to be glacially slow and do little to alter or challenge the fundamental structure of the file cabinets themselves.

For the student, this rigid organizational plan and the model of knowledge it implies becomes translated into rigid courses of study with narrowly prescribed boundaries for completion of major and minor requirements. Finally, within courses themselves we almost invariably find what I call catechism masquerading as real education. As you know, a catechism is a method of instruction in which the student receives a manual or text summarizing the key points of a doctrine, often in the form of invariant sets of questions and answers which the student must memorize word for word. In catechistic instruction, the formal test, with numbered answers and quantifiably divvied dollops of information, becomes central in the student’s mind as a measure of what he has learned and also as a set of obstacles he must overcome in order to effect his rite of passage. That is why the courses that do not rely on quantifiable grades or present pre-packaged information as knowledge seem so exceptional. It is also why the courses that do are plagued by cynicism. In a recent conversation with a group of my students, some estimated cheating across the board in RPI courses to be as high as 70%.

The moral of the story is that any system that is so mechanical and simpleminded is also simply and mechanically (and cynically) turned against itself.

Knowledge is not a steel file cabinet

At bottom, what’s wrong with this institutional organization is that knowledge simply does not work like a steel file cabinet: phenomena are not disciplinary. Take a bridge, for instance. Is a bridge the expression of a designer’s blueprint or is it a transportation device? Is it an architectural creation or an economic one? Is it a social fact, linking two communities, or is it an ecological one, disturbing a riverbed and the river’s flow? Is it an historical fact, layered with accretions of symbolic associations, or is it a mere mundane fact, a convenience for those who use it? Quite obviously, a bridge is all these things and more. It is almost anything you make of it, and what you make of it depends on your point of view, what questions you ask of it, what’s motivating you to look at it and question it in the first place.

A colleague in Mechanical Engineering, Gary Gabrielle, asks his students to design a real-world device in order to complete his course. In one instance, his class works on designing and building a motorized wheelchair for children with muscular dystrophy. The students’ first goal is to define a set of desirable parameters. But before they do that, they must meet with the children themselves, interview the therapists who work with them, and try (though Gary doesn’t explicitly pose the problem to them in this way) to imagine themselves as afflicted with the same disease, forced to deal with conventional wheelchairs. Only then do some crucial aspects of the problem come into focus. The vehicle must be shorter than conventional wheelchairs. It should look more like a toy, in contrast to the forbiddingly technical look of conventional wheelchairs. It needs a low center of gravity for safety. It must be designed with adjustable controls so that the physically challenged children can learn how to use it. In one particularly poignant illustration of the complexity of this problem, Gary talks about how the joystick controls must at first have self-limiting responses: if the chair over-responds to a motion and scares the user into reacting too quickly, the disease has the idiosyncratic effect of making the user freeze up in one total body seizure, which might in turn push the vehicle to respond dangerously, not to mention embarrass the user and make learning how to use the wheelchair even more daunting.

In short, what looks like a tough but manageable problem in mechanical engineering soon unfolds into a much tougher, much more complex, almost unmanageable interrelation of contextual problems that are physical, psychological, social, experiential, environmental, aesthetic – or in other words, human. In addition, these students work in teams, and must figure out how to manage not only the contexts for their project, but each other: what if one student is lazy or ill-qualified? What if another likes to boss his teammates around? What if everything hinges on the completion of a minor task and the student who was supposed to do it falls ill?

By no means could a steel-cabinet approach succeed in Gabrielle’s course or solve these problems easily. While specialized knowledge is absolutely necessary, it is not sufficient: most of the time the real world poses us problems where the solution relies on getting a three-dimensional grasp of the slippery human contexts.

The failure of the file cabinet approach is most easily and painfully seen in the careers of our students. As Jim Meindl is fond of saying, RPI graduates who go to work for larger corporations have little difficulty getting their technical (first and third) promotions but a lot of difficulty getting their managerial (second and fourth) promotions. And whether or not statistics actually bear this out, there is an important perception there about the skills our students get – or don’t get – from Rensselaer. A manager is invariably the problem setter, the one who frames the contexts, the one who sees the bigger, more indefinable picture and asks the larger questions and knows how to express them succinctly for others.

If you accept my premise that most tasks, even apparently mechanical design tasks, are more like the hyper-wheelchair design problem Gabrielle’s class faces than like solving a tough partial differential equation, then you might agree that there is a fundamental mismatch between the steel cabinet and the world out there. Furthermore, this mismatch makes the university into a place that prepackages knowledge into easily consumable and digestible bits, grossly oversimplifying how the world works. Adult teachers and administrators with superior experience aren’t easily deluded into accepting this oversimplification, but I’m afraid that our students often are. It is very hard to persuade a student who is told he ought to declare a major at the age of 17 or 18 that the world isn’t a specialized place. Similarly, it’s very hard to persuade a student who is trained to give rote answers to pre-packaged questions that the world isn’t similarly catechistic.

Multiplying contexts for thinking

Opposed to catechistic models of learning are the exegetical and dialectical models. In exegesis, knowledge is viewed as something you acquire by inspecting and interpreting problematical, open-ended events (texts) for which there are no clear solutions. For instance, where catechism relies on students’ abilities to memorize the “right” answers, exegesis requires that students uncover hidden information, decipher systems of signs, piece together clues to a puzzle that might not have one proper arrangement, choose the right tools or formulas to “open the box,” and ask the right questions. Where catechism excludes alternatives, exegesis multiplies alternatives. Catechism is woefully limited, exegesis marvelously unlimited. In catechism, there is only one, higher final authority. Exegesis teaches the searcher to authorize (and be responsible for) his or her own answer. The dialectical element in learning is emphasized: students inquire of the field of study, engage it in a dialogue. The field of study – in the person of an instructor or professor – in turn asks questions meant to spur the student to arrive at his or her own answer. This model of teaching passes the litmus test that a Rensselaer education fails: how easy is it to cheat the system?

The problem, then, is how to get students to begin, even at the earliest stages of their college careers, to learn how to multiply contexts for their thinking, take responsibility for their solutions to problems, and appreciate the value and necessity for their future careers of doing so. Certainly, continuing to divide “knowledge” up into narrower and narrower foci, or simply multiplying file folders, or interrogating students by “plug and chug” catechisms is not the solution, for even in its intrinsic structure, such an organization communicates the wrong message.

The solution, I would maintain, is within our grasp, and is especially important in the context of Rensselaer’s deliberations over the future of computing here.

Hypertext as curricular model

If we want to dispense with the steel file cabinet metaphor and the catechistic interaction between professor and student it too often requires, then what should take its place?

I would like to offer a new metaphor: the hypertext programs now popular in the microcomputer world.

Hypertext is, quite simply, a new, non-linear concept of accessing information made possible by the raw power of the contemporary computer and some imaginative programming. Hypertext has been expressed first as several high-level programming languages, including the popular HyperCard (for the Macintosh), which make it extremely simple for the software designer to multiply contexts or different points of entry or ways of viewing the information held in large, complex databases. In its purest form, a hypertext system takes a finite but large collection of data, even data held in different media (video, audio, text), and enables any number of routes to cross-correlate – or build bridges between – aspects of those data. In common parlance, such multiplication of opportunities to see connections creates a “knowledge environment,” and some hypertext programmers style themselves “knowledge architects.”
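[In present-day terms, the structure this paragraph describes is a labeled graph of mixed-media nodes. A minimal sketch in Python – the names, media types, and sample content below are purely illustrative, not part of the memo’s HyperCard context:]

```python
# A hypertext node: one piece of data in some medium, with labeled
# links ("bridges") out to other nodes.
class Node:
    def __init__(self, title, media, content):
        self.title = title      # e.g. a work, a review, a video clip
        self.media = media      # "text", "image", "video", "audio"
        self.content = content
        self.links = {}         # label -> Node

    def link(self, label, other):
        """Build a labeled bridge from this node to another."""
        self.links[label] = other

# Illustrative data, echoing the museum scenario below.
sculpture = Node("Untitled (1967)", "image", "photo-of-sculpture")
review = Node("Magazine review", "text", "The work resists...")
studio = Node("Morris at work", "video", "studio-footage")

sculpture.link("critical review", review)
review.link("creative process", studio)

# A visitor's route through the environment is just a walk along labels:
node = sculpture
for label in ["critical review", "creative process"]:
    node = node.links[label]
print(node.media)   # video
```

The point of the sketch is that no route is privileged: any node can grow new labeled links, so the same data supports arbitrarily many paths of inquiry.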

SCENARIO 1: In a project already underway at Rensselaer, my colleague Pat Search and several associates are building a hypertext environment for the Massachusetts Museum of Contemporary Art. According to their description, a hypertext system will make the following scenario possible:

A visitor to the museum with no prior training in the use of computers is intrigued by a Robert Morris sculpture in one of the galleries. Wishing to find out more about this artist’s work, he walks over to a large computer monitor on a nearby gallery wall, calls up an information menu, and chooses to see a series of images of Morris’ major works, many of which are in private collections abroad.

Stopping at an image that especially interests him, the visitor presses a key to zoom in on a detail, then requests and receives a 360-degree video view of the entire piece, as well as a (written) review of the work that appeared in a leading art magazine. In the middle of reading the review, which includes a description of the artist’s creative process, the visitor calls up a video of Morris making one of his sculptures. [From press release, 1/4/89]

Hypertext permits the student to continue this process almost ad infinitum. In the hyperversity, the only limit on the student’s field of inquiry would be his or her own curiosity (or the problem set for him) and the size of the database on-line. The student could call up a history of Morris’ development, an account of artists who influenced Morris, and pictures and reviews of their work. Given a large enough resource (database), the student of the work could go even further. He could examine a history of sculpture in America, or find out more about the material used by the sculptor and its special properties, down to its mechanical and thermal coefficients. He could call up other videos, including tapes of people viewing the sculpture and artists creating other sculptures. He could discover relations between engineering and sculpting. In terms of his goals in a particular course, a student could request to see final reports, tests, or work performed by students who have taken the course in the past. He could look, if the instructor was willing to offer the information, at the grading history of the instructor or at the instructor’s professional resume. At any point, the student could

(A) ask for a bibliography for further reference, and either call up entries in that bibliography or simply print it out for future reference; or

(B) ask for a printout of selected textual material or simple diagrams or images; or

(C) ask the computer to suggest related topics and keywords (GUIDED SEARCH); or

(D) ask for a definition or explanation of ambiguous words, or for entries on particular names, things, places, dates, etc.

SCENARIO 2: Students in the Shakespeare Project at Stanford University can access three or four different performances on videotape of several of Shakespeare’s plays. They can watch the video as the text of the dialogue appears on the screen, and at any point stop the video, ask for commentary on the text, or access an archive of thousands of photographs and drawings of stage sets, costumes, and props. Or they can switch to the performance of the same scene by an entirely different company (for instance, juxtaposing Nicol Williamson’s famous Hamlet with a recent and more sombre Russian performance). Furthermore, they can animate their own version, using stock animation characters that they can manipulate on the video screen, dress up in different costumes, and place in front of an array of different stage sets. (See “The Inter-Play’s the Thing,” MACUSER, March 1989, pp. 108-114.)

Now your reaction to this might be, “Fine, such a system works well for the fuzzy sets of liberal arts and social science courses, but what about technical courses where hard, invariant facts and formulas are crucial to a student’s success?” Well, ignoring for a moment the discussion we might have over whether staging a production of Shakespeare is a highly technical enterprise or a mere liberal arts activity, let’s see how hypertext might apply to a more conventional Rensselaer course. Here the answer lies in the virtually infinite adaptability of the hypertext system.

SCENARIO 3: Imagine Calculus, first week. A student wants to know the meaning of the integral sign. She calls up a simple definition. But she could also call up the history of Leibniz’s choice of the letter ‘d’, or the history of Newton’s development of the fluxions (including animations of the exact physical problems they tried to solve), perhaps set as a problem for the student to solve in an interactive situation in which the wrong answers would also be animated. The student could also witness and manipulate animations of the increment under the curve, limits, changes in the curves according to different algebraic formulas, velocity and acceleration, and alterations in the curve upon derivation and integration. Furthermore, illustrations of several real-world problems of applied calculus could accompany any theoretical discussion.

Technical skills are already taught in interactive hypertextual mode, as in the famous diagnostic programs, with videodiscs, used in some teaching hospitals. In fact, the distinction between vocational-technical training, on the one hand, and liberal arts learning on the other, should actually become harder to make in the hyperversity. The positive feedback loop – the absolute symbiosis – between culture and technology/science becomes quite apparent if one follows the thread of any problem through all its labyrinthine twists and associations.

The advantages of a hypertext system for learning should be immediately apparent:

  • It permits students to proceed at their own pace;
  • It’s self-guided, allowing students to follow almost any avenue;
  • It’s knowledge-oriented rather than task-oriented; it spurs students’ curiosity first and gives a more realistic sense of the inexhaustibility of knowledge;
  • It’s multi-sensory, stimulating the brain through 3-d visualization, animation, video records of actual authors, events, objects in motion, and sound and color, through dramatization and dialogue, as well as through the linearity of written texts and taped lectures;
  • It’s open-ended and permits student creativity at all points rather than pre-packaged and finite;
  • It portrays the inherent inter-connectedness of disparate fields of research and different realms of learning;
  •  It contextualizes information, giving meaning to data by showing their place in the larger scheme of things;
  • It asks professors to redefine their own goals in teaching. Professors now become not so much conduits for bare facts and formulas, but are freed to become problem setters, resources for human knowledge that can’t be “uploaded” to the database, models for ways of thinking and styles of problem solving, rather than administrators and accountants of problem sets.
  • It brings students into more intimate contact with the real world, helping to break down the unrealistic conjurings a university education can sometimes inspire.

But in my mind, the most important consequence of such a design is that

  • It puts the entire information resource of the university – and soon enough, I’m sure, all libraries that are digitized  – at the disposal of every student for the investigation of every problem.

The pedagogical problems with such a system, as with most sophisticated tools, are a result of its very strengths. For immature researchers (our students) it can create the Alice-in-Wonderland Effect, bombarding the hapless student with an overwhelming amount of new information, unfamiliar viewpoints, and dizzying contexts. However, within the structure of well-defined course goals (particular problems to be solved, a limited time in which courses occur), the problem of infinite horizons can be controlled and managed. Professors can also easily design their own routes in the system, creating “more favorable pathways” for inquiry without blocking others.

Another problem is that it would be very easy to build inert hypertextual courses, recapitulating, in effect, the problems of our present system in a glitzier, fancier, more high-tech format: students could be made even more passive by relying on hypertext sessions the way they rely on TV (the “FEED ME” effect), thus losing the interactive stimulation advantage such systems have. In short, we’d be left with a very expensive hypercatechism.

The way to avoid such a scenario is to embed deeply within the values and technology of the hypertextual system a sense of exegesis.

How to build a hyperversity

First and foremost would be to expand our understanding of the library as a central resource. With their skills at knowledge organization and resource management, librarians would work closely with professors to upload and archive everything relevant to courses taught here. They would then organize favored pathways for directing the researcher (student, user) from place to place, especially linked to particular courses. Pat Molholt informs me that there are already steps underway to build a hypertextual sort of administrative shell to help students select courses. She also has told me about plans to build a primitive hypertextual search procedure employing a sort of “shelf” search in on-line requests, in which a user could ask not only for titles matching a keyword, but also for the titles one would find next to them on the physical shelves of the library. (This of course relies on the underlying Library of Congress system for organizing knowledge, but it is a good prototypical start of hypertext thinking.)
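[The “shelf” search is simple to sketch in today’s terms: given a keyword hit in a catalog sorted by call number, return the matching titles plus their shelf neighbors. The catalog entries below are hypothetical stand-ins, not the actual RPI catalog:]

```python
# Hypothetical catalog, sorted by Library of Congress call number,
# so that list position models physical shelf position.
catalog = [
    ("PR2807 .A2", "Hamlet"),
    ("PR2819 .A2", "King Lear"),
    ("PR2823 .B7", "Shakespeare's Tragedies"),
    ("PR2976 .K6", "The Wheel of Fire"),
    ("PR3071 .S5", "Shakespeare Our Contemporary"),
]

def shelf_search(keyword, radius=1):
    """Return titles matching the keyword, plus `radius` neighbors
    on either side of each hit in shelf (call-number) order."""
    hits = [i for i, (_, title) in enumerate(catalog)
            if keyword.lower() in title.lower()]
    found = set()
    for i in hits:
        lo, hi = max(0, i - radius), min(len(catalog), i + radius + 1)
        found.update(range(lo, hi))
    return [catalog[i][1] for i in sorted(found)]

print(shelf_search("lear"))
# ['Hamlet', 'King Lear', "Shakespeare's Tragedies"]
```

The neighbor lookup is what makes this a hypertext gesture: the physical adjacency of books, itself a product of the classification scheme, becomes one more link a searcher can follow.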

Scanning technologies now enable us to upload text directly from books and translate it into ASCII code, where it is manipulable and accessible by cross-referencing. This will pose important questions as to whether the integrity of a book – the information between its covers – should be preserved, or abandoned in favor of throwing everything into the stew, so that a request for information about Shakespeare’s King Lear could leap between sections of two distinct books, one about the King Lear image in all literature, another about Shakespeare’s tragedies, as easily as between chapters of the same book.

Optical disk and CD-ROM technologies already permit us to mix audio, video, and textual material in massive memory capacity. Professors could videotape lectures for replay, although the very concept of lectures themselves is likely to alter. Libraries of movies, TV science documentaries and interviews, and such archived material as the Feynman lectures might all be uploaded and, with some sweat, integrated into the system. Students would find that the need to learn computing languages or systems of linear commands (as MTS requires) in order to access archives or communicate via e-mail would disappear in favor of visually organized intuitive interfaces that require no special skills.

The main point, though, is that the system is expandable; it is tied to a concept deeper than individual systems of hardware and a concept of education more durable than the limits of educational software, which has been slow in coming anyway, as a recent Business Week (October 1988) pointed out. I think this is because the investment of hours of labor per minute of presentation time is simply not worth it in most instances, not to mention the lack of institutional support and reward for such non-research activities.

Hypertext systems, by contrast, require a fundamental reorganization of how one organizes and accesses information. And though to get a truly sophisticated archive like the one I describe above would require massive commitments of labor, a more primitive version, lacking some of the fancier features of stop-video accessing and commentary, could conceivably be implemented today.

But I would argue that it would be more efficient, more effective, more persuasive, more honest – a more total redesign of the university’s learning environment – to plan the implementation of such a system over ten years and then make the leap all at once, say in 1999. Imagine if Rensselaer were the first on-stream hyperversity of the third millennium!

Hyperstudents, hyperprofs, hyperadministrators, and the Institute Requirements

The new Hyperstudent: In the hyperversity, students are led to take an active and responsible role in choosing their course of study. They must have an inquiring mind in order to make the hyperlibrary work as anything more interesting than a dictionary. They are constantly in dialogue with their hyperprofs as they struggle through various pathways in the hyperarchives; they may often need to check with profs as they begin to explore solutions to the problems profs have set them. It is conceivable that their education will come to be viewed as one long intertwined thread of voyaging through the archives (exploring, for instance, a new application for superconductivity, or a model for an artificial intelligence, or the ideal wing of a space vehicle, or the structure of a new polymer, or a new sintering procedure, or a new design for a bridge, or the structure of sub-atomic reality, or the writing of a software procedure for simulating a game, or the simulated marketing of a new product, or the organization of a hypothetical corporation, or the design of a new computer interface, or …) rather than a series of incremental little steps, with their sweaty and trivial little hurdles and obstacles, concern about grade point averages, and next Monday’s tests. Though certainly the student will have to meet deadlines, even deadlines can be viewed as milestones. More and more, the hyperstudent will find his or her professor the source of information not found in any text or archived material, a collaborator in the building of a set of answers to be framed to specific problems.

The new hyperprof  finds that his or her more idealistic pedagogical functions and performances have been stripped of some of the more obnoxious tasks: he or she is no longer a gatekeeper, clock watcher, or administrator of obstacles. Rather, as implied above, he or she is now a combination expert resource, human reality checker, problem designer and setter, collaborator, and courseware designer or knowledge engineer, where designing a course may mean nothing more than posing some interesting problems – solved or unsolved – to students based on their level of skill or expertise or curiosity. The professor also becomes the standards controller (or quality control expert) who designates a system whereby the student can measure his or her relative success, if such a measure is deemed possible or even desirable (as it is in most circumstances).

The hyperprof will often meet with students but in varying venues and with more various goals. Large plenary sessions, lectures, presentations, etc. will be necessary as context-settings for different courses and for creating a socializing experience for students. Smaller discussion groups and seminars will be important for discussing special problems, designing teams for problem solving, and more intimate instruction. For many kinds of instruction, nothing will ever replace the classroom environment: for Socratic dialogue, for the creating of feeling and sympathy, for the establishment of shared values, for demonstrating styles of thinking and researching and creating, for performing and demonstrating certain procedures, for sharing experiences, for creating a spirit of competition and collaboration. The one-on-one meeting will often be crucial, especially as students find their way into uncharted territory in their search for answers to specific problems. And, of course, students can meet professors through the computer: through e-mail, through videotapes, through leaving a marked trail of their course through the computer which the professor can monitor (if parties agree or the prof makes it part of the course requirement).

Almost 50% of the new hyperadministrators might find they are, in one form or another, archivists or librarians, although the definition of the hyperlibrarian will be expanded to include software and hardware experts, artificial intelligence designers, communications experts, videotape artists and technicians, information guides, courseware designers, teaching assistants, research assistants, etc. Other administrative functions ought to remain the same, with the need for getting, maintaining, administering, apportioning, and adding resources (fiscal, physical, and material), attracting students, sponsoring programs, communicating with the community at large, alumni, parents, news organizations, etc., arranging the calendar, scheduling meetings, and so on.

Finally, we come to the shape of the student’s four-year career here, best expressed now in the bureaucratic mess we call the Institute Requirements. Since many of the disciplinary boundaries will be dismantled by the hyperversity system, there will be no need to require a distribution of courses in one or another domain. Students will find quite naturally that to find out everything they need to know to solve a problem or to research a subject or phenomenon, they will ultimately confront questions in domains represented by most, if not all, of what we now call the five schools. Similarly, the notion of a normal progress of courses from simple to complex will either be deeply embedded in the hypertextual route a student follows in pursuing a topic, or else a trivial and irrelevant question.

Either a student already knows what he or she needs to know, or they will soon find out, or they will fail in their mission (because they couldn’t understand, didn’t ask the right questions or didn’t care to understand or ask). The idea of declaring a major will still hold some value, although the idea of a ‘cum’ may not. In place of a transcript of courses, a student could simply request a printout of the map of his/her tour through the archives, marked by dates and times, with professors’ comments and evaluations appended at important landmarks along the route. The registrar’s function as gradekeeper ought to be obsolescent in the hyperversity, subsumed into the more enlightening task of tracking a student’s progress. A list of problems solved, devices designed and/or built, and questions answered satisfactorily or not might also be supplied.

University as mini-universe

In the end, this is a vision of the university as a mini-universe we invite our students to explore with us.  Technology, and the new way it invites us to interact with knowledge and learning, turns the university into an open, interconnected, ever-expanding book – or multi-media event – rather than a forced march down a few canals or channels. I believe it is a much more enlivening, emboldening, and enticing view of how we can fulfill our mission to teach, learn, research and treat each other as humans into the next millennium.



The Origin of the Alphabet and the Future of All Media

The origin of the phonetic alphabet in the 15th c BCE

We all swim in the alphabet like fish in water or birds in air, so it is hard to appreciate what an astounding communications technology it is. The advent of this new flexible form of literacy must have been at least as powerful and transformative in its time as the printing press, the telephone, and the atom bomb were in theirs or the computer continues to be in ours. Each produced rapid, breathtaking transformations of culture, shifts in power and wealth, disruptions of society, and creation of new ways for humans to relate to the universe and to each other.

Yet, incredibly, there seems to be no record in ancient literature of this stunning breakthrough. We have to look forward at least six centuries to the Greeks (around the late 9th to 8th century BCE), who commemorate the origin of the phonetic alphabet in the myth of the Phoenician King of Thebes, Cadmus. This is even more surprising considering the alphabet provides an easily accessible means to record and celebrate its own birth.

I believe the record of this first, true advent of the alphabet has been hiding in plain sight, right before our eyes, in the Hebrew Bible.

Writing arises spontaneously and independently in many cultures across the globe before and since the invention of the alphabet. Pictographic writing was in use at least since 3300 BCE by the Sumerians, Egyptians, and Chinese. They built vast and enduring empires on the competitive advantage this technology gave them.[1] But to further add drama to the rise of the alphabet – and the apparent silence on its debut — archeologists agree it was only invented once in all history and in a specific place and time. Paleographers trace its origin, or at least its first known occurrence, to a busy, long-established mining site called Serabit-el-Khadem in the South Sinai sometime around the 15th century BCE. These early phonetic signs were apparently a sort of graffiti scrawled by slaves on the walls of Egyptian turquoise mines. Epigraphers identify this as a very primitive Hebrew called “proto-Sinaitic.”

Proto-Sinaitic developed into a later Sinaitic script recognizable as early Hebrew. This primitive phonetic alphabet – Hebrew is still written without vowels — spread northward in the ensuing centuries to Canaan and Phoenicia (northern Israel and Lebanon).

You were almost certainly taught, and take as gospel, that the alphabet was invented by the Phoenicians around the 12th-9th century BCE. They carried it to the Greeks, who commemorate its advent in the mythology of King Cadmus. But the inscriptions from Serabit-el-Khadem and other sites around Sinai show incontrovertibly that alphabet graffiti were in use hundreds of years earlier and further south than any possible invention by the Phoenicians.

However, that the alphabet might have been invented by Hebrew slaves, or is in any event traced to Hebrew, is a story that has been buried, effaced, and over-written by Western history.

What is true is that the Phoenicians significantly improved the alphabet by adding signs for vowels, so Classicists insist that any prior alphabetic inscriptions weren’t true alphabets. And since the Phoenicians transmitted the alphabet to the Greeks, the classical originators of Western culture, this mythology of the alphabet has become orthodoxy. Even Israelis, despite their love of archeology, take this story for granted. When I was a Fulbright scholar at the Technion in Israel in 1994, the University of Haifa had an exhibition on the alphabet that elaborately and definitively showed the Phoenician origin of the alphabet in the 11th century BCE. It was utterly silent about the strong evidence for its Hebrew origins centuries earlier and a few hundred miles south.

Therefore, what is also true of the received myth of the origin of the alphabet is that it was invented only once, though not by the Phoenicians. Since proto-Sinaitic and early Hebrew are the antecedents of the Canaanite, Phoenician, and Greek scripts, all alphabets, including Cyrillic, Arabic, modern Hebrew, and Latin, and all their many variations and imitations, can trace their origins either directly to it or to alphabets it inspired.[2][3]

The genius of the phonetic alphabet comes from a simple but profound insight: a single sign can represent a single atom of sound, a phoneme, rather than a whole word or thing. Instead of thousands of characters needed by the pictographic or logographic scripts in use at the time, the alphabet required only 22-26 signs.[4]

You no longer needed to be apprenticed to the priesthood of scribes from an early age or have the luxury of a prince to learn to be literate. The phonetic alphabet could make anyone literate in a day or two. Even a slave could have the power of pharaohs. In fact, alphabetic literacy gave slaves a power that exceeded that of the pharaohs.

Re-reading the Hebrew Bible as the story of the phonetic alphabet

The alphabet, and the universal literacy it enabled, was the ultimate disruptive new tech of its age, especially in its environment of hegemonic empires and nomadic oral (illiterate) cultures. Because it was simple and made literacy universal, anyone could broadcast their expressions to a much wider audience. The means to own one’s own private publishing house was in anyone’s hands, much as anybody in the early years of radio and the Internet could create their own broadcast. It could represent any language well enough. It was more abstract and enabled new cognitive powers to blossom. It invited self-reflection, self-empowerment, and self-affirmation. It enabled the writing of any concept, emotion, or abstraction that could be said or thought in words.

Think of what the Internet did to the Soviet Union, how much energy totalitarian regimes like North Korea and Iran must put into controlling it, and how even an open, democratic society was shaken by independent operations like Wikileaks, and you get a flavor of what the alphabet must have done to Egypt and other aging empires in the region.

This is why it is astounding that no record of the invention of the alphabet can be found in ancient literature, at least so it seems. It would be surprising if the Torah didn’t record the advent of the alphabet. After all, it is the first document of any length to be written in the alphabet and remains to this day the most widely read text. It records the origin of a new tribe or culture, the Jews, and their liberation from slavery to fulfill a new destiny. It reimagines the story of the origin of the world and the human role in it as moral beings. It encodes a new moral order and a new direction for the task of human living.

It contains the story of the revelation of a new God, Whose new abstract Name is written in that alphabet as the Tetragrammaton. How could it not refer also to the origin of the transcendental technology that made all this possible? If we read the original Hebrew afresh and pare away traditional translations of the text, we find the story might be right there in plain sight.

According to the Torah, when Moses ascends Mount Sinai the first time, he is living among the Midianites as a shepherd, having fled there after killing an Egyptian taskmaster who was beating a Hebrew slave. Coincidentally, Serabit-el-Khadem was in the province of the Midianites in those centuries. Is this just coincidence, or are there other clues that the story of Moses and the advent of the alphabet are intertwined?

Everyone knows the things God reveals to Moses in Sinai. He appears to Moses in the Burning Bush. He displays his ineffable nature. He reveals His ineffable Name that, by no coincidence, is an incomprehensible profundity expressed in four alphabetic consonants, the Tetragrammaton. He shows an ineffable aspect of His receding face to Moses. He charges Moses to go back to Egypt to liberate the children of Israel. But the text of the Torah, if we read it very closely, stripped of its vowels as it is in the original, tells us quite explicitly that something else happens: God teaches Moses the alphabet. He shows Moshe the “signs,” which in Hebrew is sometimes written את (pronounced ‘OHT’) and sometimes written אות (also ‘OHT’), a word that is more definitively associated with “letter.”

There is a remarkable convergence of meanings encapsulated in the two Hebrew letters aleph א and tav ת. They are the first and last signs of the ancient script, its alpha and omega. The two letters together, depending on which unmarked vowels the speaker reads them with, can mean either “you” (AHT – feminine) or an otherwise empty grammatical marker of the accusative case (EHT – for which there is no translation in English). Most significantly for this reading of Exodus, the word for “sign” (OHT) is also the word for “letter” (OHT).
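The little game with the unvowelled skeleton can be made concrete. Here is a minimal, illustrative sketch (the `readings_of` function and its glosses are my own labels for the readings discussed above, not a linguistic tool) showing how one consonantal skeleton supports several words depending on the vowels the reader supplies:

```python
# Illustrative only: the readings of the unvowelled skeleton aleph-tav (את)
# discussed in the text, keyed by the vowels a reader might supply.
READINGS = {
    "AHT": "you (feminine singular)",
    "EHT": "grammatical marker of the accusative case (untranslatable)",
    "OHT": "sign / letter",
}

def readings_of(skeleton: str) -> dict:
    """Return the recorded vocalizations for a consonantal skeleton."""
    return READINGS if skeleton == "את" else {}

for word, gloss in readings_of("את").items():
    print(f"{word}: {gloss}")
```

The point of the sketch is structural: the consonants are the stable data, and the reader’s choice of vowels is the lookup key that selects a meaning.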

Moses humbly, or out of fear, protests that for all these metaphysical revelations, he is ill-equipped to be a liberator, for he is not good with language. God insists: show these letters to the Israelites, and “If they do not believe in the voice of the first letter (kol ha’oht ha’rishon) they will listen to the voice of the last letter (kol ha’oht ha’acharon).” Further, God promises, “And with these letters (signs) they will worship me on this mountain.” Moses returns to Egypt and does as he has been commanded. He shows the signs to the elders of Israel. In this reading, he has disseminated literacy among the Hebrews. He is now prepared to confront Pharaoh.

Aaron and Moses “do the signs” with their rod in Pharaoh’s court. Pharaoh summons his own wizards to show that this upstart technology is nothing special and that they can reproduce Moses’ tricks. However, it is clear that this is a battle of writing systems, a contest between two powerful communications techs. They turn water into blood. Pharaoh’s guys do the same. The Hebrews summon frogs. So do Pharaoh’s wizards. The ten plagues can be understood as ten demos of the alphabet’s agility, its power of abstraction.

To lend credence to this reading: strikingly, the word that is traditionally translated as “wizard” or “sorcerer” is the Hebrew chartoomeim, which literally means “stone writers.” To this day the Hebrew for “hieroglyphics” is c’tav chartoomeim. These wizards aren’t magicians in command of magic, as translations and tradition lead us to imagine them; they are Pharaoh’s hieroglyphic scribes, in command of the advanced technology of hieroglyphic writing. As Arthur C. Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.” Of course they are alleged to be able to work wonders and have mystical abilities. To be a hieroglyphic scribe you must be trained from an early age to become literate in tens of thousands of signs. As keeper of Pharaoh’s most potent weapon, his ability to command and control a nation by owning the means of communication, of course working your devices seems to command magic powers, and of course you are given special status as a member of a priestly class.

For all their power, though, the hieroglyphic scribes discover they can’t compete with writers of the alphabet. They give up trying to compete with the third plague, when Moses strikes the dust and lice emerge from the dust throughout the land to plague the Egyptians. The scribes admit defeat and exclaim, “This is the Finger of God” (!)

But why do they give up at this point after having no trouble matching the transformation of water into blood or summoning frogs from the mud? A clue is in the nature of the transformation. Hieroglyphic signs for frogs and blood are well-known. What are dust and lice? In Egyptian, the word for lice is “tiny” or “diminutive” – the same word used for little girls – and they didn’t seem to have a glyph for it.[5] Nor does there seem to be a hieroglyph, even in later Egypt, for “dust.” Lice, like dust, are ubiquitous but nearly invisible little nothings. They are like the finger of a ubiquitous but invisible Deity stirring the pot of the universe and history. Kinim [כנמ], the Hebrew word here translated as “lice,” is used in Israel to refer also to those tiny gnats that make a buzzing sound but which can’t be seen. In the American South, they call them “noseeums.” Furthermore, the letters for plague are D-B-R [דבר]; plural D-B-R-M [דברם]. By supplying different vowels from those in traditional interpretations, these letters can also signify words or things or statements or even commandments, as in the Ten Commandments. As a word, DBR דבר is itself, like את, a one-word demo of the power and facility of this new script.

The hieroglyphic scribes declare that the transformation of these two “noseeums” into each other, these two abstractions, is beyond all their power. It is the work of a much greater technology than the one they command, and the God of the Hebrews must therefore be more potent than all of their many cumbersome, substantive gods, their idols, combined.

A clearly-conflicted Pharaoh recognizes the threat this new power, and the new God it has summoned or demonstrated, poses to his grip on power, but he is torn. These newly-empowered slaves are also critical to his economy. They are currently employed in a massive public works project dedicated to his own glorification. How humiliating would it be if Pharaoh let them go so publicly and conceded defeat to a God of slaves? The Exodus of slaves from Egypt is inexplicable, unprecedented, and unique in history. When else has a powerful ruler let his slave population go in the middle of a large public works project? What awesome event could have compelled Pharaoh to do so?

The Hebrews flee after the tenth plague induces Pharaoh to let them go. The Red Sea parts and refloods to drown Pharaoh’s pursuing army. Moses leads the Israelites back to Sinai, where as God promised, they worship Him. What is the instrument of that worship? It is a new covenant between the human and the metaphysical, a new picture of the universe and history of the world, written in the new script, the Torah, filled with instructions for how to operate this new mode of living. One command is that everyone will learn to read and understand the Bible that is now accessible in this new communications platform.

As the document of wandering, dispossessed slaves charged with a mission from God, it becomes a revelation that everyone can directly participate in and whose mysteries anyone can try to plumb. The alphabet contains the magic of abstraction and the ability to express everything that can be said. Because of its ease of use and infinite flexibility, it invites the expression of things that have never before been expressed, or perhaps even thought, by anybody. It creates a new ethical relationship among people by giving everyone an expansive way to express what they think and therefore, to recognize the interior life of others. Even if there were no explicit reference to the advent of the alphabet per se, the Torah is intrinsically a record of the total cultural revolution of the Hebrews.

In other words, the medium of the Torah is, along with all its other meanings, also its message, implicitly coded in every word and letter. And that message is the advent of the alphabet and the way it changes everything.

The Torah tells us that it is the Word of God. Further, Jewish tradition enshrines Hebrew as a holy tongue, Lashon Kodesh, and reserves for it special power and layers of simultaneous and hidden meanings and correlations – we get a taste of it in דבר and את — that other scripts can only faintly imitate, if at all. Jewish mystical tradition holds that God wrote the Torah before Creation, an inscrutable recipe for the Universe.

Even if one does not ascribe this metaphysics to the Torah, one can understand why it felt to the Hebrews as if it must be metaphysical and why the Jews subsequently became that most text-obsessed people.

The arc of all media tends towards telepathy

Whether or not one believes that the Hebrew alphabet was a divine revelation to Moses on Sinai, we can understand why the cultural moment of its invention would be recorded as one of the most transformative revolutions in history. We can see how the conception of an omnipotent, omnipresent and invisible God is coeval with it. We can understand why a powerful ruler, confronted with those who possess this new technology, would be torn between expelling and eradicating them. We can see why a culture of slaves who seem to come out of nowhere would attribute to it mythologies of redemption, revelation, and revolution. That all this coincides with the best evidence we have for the actual historic origins of the alphabet lends force to the argument. As such, the origin of the alphabet becomes a model for understanding other moments in history that were wrought by the sudden eruption and deployment of disruptive technologies, especially technologies of communication. They inevitably bring a new ethos, new cognitive tools, new arts, new epistemologies, and new gods.

Today, in 2015, I believe we stand on the verge of yet another such breakthrough, with new advances in the ongoing computer-cybernetic revolution. We are rapidly taking steps towards the realization of mind-to-mind communication enabled by brain-to-computer-to-brain technology. The journey to “Technologically-Mediated Telepathy” is latent in all the prior communications revolutions. From the time hundreds of millennia ago when we started grunting symbols, or 20,000 years ago when we started painting on cave walls, through writing and computers, we were already on the road to telepathy. After all, what are all media, what are all communications, all arts, all expressions, if not an attempt to trade subjectivities, to get what’s in my mind into yours faster, more faithfully, more sensationally, and to fulfill that universal human urge for intimacy and recognition?

All this would be science fiction speculation, an interesting theory. But events are catching up. Dozens of parallel research projects are engaged in getting computers to hook directly into and “read” brains, either to record and decipher what is being experienced in the brain or to enable humans to control various devices with their “thoughts.” Others are involved in getting that reading into a format that can then be transmitted to other brains: brain-to-computer-to-brain communication, or technologically mediated telepathy. And on June 30, 2015, no less a chacham than Mark Zuckerberg announced that he envisions the future of Facebook as enabling people to read each other’s minds, a natural enough goal for a technology devised to help people share intimacies.





[1] Some aboriginal peoples in Africa, the Americas and Oceania still use pictographs as their main writing system. Along the way, there are dozens of separate pictographic origins, including Olmec (900 BCE) or Zapotec (600 BCE) informing Mayan and Incan writing, Dongba or Ersu (Tibet), Mi’kmaq (Eastern Canada), and Nsibidi (Nigeria).

[2] Even Hangul, the Korean alphabet of 24 characters, was invented in 1444 by King Sejong most likely after he saw or heard of Latin examples (after all, Marco Polo had already explored Korea at the end of the 13th century).

[3] See recent claims for the Wadi El Hol alphabetic inscriptions in the southern Nile ca. 18th c BCE.

[4] Syllabaries, so-called because they used signs to represent the sounds of individual syllables, were widespread in Canaan, now Lebanon/Syria/Northern Israel, as in Ugaritic cuneiform. Yet even syllabaries required hundreds of signs and therefore intense education to become literate. Between the evolution of purely pictographic scripts and the alphabet lies a spectrum of variations. There are scripts prior to the alphabet that tinker with phonetic representations. Early hieroglyphic and cuneiform both seem to represent some sounds with signs even before the 15th century BCE.


[5] “From among the huge variety of Egyptian insects only a handful have been represented and names are known of only a small number. The dung beetle in an abstracted form was turned into one of the most numerous artifacts of antiquity, flies, click beetles and locust are at least occasionally found in reliefs and as pendants and amulets, but otherwise insects do not seem to have inspired the ancient artisans to any large extent.”


‘There is no word in Hebrew for fiction’

‘There is no word in Hebrew for fiction’  – Amos Oz

There was a moment in history, a parenthesis, which interrupted the cybernetic feedback loop between literacy and the growth of empires.

It occurs at the moment that the hieroglyphic/pictographic system is supplanted by the new invention of the alphabet. This event is so momentous that it happens only once in all of human history, so powerful that it eventually spreads, and is indeed still spreading, across all human cultures. The moment is brief, for it is quickly supplanted by improvements on its own fundamental innovation. Yet its legacy is captured and evolves along its own co-evolutionary path, in dialectic with the “totalizing” line of empire that is taken up again when the alphabet evolves enough to be harnessed to the work of the tech-writing pictographic scripts. I call this moment “Hebrew” or, better, for reasons that will emerge, “The Aleph-Tav [את] Event.”5 We can locate this moment, this interruption, this eruption, this parenthesis, this invention on the margins in time and space, quasi-fictionally. Its legacy is an evolving cultural complex that has some stable morphological features we call “Judaism” or, for reasons I will explain later, an epistemology and metaphysics I call (with considerable irony to offset its narcissism) “porushia.”

The Phoenician, Ugaritic, Greek, Arabic, Amharic, Korean, Russian, Latin, and all Indo-European alphabets derived from this ancient proto-Sinaitic Hebrew script. Every other writing system is either pictographic (Chinese, Egyptian hieroglyphic, Aztec runes, etc.) or syllabic (e.g. Cuneiform A, North American Cree and Eskimo, Vai [Liberia, Africa], Katakana and Hiragana [the two Japanese Kana scripts invented between 700-900 AD]). Some hybrids – pictographic syllabaries – (Tamil and Sanskrit and Cuneiform Linear B or Akkadian) also arise in parallel. Syllabaries are an important step on the road to an alphabet because they shift the representation of language from images of things or events (pictograms, sometimes mistakenly called ideograms or logograms) to the much more plastic representation of the sounds of the language itself. But syllabaries are a clumsy compromise, replacing thousands of characters with hundreds: one for ba, another for beh, a third for bee, a fourth for bo, a fifth for boo, etc.

The fundamental revelation in a proper alphabet, and its break from syllabaries, is the recognition that signs didn’t need to represent speech syllable by syllable, but could represent atoms of sound that are pre-verbal. An alphabet, in other words, recognizes consonants as separate and constant elements permuted around another constant set of explosives – vowels – which make the utterance possible. Try uttering the consonant “p” without expelling the air that comes with the vowel, and you will see that all you get is the stutterer’s intention to say “peh” or “pah” or “pay,” a moment of hesitation before an explosion that cannot come without a vowel. So one can immediately distinguish an alphabet from a syllabary because the former reduces the number of characters to 36 or fewer – on average, 25 or 26.
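The character-count test can be put as simple arithmetic. A sketch with illustrative inventory sizes (the 22 consonants and 5 vowels are assumptions for the sake of the arithmetic, not a claim about any particular language):

```python
consonants = 22  # roughly the proto-Sinaitic/Phoenician consonant inventory
vowels = 5       # an illustrative vowel inventory

# A pure CV syllabary needs one sign per consonant-vowel pairing...
syllabary_signs = consonants * vowels   # grows with the product
# ...while an alphabet needs only one sign per atomic sound.
alphabet_signs = consonants + vowels    # grows with the sum

print(syllabary_signs, alphabet_signs)
```

Multiplication versus addition is the whole advantage: a syllabary’s sign count grows with the product of the two inventories (and balloons further once closed syllables are counted), while an alphabet’s grows only with their sum.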

Hebrew, as the prototypical and aboriginal alphabet, struggled, perhaps because it went too radically and naively in the right direction. It represented only the 22 alphabetic characters for the aboriginal abstraction of the consonants but did not conceive of how – or even whether – to represent vowels. This is peculiar, since the “idea” of a vowel is entailed once one makes the phonetic distinction of a “consonant.” Perhaps it can be explained by the Hebrews’ need at the time for secrecy, for a code set apart from the reigning script paradigm. Or perhaps, more simply, Hebrew was an incomplete and primitive experiment that nonetheless produced a powerful, if defective, technology. Or, as some epigraphers have explained, the Hebrews borrowed the first sound of Egyptian hieroglyphs, on a principle called acrophony (the highest or first sound), to form their alphabet.

In any case, the Phoenicians, or some Western Semites with whom the Phoenicians came in contact between the 12th and 9th centuries BCE, probably between Tyre (now in Lebanon) and Akko or Atlit (now on the northern coast of Israel), realized the inefficiency or primitiveness of this system and added the missing vowels. The Phoenicians obviously found this new communications technology useful for their commerce and imperialization of the seas. In turn, they pollinated the Mediterranean with it, exporting a new, improved alphabet, now a much more efficient device for representing all the sounds of speech, bringing it to Greece in the 9th or 8th century. At the same time, the alphabet in different forms spread eastward through Persia into India, and westward back into Africa, invading Middle Eastern and African regions as well. Alphabets take hold better where a strong empire doesn’t already exist, among mixed agrarian and tribal civilizations. Empires like Egypt have too strong a cultural matrix to give up their older form of writing. As a result, hieroglyphics, with its thousands of pictographic elements, survived into the Roman era.

Porushian Consciousness enables a peculiar cognition, culture, and metaphysics

Given all the above, it is hard to resist making very suggestive connections between the cybernetic practices induced by this inefficient alphabet and the sociological, cultural, and even metaphysical practices of Hebrew culture. For instance, because the Hebrew language makes the transmission of authority without questioning or interpretation difficult (if not impossible), and because any written message, especially a complex or new one, is likely to provoke numerous interpretations, it is easy to imagine that the peculiarities of the alphabet helped Jewish culture develop a hearty resistance to authority and consensus in general.

Now put this cognitive practice or habit in the context of the diaspora. There, one of the only constants binding 2000 years of Jewish history and dozens of disparate Jewish communities around the world at any given time, each speaking a different host language, is reading unvowelled Hebrew texts. We can see how the Jews evolve culturally as a peculiarly resistant “virus-like” or “parasite-like” race, perceived by their hosts as ineradicable pests who carry with them a set of insular cognitive and cultural practices dooming them to play on, feed off of, the margins of the host culture. Yet, paradoxically, these same cognitive practices allow them to succeed with remarkable acuity. Jews historically succeeded by penetrating into controlling positions in the host culture, acquiring with incredible swiftness professional roles that require skills of literacy, interpretation, learning, and powers of abstraction. Thus these perpetual newcomers threaten within a few generations to mutate the central culturgenic heritage of the hosts.

If we look closely even at the little game we played with the aleph and the tav to produce three or four possible words — the feminine you, letter, the (untranslatable) sign of the accusative case, and 401 (the sum of the two letters’ numerical values) — it is tempting to see the rudiments of an entire alternative epistemological practice emerge. In this practice, the letters themselves open a space into which interpretation must be placed in the form of choosing the vowels. The reader takes an active role, looking not only to multiply possible alternatives, but to seek hidden unities beneath them. Indeed, we can thus understand the intense and peculiarly multivalent hermeneutic practice of Jewish Talmud and mysticism.

Furthermore, with the ability to represent “EHT,” the accusative case (which is so abstruse that it is not even represented in English), and all other grammatical cases — because the alphabet is now a transcription (though in Hebrew only ambiguously so) of the spoken language — civilization now has at its disposal a sophisticated new means to represent and preserve across space and time the act of languaging itself. That is, the text has the newfound capacity for self-reflexive statement, to represent with greater plasticity and fidelity the consciousness or intentions of an author in words. One can “do” texts independent of actions in the world with extreme plasticity. At the same time, the instrument is not completely efficient, so the reader is teased with this gesture at telepathic fidelity, yet forced to disambiguate the messages sent this way.
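The 401 in that list is the gematria of the aleph-tav pair: aleph is 1 and tav is 400, so the same two letters that spell “you,” “letter,” and the accusative marker also name a number. A minimal sketch of the arithmetic (the table is deliberately limited to the letters used in this essay’s examples, not a full gematria system):

```python
# Partial gematria table: only the letters needed for the essay's examples.
GEMATRIA = {"א": 1, "ב": 2, "ד": 4, "ר": 200, "ת": 400}

def gematria(word: str) -> int:
    """Sum the numerical values of a word's letters."""
    return sum(GEMATRIA[letter] for letter in word)

print(gematria("את"))   # aleph (1) + tav (400) = 401
print(gematria("דבר"))  # dalet (4) + bet (2) + resh (200) = 206
```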

So it is also no wonder that the central metaphysical tenet — indeed one of the only constants of Jewish metaphysical dogma (the phrase is almost oxymoronic because of the absence of a coherent dogma in Judaism) — in the 3500-year history of the Jews from the time of Moses is the unpronounceability, the unwriteability, and the unthinkability of the name of God. Jews are traditionally taught never to write or speak The Name, even in another language. In English, for instance, one writes G-d. The arbitrary transliteration of the Tetragrammaton – the four letters of God’s name in Hebrew, YHVH – into Yahweh is a purely Christian imposition on a Hebrew word that is indeterminate and unpronounceable as written. Even in devout prayers, Jews abbreviate the Tetragrammaton to ‘YY’ and utter “Adonay” (meaning Lord). For non-liturgical practice, the letters are read “Adoshem” – a nonsensical combination of “Lord” and “Name” – or else one says “The Name.”

What at first seems like only a religious fetish is also a reiteration and reinforcement of a central cognitive tool (or at least distinction) of Hebrew literacy. You get a metaphysics of multivalence, interpretation, perpetual and transcendental ambiguity, deferral of meaning to some locus that is never here, a disconnection between spoken and written authority, and a denial of presence. God speaks a name and shows a Face, an actual Face, only to Moses, only once. Even then Moses turns away, only to watch the non-physical presence of God recede from him. So rather than a cosmological model of knowability and tangibility — essentially the kind of idolatry we find in the tech writing empires — the inefficiency of the script system promotes a metaphysics of absence, of unknowability, and of the unrepresentability of central truths.

When the brain was simple

Mind to mind metaphysics in alphabet and cyberspace

The brain is a sur-rational machine for bringing worlds into collision, a metaphor device, a translation circuit for closing and opening the loop between incommensurate and mutually incomprehensible universes. In my view, it is already “meta-physical.”

Nature was finished when it invented the human brain

What is the brain? At its very simplest, it is an entity that takes impressions from out there in the form of energy striking different nerve endings in the organs of the body (eyes, skin, ears, nose, mouth/tongue), converts that energy into information, shuttles the information to a central processor, a black box, a homuncular body without organs sitting in an ecology of incomprehensibly frothing and turbulent hormones, and somehow translates it into wholly different things in here: sensations, thoughts, flocks of birds, schools of fish, swarming, buzzing. The brain is intrinsically a sur-rational machine for bringing worlds into collision, a metaphor device, a translation circuit for closing and opening the loop between incommensurate and mutually incomprehensible universes. It is already "meta-physical."

Before we were Homo sapiens and the brain was simple, there was a neat Kantian fit between animal and environment: the rules of the world out there, its physics, were not challenged by the rules of the world in here; there was a nice match. But then, through some urgency that is just as easy to talk about metaphysically or teleologically as in terms of deterministic chaotic evolution, the brain exploded. Upright-walking, forward-looking hominids had by about 100,000 years ago become human-like: using tools, colonizing the world, creating new social structures. The brain, like some imperial culture exploding off a remote island, started projecting itself onto the world, terraforming the Earth in its own image and leaving in its wake a trail of non-biodegradable tools and waste. Nature was finished. It was finished in the sense that it had reached a fulfillment and a culmination, but it had also committed suicide, finished itself off. Not only did humans begin to massively alter the environment, but the brain also started talking, depicting, enacting versions of its experience in cave paintings, ritual dances, gestures, and a grammar of grunts. It became self-conscious. It recognized a mismatch between the world out there and the world in here: Hey! The world persists; we die! Self-consciousness and the idea of death were born in one fatal stroke. Finally, the brain framed everything it looked at: nature became a pastoral scene in the cognitive museum. The cosmos seen through human eyes was an artifice, always already available for use.

The C3 Loop

On the flip side, the human brain is a prisoner of the loop of cognition, culture, and communication, caught in its virtuous cycle. We call the cybernetic device that initiated and grew in this loop language, or symbolizing. Joseph Hellige, in his book on cognition and the brain, Hemispheric Asymmetry, describes this loop initiated by the development of language, with feed-forward and feedback components, as a sort of "snowball effect," a cycle of ever-widening gyres that eventually embraces and creates everything between the poles of culture and the biology of the brain itself, including physiological changes in the structure and size of different regions.2

In cybernetic terms, we call this a positive feedback loop. The cybernetic system (in this case, the human brain) sends information out into the world-culture-environment, which feeds newly intensified signals back into the (brain) system to destabilize it anew, which in turn re-amplifies its message, like an over-sensitive microphone, and again re-broadcasts this message back onto the world until the universe screeches with the noise of the human brain echoed back to it, in it, a cyborg rock concert. It also changes the brain itself. Another researcher, Charles Lumsden, calls this process "the selective stabilization of the synapses" as a result of continuous exposure to cultural effects or stimulation, "a collaboration between cultural invention and inherited genetic characteristics," or "Gene-Culture Coevolution."3
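The snowball dynamic of a positive feedback loop can be sketched numerically. In this minimal Python illustration, the gain value and cycle count are arbitrary assumptions, chosen only to show the shape of the process: any gain greater than 1 makes each re-broadcast louder than the last, so the signal grows without settling.

```python
# Minimal sketch of a positive feedback loop: the brain broadcasts
# a signal, the culture-environment echoes it back amplified
# (gain > 1), and the brain re-broadcasts the louder version.
def feedback_loop(signal, gain=1.5, cycles=10):
    """Run the broadcast/echo cycle, recording the signal each pass."""
    history = [signal]
    for _ in range(cycles):
        signal = signal * gain  # the world echoes the signal back, louder
        history.append(signal)
    return history

history = feedback_loop(1.0, gain=1.5, cycles=10)
print(history[-1])  # the "screech": exponential growth, 1.5**10
```

Contrast this with a negative feedback loop (gain below 1), which would damp the signal toward quiet equilibrium; the essay's point is precisely that language is the amplifying kind.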

Neurophysiologists and cognitive scientists who study the alphabet note that its effects on the brain can be seen even in the lifetime development of individual people. The use of language reshapes the brain from womb to tomb. A whole new discipline of neural plasticity has emerged in the last few decades, showing that the brain is not the static, genetically determined machine we once thought it to be. In other words, ontogeny recapitulates phylogeny in culture-gene co-evolution as well. Studies of aphasics and dyslexics show that the brain changes physiologically and progressively after injury, suggesting that parts of the brain grow even in the lifetime of individual humans.

Fun with your new brain: the rise of the alphabet

Using different alphabets (or losing the capacity to read the alphabet), even within the lifetime of an individual, is a bit like growing a new brain. Trying a new alphabet must have been (and still is) tantamount to an ongoing progressive hallucination. It lets you think things that you couldn't have thought before, makes connections that simply didn't exist physiologically – for which your brain wasn't wired – and forces your brain into different information processing patterns, which presumably involve different mental events or experiences (as physiological-cognitive research overwhelmingly shows). It's like having a whole new brain, or at least a brain with whole new faculties, new circuits, new wetware. Now imagine the mass hallucination of an entire culture learning how to use an alphabet for the first time. Whole tribes of people, or important segments of them, put on this new cybernetic headgear, or what I have been calling a new form of telepathy, virtually all at once. We can imagine this mass cybernetic experiment would be accompanied by social, epistemological, and metaphysical revolutions, apocalyptic prophecies, and re-definitions of the self in relation to body, mind, others, and the invisible. In short, it might provoke the emergence of a new religion.

With this perhaps absurd hypothesis in our minds, let's take a look at the advent of writing itself. Take (in our imaginations) a time-lapse photograph of the Nile Valley before and just after the advent of hieroglyphics, or (even earlier) the Fertile Crescent of Mesopotamia before and just after the very first invention of writing, Sumerian pictographs around 3200 BCE. These time-lapsed films would show millions of years of desultory animal activity, including the hunting, gathering, and low-level agricultural activity of upright hominids after 35,000 BCE. As we approach 10,000 BCE, activity begins to pick up pace and organization. Clusters of hominids show tool use, primitive mound building, expressive cave painting, and cultivation of the earth, though in indifferent and almost-random-seeming patterns. Then, suddenly, around 3200 BCE, BANG! Something leaps across the chaotic bifurcation into a new order of frantic self-organization. Compressed into a few frames is an almost instantaneous transformation; blink and you'll miss the instant. These fertile regions undergo massive terraforming along rectilinear plots. Rivers are diverted into rectangular irrigation systems. Cities emerge, themselves rectilinear. Zoom in with me now into the square-ish walls of the cities, and into the very square-ish rooms of the city, and we will find the intimate source of this sudden change. There, a row of hard stone benches, arranged regularly. It is a schoolroom for scribes! Hundreds of boys, mostly the sons of privileged nobility, sit for hours hunched over clay tablets, learning to scrawl in regular lines. Indeed, if we superimpose the scratching of these lines, they look like the lines of irrigation written on the face of the earth itself, as seen from an orbiting satellite. The harsh discipline of the schoolchildren being tutored in script "canalizes" their thought processes, reinforcing certain pathways.
It is hard not to imagine that what's written on the brain gets projected onto the world, which is literally "canalized," too.

Looking at a picture of the ancient Sumerian classroom for scribes found in Shuruppak (from ca. 3200 BCE),

Sumerian schoolroom
Picture of Sumerian schoolroom for scribes from Edward Chiera, They Wrote on Clay (U Chicago: 1938)

we see familiar rows of benches and the headmaster's desk up front. Then we are seized with a horrible and giddy vertigo, a terrible recognition: five thousand years later, and we're still canalizing the brains of our children in the same way, enforcing the discipline of writing by virtually the same methods and with not so dissimilar effects as these ancient Sumerians, "that gifted and practical people" (as Edward Chiera calls them in his groundbreaking study, They Wrote on Clay). Sumerians invented cuneiform as a perfectly portable means to conduct commerce, extend the authority of their kings, preserve metaphysical and transcendent information, and secure the stability of caste and rank.

The invention of pictographic writing by the Sumerians was "a secret treasure or mystery which the layman could not be expected to understand and which was therefore the peculiar possession of a professional class of clerks or scribes," Chiera writes. Furthermore, the metaphysics associated with this new telepathic technology becomes clear in the priestly functions these scribes served. Neo-Babylonian texts used the same ideogram for priest and scribe. Along with the script came a new mythology that, predictably, placed the power of language in the center of its metaphysics: "As for the creating technique attributed to these [new] deities, Sumerian philosophers developed a doctrine which became dogma throughout the Near East — the doctrine of the creative power of the divine word. All that the creating deity had to do, according to this doctrine, was to lay his plans, utter the word, and pronounce the name."

In fact, everywhere pictographic writing makes its advent, we find the sudden emergence of what we can think of as "tech writing empires": civilizations geometrically akin in their compulsive rectilinearity to the hexagonal hive structures of bees. In China, among the Aztecs of Mexico or Incas in Peru, in Babylon, Sumeria, and Egypt, we see the same pattern of social, epistemological, and metaphysical organization. Along with these scripts come other inventions so predictably similar that they seem to derive directly from imperatives in the nervous system itself, amplified or newly grown by use of the new cyborg device of writing: centralized authority in god/kings; monumental ziggurat-like or pyramidal architecture; hierarchies of priest-scribes; complex, self-perpetuating bureaucracies; fluid but clearly demarcated social/economic classes; trade or craft guilds; imperialism; slavery; "canalizing" educational systems; confederations of tribes into nations; standardized monetary systems and trade; taxes; and so on. Almost every conceivable aspect of empire, in its gross forms, was entailed in pictographic writing. Even the alphabet, with its greater efficiency and fidelity to speech, only seems to add abstraction and speed to what McLuhan described as the "exteriorization of the nerve net."
