Imagine for a moment you want to find information on the architecture of ancient Rome. You sit in front of your computer, put on your VR glasses, set your motion tracking device, and log in. A glowing computer-generated environment fades in around you. At the bottom of your field of vision a console appears, asking if you would like to begin. You tell it you would like to proceed to the .edu quadrant. Instantly the neon lines around you blur as you're whisked along past ads, sectors, and towering cities of data at blinding speed. When your eyes are able to focus, you find yourself floating to a halt in the .edu quadrant.
All around are walls of light and shadow towering high into the electric night. There is evidence of streets, or things reminiscent of streets, only no pavement. Points of light shoot across your peripheral vision and off into the distance, signifying the presence of other users searching the quadrant. Shapes in the distance glow and change color. Electric data displays blaze up-to-the-minute logistic reports across neon structures. Crystal spires rising high above the nebulous hum of data advertise new attractions.
These are the virtual presences of colleges, institutions, and organizations from around the world. You can simply peruse the area, or at any time call upon the help of an AI (artificial intelligence) bot. You summon a bot with your console and one appears before you (it can take the appearance of whatever or whomever you like). You tell it you are looking for information on ancient Rome, specifically its architecture.
The bot does a search, returns relevant information, and shows you how to get there or takes you there itself. Next moment you are basking in the cavernous spaces of the United European Institution for Cultural . . . whatever. However, this isn't any usual library or museum, for the very definition and structure of these institutions have changed with the advent of cyberspace. Now exhibits explain themselves, stories unfold around you allowing direct participation, and books become something you can read and/or enter.
Your book, for instance, is one on the architecture of ancient Rome. You call it up in the institution's database. It can come to you, or you can travel to it (for the sake of exploration). It doesn't reside on a shelf, but rather awaits you in a chamber. You enter the chamber, some place that has been designated in the institution's webspace as the place where you can be left alone with your "book" and where you enter it. However, you don't only have to read about ancient Rome. Now you can go there. So you do.
First you enter the book and are greeted with an introduction as brief or extended as you like. Suddenly, before your eyes, ancient Rome, or rather a computer-generated replica of it, "materializes" before you. Since photorealism is now possible in VR, everything is life-like, down to the Romans walking along the streets, the merchants at their stands, and you disguised in your toga and sandals, if you so desire. You can run your fingers across textures because you can actually feel things with your VR gloves and their sensory feedback.
You can feel yourself walk into objects because of the sensory feedback of your data suit. This "Roman" experience can also be tailored to your preferences and requirements. You can turn off the people. You can deactivate force feedback. You can superimpose a model of ancient Rome over modern Rome and have the ruins reconstruct themselves before your very eyes...
Today surfing the web is like being in a city that you can only see one room at a time. With no broader reference to your context, you quickly become lost. Similarly, navigating from one website to another, due to lack of contextual reference, often leaves us feeling like unwitting participants in some twisted experiment: inadequate search engines that turn up millions of results upon query, webpages crammed with unintelligible masses of text and images, and wayward links and ads. Finding information on today's web is usually hit or miss.
This sense of vertigo when searching the world wide web results from trying to navigate a system with inherent spatial qualities using flatland techniques. From a practical standpoint, a three-dimensional arrangement of large databases such as the Internet, intranets, and portals provides a venue through which more effective methods of navigation could be implemented that better cater to the human senses.
From a theoretical standpoint, a spatial arrangement of large databases, such as the World Wide Web, makes it easier to locate specific information. As it currently stands, we must use inadequate search engines that turn up millions of results upon query. We may try to refine our search, but often we never quite find the information we are searching for. Many times we also find ourselves getting so put off by stray links, ads, and other distractions that, by the time we realize we have strayed, we cannot remember how to get back to where we were or what it was we were looking for.
Such problems denote a gap between computer interface and human perception. One large reason for this gap is that the brain exists naturally in a 3D environment for which it was designed. Implicit in this space/time existence is the ability to completely process its environment through taste, touch, smell, sound, and sight. Computers, however, have been largely impotent in effectively utilizing these receptors to the brain, creating a gap which we have all experienced at one time or another: fumbling with word processors, punching the wrong numbers on ATMs, or trying to program VCRs. However, the more directly computers tap the brain's receptors, the more they will act as mechanisms for extending it. The intent therefore is not to replace the human brain, as in artificial intelligence research, but to complement it with a successful merger between its ability to recognize patterns and make decisions, and the computer's ability to store, recall, and rapidly calculate complex data.
Applying a synthetic 3D sensory approach like Virtual Reality to computer environments like the Internet can help facilitate this merger. Even today, companies are developing technologies that take full advantage of the benefits of interacting with computers in virtual space, allowing for fuller comprehension of complex data by displaying it in the way the brain can best relate to: three-dimensionally.
Xerox’s Palo Alto Research Center (PARC) has devised numerous strategies for approaching the question of spatial arrangement of databases: “We’ve looked at various classical information structures in information space,” says Per-Kristian Halvorsen, one of PARC’s lab managers. “We’ve left the two-dimensional arena behind; we’re distinctly using three dimensions, but there we’ve looked at trees, various kinds of graphs, hierarchical information timeline presentations, and we have also experimentally merged it with images of buildings.”
Time and again we see computer terminology demonstrate a tendency toward spatiality with words such as "rooms," "firewalls," "windows," "desktop," "office," and, of course, "cyberspace." The analogy is one of immersion and of place. Now with Virtual Reality we have the ability to execute this analogy in a new cyberspace of data organization that takes into account the human faculties for navigation and orientation.
However, like real space, cyberspace will require painstaking consideration in its design layout. In the same way that cities offer enough variety and detail for us to distinguish one space from another, and the formal structure of architecture makes space navigable, architects can use their discipline to prevent the users of information from getting lost in cyberspace. For instance, cities form distinct points on the surface of the earth. We use these distinct points to navigate between large distances via given transportation routes. As we move in toward a particular city, we are guided by the unique features of that city's presence as a place (e.g. through streets, plazas, parks, etc.). If we continue further, the scale of spaces and points becomes smaller and more distinct until the whole system works its way down to the exact desired destination (e.g. a room in a building). Here, architecture serves to convey complex information in a logical and organized manner so that we may navigate our way to a certain point. In this sense, buildings and cities provide the world's most detailed navigation systems, and, as such, are widely perceived in terms of their navigational values. To put it simply, the job of architects in cyberspace will be to give complex data visual, readily knowable representations while still providing a pleasant and memorable experience.
We must also keep in mind that cyberspace need not be something that only mimics our physical world but can also interpret it. For example, many of our abstract social systems, such as economics, and even difficult academic concepts, as in mathematics and science, can receive virtual manifestations. This can be done quite effectively since we now have the technology to inhabit and inspect in virtual space those concepts that were once best visualized using words. Until now, the mind was the only place we could form a rough and often vague manifestation of a concept. However, with the ability to create virtual spaces in which the architecture of our minds can be seen and inhabited, we allow for a fuller understanding of abstract ideas and a true extension of human cognitive abilities.
III. Project Overview
Today we see evidence of the Internet already evolving toward this three-dimensional cyberspace. VRML, which stands for Virtual Reality Modeling Language, and Java, a programming language capable of creating applications which run in internet browsers, have already allowed for the viewing of three-dimensional graphics, dynamic web pages, and virtual spaces on the internet. From the existence and use of technologies such as these, and many more in queue, one could cogently argue that cyberspace is indeed already under construction.
If the argument, then, is that the Internet will gradually evolve into this virtual cyberspace, another argument would have to follow: that institutions and businesses, and some people, will want a presence there. For businesses, that will mean a "headquarters" of sorts. These headquarters will handle commerce and trade, business meetings, customer services and whatnot in facilities appropriately designed for each company's specific needs. These "structures" will be much like the "walls of light and shadow" described in the introductory scenario of this paper, in that they will be "data-driven structures". In other words, the structures form themselves in response to the data that comprise the company. For instance, a company like BarnesandNoble.com might use data such as books sold, best buy days, best sellers, available stocks, hiring rates, etc.
The site I've chosen for this project, the Nasdaq Stock Market, is much more straightforward in this regard. I did this so that it would be more readily evident why architecture in cyberspace needs to be driven, and even "designed", in a way, by the data systems which it represents. In short, it is because cyberspace has been conceived as a dynamic space which handles the ever-increasing volumes of information consumption wrought by an information society. Much of what we consume today is information; cyberspace is an information space; thus the architecture of cyberspace should be "information architecture" -- dynamic, fluid, always evolving.
I have therefore begun this project as a first attempt at the actual construction of such a space. As stated above, I have taken the Nasdaq website and RE-presented it as it would appear in a virtual space using the described paradigms as an outline. When users first enter this Nasdaq webspace, they find themselves in the "lobby" area as shown in figure 2.
Here we see a lot of bluish architecture and what appear to be three video screens that flash different images at regular intervals. These are actually "portals" which take travelers to other parts of the webspace. At the bottom of the screen there is something resembling a control panel, like the helm of the starship Enterprise. It is in fact just that: the control deck for the VRML browser, which allows one to navigate within the space. The space shown in figure 2 is analogous to the homepage of the site. From here users would ideally find links to other parts of the webspace. However, due to time constraints, I have provided one link for now. This link is the leftmost of the three "portals" and is highlighted by the arrow as seen in figure 2. This portal takes the user to the main space for this project -- the "Nasdaq 100 Space." This "Nasdaq 100 Space" corresponds to the page on the Nasdaq website where the 100 companies comprising Nasdaq's 100 index are listed, as shown in figure 3.
On this page you see 100 companies listed with both their actual company name and their corresponding Nasdaq symbol. Next to that you see the company's market index value at the closing of the day. If you click on the company's name, you go to their homepage. If you click on the company's Nasdaq symbol, you go to the company's stock quotes page. In the company's stock quotes page you can look at current news on the company, charting, fundamentals, SEC quotes, and so forth.
After going through the portal link in the "Lobby" space, the user enters the "Nasdaq 100 Space." This space corresponds to the Nasdaq 100 index page in figure 3. Upon entry, the user first sees a large blue spider web-like object, as shown in figure 4.
This object is a distorted sphere with 100 points on it for the 100 Nasdaq companies. This "sphere" is actually the 3-dimensional representation of the list of companies as seen on the Nasdaq 100 index page. Each company is represented by one of the points on the sphere. The further a point is from the center of the sphere, the greater its market index value (note the large spikes in the image denoting large companies such as Microsoft).
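As an illustrative aside, the idea of assigning each of the 100 companies a point on a sphere can be sketched as follows. This is not the project's actual code, and the spiral (golden-angle) layout used here is an assumption made purely for the sketch:

```java
// Illustrative sketch only: distributes n points roughly evenly over a
// unit sphere using a spiral (golden-angle) layout, one point per company.
public class SpherePoints {

    // Returns {x, y, z} for the i-th of n points on the unit sphere.
    public static double[] point(int i, int n) {
        double y = 1.0 - 2.0 * (i + 0.5) / n;                 // height in [-1, 1]
        double r = Math.sqrt(1.0 - y * y);                    // ring radius at that height
        double theta = Math.PI * (3.0 - Math.sqrt(5.0)) * i;  // golden-angle increment
        return new double[] { r * Math.cos(theta), y, r * Math.sin(theta) };
    }

    public static void main(String[] args) {
        // Print the 100 points of a "perfect sphere" of companies.
        for (int i = 0; i < 100; i++) {
            double[] p = point(i, 100);
            System.out.println(p[0] + " " + p[1] + " " + p[2]);
        }
    }
}
```

Pushing each such point away from the center in proportion to its company's index value is what produces the spiked, distorted sphere described above.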
As the space is loading, a security window prompts the user, as also seen in figure 4. Here Netscape asks the user to "Grant" or "Deny" the space permission to access information from the Nasdaq website. The user clicks "Grant" and the sphere changes suddenly. This is because a live data feed program written in Java looks up the current day's market index values from the Nasdaq website in real time. It then plugs them into the sphere, changing its points to today's market index values. Bringing up Netscape's Java Console under Communicator/Java Console will show the fetched market index values and the newly computed coordinate lists for the wire sphere, as shown in figure 5.
Flying to the next viewpoint by clicking on the "next view" button on the VRML control deck, the user sees two objects which say "click for current index" and "click for previous index" as shown in figure 6.
Clicking on these objects switches the sphere between the current day's index values and the previous day's index values.
From here, the user may go to a point on the sphere where a company resides. A list of viewpoints may also be summoned for quick access to preset positions in the space. Using these viewpoints, the user may go directly to a desired company without wasting any time. Clicking the RIGHT mouse button will bring up the menu containing the viewpoints also as seen in figure 6. In addition, the menu has everything needed for controlling the browser such as movement, graphics and other options.
At a point on the sphere where a company resides, the user sees the company's logo displayed together with its market index value, as seen in figure 7.
If you get closer to a company logo, a strange object that represents the company appears, and you may enter it as shown in figure 8 on the following page.
This object or "node" is the company's stock quotes space, which corresponds to the company's stock quotes page as accessed from the Nasdaq 100 index page (refer to figure 3). The intent for the stock quotes space is the same as for the stock quotes page on Nasdaq's website: to allow an investor to view everything about the company from within the space. The investor can pull up various graph-objects and readouts, view real-time news footage, and contact their broker at any time, all while seeing what's happening from within the space. The interior of this particular node, the Adobe Node, is shown in figure 9 on the following page.
As the project demonstrates, the technology currently exists to achieve a consensual cyberspace under many of the paradigms set forth in this paper. The above project exists not only as a commentary on today's industry trends, but also as a real, working program available on the internet at http://dejene.aud.ucla.edu -- at least for the time being. Consequently, anyone can access and use the space at any time from anywhere in the world. Thus the space aspires more closely to the true intention expressed by Nasdaq for their website: that of a global virtual trading floor.
IV. How It Works
The project uses two primary computer languages that are made to talk to one another. The first language is VRML which stands for Virtual Reality Modeling Language. VRML is a standard language for describing interactive 3-D objects and worlds delivered across the Internet. The second is Java. Unlike Java, however, VRML is not a base programming language. Where Java is used to write programs from scratch, VRML is a macroscopic language for creating 3d objects and behaviors in space. In this project, VRML is used to create the scene while Java is used externally to “talk” to the scene.
By allowing languages such as Java and VRML to talk to one another, a programmer is able to make use of the special features of each so as to avoid inventing everything on their own. Therefore it was not necessary for this project to create an application which defined everything about a three-dimensional space from scratch since VRML takes care of these definitions implicitly. Likewise, Java also carries many built-in features necessary for the project such as provisions for accessing and scanning remote websites. This leaves the programmer free to concentrate on their main intentions rather than having to reinvent the wheel all the time. In this case the intention was to investigate ways to use remote data to influence the behaviors of a 3d space. Fortunately, VRML and Java include many ready-made tools for this.
The way Java “talks” to VRML is through a set of predefined commands that come with VRML called the EAI or External Authoring Interface. The Java programmer may use these commands in their code to access certain attributes within VRML. Some examples would be movement, lighting, color, visibility, etc. Virtually every VRML function is accessible either directly or indirectly through the EAI. In this way, one is able to cause things to happen within the VRML space which are not directly available within the VRML language specifications.
The following section goes over these technical issues in greater detail.
V. Specifications of the EAI
(adapted from Chris Marrin, 1996, Silicon Graphics, Inc.)
This section provides an overview of the methods and technical issues for this project. Every attempt has been made to present a fairly explicit picture of what was involved while avoiding a step-by-step tutorial. The reader will need to be somewhat familiar with both Java and VRML to fully grasp this section.
VI. Composition
A VRML file describes a scene in an object-oriented manner. The fundamental building blocks of these objects are Nodes. The main Nodes are essentially a subset of Silicon Graphics' Open Inventor format and can be divided into the following categories:
- Shape Nodes: represent 3D geometry such as points, lines, polygons, and basic primitives like cubes, spheres, cones, or cylinders. They are the only visible Nodes.
- Property Nodes: affect the appearance and the characteristics of other Nodes.
- Transform Nodes: perform coordinate transformations including rotation, translation, and scale.
- Appearance Nodes: define an object's appearance, including color, material, and texture maps.
- Metrics Nodes: contain geometric information including coordinates, normals, texture coordinates, etc.
- Group Nodes: collect Nodes into a hierarchical structure. Some of them can isolate the effects of their children from the rest of the scene.
- Light Nodes: illuminate the scene.
- Camera Nodes: define different points of view and viewing parameters.
Other Nodes, including the Inline Node and the Anchor Node, are used for inlining one VRML world within another and for linking to another file, respectively. The VRML specification allows new Nodes to be created; these must include a description of all their fields.
VRML and Java
Individually, VRML and Java have each been developed to the point of being relatively robust. The marriage between the two, however, is still in progress: there are currently several ways that VRML and Java can interact, with more in the works. The two predominant ways of writing Java code to manipulate VRML nodes are the VRML Script Authoring Interface and the VRML EAI (External Authoring Interface). The Script Authoring Interface relies on the VRML Script node, whose "url" field may contain a pointer to a script file, a full script, or a class file. The Script node is part of the VRML specification, and it is well documented and supported. Although this method is appropriate for most applications, it relies on coding the VRML file itself to coordinate and route the events to the Java code. The EAI, by contrast, is a proposal for an annex to the VRML specification; to date it has not been officially ratified. Although the EAI is patterned after the Script Authoring Interface, it has the benefit of handling all the routing of events within the Java code, leaving the VRML file smaller and simpler. The EAI offers more functionality and lends itself to a more powerful and flexible solution because of the direct connection between the Java code and the VRML nodes. It defines a set of functions on the VRML browser that the external environment can call to affect a VRML world.
The VRML browser exposes several standard points of connectivity. Some of these interfaces are used by authors of VRML worlds, or of interfaces to those worlds such as the Java applet on the HTML page; others are intended for programmers extending the functionality of the VRML browser itself.
Nodes in a VRML file can be named using the DEF construct. Any node with the DEF construct can be accessed by the applet and is referred to as an accessible Node. Once a pointer is obtained, the eventIns and eventOuts of that node can be accessed. The Java applet communicates with the VRML world by first obtaining an instance of the Browser class. This class is the Java encapsulation of the VRML world. It contains the entire Browser Script Interface as well as the getNode() method, which returns a Node when given a DEF string in the VRML file. The getEventIn() method of the Node class returns an EventIn when passed a string with the desired eventIn name. The getEventOut() method of the Node class returns an EventOut when passed a string with the desired eventOut name. ExposedFields can also be accessed, either by giving a string for the exposedField itself or by giving the name of the corresponding eventIn or eventOut.
Setting up the Environment
Although in theory the VRML EAI is not tied to any software in particular, the fact remains that presently there is a very narrow set of software packages that can be used for developing and viewing EAI-based projects. The EAI Java classes are tailored to work with Cosmo Player 2.0; however, a more recent browser, Blaxxun Contact, works as well. At the time of this paper, that browser can still be downloaded from the Blaxxun company homesite at www.blaxxun.com. The EAI-related work performed for this project was done on an IBM-compatible computer using Windows NT 4.0, Netscape Communicator 4.04 or greater, Blaxxun Contact 4.0, and Symantec Cafe 1.80 using Java JDK 1.1. This setup seems to yield the most predictable results, although, because the EAI is still in progress, there are many bugs and no guarantees are given by its author. The Java compiler is, of course, not as critical, and most likely other compilers could be used if desired.
Several class libraries comprise the EAI. These libraries come with the Cosmo Player software and are contained in a file called "npcosmo.zip", located in the CosmoSoftware\CosmoPlayer folder. Netscape's classes are contained in a file called "java40.jar" in the Netscape\Communicator\Program\Java\Classes folder. These classes, along with the standard JDK and Cafe classes, should be available to the compiler and need to be added to the computer's classpath (set in Control Panel\System\Environment) or added to the class directories of the compiler itself. The npcosmo.zip and java40.jar files are archives which contain many class files, and they should be left compressed for the compiler to work properly.
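For instance, on the Windows NT setup described above, the classpath addition might look something like the following (the install paths shown here are assumptions and will vary from machine to machine):

```
set CLASSPATH=%CLASSPATH%;C:\CosmoSoftware\CosmoPlayer\npcosmo.zip;C:\Netscape\Communicator\Program\Java\Classes\java40.jar
```

The same directories can alternatively be entered into the compiler's own class-directory settings, as noted above.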
The Java applet or embedded code can read defined Nodes in the VRML file and then can send and receive events that affect the VRML world. For this to work, both the Java class file (filename.class) and the VRML file (filename.wrl) need to be embedded in the HTML file. The syntax is standard HTML:
<HTML>
<embed src="filename.wrl" height=300 width=200>
<applet code="filename.class" height=300 width=200 mayscript>
</applet>
</HTML>
Accessing VRML with Java
In order for the Java class to have access to the VRML world, a pointer is needed to the Browser in which the VRML file is embedded. Currently this is done with the Browser.getBrowser() method, although the WWW Consortium is still in the process of finalizing how embedded objects and applets behave on an HTML page, so this could possibly change. Once a pointer is obtained, the getNode() method can be used to retrieve defined Nodes, and in turn the Node methods become accessible.
Example to change the color of a cube in a VRML world:

Code in the VRML file that defines the material as "cube_material":

    Transform {
      children [
        Shape {
          appearance Appearance {
            material DEF cube_material Material { diffuseColor 1 0 0 }
          }
          geometry Box { }
        }
      ]
    }

The Java code:

    import vrml.external.*;
    import vrml.external.field.*;
    import vrml.external.exception.*;
    import java.awt.*;
    import java.applet.*;

    public class javaexample extends Applet {
        Browser browser = null;
        Node material = null;
        EventInSFColor diffuseColor = null;

        public void init() {
            // buttons and other setup code here
        }

        public void start() {
            browser = Browser.getBrowser(this);
            material = browser.getNode("cube_material");
            diffuseColor = (EventInSFColor) material.getEventIn("set_diffuseColor");
        }

        // Some event-handling method, for example:
        public boolean action(Event event, Object arg) {
            float[] val = new float[3];
            val[0] = 1f;   // red
            val[1] = 0f;   // green
            val[2] = 0f;   // blue
            diffuseColor.setValue(val);
            return true;
        }
    }
The exposedFields of the material node are accessed with the getEventIn() method, using the string names for the events. All eventIns and exposedFields of the VRML nodes can be accessed using the prefix "set_", and an eventOut is caught using the suffix "_changed". The setValue() method sends the new value to the accessed node.
Project Setup
With this project, the individual components and their functions are relatively straightforward. Below is a simple flow diagram of connectivity between the main VRML scene, the Java applet, and the Nasdaq 100 index page.
[Flow diagram: connectivity between PerfectSphere.wrl (the main VRML file seen by the user), the Nasdaq 100 Index Page, the HTML browser, the VRML browser, and four Java class files: indexCall.class, indexLookup.class, readSphere.class, and readOldCoordlist.class.]
As shown in the diagram, the main component that does all calling and routing of data is the Java applet. In this way it can be seen as the "wizard behind the curtain," so to speak, since it does its work out of view of the user of the space. This is done quite intentionally, since the purpose is to provide a seemingly continuous cyberspatial experience while keeping the generating machinery in the machine room (unless, of course, we want the users to see the machinery the way, say, fermenting tanks would be exposed aesthetically to visitors in a brewery).
When the space is first loaded, the applet is automatically initialized. It then looks up the 100 index page at the Nasdaq website through the HTML browser and scans the page for the company symbols and market index values. The applet retrieves this information and stores it. Next, the applet sends out a call to the VRML browser, retrieving the old coordinate values for the sphere, and stores this information as well. Lastly, the applet retrieves the coordinate values from a file containing a perfect sphere, the "PerfectSphere.wrl" file seen in the diagram. When all this is finished, the applet can then perform its computations on the data and plug the new coordinate values back into the VRML space, changing the sphere to the current day's market index values.
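The page-scanning step can be sketched roughly as below. This is not the project's actual parsing code; the text format assumed here (an uppercase ticker symbol followed by a decimal value) is purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: scan fetched page text for "SYMBOL value" pairs.
// The pattern assumed here is an assumption, not the actual format of
// the Nasdaq 100 index page.
public class IndexScanner {

    public static List<String[]> scan(String pageText) {
        List<String[]> results = new ArrayList<String[]>();
        // An uppercase ticker (2-5 letters) followed by a decimal number.
        Pattern p = Pattern.compile("([A-Z]{2,5})\\s+(\\d+\\.\\d+)");
        Matcher m = p.matcher(pageText);
        while (m.find()) {
            results.add(new String[] { m.group(1), m.group(2) });
        }
        return results;
    }

    public static void main(String[] args) {
        for (String[] row : scan("MSFT 151.25 INTC 112.50")) {
            System.out.println(row[0] + " = " + row[1]);
        }
    }
}
```

In the actual project, the fetched symbol/value pairs would then be handed to the coordinate computation before being plugged back into the VRML scene.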
Thus we see two major information loops in the diagram between the applet and the Nasdaq website, and the applet and the 100Sphere.wrl file. In addition, other data files may be created as needed and read by the applet as in the case of the Sphere.wrl file. Here we see the power of Java in routing events between a remote website, a text file residing on the server computer, and a VRML space.
To get a better idea of the handling of events within the Java applet itself, consider the diagram on the following page:
As we see, IndexCall.class is the central component. From here, the three data inlets (indexLookup.class, readSphere.class, and readOldCoordlist.class) are called to send their retrieved values. IndexCall.class then performs three functions:
1. Multiplication of sphere coordinates by the index values.
2. Access to the VRML scene’s eventIns, eventOuts, and values.
3. Installation of the new values into the VRML scene.
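A minimal sketch of step 1, again purely illustrative rather than the project's actual code: each perfect-sphere coordinate triple is multiplied by its company's index value, producing the deformed sphere the user sees:

```java
// Illustrative sketch of the coordinate computation: each point of the
// perfect sphere is scaled radially by its company's market index value.
public class CoordScaler {

    // coords: n points as {x, y, z} on a unit sphere;
    // values: n market index values, one per point.
    public static float[][] scale(float[][] coords, float[] values) {
        float[][] out = new float[coords.length][3];
        for (int i = 0; i < coords.length; i++) {
            out[i][0] = coords[i][0] * values[i];
            out[i][1] = coords[i][1] * values[i];
            out[i][2] = coords[i][2] * values[i];
        }
        return out;
    }

    public static void main(String[] args) {
        float[][] unit = { {1f, 0f, 0f}, {0f, 1f, 0f} };
        float[] idx = { 120.5f, 87.25f };
        float[][] spiky = scale(unit, idx);
        System.out.println(spiky[0][0] + ", " + spiky[1][1]);
    }
}
```

The resulting coordinate list is what step 3 would install into the VRML scene through the EAI.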
VII. Conclusion
The ability to produce dynamic virtual headquarters for companies on the web is currently being pursued at a feverish pace. All sorts of technologies are being put in place to make the use and construction of spaces such as this one practical, everyday installations. Technologies such as HTML-generating page layout programs, from the very software I am using to write this paper (Microsoft Word) to others such as NetObjects Fusion, FrontPage, and Macromedia's Flash, all attest to a rapidly emerging community in which anyone can take part in building.
The newest development, JDK 1.2, the new Java Development Kit, has also arrived. Among many significant new features, such as processing speeds close to that of C++ code, JDK 1.2 also incorporates a whole new arsenal of tools for Java development called the "Swing" classes. These are a revolutionary set of reusable Java code components that allow programmers to build portable graphical user interfaces completely independent of the host operating system. This ensures a consistent and graphically rich platform-independent user experience, regardless of whether the app is running on a Mac, a Solaris server, a Microsoft Windows PC, a network computer, or some other Java technology-enabled host.
It is developments such as JDK 1.2, as well as countless others, that will ensure continued growth in web technology development. And although some technologies flourish while others founder, the overall growth rate remains exponential. In fact, the only limitations now are bandwidth and computer speed, which are themselves increasing at such a rapid pace that even this point will soon be moot. Therefore, we may look forward to a radically changing and increasingly immersive internet in the near future.
VIII. References
1. Michael Benedikt (ed.), 1991. Cyberspace: First Steps
2. William Gibson, 1984. Neuromancer
3. Mark Slouka, 1995. War of the Worlds: The High-Tech Assault on Reality
4. Timothy Ostler, May 1994. "Architecture in Cyberspace," The Architects' Journal
5. Peter Ludlow (ed.), 1996. High Noon on the Electronic Frontier
6. Peter Anders, Oct. 1994. "The Architecture of Cyberspace," Progressive Architecture
7. William J. Mitchell, 1993. "Virtual Architecture," Architecture
8. Robert Venturi, 1996. Iconography and Electronics upon a Generic Architecture
9. Neal Stephenson, 1992. Snow Crash
10. William J. Mitchell, 1997. City of Bits: Space, Place and the Infobahn
11. Muse Technologies website: www.musetech.com
12. The Web3D Consortium: www.web3d.org
13. CTheory: www.ctheory.com
14. Java Technology homepage: http://java.sun.com/
15. Ray Kurzweil, 1999. The Age of Spiritual Machines
16. Chris Marrin, Silicon Graphics, Inc., 1997. Proposal for a VRML 2.0 Informative Annex: External Authoring Interface Reference
17. Netscape developer website: http://developer.netscape.com
18. Virtual Reality Modeling Language Specification, Version 2.0, ISO/IEC CD 14772
19. Kris Jamsa, 1996-99. Java Now (Java programming reference)
20. Donald Hearn and M. Pauline Baker, 1997. Computer Graphics
21. H. M. Deitel and P. J. Deitel, 1998. Java: How to Program (Java programming reference)
22. Andrea L. Ames, David R. Nadeau, and John L. Moreland, 1997. The VRML 2.0 Sourcebook (VRML programming reference)
23. John R. Vacca, 1996. VRML: Bringing Virtual Reality to the Internet