Traveling the Electronic Highway: Ubiquitous and High Performance Computing

Maps, Encounters, Directions.

Master Copy on Earth.
Draft 27Nov1993, 30Jan1994, 10Jan1998

©Gio Wiederhold and CS99I students, 1998

Previous chapter: The Internet - Next chapter: Information Services

Water, water everywhere and not a drop to drink [The Ancient Mariner]
Data, data everywhere and barely time to think [The Modern Traveler]

UBI.Intro

This chapter focuses on initiatives of the 1990's to make computing and information available to everybody, anywhere, anytime. Such ubiquitous computing broadens the services initiated with the networks presented in Chapter INTERNET, but requires new approaches and technologies to support it. A major effort to increase the scale of these directions is the High Performance Computing and Communication Initiative (HPCC), initiated in 199? as a collaborative effort by multiple U.S. Government agencies. It is clear that government alone cannot carry the burden of building, controlling, and maintaining the information highways that are being created. Corporate participation is increasing. At the forefront of commercialization are the telephone companies, the traditional providers of individual links to nearly every person, and the cable TV companies, which provide broadcast services to perhaps the population in the United States. Publishers of newspapers and other current material are also concerned and getting involved. Alliances among them are being formed as this material is written. For up-to-date information you may have to consult your (electronic) newspaper or news magazine. Issues dealing with publishing and libraries are addressed in the LIBRARY chapter, since they require new business approaches.

The HPCC effort is managed collaboratively by ten government agencies, each with specific interests, as summarized in Table\hpcc. The topics of interest in HPCC are High Performance Computing Systems (HPCS), the National Research and Education Network (NREN), Advanced Software Technology and Algorithms (ASTA), and Basic Research and Human Resources (BRHR).

Problems that are inherent in government funding have occurred. Some communities and companies have felt slighted, and an initial emphasis on the technological and scientific infrastructure has kept the public remote. The Grand Challenges are largely scientific, and their ivory towers seem remote from the broad objectives for the Information Highways of the Future. The high visibility gained by this program, especially through the initiatives of Senator, and now Vice President, Al Gore, also makes it subject to frequent political review.

Much of the initial investment in HPCC has been applied to hardware, a necessary prerequisite for motivating users. Hardware performance is also easier to quantify and publicize than the effectiveness of software and applications, and senior reviewers are more comfortable judging hardware. A need to give more emphasis to software and applications was expressed by the General Accounting Office (GAO) and the Congressional Budget Office (CBO) in 1993. It remains difficult to perform software research and development for hardware that does not yet exist, and it is even harder to solve problems that have not yet been encountered.

The National Information Infrastructure (NII) was announced in February, 1993. It augments the HPCC effort with support for an Information Infrastructure Technology and Applications (IITA) component, shown in the last column of Table\hpcc. Also, the BRHR component is extended to cover children in grade school and high school (K-12). The NII announcement also indicated that commercial providers will have a major role in building the information highways and * tollways. Existing regulatory restrictions, especially those impinging on toll-setting, will have to be reduced.

The NII goes beyond the provision of network and computing services for scientists in many ways, although the HPCC remains a foundation. This book focuses on software and applications, and is intended to help create maps that can guide long-term directions for the development of the information highways and their on- and off-ramps.

UBI.History

The history of the HPCC and National Information Infrastructure (NII) initiatives is itself short. The initiatives derive directly from the advances in networking and high-speed computing of the 1980's. Since networks were discussed in Chapter INTERNET, we now primarily consider computing advances, and trace their origins.

By the 1980's, five approaches to computing had emerged: personal computers, workstations, mini-computers, mainframes, and high-performance systems. The technical boundaries among those systems change with improvements in technology. The approaches were sometimes used synergistically, but often competed with each other as well. Synergistic use was often preached, but more rarely achieved, because with each approach came differences in management methods, operating systems, and communication standards. We will review these five approaches, taking the specific management styles into consideration, since such an understanding is needed to assess the structures and enjoy their inhabitants as we encounter them along the information highways.

UBI.History.personal

Personal computers had their origin in the hobby world. Integrated circuit technology, motivated by making large computers better, enabled the placing of all the electronic logic needed for small tasks on a single silicon chip. These chips were intended for use in calculators and controllers for complex devices. Their programs were devised by experts and the programming methods used were decidedly 'user-hostile'.

At first, hobby-oriented entrepreneurs saw a market in packaging these systems for those folk who would be undaunted, and perhaps even challenged, by their complexity. Two early ventures were Altair (<19xx>) and IMSAI (19<80>). The model for these machines was the mini-computer, which had become established as a tool in many research laboratories. Support for their users was found in micro-computer clubs and unglossy magazines, such as Dr. Dobb's Journal of Computer Calisthenics and Orthodontia, advising how to 'run light without overbyte' when using those fascinating toys. The lack of memory and of compilers for programming languages was the biggest bottleneck, but reliable input and output was also a problem. Memories were often about 1000 bytes (1K), and many enthusiasts (* hackers in their terminology) entered the entire code for the tiny-C compiler, published in Dr. Dobb's in 1976, on the front-panel switches of their machines, only to lose the code when the attached cassette-tape drive failed.

The big break came in <19xx> when the Apple computers appeared, with an integral, adequate keyboard, a color screen, and, a bit later (<19xx>), VISICALC, the first spreadsheet program. Use of the spreadsheet meant that people who were not hackers could use computers themselves. As is typical, the appearance of novel software was less noted than that of the hardware, but their interaction is crucial. New hardware capabilities inspire software developers, and new software capabilities open up new markets, reducing the cost and increasing the ubiquity of the hardware.

IBM's entry into the personal computer (PC) market in 19<82> placed a stamp of approval on the micro-computer technology. Soon other software tools became available that did not require a programming mentality: databases, word-processing, and publishing software. The price of a simple PC was such that it could already be justified if it was used only as a terminal to mainframes, and as such it was welcomed by managers who underestimated the adaptability of personal computing. IBM's decision to make the PC an * open system, i.e., to enable and even encourage other vendors to produce compatible software and hardware extensions, led to a feeding frenzy, which became impossible for anyone to control. Domestic and foreign makers of clones of the PC, sometimes marginally skirting copyright and/or patent restrictions, flourished. Today the prime vendor of PC operating systems, Microsoft, has a book value which exceeds that of IBM. One can only hypothesize what would have happened if IBM had kept the system closed. The example of XEROX, in Sect.\U\X\ALTO below, does not present an attractive alternative. Even with all the competition, IBM's PC division, now called , is one of the strongest components of the company.

Derivatives of the IBM PC comprise the largest segment of potential nodes on the information highways, even if they are considered technically inferior to their larger workstation brothers. Somewhat more elegant are the Macintosh computers from Apple, but in terms of performance they differ little. Their purchase price is modest, say \dol2500. Personal computers are ubiquitous because the decision to buy one and the manner in which it is used are personal choices, and don't require institutional or corporate blessings. Maintenance and upgrading also become a personal responsibility and can waste valuable time. Mixing personal responsibility with use of commercial services can create an acceptable balance. Higher-level management costs are minimal. Enabling PCs to communicate is also low in cost, but if transmission has to go via phone-lines then high-speed modems can add much cost. In a work environment many PCs can be locally connected, and a high-speed connection to the rest of the world can be shared.

When personal computers and their software are obtained by institutions in large quantities, then corporate policies start to intervene. If PCs are used on a network, some compatibilities should be adhered to, but frequently computing management acts slowly, and * acquisition decisions do not keep up with technology. It took the U.S. Air Force, for instance, many years to standardize on a particular model of PC. When they did, the machine was nearly obsolete. In a rapidly changing world it is wise to standardize only the interfaces, and let users obtain the best equipment for their needs according to their budgets.

UBI.History.workstations

While personal computers support only one task at a time, professionals are more demanding. Being unable to get a response from one's machine while printing is going on is intolerable outside of the home environment. When a phone call from a colleague or a patient arrives, one will want to be able to access relevant information, and then return to one's prior task. Arrival of email can happen at any time. Some tasks in themselves require multiple processes. A search through a database can be a subfunction of an accounting program.

The essential difference between a personal computer and a workstation is the ability to manage multiple tasks in one system. Of course, many personal computers are becoming more like workstations and can then be rightfully advertised as such. The ability to handle multiple tasks requires first of all more resources, specifically memory to hold multiple programs, as well as memory to hold an operating system capable of switching tasks. The user also gets involved. There have to be means to start a subtask, check on its progress, use its results, and terminate tasks that are no longer needed. Confusion can easily ensue, so the multi-process support systems should also provide warnings if something questionable is about to happen, and backup for recovery when it does.
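To make this concrete, here is a minimal sketch (not from the original text) of how a program on a UNIX-style workstation can start a subtask, keep interacting with the user, and later check on and use the subtask's result. The printing scenario and the status value are made up for illustration.

    /* Illustrative only: start a subtask, continue the main task,
       then collect the subtask's result. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t child = fork();          /* start a second task */
        if (child < 0) {
            perror("fork");
            return 1;
        }
        if (child == 0) {
            /* the subtask: e.g., formatting a document for printing */
            sleep(2);                  /* stand-in for real work */
            return 42;                 /* status reported back to the parent */
        }
        /* the main task keeps responding to the user meanwhile */
        printf("main task: still interactive while the subtask runs\n");

        int status;
        waitpid(child, &status, 0);    /* check on the subtask, use its result */
        if (WIFEXITED(status))
            printf("subtask finished with status %d\n", WEXITSTATUS(status));
        return 0;
    }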

Multi-processing was originally implemented on mainframes, when many users were trying to share the same computing resources, but the environment in a workstation is essentially different. While users sharing a large computer compete with each other for resources, and the systems have to enforce fairness, a workstation is still an individual's responsibility, and while cleanliness of services is important, fair allocation of resources is not.

ALTO

In the early seventies researchers at the Xerox Palo Alto Research Center (PARC) developed the ALTO computer. While its hardware infrastructure was modest and conventional, higher layers of software provided a number of crucial innovations. They adopted the mouse, and used it to manage a desktop-like arrangement of subscreens, * windows. Such windows are now an inherent feature of personal computers and workstations, and Microsoft Windows may be the world's most popular software system. * Icons, to denote documents and programs, provide visual help to the users. The ALTOs were assigned as personal workstations to PARC researchers, were connected via the original * Ethernet, and could share files and processing resources. The first demonstration of a friendly * virus was carried out at PARC, as part of testing Ethernet capacity, by having about a hundred ALTOs simulate active network interaction [Shoch]. XEROX' capability in copying machines led to early experiments at PARC with xerographic and laser printers; after all, if one separates the reading and printing parts of a copier one has an image scanner and a printer. When both are connected via communication lines one has a facsimile copier (fax).

The software developed at PARC was also innovative. An outstanding example was * 'Smalltalk', a programming system and language that could treat windows, icons, and other constructs to be displayed on the screen as * objects, permitting them to be treated as holistic units, rather than as constructs of character strings, integers, cross-reference pointers, and programmed methods to interpret those constructs. Programs in Smalltalk could reuse such objects devised by others, by simply invoking their defined methods, without needing to understand their internal construction [Bjorning].

XEROX management and their lawyers were wary about releasing these innovations. Even when some universities in 1977 received ALTOs as gifts from XEROX, crucial software, such as Smalltalk, was not provided, so that the machines were only suitable as novel word-processors. The impressive capabilities of the concepts developed at PARC were known only to insiders and no market for XEROX's products ensued. Only when insiders left the confines of PARC and started their own businesses did the concepts enter the open marketplace, and become the basis of the rapid evolution of the personal computer and workstation markets.

UNIX

The primary means of controlling workstations today is the UNIX operating system, or one of its upgrades, such as the MACH system. The original UNIX system was devised in the early 1970s at Bell Laboratories by [Ritchie and Thompson] as a reaction to the problems encountered with time-shared mainframes, as discussed below. Workstations did not exist then, but early mini-computers (such as the DEC PDP-11) were becoming available. By starting out with fresh concepts, a new, simple operating system was created. Bell Laboratories, and its parent, AT\&T, had no product plans for it, although it soon became a useful basis for checking out the programs that control the telephone switching network. Bell Labs made the system available at a low price to universities. Several, especially UC Berkeley, developed adaptations, which were then rapidly adopted at other universities. Commercial use was limited, since AT\&T's non-academic license fee was quite high. Commercial users continued to prefer the 'free' software that came from the vendors, so that important families of computers, for instance DEC VAXes, were using different software in academic and commercial laboratories, thoroughly disappointing graduating students.

Today, nearly all workstations come with versions of UNIX, through licenses provided by their vendors. There are still two flavors of UNIX, namely those derived from the original AT\&T UNIX (sometimes called System 5), and those derived from the UC Berkeley UNIX. For the end-user the flavor matters little, but software packages are rarely portable. With different packages come differences in higher-level program management, and here the differences are great and sometimes baffling. When such workstations share a network, the network interfaces also keep the differences minor. Novell Corporation, a major vendor of network software, has now acquired the base rights to AT\&T UNIX, and sells sublicenses. Recently vendors from both camps have gotten together and selected higher-level functions from one flavor or the other, so there is hope that the differences will disappear. In the meantime, DoD has adopted a base standard (POSIX []), but that standard has only a subset of the functionality users expect today.

Dominant in the workstation market today are SUN, Hewlett-Packard, and IBM, using the AT\&T flavor of UNIX, and DEC, ..., using the UC Berkeley derivatives. The NeXT system uses MACH as the basis for its NeXTSTEP interface; this system is now targeted to make high-performance PCs into full-fledged workstations. Compaq, a major vendor of PC clones, is also making MACH available on its high-end systems. The MACH kernel was re-engineered at Carnegie-Mellon University (CMU) to improve interprocess communication, greatly increasing the parallel operation of tasks. The MACH version is also independent of license constraints to Novell, and is independently marketed by .

There are also free versions of UNIX, notably LINUX and the * GNU software provided via the Free Software Foundation (FSF). The FSF's principal architect, Richard Stallman, believes that software should be a free good, to be shared and improved by the community. The GNU label is its motto (GNU is Not UNIX), and a very capable, but user-hostile, editor (GNU * EMACS) is provided by the FSF and widely used by the hacker community. It is available for nearly every conceivable machine, simplifying life for those of us who access many types of machines in a given day.

Workstations provide a level of performance that can typically satisfy the most demanding user. Their responsiveness is certainly better than that of a shared mainframe. Their cost is still high for an individual, say, on the order of $10,000 today. As a business tool a workstation can easily make sense; that amount is perhaps 10\pct of the annual cost of a professional. When the workstations are on a * local network (LAN) then costly devices can be shared. Candidates for sharing are devices such as * color printers, * document scanners, or high-capacity * archival storage systems. Simple printers may be shared by neighbors. However, the more sharing occurs, the larger the management and support costs become, since individual responsibility is reduced.

UBI.History.group-computers

The next larger size of computers are * minicomputers and their derivatives. They are the low-end level of shared machines, and are now mainly used to support groups of users that collaborate closely. We find these computers often in laboratories, where some expensive equipment must be shared. Such equipment may be computer peripherals, such as scanners and high-resolution color printers, or specialized equipment, as found in clinical laboratories.

At the simplest level individual participants need only a * terminal, at a cost of say $1000, and can then share the resources of the minicomputer. Such terminals are dumb, and transmit what is typed or presented without any internal processing. Early * dumb terminals were based on the teletypewriters used for telegraph communications; later, simple video screens became dominant. Today most users will use personal computers to access the minicomputer. Often they are connected using a LAN. Such a computer is then referred to as a client, and the group computer is then called a server. The combination becomes an example of a client-server system, with a thin client, as sketched below. More discussion of client-server systems will be found in the Chapter on Mediators.
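As a concrete illustration (not from the original text), the following sketch shows the thin-client half of such a pair in C, using ordinary UNIX sockets. The server address 192.0.2.10 and port 7000 are placeholder values; all real processing is assumed to happen on the group computer, while the client merely forwards what the user types and displays the reply.

    /* Illustrative only: a "thin client" that does no processing of its own. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in server = {0};
        server.sin_family = AF_INET;
        server.sin_port   = htons(7000);                     /* assumed port */
        inet_pton(AF_INET, "192.0.2.10", &server.sin_addr);  /* assumed address */

        if (connect(sock, (struct sockaddr *)&server, sizeof server) < 0) {
            perror("connect");
            return 1;
        }

        char line[256], reply[1024];
        while (fgets(line, sizeof line, stdin)) {            /* read a command */
            write(sock, line, strlen(line));                 /* send it to the server */
            ssize_t n = read(sock, reply, sizeof reply - 1); /* wait for the answer */
            if (n <= 0)
                break;
            reply[n] = '\0';
            fputs(reply, stdout);                            /* just display it */
        }
        close(sock);
        return 0;
    }

A matching server would loop accepting such connections and do the actual database or laboratory work on behalf of each client.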

Most minicomputers provide the power of several workstations, but since users do not continuously require the capacity of a workstation, the shared mini-computer will satisfy the requirements of dozens of coworkers in a laboratory, a design studio, an engineering bureau, or an office most of the time. A minicomputer system, costing say $100,000, may be shared by 30 or more users, reducing the cost of computing hardware per individual. [VAX example] The modern minicomputer and its software has a dual origin: laboratory computing and time-sharing.

Its laboratory ancestor is the LINC computer, built by [Jerry Cox, Charlie Molnar, Lee Huntley... ACME] at MIT under NIH sponsorship to bring computing into medical laboratories. It was a radical departure from the mainframe computers of the day. Laboratory instruments could be directly connected to it, greatly increasing the capability of researchers to acquire data in * real time. The data could be placed on reels of tape that would fit into the pocket of a laboratory coat, another radical departure, and analyzed in its 12K character memory. Like the later personal computers, the user was * on-line, controlling the operation through a * teletype-writer and keyboard, and a console with many switches. It occupied only the space of a refrigerator, and could be placed in, and remain under the control of, a single laboratory. When we consider its progeny, the LINC computer remains one of the greatest successes of government funding.

Figure\labcomputer. A LINC computer in a laboratory [NLM report]

That such a small computing package could be effective surprised industry, and companies such as Digital Equipment Corporation (DEC) started building commercial derivatives, most without the real-time capabilities. Users started contributing and sharing software, greatly reducing support costs for the manufacturers. The low cost encouraged proliferation, and software experimentation. The power of the machines grew rapidly. At Bell Laboratories [Ritchie and Thompson] found the corporate mainframe computers unwieldy and designed a simple high-level language, * C, and wrote a * simple operating system (* UNIX) for their DEC PDP-11 mini-computers. Today, this combination is the most widely used software tool for system builders and the teaching of computer science.

To make more efficient use of a shared laboratory computer, the Rand Corporation, an Air Force think tank, developed a * time-sharing system (JOSS) on their . Timesharing permits multiple programs and their users to be * on-line, i.e., actively using one computer at the same time. A timesharing operating system gives each program in turn a chance to perform something of interest for its user, as sketched below. While one user is thinking or is distracted, others can make use of the idle resources. Dealing with on-line users also motivated the development of * user-friendly systems, since leafing through large manuals to find the meaning of mysterious codes inhibits effective interaction. The JOSS system, due to its very small memory, provided only one universal error message to its users: "eh?", but later systems devote much of their resources to detailed messages and guidance for corrections.
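The following toy sketch (not an actual operating system, and not from the original text) shows the turn-taking idea: each on-line program gets a fixed slice of service in rotation until its work is done. The number of users, the work amounts, and the slice size are arbitrary.

    /* Illustrative only: a round-robin loop standing in for a
       time-sharing monitor that grants each program a "time slice". */
    #include <stdio.h>

    #define NUSERS  3
    #define QUANTUM 2   /* units of work granted per turn */

    int main(void)
    {
        int work_left[NUSERS] = {5, 3, 7};   /* pending work per user program */
        int remaining = NUSERS;

        while (remaining > 0) {
            for (int u = 0; u < NUSERS; u++) {
                if (work_left[u] == 0)
                    continue;                /* this user is idle or finished */
                int slice = work_left[u] < QUANTUM ? work_left[u] : QUANTUM;
                work_left[u] -= slice;       /* the program runs for one slice */
                printf("user %d ran %d units, %d left\n", u, slice, work_left[u]);
                if (work_left[u] == 0)
                    remaining--;
            }
        }
        return 0;
    }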

Timesharing capability soon moved to the growing minicomputers, and became the dominant mode of computer interaction at universities. A few systems even combined timesharing with real-time data acquisition, providing easy access to complex technology for their users [ACME reference]. Timesharing also enabled remote use, initially just by having the teletypes connected over telephone lines. Soon timesharing software was also installed on mainframe computers, which already had * multi-programming software to better allocate the many programs they were handling. However, these systems did not have the flexible management (people, organization, and software) needed to deal with on-line users.

A shared resource, as represented by mini-computers and their successors, requires a staff, and the size of the staff must remain modest if benefits are to be achieved from sharing equipment. With a modest staff only a well-focused set of quality services can be maintained. Shared computers are only effective when all participants do related work, such as in a laboratory or its business equivalents, say, an accounting firm or a legal office. When a different type of service is needed, it is best to go out over the networks and obtain such a service remotely. For instance, engineering companies may benefit from purchasing services, as can be provided by the FAST system and its successors, described in the Chapter on Electronic Commerce. Keeping the users local and homogeneous also reduces the need for management to resolve access conflicts. Participating users can understand when the system is temporarily overloaded due to some important task for a colleague.

Upgrading of computing capabilities for their users is more complex for mini-computer managers than it is when dealing with workstations. Workstation-based users can simply update or purchase individual workstations. Changing a minicomputer system invariably affects all users, and for many of them the benefits will be minor. Today, the low cost of hardware relative to the cost of staff to keep systems functioning reduces the opportunities for effective use of timeshared minicomputers. We are likely to find a smaller proportion of minicomputers along our information highways. Most of them will be found where costly equipment is attached to them, while their users have indirect access from their workstations.

UBI.History.mainframes

In the 1960s large, multi-processing computers represented all significant computing activities. As more alternatives at either side became available, the term 'mainframe' was coined, presumably to indicate their central role. Mainframe computers deal with a large variety of tasks: they handle * database * transactions and large * batch operations, serving local and remote users. The users come from all kinds of departments, and need not cooperate with each other. A great deal of software, people, and accounting is devoted to keeping operations fair. Prices are announced for the use of computing cycles, long-term storage, printing, etc. The prices differ for day and night-time use, and are adjusted to encourage users to behave in ways that seem beneficial to the users as a whole, and to objectives set by management.

Users benefit from mainframe use if their needs are irregular: they only need to pay for services consumed, and for any terminals and communication lines they own. Most mainframe computers are being connected to the Internet, so that users can also communicate remotely at a low initial cost, although incremental prices may be high, especially in the daytime.

UBI.History.hpcs

High Performance Computer Systems (HPCS) are designed to handle the most challenging computing tasks, as described in Section INTERNET\F. Users of such * supercomputers value sharing less than being able to marshal massive computing power for a demanding user's problem. Parallel operation is the dominant theme in HPCS.
  • Overlap among machine instructions: while a result is being written, another part of the computer can perform a new computation. The analogy is having specialized rooms in a house and shuffling people among them. Different instructions may occupy distinct computer sections in parallel as well. This * superscalar capability is now becoming available on single chips, such as the * PowerPC.
  • Parallel vector-processing means performing a bunch of similar operations at the same time, typically handling entire rows or columns. The analogy is having an elevator lifting a group of people at a time. If the group or vector is large, the operation is repeated. If the group or vector is small, little is gained by having a large elevator or wide vector capacity. If vector operations are few, queues will form at other points, and the benefits also remain minor. Scientific computations and processing of images can gain much from vector processing.
  • Parallel execution of entire tasks. The analogy is having multiple dentist's chairs in a clinic, but also a dental assistant for every chair, working from the same plan (a minimal code sketch of this style of dividing work follows this list). Such a mode is also called * Single-Instruction, Multiple-Data (SIMD) processing.
  • Independent parallel processing, or Multiple-Instruction, Multiple-Data (MIMD) processing. Consider independent dentists and differing patients needing dental work. The chairs will come free at different times, and scheduling is needed to fill most chairs and keep the dentists busy. However, HPCS problems are not single procedures. Computing tasks depend on, or follow, each other, and these * constraints must be recognized to make proper predictions for task (e.g., patient) scheduling.
  • Distributed computing is an extension of MIMD, but now the computers are remote, are likely to differ, and may be specialized. Travel among computers takes much time. Predictive scheduling becomes impossible, since no central information exists that relates the demands of the tasks and the capabilities of the services. Much academic work in parallel computation is based on exploiting multiple workstations, since these are ubiquitous in their laboratories, and are seen as being idle much of the time. General solutions to harness this power are lacking, although some good approaches exist. Chapters MEGA and MEDIATORS focus on software architectures which are best implemented using many workstations.
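The following minimal sketch, added here for illustration, shows the 'same plan, many chairs' style of parallelism in C, using POSIX threads as stand-ins for the processors of a parallel machine: each worker applies the same summing plan to its own slice of an array, and the main program then combines the partial results. The array contents and the choice of four workers are arbitrary.

    /* Illustrative only: the same plan applied by parallel workers to
       different slices of the data; each worker sums its own chunk. */
    #include <stdio.h>
    #include <pthread.h>

    #define N        1000000
    #define NWORKERS 4

    static double data[N];

    struct slice { int lo, hi; double partial; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        s->partial = 0.0;
        for (int i = s->lo; i < s->hi; i++)
            s->partial += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;                       /* dummy data */

        pthread_t worker[NWORKERS];
        struct slice s[NWORKERS];
        int chunk = N / NWORKERS;

        for (int w = 0; w < NWORKERS; w++) {     /* hand each worker its slice */
            s[w].lo = w * chunk;
            s[w].hi = (w == NWORKERS - 1) ? N : (w + 1) * chunk;
            pthread_create(&worker[w], NULL, sum_slice, &s[w]);
        }

        double total = 0.0;
        for (int w = 0; w < NWORKERS; w++) {     /* wait and combine results */
            pthread_join(worker[w], NULL);
            total += s[w].partial;
        }
        printf("total = %.0f\n", total);
        return 0;
    }

On a UNIX system such a program is compiled with the POSIX threads library (e.g., with -lpthread); on a true parallel machine each worker would run on its own processor.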
Figure\teraops. Computer System Performance Trends for Grand Challenge Problems (from [OSTP:92])

The HPCC has focused on large-scale MIMD computing for HPCS, as a reasonable balance of performance potential and complexity. Its expectations are sketched in Fig.\teraops. Much of the commercial world has focused on vector-processing, and older SIMD machines are being phased out. Rapid progress has occurred in workstations, so that distributed computing is also quite attractive if good interconnections are available.

UBI.Functions

The function of the HPCC initiative is to support four major national challenges. Its progress was summarized in [HPCC:94]. The NII broadens those objectives to include the challenges listed here. We summarize the functions which the initiatives are to provide for these challenges; several of them have their own chapters assigned to them, so that brevity here does not hurt.

UBI.Functions.grand challenges

The HPCC initiative was focused on a list of * Grand Challenges [OSTP:89]. These were selected as examples where HPCS and NREN were well justified. The program was defined by a Committee on Physical, Mathematical, and Engineering Sciences, and reflects the scientific outlook of the participants. Even though they were all government officials, the science emphasis is clear in the subset of challenges chosen as examples in the HPCC publications [OSTP:92]. The program received its first specific funding in Fiscal Year 1991. Two additional examples introduced networking, as presented in Chapter INTERNET.

1: Weather Forecasting

The prediction of severe weather events is still a challenge for super-computing. Current weather prediction programs partition the world into fairly large units, and those are adequate to provide a few days' prediction for general atmospheric conditions. A typical atmospheric unit today measures <50 km (30 miles) by 50 km>, and is <2000 m high>. Prediction involves * simulating the interactions of these atmospheric units, the earth and sea below, and the sun above. Because of this * coarse granularity, current programs can only generate warnings of critical conditions in a general area, but not actually predict the severity or paths of storms and the like.

Storms of various types, such as thunderstorms, tornadoes, hurricanes or cyclones, tropical storms, and blizzards, cause much devastation. Much effort goes into tracking them, but predictions of paths and intensity changes remain guesswork. Their unpredictability also incurs high costs when evacuations are ordered as a precaution.

To understand, and eventually predict, local severe weather, the sizes of the atmospheric units must be drastically reduced. More data are required to describe the atmosphere and the land and sea under it at a finer grain. Those data require disproportionately more computation, since the time intervals used to predict and record incremental changes must shrink as well. This means that when reducing the linear size of an atmospheric unit by a factor of 10, the demand on computation can increase by a factor of $10^4 = 10,000$.
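One accounting for that factor, assuming the units are refined in all three spatial dimensions and that the simulation time step must shrink in proportion to the unit size, is

    $ 10 \times 10 \times 10 \times 10 = 10^4 $,

that is, ten times as many units along each of the three spatial axes, and ten times as many time steps over the same forecast period.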

As sensors and measurement capabilities increase and the capacity of networks makes it easy to ship detailed data to computing nodes on the network, the demand for HPCS in this challenge will become stronger. To achieve 6-hour predictions on 5 km grids may require 20 teraflops.

2: Genomics

A person's inherited genetic makeup controls much of one's subsequent health. In addition to diseases directly caused by genetic irregularities, the susceptibility to many other diseases seems to be genetic in origin. Since <195x> we know that our genetic blueprint is encoded in chromosomes composed of tightly wound, paired strands of nucleotide bases. To locate genes within the human genomic DNA strand of about 3 billion pairs is a daunting task, one that is challenging researchers all over the globe. To associate a disease with a specific sequence in that strand, samples of DNA are obtained from families with that disease.

Not all genetic problems are expressed by faulty genes. Genes implicated in cancer, * oncogenes, seem to be identical to normal genes, but their ability to replicate is turned on when it should not be. Within the genetic strand are sequences which act as promoters or inhibitors of replication, by deforming the strand so that replication is controlled. The 3-dimensional (3-D) configuration determines whether other biological material can lock itself to the strand, similar to Velcro. Drugs can take the place of other material, and inhibit biological processes that are otherwise enabled by these attachments. Creating 3-D models of DNA under various conditions is crucial to developing insight into the processes that control growth, and our lives.

Genetic research oriented toward these problems requires rapid communication among researchers to avoid overlap and encourage collaboration; access to research results, including amino-acid sequences that are too long to be reliably transcribed by hand; search routines to match new findings to sites in that long, variable, and incompletely known DNA strand; programs that can create and rotate the 3-D images for inspection; and programs that can search for candidate attachments. The latter tasks require immense computing power, as well as scientific progress to exploit that power.
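To illustrate the flavor of such a search routine (this sketch is not from the original text), the following C fragment slides a short motif along a made-up strand and reports exact matches. Real genomic searches must tolerate errors and use far more sophisticated algorithms, but the core sliding-comparison idea is the same.

    /* Illustrative only: naive exact-match scan of a tiny, made-up DNA strand. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *strand = "GATTACAGGCTAGGATTACCGATTACA";  /* made-up data */
        const char *motif  = "GATTAC";

        size_t n = strlen(strand), m = strlen(motif);
        for (size_t i = 0; i + m <= n; i++)          /* slide the motif along */
            if (strncmp(strand + i, motif, m) == 0)
                printf("match at position %zu\n", i);
        return 0;
    }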

3: Predicting New Semi-conductors

! not yet written

4: Pollutants

Air-borne pollutants affect plant life, and animal and human health. Pollutants can travel far, and cross any boundaries. In order to understand the causes of environmental damage, the flow of air, its ability to transport pollutants, its temperature as it affects the chemical reactions that transform pollutants, ... must be modeled.

5: Aerodynamics

The performance of a new airplane is largely determined by its external shape. How the air flows around the fuselage, lifts the wings, and impinges on the tail surfaces determines its speed, lifting capacity, and stability. While engineers can estimate the performance of a design sufficiently to determine the general shape and size of an airplane, to validate and adjust design proposals it is necessary to build models and test them in wind tunnels. Wind tunnel testing is costly and limited. A wind tunnel sufficiently large to hold a full-scale model is unlikely to be able to move the air at regular flying speeds, even for commercial planes, and certainly not for supersonic flight regimes. If reduced-scale models are used, errors are introduced, since the features of an aircraft do not scale evenly; for instance, a 1/4 scale model has only 6\pct of the area and 1.6\pct of the volume. Other factors, such as the reduced density at high altitudes or tropical temperatures, are also hard to simulate in a wind tunnel.
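The quoted percentages follow from the fact that areas scale with the square, and volumes with the cube, of the linear scale factor:

    $ (1/4)^2 = 1/16 \approx 6\pct $ of the area, and $ (1/4)^3 = 1/64 \approx 1.6\pct $ of the volume.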

Programs on HPCS can simulate the airflow around an aircraft for all types of conditions. Again, the computational requirements are huge. The surface of the aircraft is segmented into millions of small areas; in each area the direction of flow, the pressure, and the temperature must be determined. Above each area are many cells, since the effect of the aircraft on airflow extends many meters beyond its surface. A supersonic shock wave will even reach the ground below. Electronic wind tunnels are becoming essential to aircraft and spacecraft design, and their performance places a limit on innovation, since each change requires hundreds of hours of computation on even the most powerful supercomputers now available.

Under stress an aircraft is not rigid. When wings move up and down in reaction to the forces that the airflow exerts on them, their shape, and the angle they present to the flow, varies. Predicting the effect of the mutual interaction of airflow pressures and the physical deformation of an entire wing structure is still beyond the capabilities of today's computing, so simplified computations are used instead. If the two types of computations can be combined, then structural innovations might become possible that would make new aircraft both lighter and safer.

A third area where shape interacts with performance is in the design of * stealth features, which make aircraft less observable to enemy radar. Now the shape has to be combined with the properties of the material that is placed on the surfaces to absorb and diffuse electromagnetic radiation. Early stealth aircraft had very angular shapes, in part because the computational requirement for more complex shapes was excessive, as seen in Fig.\stealth.

Figure\stealth. A Lockheed F-117 Stealth Aircraft.

6: Energy Conservation and Turbulent Combustion

! not yet written

7: Microsystems Design and Packaging

! not yet written

8: Earth's Biosphere

! not yet written

UBI.Functions.universal service

! not yet written

UBI.Technology

Computer systems, large or small, are built of similar architectural components. The differences among them are in the overall hardware * architecture, namely, how many components there are, how big they are, and what their connections are, i.e., how many people can travel from one component to another without colliding. The principal components are: a * processing unit or units, the equivalent of a kitchen and its attached dining room, where crucial social interactions occur; * memory, the study, where work in progress is stored; * storage, a library where long-term data are archived; * input and output buffers, the living room where visitors are received; and * a computer bus, the hallway used to go from one room to another. Architectures differ; in a simple house one may receive visitors directly in the kitchen, placing more load on the cook, but improving interaction. In a mansion there may be many reception rooms for visitors, and multiple staircases, to accommodate all types of traffic.

The most magnificent estate is not useful without furnishings, just as hardware is useless without software. It is hard to select furniture without knowing the house's architecture, and it is hard to specify a house without knowing what furniture it must contain. Experience helps us avoid egregious mistakes in home building, but we have much less experience with computer systems.

Figure\architecture. Sketch for the computer - home analogy.

UBI.Technology.cpu

\U\T\CPU If there is a single * central processing unit (CPU), then all control emanates from it. On computers that handle multiple tasks, the processing unit has to switch its attention from task to task. Frequent task switching can occupy much time but, as in our analogy, may be needed to prepare a tasty meal with many courses.

UBI.Technology.timesharing

! not yet written, available from DBD

UBI.Technology.parallel

! not yet written

UBI.Technology.displays

! not yet written, available from MIS

UBI.Technology.windows

! not yet written, available from DBD

UBI.Technology.mouse

! not yet written, ref sri mouse Early 1960

UBI.Alternatives

We will focus here on alternatives in the software area. Alternatives in communication hardware were presented in INTERNET.Alternatives, and we expect to continue to see a mix of computers as outlined above. New ways of combining them into novel architectures continue to emerge, and are presented in the Chapter on MEDIATORS.

UBI.Alternatives.microsoft

While UNIX remains the primary operating system for workstations, the field is now being invaded by the larger personal computers. IBM PCs with the Microsoft NT operating system provide most of the functionality of a UNIX workstation, at approximately half the cost.

UBI.Conclusion

While this chapter focused on computers and their systems, it is clear that changes are greatest where computers and communications intersect. Vendors, such as Novell, feel the need to merge both software markets, and high-performance systems are always accessed through communication links. Another recurring theme is the interaction between innovative researchers and government support. It is hard to hypothesize what would have happened if innovative people had not been able to receive support, or if government support had been focused on large establishments with good track records and known agendas.

The digital highways will be built by a mixture of phone, cable, and satellite technology. While corporate origins will differ, it is likely that the information will move fairly smoothly over the various highway types, with minimal delays at the * transshipment points. For high data rates and dense concentrations of consumers optical cables will dominate. Satellite-based links can provide crucial backup during failures and emergencies. Reimbursements will be allocated by aggregate use. Such mixed carriage exists now for mail and railroad services, and the shipper is rarely aware of all the companies that were involved in a shipment.

The types of computers found along the highways will remain varied, but we can expect that workstations will dominate. These workstations will be of two origins. The majority will be high-range PCs, and the others will be the full range of multi-process workstations. Secondary networks will provide access to smaller workstations and personal computers. Where costly equipment is to be shared, minicomputers, as exemplified by the DEC VAX-series, may find a role. * Dumb terminals will be rare. Common services, such as databases, will be provided by high-power workstations, as well as by mainframe computers, but the proportion of mainframes will shrink. High-performance computers will provide computation services where workstations are inadequate. Mainframes will rarely be used for computation-intensive tasks, but can be effective where a central * control point is needed for shared resources. A specialization will develop in the service arena, because users will switch to where the service is of the highest quality, and high quality service depends on people, especially professional specialists in various domains.

Acquisition

Acquisition of computer hardware is often a disaster. Computer purchases have traditionally been costly, so that many rules were set up to assure effective use of computers. Technology moves faster than the rules can be adapted. The folk that make those rules are senior people with much experience, but that experience may be from an older generation of equipment, and, more seriously, pertain to another * management style. Traditional acquisition centers on hardware specifications, which change rapidly, often within a purchasing cycle. Advances in computer systems are not even bound to a model year, as car models are. What cars must provide is transportation, and compatibility with existing roads. What computer hardware must provide is support for applications software, and compatibility with network standards. If purchasing agents were to specify these aspects, then software producers would be motivated to be effective and responsive, so their products could be used on the most advanced and economical hardware.

Other sources of confusion are ignoring maintenance needs, or assigning maintenance to internal organizations that are not able to keep up with technology changes. A serious problem relates

Science versus Commerce

The HPCC program was motivated largely by scientific applications. Much of the remainder of the book will be concerned with applications in other fields. The applications shown were diverse, but much less innovation is evident in their software than in the hardware. The scientists working on the Grand Challenges have often been satisfied with * FORTRAN, a computer language for Formula Translation first developed in the 1950's. Although it has undergone many improvements, it does not support well the development of flexible services. The C language, as shown by its history, was intended for relatively small computers, and does not provide good tools for the safe composition of large programs. Issues related to programming and programming languages are presented in . A significant adaptation is C++, .

The development of computer systems provides lessons for government and commercial funding. .. ..

UBI.Bio

\U\B Noyce?

UBI.Lists

UBI.funding

Table\hpcc shows the planned allocation of government funds for HPCC in fiscal year 1994 to the four established components and to one new component, as introduced in Sect.\U\A.
HPCC sponsors for Fiscal Year 1994
Agency    HPCS    NREN    ASTA    BRHR    total    IITA   notes
ARPA     151.8    60.8    58.7    71.7    343.0           validate final
NSF       34.2    57.6   140.0    73.2    305.0    36.0
DoE       10.9    16.8    75.1    21.0    123.8
NASA      20.1    13.2    74.2     3.5    111.0    12.0
NSA       22.7    11.2     7.6     0.2     41.7
NIH        6.5     6.1    26.2     8.3     47.1    24.9
NOAA       ---     1.6    10.5     0.3     12.4
EPA        ---     0.7     9.6     1.6     11.9
DoEd       ---     2.0     ---     ---      2.0
NIST       0.3     1.2     0.6     ---      2.1    24.0
total    246.2   171.6   402.4   179.8   1,000.    96.0
Note: All amounts are in $millions

Fin

Previous chapter: The Internet - Next chapter: Browsing
List of all Chapters.
CS99I CS99I home page.