Water, water everywhere and not a drop to drink [The Ancient Mariner]
Data, data everywhere and barely time to think [The Modern Traveler]
The HPCC effort is managed collaboratively by ten government agencies, with specific interests, as summarized in Table\hpcc. The topics of interest in HPCC are
Problems that are inherent in government funding have occurred. Some communities and companies have felt slighted, and an initial emphasis on the technological and scientific infrastructure has kept the public remote. The Grand Challenges are largely scientific, and their ivory towers seem remote from the broad objectives for the Information Highways of the Future. The high visibility gained by this program, especially with the initiatives of Senator, and now Vice President, Al Gore, also makes it subject to frequent political review.
Much of the initial investment in HPCC has been applied to hardware, a necessary prerequisite for motivating users. Hardware performance is also easier to quantify and publicize than the effectiveness of software and applications. More senior reviewers are comfortable in judging hardware. A need to give more emphasis to software and applications was expressed by the General Accounting Office (GAO) and the Congressional Budget Office (CBO) in 1993. It remains difficult to perform software research and development for hardware that does not yet exist, and it is even harder to solve problems that have not yet been encountered.
The National Information Infrastructure (NII) was announced in February 1993. It
augments the HPCC effort with support for an Information Infrastructure Technology and
Applications (IITA) component, shown in the last column of Table\hpcc. Also, the BRHR
component is
The NII goes beyond the provision of network and computing services for scientists in
many ways, although the HPCC remains a foundation. This book focuses on software and
applications, and is intended to help create maps that can guide long-term directions for
development of the information highways and their on- and off-ramps.
By the 1980s, five approaches to computing had emerged: personal computers, workstations, mini-computers, mainframes, and high-performance systems. The technical boundaries among those systems change with improvements in technology, and the approaches were sometimes used synergistically.
However, hobby-oriented entrepreneurs first saw a market in packaging these systems for those who would be undaunted, and perhaps challenged, by their complexity. Two early ventures were Altair (<19xx>) and IMSAI (19<80>). The model for these machines was the mini-computer, which had become established as a tool in many research laboratories. Support for their users came from micro-computer clubs and unglossy magazines, such as Dr. Dobb's Journal of Computer Calisthenics and Orthodontia, advising how to 'run light and avoid over byte' when using those fascinating toys. The lack of memory and of compilers for computing languages was the biggest bottleneck, but reliable input and output was also a problem. Memories were often about 1000 bytes (1K), and many enthusiasts
The big break came in <19xx> when the Apple computers appeared, with an integral, adequate keyboard, a color screen, and, a bit later (<19xx>), VISICALC, the first spreadsheet program. Use of the spreadsheet meant that people who were not hackers could use computers themselves. As is typical, the appearance of novel software was less noted than that of the hardware, but their interaction is crucial. New hardware capabilities inspire software developers, and new software capabilities open up new markets, reducing the cost and increasing the ubiquity of the hardware.
IBM's entry into the personal computer (PC) market in 19<82> placed a stamp of approval on micro-computer technology. Soon other software tools became available that did not require a programming mentality: databases, word-processing, and publishing software. The price of a simple PC was such that it could already be justified if it was used only as a terminal to mainframes, and as such it was welcomed by managers who underestimated the adaptability of personal computing. IBM's decision to make the PC an * open system, i.e., to enable and even encourage other vendors to produce compatible software and hardware extensions, led to a feeding frenzy, which became impossible for
anyone to control. Domestic and foreign makers of clones of the PC, sometimes
marginally skirting copyright and/or patent restrictions, flourished. Today the prime
vendor of PC operating systems, Microsoft, has a book value which exceeds that of IBM.
One can only hypothesize what would have happened if IBM had kept the system closed.
The example of XEROX, in Sect.\U\X\ALTO below, does not present an attractive
alternative. Even with all the competition, IBM's PC division, now called
Derivatives of the IBM PC comprise the largest segment of potential nodes on the
information highways, even if they are considered technically inferior to their larger
workstation brothers. Somewhat more elegant are the Macintosh computers from Apple,
but in terms of performance they differ little. Their purchase price is modest, say \dol2500.
Personal computers are ubiquitous, because the decision to buy one and the manner in which one uses it are matters of personal preference, and do not require institutional or corporate blessing. Maintenance and upgrading also become a personal responsibility and can waste valuable time. Mixing personal responsibility with use of commercial services can create an acceptable balance. Higher-level management costs are minimal. Enabling PCs to communicate is also low in cost, but if transmission has to go via phone lines then high-speed modems can add much cost.
When personal computers and their software are obtained by institutions in large quantities, then corporate policies start to intervene. If PCs are used on a network, some compatibility standards should be adhered to, but frequently computing management acts slowly, and * acquisition decisions do not keep up with technology. It took the U.S. Air Force, for instance, many years to standardize on a particular model of PC. When they did, the machine was nearly obsolete. In a rapidly changing world it is wise to standardize only the interfaces, and let users obtain the best equipment for their needs according to their budgets.
The essential difference between a personal computer and a workstation is the ability to manage multiple tasks in one system. Of course, many personal computers are becoming more like workstations and can then be rightfully advertised as such. The ability to handle multiple tasks requires first of all more resources, specifically memory to hold multiple programs, as well as memory to hold an operating system capable of switching tasks.
The user also gets involved. There have to be means to start a subtask, check on its
progress, use its results, and terminate tasks that are no longer needed. Confusion can
easily ensue, so that the multi-process support systems should also provide warnings if
something questionable is about to happen, and backup for recovery when it does.
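A small sketch, in the C and UNIX setting introduced later in this chapter, shows what starting a subtask and checking on its progress amounts to at the system level; the program and the command it runs are purely illustrative, not taken from any particular system.

    /* A minimal sketch (illustrative only): start a subtask, keep working,
       poll its progress, and pick up its result when it finishes.          */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t child = fork();                  /* start the subtask */
        if (child < 0) { perror("fork"); return 1; }

        if (child == 0) {                      /* the subtask itself */
            execlp("sort", "sort", "bigfile.txt", (char *)NULL);  /* hypothetical job */
            perror("execlp");                  /* reached only if exec fails */
            _exit(127);
        }

        int status = 0;                        /* the parent keeps working */
        while (waitpid(child, &status, WNOHANG) == 0) {
            printf("subtask %d still running; doing other work...\n", (int)child);
            sleep(1);
        }
        printf("subtask finished with status %d\n", WEXITSTATUS(status));
        return 0;
    }

Even this toy example shows why multiple tasks demand more resources: the parent, the child, and the operating system's bookkeeping must all be resident at once.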
Multi-processing was originally implemented on mainframes, when many users were
trying to share the same computing resources, but the environment in a workstation is
essentially different. While users sharing a large computer compete with each other for
resources, and the systems have to enforce fairness, a workstation is still an individual's
responsibility, and while cleanliness of services is important, fair allocation of resources is
not.
The software developed at PARC was also innovative. An outstanding example was * 'Smalltalk', a programming system and language that could treat windows, icons, and other constructs to be displayed on the screen as * objects, permitting them to be handled as holistic units, rather than as assemblies of character strings, integers, cross-reference pointers, and programmed methods to interpret those constructs. Programs in Smalltalk could reuse such objects devised by others, by simply invoking their defined methods, without needing to understand their internal construction [Bjorning].
XEROX management and their lawyers were wary about releasing these innovations. Even when some universities in 1977 received ALTOs as gifts from XEROX, crucial software, such as Smalltalk, was not provided, so that the machines were only suitable as
novel word-processors. The impressive capabilities of the concepts developed at PARC
were known only to insiders and no market for XEROX's products ensued. Only when
insiders left the confines of PARC and started their own businesses did the concepts enter
the open marketplace, and become the basis of the rapid evolution of the personal
computer and workstation markets.
Today, nearly all workstations come with versions of UNIX, through licenses provided by
their vendors. There are still two flavors of UNIX, namely those derived from the original
AT\&T UNIX (sometimes called System 5), and those derived from the UC Berkeley
UNIX. For the end-user the flavor matters little, but software packages are rarely portable.
With different packages come differences in higher level program management, and here
the differences are great and sometimes baffling.
Dominant in the workstation market today are SUN, Hewlett-Packard, and IBM, using
the
There is a free UNIX look-alike, LINUX, and a large body of free * GNU software provided via the Free Software Foundation (FSF). The FSF's principal architect, Richard Stallman, believes that software should be a free good, to be shared and improved by the community. Its label is its motto (GNU is Not UNIX), and a very capable, but user-hostile, editor (GNU * EMACS) is provided by the FSF and widely used by the hacker community. It is available for nearly every conceivable machine, simplifying life for those of us who access many types of machines in a given day.
Workstations provide a level of performance that can typically satisfy the most demanding user. Their responsiveness is certainly better than that of a shared mainframe. Their cost is still high for an individual, say, on the order of $10,000 today. As a business tool a workstation can easily make sense; that amount is perhaps 10\pct of the annual cost of a professional. When the workstations are on a * local network (LAN) then costly devices can be shared. Candidates for sharing are devices such as * color printers, * document scanners, or high-capacity * archival storage systems. Simple printers may be shared by neighbors.
However, the more sharing occurs, the larger the management and support costs become,
since individual responsibility is reduced.
At the simplest level individual participants need only a * terminal, at a cost of say $1000, and can then share the resources of the minicomputer. Such terminals are dumb, and transmit what is typed or presented without any internal processing. Early * dumb terminals were based on teletypewriters, used for telegraph communications; later, simple video screens became dominant. Today most users will use personal computers to access the minicomputer. Often they are connected using a LAN.
Such a computer is then referred to as a client, and the group computer is then called a server. The combination becomes an example of a client-server system, with a thin client.
More discussion on client-server systems will be found in the
Chapter on Mediators.
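A minimal sketch may make the thin-client notion concrete: the personal computer merely forwards a request over the LAN and displays the server's reply, leaving all real processing to the group computer. The server name, port, and request text below are hypothetical placeholders, not part of any actual system discussed here.

    /* A minimal thin client (illustrative only): forward one request to the
       group computer and display whatever comes back.                       */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints = {0}, *srv;
        hints.ai_socktype = SOCK_STREAM;        /* a plain TCP connection */

        if (getaddrinfo("labserver.example.edu", "7000", &hints, &srv) != 0) {
            fprintf(stderr, "cannot resolve server\n");
            return 1;
        }
        int s = socket(srv->ai_family, srv->ai_socktype, srv->ai_protocol);
        if (s < 0 || connect(s, srv->ai_addr, srv->ai_addrlen) < 0) {
            perror("connect");
            return 1;
        }

        const char *request = "LIST SAMPLES\n"; /* send the request ... */
        write(s, request, strlen(request));

        char buf[512];                          /* ... and just display the reply */
        ssize_t n;
        while ((n = read(s, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(s);
        freeaddrinfo(srv);
        return 0;
    }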
Most minicomputers provide the power of several workstations, but since users do not continuously require the capacity of a workstation, the shared mini-computer will satisfy the requirements of dozens of coworkers in a laboratory, a design studio, an engineering bureau, or an office most of the time. A minicomputer system, costing say $100,000, may be shared by 30 or more users, reducing the cost of computing hardware per individual. [VAX example] The modern minicomputer and its software have a dual origin: laboratory computing and time-sharing.
Its laboratory ancestor is the LINC computer, built by [Jerry Cox, Charlie Molnar, Lee
Huntley... ACME] at MIT in
Figure\labcomputer. A LINC computer in a laboratory [NLM report]
That such a small computing package could be effective surprised industry, and companies such as Digital Equipment Corporation (DEC) started building commercial derivatives, most without the real-time capabilities. Users started contributing and sharing software, greatly reducing support costs for the manufacturers. The low cost encouraged proliferation, and software experimentation. The power of the machines grew rapidly. At Bell Laboratories [Ritchie and Thompson] found the corporate mainframe computers unwieldy and designed a simple high-level language, * C, and wrote a * simple operating system (* UNIX) for their DEC PDP-11 mini-computers. Today, this combination is the most widely used software tool for system builders and the teaching of computer science.
To make more efficient use of a shared laboratory computer
Timesharing capability soon moved to the growing minicomputers, and became the dominant mode of computer interaction at universities. A few systems even combined timesharing with real-time data acquisition, providing easy access to complex technology for their users [ACME reference]. Timesharing also enabled remote use, initially just by having the teletypes connected over telephone lines. Soon timesharing software was also installed on mainframe computers, which already had * multi-programming software to better allocate the many programs they were handling. However, these systems did not have the flexible management (people, organization, and software) needed to deal with on-line users.
A shared resource, as represented by mini-computers and their successors, requires a staff, and the size of the staff must remain modest if benefits are to be achieved from sharing equipment. With a modest staff only a well-focused set of quality services can be maintained. Shared computers are only effective when all participants do related work, such as in a laboratory or its business equivalents, say, an accounting firm or a legal office. When a different type of service is needed, it is best to go out over the networks and obtain such a service remotely. For instance, engineering companies may benefit from purchasing services, such as those provided by the FAST system and its successors, described in the Chapter on Electronic Commerce. Keeping the users local and homogeneous
Upgrading of computing capabilities for their users is more complex for mini-computer managers than it is when dealing with workstations. Workstation-based users can simply update or purchase individual workstations. Changing a minicomputer system invariably affects all users, and for many of them the benefits will be minor. Today, the low cost of hardware relative to the cost of staff to keep systems functioning reduces the opportunities for effective use of timeshared minicomputers. We are likely to find a smaller proportion of minicomputers along our information highways. Most of them will be found where costly equipment is attached to them, while their users have indirect access from their workstations.
Users benefit from mainframe use if their needs are irregular: they only need to pay for
services consumed, and for any terminals and communication lines they own. Most
mainframe computers are being connected to the Internet, so that users can also
communicate remotely at a low initial cost, although incremental prices may be high,
especially in the daytime.
The HPCC has focused on large-scale MIMD computing for HPCS, as a reasonable
balance of performance potential and complexity. Its expectations are sketched in
Fig.\teraops. Much of the commercial world has focused on vector-processing, and older,
SIMD machines are being phased out. Rapid progress has occurred in workstations, so
that distributed computing is also quite attractive if good interconnections are available.
Storms of various types, such as thunderstorms, tornadoes, hurricanes or cyclones, and tropical storms,
To understand, and eventually predict, local severe weather, the sizes of the atmospheric units must be drastically reduced. More data are required to describe the atmosphere, and the land and sea under it, at a finer grain. Those data require disproportionately more computation, since the time intervals over which incremental changes are predicted and recorded must shrink as well. This means that reducing the linear size of an atmospheric unit by a factor of 10 can increase the demand on computation by a factor of $10^4 = 10,000$.
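Spelled out, one consistent reading of that factor is three factors of 10 from the finer grid in each spatial dimension, and a fourth factor of 10 from the correspondingly shorter time steps:
\[
10 \times 10 \times 10 \times 10 \;=\; 10^{4} \;=\; 10,000 .
\]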
As sensors and measurement capabilities increase and the capacity of networks makes it easy to ship detailed data to computing nodes on the network, the demand for HPCS in this challenge will become stronger. To achieve 6-hour predictions on 5 km grids may require 20 teraflops.
Not all genetic problems are expressed by faulty genes. Genes implicated in cancer, * oncogenes, seem to be identical to normal genes, but their ability to replicate is turned on when it should not be. Within the genetic strand are sequences which act as promoters or inhibitors of replication, by deforming the strand so that replication is controlled. The 3-dimensional (3-D) configuration determines whether other biological material can lock itself to the strand, similar to Velcro. Drugs can take the place of other material, and inhibit biological processes that are otherwise enabled by these attachments. Creating the 3-D models of DNA under various conditions is crucial to developing insight into the processes that control growth, and our life.
Genetic research oriented toward these problems requires rapid communication among researchers, to avoid overlap and encourage collaboration; access to research results, including amino-acid sequences that are too long to be reliably transcribed by hand; search routines to match new findings to sites in that long, variable, and incompletely known DNA strand; programs that can create and rotate the 3-D images for inspection; and programs that can search for candidate attachments. The latter tasks require immense computing power, as well as scientific progress to exploit that power.
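As a toy illustration of the matching task mentioned above, the sketch below scans a sequence for an exact occurrence of a short probe; the fragment and probe strings are hypothetical, and real genomic matching must tolerate errors and gaps, typically using dynamic-programming alignment and index structures rather than an exact scan.

    /* A toy matching routine (illustrative only): find the first exact
       occurrence of a short probe in a longer sequence.                */
    #include <stdio.h>
    #include <string.h>

    /* return the position of probe in strand, or -1 if it does not occur */
    long find_site(const char *strand, const char *probe)
    {
        const char *hit = strstr(strand, probe);
        return hit ? (long)(hit - strand) : -1;
    }

    int main(void)
    {
        const char *strand = "GATTACACCGGTTACGATCGGATCCA";  /* hypothetical fragment */
        const char *probe  = "GGATCC";                      /* hypothetical probe     */

        long pos = find_site(strand, probe);
        if (pos >= 0)
            printf("probe found at position %ld\n", pos);
        else
            printf("probe not found\n");
        return 0;
    }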
Programs on HPCS can simulate the airflow around an aircraft for all types of conditions. Again, the computational requirements are huge. The surface of the aircraft is segmented into millions of small areas; in each area the direction of flow, the pressure, and the temperature must be determined. Above each area are many cells, since the effect of the aircraft on airflow extends many meters beyond its surface. A supersonic shock wave will even reach the ground below. Electronic windtunnels are becoming essential to aircraft and spacecraft design, and their performance places a limit on innovation, since each change requires hundreds of hours of computation on even the most powerful supercomputers now available.
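For a rough sense of the scale involved, under purely illustrative assumptions (the text gives only 'millions' of surface areas and 'many' cells above each; the specific counts here are ours), suppose $10^6$ surface areas, $10^2$ cells above each, and about 5 flow quantities per cell:
\[
10^{6} \times 10^{2} \times 5 \;\approx\; 5 \times 10^{8}
\]
values must be updated at every time step, and a few thousand time steps per flight condition would already be consistent with runs of hundreds of hours.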
Under stress an aircraft is not rigid. When wings move up and down in reaction to the forces that the airflow exerts on them, their shape, and the angle they present to the flow, varies. Predicting the effect of the mutual interaction of airflow pressures and the physical deformation of an entire wing structure is still beyond the capabilities of today's computing, so that simplified computations are used instead. If the two types of computations can be combined, then structural innovations might become possible that would make new aircraft both lighter and safer.
A third area where shape interacts with performance is in the design of * stealth features, which make aircraft less observable to enemy radar. Now the shape has to be combined with the
Figure\stealth. A Lockheed F-117 Stealth Aircraft.
The most magnificent estate is not useful without furnishings, just as hardware is useless without software. It is hard to select furniture without knowing the house's architecture, and it is hard to specify a house without knowing what furniture it must contain. Experience helps us avoid egregious mistakes in home building, but we have much less experience in
Figure\architecture. Sketch for computer - home analogy.
The digital highways will be built by a mixture of phone, cable, and satellite technology.
While corporate origins will differ, it is likely that the information will move fairly
smoothly over the various highway types, with minimal delays at the * transshipment
points. For high data rates and dense concentrations of consumers, optical cables will
dominate. Satellite-based links can provide crucial backup during failures and
emergencies. Reimbursements will be allocated by aggregate use. Such mixed carriage
exists now for mail and railroad services, and the shipper is rarely aware of all the
companies that were involved in a shipment.
The types of computers found along the highways will remain varied, but we can expect
that workstations will dominate. These workstations will be of two origins. The majority
will be high-range PCs, and others will be the full range of multi-process workstations.
Secondary networks will provide access to smaller workstations and personal computers.
Where costly equipment is to be shared, minicomputers, as exemplified by the DEC VAX-series, may find a role. * Dumb terminals will be rare. Common services, such as databases, will be provided by high-power workstations, as well as by mainframe computers, but the proportion of mainframes will shrink. High-performance computers will provide computation services where workstations are inadequate. Mainframes will rarely be used for computation-intensive tasks, but can be effective where a central * control point is needed for shared resources. A specialization will develop in the service arena, because users will
switch to where the service is of the highest quality, and high quality service depends on
people, especially professional specialists in various domains.
Other sources of confusion are in ignoring maintenance needs, or in assigning maintenance to internal organizations that are not able to keep up with technology changes.
A serious problem relates
The development of computer systems provides lessons for government and commercial
funding. ..
UBI.History
The history of the HPCC and National Information Infrastructure (NII) initiatives is short. The initiatives derive directly from the advances in networking and high-speed computing of the 1980s. Since networks were discussed in Chapter INTERNET, we now primarily consider computing advances, and trace their origins.
UBI.History.personal
Personal computers had their origin in the hobby world. Integrated circuit technology, motivated by making large computers better, enabled the placing of all the electronic logic needed for small tasks on a single silicon chip. These chips were intended for use in calculators and controllers for complex devices. Their programs were devised by experts and the programming methods used were decidedly 'user-hostile'.
UBI.History.workstations
While personal computers try to support one task at a time, professionals were more demanding. Being unable to get a response from one's machine while printing is going on is intolerable outside of the home environment. When a phone call from a colleague or a patient arrives, one will want to be able to access relevant information, and then return to one's prior task. Arrival of email can happen at any time. Some tasks in themselves require multiple processes. A search through a database can be a subfunction of an accounting program.
ALTO
In the early seventies researchers at the Xerox Palo Alto Research Center
(PARC) developed the ALTO computer. While its hardware infrastructure was modest
and conventional, higher layers of software provided a number of crucial innovations.
They adopted the mouse, and used it to manage a desktop-like arrangement of subscreens, * windows. Such windows are now an inherent feature of personal computers and workstations, and Microsoft Windows may be the world's most popular software system. * Icons, to denote documents and programs, provide visual help to the users. The ALTOs
were assigned as personal workstations to PARC researchers, and were connected via the
original * Ethernet, and could share files and processing resources. The first demonstration
of a friendly * virus was carried out at PARC, as part of testing Ethernet capacity, by
having about a hundred ALTOs simulating active network interaction [Shoch]. XEROX'
capability in copying machines led to early experiments at PARC with xerographic and
laser printers; after all, if one separates the reading and printing parts of a copier one has
an image scanner and printer. When both are connected via communication lines one has a
facsimile copier (fax).
UNIX
The primary means of controlling workstations today is the UNIX operating system, or
one of its upgrades, such as the MACH system. The original UNIX system was devised
around 1970.
UBI.History.group-computers
The next larger size of computers is the * minicomputer and its derivatives. They are the low-end level of shared machines, and are now mainly used to support groups of users that collaborate closely. We find these computers often in laboratories, where some expensive equipment must be shared. Such equipment may be computer peripherals, such as scanners and high-resolution color printers, or specialized equipment, such as that found in clinical laboratories.
UBI.History.mainframes
In the 1960s large, multi-processing computers represented all
significant computing activities. As more alternatives at either side became available, the
term 'mainframe' was coined, presumably to indicate their central role. Mainframe
computers deal with a large variety of tasks: they handle * database * transactions and large * batch operations, serving local and remote users. The users come from all kinds of
departments, and need not cooperate with each other. A great deal of software, people,
and accounting is devoted to keeping operations fair. Prices are announced for use of
computing cycles, long-term storage, printing, etc. The prices differ for day and night-
time use, and are adjusted to encourage users to behave in ways that seem beneficial to the
users as a whole, and to objectives set by management.
UBI.History.hpcs
High Performance Computer Systems (HPCS) are designed to handle the most challenging computing tasks, as described in SectionINTERNET\F. Users of such * supercomputers value sharing less than being able to marshal massive computing power for a demanding user's problem. Parallel operation is the dominant theme in HPCS.
Figure\teraops. Computer System Performance Trends for Grand Challenge Problems (from [OSTP:92])
UBI.Functions
The function of the HPCC initiative is to support four major national challenges.
Its progress was summarized in [HPCC:94]. The NII broadens those objectives to
include the challenges listed here.
We will summarize the functions which the initiatives are to provide for
these challenges here; several of them have their own chapters assigned to them, so that
brevity here does not hurt.
UBI.Functions.grand challenges
The HPCC initiative was focused on a list of * Grand Challenges
[OSTP:89]. These were selected as examples where HPCS and NREN were well
justified. The program was defined by a Committee on Physical, Mathematical, and Engineering Sciences, and reflects the scientific outlook of the participants. Even though they were all government officials, the science emphasis is clear in the subset of challenges chosen as
examples in the HPCC publications [OSTP:92]. The program received its first specific
funding in the Fiscal Year 1991. Two additional examples introduced networking, as
presented in ChapterINTERNET.
1: Weather Forecasting
Severe weather events are still a challenge for super-computing.
Current weather prediction programs partition the world into fairly large units, and those
are adequate to provide a few days prediction for general atmospheric conditions. A
typical atmospheric unit today measures <50 km (30 miles) by 50 km>, and is <2000 m
high>. Prediction involves * simulating the interactions of these atmospheric units, the
earth and sea below, and the sun above. Because of this * coarse granularity, current
programs can only generate warnings of critical conditions in a general area, but not
actually predict the severity or paths of storms and the like.
2: Genomics
A person's inherited genetic makeup controls much of one's subsequent
health. In addition to diseases directly caused by genetic irregularities, the susceptibility to many other diseases seems to be genetic in origin. Since <195x>
3: Predicting New Semi-conductors
! not yet written
4: Pollutants
Air-borne pollutants affect plant life, and animal and human health. Pollutants can travel far, and cross any boundaries. In order to understand the causes of environmental damage, the flow of air, its ability to transport pollutants, its temperature as it affects the chemical reactions that transform pollutants, ... must be modeled.
5: Aerodynamics
The performance of a new airplane is largely determined by its external
shape. How the air flows around the fuselage, lifts the wings, and impinges on the tail
surfaces determines its speed, lifting capacity, and stability. While engineers can estimate
the performance of a design sufficiently to determine the general shape and size of airplane
6: Energy Conservation and Turbulent Combustion
! not yet written
7: Microsystems Design and Packaging
! not yet written
8: Earth's Biosphere
! not yet written
UBI.Functions.universal service
! not yet written
UBI.Technology
Computer systems, large or small, are built of similar architectural components.
UBI.Technology.cpu
\U\T\CPU If there is a single, * central processing unit (CPU), then all control emanates from it. On computers that handle multiple tasks, the processing unit has to switch its attention from task to task. Frequent task switching can occupy much time, but, as in our analogy, may be needed to prepare a tasty meal with many courses.
UBI.Technology.timesharing
! not yet written, available from DBD
UBI.Technology.parallel
! not yet written
UBI.Technology.displays
! not yet written, available from MIS
UBI.Technology.windows
! not yet written, available from DBD
UBI.Technology.mouse
! not yet written, ref sri mouse Early 1960
UBI.Alternatives
We will focus here on alternatives in the software area.
Alternatives in communication hardware were presented in
INTERNET.Alternatives
and we expect to continue to see a mix of
computers as outlined above. New ways of combining them into novel
architectures continue, and are presented in
the Chapter on MEDIATORS.
UBI.Alternatives.microsoft
While UNIX remains the primary operating system for workstations, the field is now being invaded by the larger personal computers. IBM PCs with the Microsoft NT operating system provide most of the functionality of a UNIX workstation, at approximately half the cost.
UBI.Conclusion
While this chapter focused on computers and their systems, it is clear that changes are greatest where computers and communications intersect. Vendors, such as Novell, feel the need to merge both software markets, and high-performance systems are always accessed through communication links. Another recurring theme is the interaction between innovative researchers and government support. It is hard to hypothesize
Acquisition
Acquisition of computer hardware is often a disaster. Computer purchases have traditionally been costly, so that many rules were set up to assure effective use of computers. Technology moves faster than the rules can be adapted. The folk that make those rules are senior people with much experience, but that experience may be from an older generation of equipment and, more seriously, pertain to another * management style.
Traditional acquisition centers on hardware specifications, which change rapidly, often within a purchasing cycle. Advances in computer systems are not even bound to a model year, as car models are. What cars must provide is transportation, and compatibility with existing roads. What computer hardware must provide is support for applications software, and compatibility with network standards. If purchasing agents were to specify these aspects, then software producers would be motivated to be effective and responsive, so that their products could be used on the most advanced and economical hardware.
Science versus Commerce
The HPCC program was motivated largely by scientific applications. Much of the remainder of the book will be concerned with applications in other fields. The applications shown were diverse, but much less innovation is evident in their software than in the hardware. The scientists working on the Grand Challenges have often been satisfied with * FORTRAN, a computer language for Formula Translation first developed in the 1950s. Although it has undergone many improvements, it does not support well the development of flexible services. The C language, as shown by its history, was intended for relatively small computers, and does not provide good tools for safe composition of large programs. Issues related to programming and programming languages are presented in
A significant adaptation is C++,
UBI.Bio
\U\B Noyce?
UBI.Lists
UBI.funding
Table\hpcc shows the planned allocation of government funds for HPCC in
fiscal year 1994 to the four established components and to one new component, as
introduced in Sect.\U\A.
Note: All amounts are in $ millions.
Agency   HPCS    NREN    ASTA    BRHR    total    IITA   notes          |
ARPA    151.8    60.8    58.7    71.7    343.0           validate final |
NSF      34.2    57.6   140.0    73.2    305.0    36.0                  |
DoE      10.9    16.8    75.1    21.0    123.8                          |
NASA     20.1    13.2    74.2     3.5    111.0    12.0                  |
NSA      22.7    11.2     7.6     0.2     41.7                          |
NIH       6.5     6.1    26.2     8.3     47.1    24.9                  |
NOAA      ---     1.6    10.5     0.3     12.4                          |
EPA       ---     0.7     9.6     1.6     11.9                          |
DoEd      ---     2.0     ---     ---      2.0                          |
NIST      0.3     1.2     0.6     ---      2.1    24.0                  |
total   246.2   171.6   402.4   179.8   1,000.    96.0                  |
Fin