"Global connectivity and digitized information, together with continuing improvements in performance, bandwidth, and costs, are driving fundamental changes in the market place. The emerging virtual enterprises are creating new challenges and opportunities for our field. This talk will discuss this transformation and some of its implications for us."
Fran Allen specializes in compilers, compiler optimization, programming languages and parallelism. Her early compiler work culminated in algorithms and technologies that are the basis for the theory of program optimization and are widely used throughout the industry. More recently she has led compiler and environment projects resulting in committed products for analyzing and transforming programs.
Dr. Allen is a member of the National Academy of Engineering and a fellow of the IEEE, the ACM and the American Academy of Arts and Sciences. She currently serves on the Computer Science and Telecommunications Board, the Computing Research Association (CRA) board and NSF's CISE Advisory Board.
Session Chair: C. Edward Oliver
Oak Ridge National Laboratory
Crisis, Innovation, Opportunity: Where Does HPCC Go From Here?
John Toole
Director, National Coordination Office for HPCC
The history of high performance computing and communications has been filled with crises, innovations and opportunities. A glimpse into the past reveals numerous critical periods that shaped where we are today, as well as the evolving role of Federal activities. Will the future be different?
In the early 1990s, for example, the Government faced a crisis in providing the high performance computing and communications required to better address Federal science and engineering agency mission needs. The Government responded with the Federal High Performance Computing and Communications Program and the passage of the High Performance Computing Act of 1991. Since then the Program's investments have led to innovative new technologies, including scalable parallel processing systems, high performance networking technologies, World Wide Web browsers, and a broad range of applications that use these technologies, such as weather forecasting, environmental modeling, and biomedical imaging. These technologies have become necessary tools of Government and business, and public acceptance of new networking technologies has been astonishingly rapid.
Once again the Federal government faces a critical need for even more advanced computing, information and communications technologies. The HPCC Program is evolving into a focused collection of new Computing, Information and Communications R&D programs to develop technologies needed throughout the Federal government and to maintain U.S. technological leadership and competitiveness in the global marketplace. Government-coordinated technology R&D, in partnership with universities and major high-technology corporations, can create fresh opportunities that lead the way to advancing the frontiers of Computing, Information and Communications.
Given the rapid pace of innovation and the accelerating needs of the Federal sector, the talk will describe a long-term strategy deeply rooted in the fundamentals of computing, information and communications, along with important lessons we can take from our proud history of success.
Session Chair: Bill Buzbee
National Center For Atmospheric Research
Operational Global Weather Forecasting
David Burridge
Director, European Centre for Medium-Range Weather Forecasts
Numerical weather prediction is an initial-value, boundary-condition problem in which the behavior of the atmosphere is to be predicted up to two weeks ahead. In this timeframe, the future state of the atmosphere at any point can be influenced by events at very distant geographical locations. This and other scientific and technical issues dictate that the numerical models employed to carry out predictions beyond a few days (medium-range forecasts) must be global in extent and must describe the atmosphere from the earth's surface to a height of 30 km.
A description will be given of the three main components of the ECMWF global prediction system.
The computational cost of each component is dominated by the cost of integrating global models with a wide range of resolutions. A description will be offered of the approach taken to develop a portable parallel model which has achieved high efficiencies on a wide variety of machines, including the Fujitsu VPP700 that was installed at ECMWF in July 1996.
Session Chair: Bill Buzbee
National Center For Atmospheric Research
Supercomputers: Agent Of Change Or Victim Of Change?
Erich Bloch
Distinguished Fellow, Council on Economic Competitiveness and Distinguished Visiting Professor, George Mason University
Obviously, the answer is both: supercomputers have been agents of change and are now victims of changing technologies, depending on the point in time considered.
The discussion will trace the history of supercomputing and its effect on science, engineering, defense and other areas of society. It will also elaborate on the reasons for the pre-eminence of the U.S. in this field and the effect of supercomputers on the computer sector. For a long time, supercomputers were the source of technology that trickled down to other branches of computers and data processors. Today, the inverse is true. The remarks will conclude with what is in store for this field and what the government and the private sector need to do to ensure important and continuous progress in this area.
Session Chair: Patricia Teller
New Mexico State University
The World Of Entertainment And Computers
Alvy Ray Smith
Microsoft Graphics Fellow
Alvy Ray Smith has been a leader in computer graphics for over 20 years. Within the entertainment business, he directed the first use of full computer graphics in a successful major motion picture, the "Genesis Demo" in "Star Trek II: The Wrath of Khan," while at Lucasfilm. He hired John Lasseter, a Disney-trained animator, and directed him in his first film, "The Adventures of Andre & Wally B." The team he formed for these pieces proceeded, under Lasseter as artistic director at Pixar, to create "Tin Toy," the first computer animation ever to win an Academy Award, and the first completely computer-generated film, "Toy Story," Disney's Christmas '95 release. Smith will discuss some of his concepts and successes and will treat the audience to excerpts from "Toy Story."
Session Chair: Joan Francioni
University Of Southwestern Louisiana
10:30
3DOME: Virtualizing Reality Into A 3D Model
Takeo Kanade
The Helen Whitaker Professor of Computer Science
Director, Robotics Institute, Carnegie Mellon University
To be useful for training and rehearsal, a virtual environment must look real, act real and feel real. Instead of conventional approaches which start by manually building complex synthetic environments using CAD tools, we propose to let the real world represent itself, creating a representation that faithfully copies its real counterparts.
We have built a 3D virtualizing studio, named 3Dome, whose 5-m diameter space is currently covered by 51 cameras. They constitute many stereo camera pairs from which a full three-dimensional description of the time-varying event inside the dome is recovered and turned into a CAD model. Once the 3D description is built, we can manipulate it, having it interact with other scene descriptions or generating views from a "soft" camera that can be placed and moved completely arbitrarily in the space. Thus, we have a new visual medium, "virtualized reality": not a virtual reality, but a reality that is virtualized.
Currently, all of this computation is done off-line: the event is recorded onto video tapes by 51 separate VCRs, the tapes are digitized separately, and the stereo computation is performed off-line. Doing all of this in real time presents an interesting challenge to supercomputing: video input requires a sustained bandwidth of 500 MB/sec into a computer system, and the computational need for image matching, though highly parallelizable, is estimated on the order of 100 to 150 GFLOPS per camera, or roughly 5 TFLOPS in total for our current setup.
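A rough back-of-envelope check of these figures (a minimal sketch; the 640x480 frame size, 8-bit pixels and 30 Hz rate are assumptions, while the 51-camera count and per-camera GFLOPS range come from the abstract):

    # Sanity check of the quoted I/O and compute estimates.
    cameras = 51
    width, height = 640, 480        # assumed frame resolution
    bytes_per_pixel = 1             # assumed 8-bit grayscale capture
    fps = 30                        # assumed frame rate

    per_camera_mb = width * height * bytes_per_pixel * fps / 1e6
    total_mb_per_sec = cameras * per_camera_mb
    print(f"aggregate video input: ~{total_mb_per_sec:.0f} MB/sec")         # ~470 MB/sec

    gflops_per_camera = (100, 150)                                          # from the abstract
    total_tflops = [g * cameras / 1000 for g in gflops_per_camera]
    print(f"aggregate matching load: ~{total_tflops[0]:.1f}-{total_tflops[1]:.1f} TFLOPS")  # ~5.1-7.7 TFLOPS

Under these assumptions the aggregate input rate is of the same order as the quoted 500 MB/sec, and the lower end of the matching estimate corresponds to the quoted 5 TFLOPS total.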
11:15
Cosmology, A Supercomputing Grand Challenge
Jeremiah Ostriker
Provost, Professor of Astronomy
Princeton University
The study of cosmology, the origin, nature and future evolution of structure in the universe, has been totally transformed in the last decade, and computers have played a major role in the change. New theories have arisen which make the subject, formerly almost a branch of philosophy, into a quantitative science. Initial, semi-quantitative tests of these theories, using either data on galaxy distributions in the local universe or cosmic background radiation fluctuations reaching us from the distant universe, indicate rough agreement with the simplest predictions of the theories. Now that fully three-dimensional, time-dependent numerical simulations can be run on modern, parallel-architecture computers, we can examine, with good physical modeling, the detailed quantitative predictions of the various proposed theories and see which, if any, can produce an output consistent with the real world being revealed to us by the latest ground- and space-based instruments.
Session Chair: Diane T. Rover
Michigan State University
1:30
Computing "Grand Challenge" In Airplane Design
Albert M. Erisman
Director of Technology, Information and Support Services, The Boeing Company
We used to formulate "grand challenge" problems in computing: those we were ready to solve if only we had enough computing power. Airplane design was thought to be one of those problems. Experience from the 777 design shows the problem to be much tougher than this. Changing customer needs cause some rethinking of the overall objectives. Formulating mathematical models, getting data and using good judgment are partners with solving a formulated problem on powerful computers. In this presentation we will illustrate these issues and identify some challenges for the future.
2:15
Deep Blue: The IBM Chess Machine
C. J. Tan
Sr. Manager, Parallel System Platforms, IBM T. J. Watson Research Center
Deep Blue, the new IBM Chess Machine, recently made history by becoming the first computer to beat the human World Chess Champion, Garry Kasparov, in a regulation game. Although Kasparov came back to win the six-game match, Deep Blue demonstrated that combining special-purpose hardware with the IBM SP2 parallel processor can achieve previously unreachable levels of performance. In this talk we will describe the background of the Deep Blue machine, show video highlights of the match and discuss the implications of the event for chess and technology.
Session Chair: Sally D. Haerer
National Center For Atmospheric Research
3:30
Supercomputer Guided Pharmaceutical Discovery And Development: From Simulation To Patient
Frederick Hausheer, M.D.
Chairman & Chief Executive Officer
BioNumerik Pharmaceuticals, Inc.
Pharmaceutical discovery and development involve creating and developing novel molecules that address unmet needs in medicine. During the last 10 years we have used Cray supercomputers and our proprietary software to develop a mechanism-based approach to pharmaceutical discovery involving Grand Challenge simulations of combinatorial chemistry, targeting, delivery, metabolism, formulation and other areas important to pharmaceutical optimization. The best pharmaceuticals must not only be potent against a given disease, but also must be safe and deliverable to the target. Mechanism-based drug discovery focuses on numerically intensive simulations that optimize the safety, targeting and delivery profiles of new drugs through the application of quantum and statistical mechanics and other methods. Our approach and technology have allowed us to rapidly discover and develop novel chemical entities aimed at potentially curative treatment of cancer and heart disease. Using a mechanism-based approach we have consistently completed our preclinical research and filed Investigational New Drug (IND) applications in less than 24 months; our results contrast sharply with the pharmaceutical industry average of more than five years from the first synthesis of a novel chemical entity to IND filing. Mechanism-based pharmaceutical discovery is distinctly different from, and more efficient than, other pharmaceutical discovery approaches, including screening, rational drug design and combinatorial chemistry, and is expected to provide the blockbuster drugs of the future.
4:15
Computing Challenges From The Credit Card Industry
Philip Lankford
Chase Cardmember Services, Risk Management
Data Warehouses, Data Marts, Data Malls, Data Mining. What is next? The computing needs of the major credit card issuers are exploding. This presentation explores how and why those needs are changing, the gaps between current computing technology and the business needs, and the current approach taken by Chase Cardmember Services.
Session Chair: Sally E. Howe
National Coordination Office For HPCC
1:30
Information Technology And Healthcare
Donald A. B. Lindberg, MD
Director, National Library of Medicine
There have been a number of remarkable recent achievements in biomedical research and patient health care delivery. These include the development of excellent clinical applications, as well as infrastructural information resources, such as the Visible Human, Unified Medical Language System, HL7 and DICOM imaging data standards, Internet Grateful Med, Entrez and biotechnology inquiry tools. In all these applications, biomedicine has been a happy and grateful beneficiary of Internet and World Wide Web technology that was created for wholly different purposes. But these applications and tools alone are not enough.
It is unfortunate that most medical groups are still happily lodged in an old-fashioned Medicine Wagon, content to remain parked on the edge of the Electronic Highway. To sustain advanced achievements in medical science, and to develop even more powerful clinical interventions, we are greatly in need of improved information systems. We need wise investments in medical informatics to speed further scientific discovery, to help assure both quality and efficiency in patient care, and to encourage lifelong learning both by the medical profession and the public. Currently, telemedicine is the strategy holding the greatest potential for rapid advances in these areas.
2:15
Supercomputing In The Clinic
Julian Rosenman, MD
University of North Carolina
Soon after the discovery of the X-ray, scientists noted that some cancerous tumors regressed after exposure to radiation. Permanent tumor control, however, could only be achieved with very high radiation doses that damaged surrounding normal tissue. To minimize this damage, early clinicians experimented with multiple radiation beams that overlapped only over the tumor volume. By the 1930s, this complexity made pre-treatment planning a necessity. Since radiation portal design using external patient anatomic landmarks to locate the tumor had to be done manually, clinicians could consider only a limited number of alternatives. By the 1960s the increased speed of computerized dosimetry enabled the generation of multiple plans.
Computed tomography (CT) scanning has recently been used to build 3D patient models, enabling substantial and realistic "pre-treatment experimentation" with multiple candidate plans and more accurate tumor targeting. The current challenge is to understand and improve the underlying patient/tumor models. This requires image segmentation (dividing the data into logical and sensible parts), which today is performed by humans; computational solutions are needed. Additionally, a real understanding of complex 3D images requires high, preferably real-time, interactivity in order to rapidly manipulate the way data are displayed. These tasks are a challenge for today's fastest commercial workstations and should be considered supercomputing problems.
"Downloading" magnetic resonance scanning, nuclear medicine studies, 3D ultrasound, antibody scans, and nuclear magnetic resonance spectroscopy data onto the planning CT can produce a 3D model of the patient that shows the most information-rich features of all these diagnostics. Automating the spatial registration of the CT with the other 3D studies is an additional supercomputing problem.
Session Chair: Patricia Teller
New Mexico State University
3:30
The I-Way: Lessons Learned And Bridges Burned
Thomas A. DeFanti
Director, Electronic Visualization Lab, University of Illinois at Chicago; Assoc. Director for Virtual Environments, NCSA
The I-WAY experiment at SC'95 was a transient phenomenon designed to find the flaws in the concept of a nationwide ATM network for research. The I-WAY team found them. We have since been working on the "Persistent I-WAY" with plans for solutions to those problems that aren't going away by themselves. This talk will present the case from the applications point of view with attention to middleware and networking issues.
4:15
Developing The Second Generation Internet
George O. Strawn
NCRI Division Director, National Science Foundation
This abstract was written at the end of July. At that time many strands of high performance networking were changing on a weekly basis and it was impossible to predict which would be the most interesting subjects four months hence. Some of these strands included: networking support for the partnerships for advanced computational infrastructure (PACI); development of high performance networking for the research and education community; interconnection of high performance federal research networks; interconnections of global research networks; and expanded public-private partnerships to accelerate the development of high performance networks. The strands that have major developments before November will likely be of interest to SC'96 attendees and will be reported in this talk.
Session Chair: Mario Barbacci
Carnegie Mellon University
President, IEEE Computer Society
10:00
Sid Fernbach Award Winner Presentation
Established in 1992, the Sid Fernbach Award honors Sidney Fernbach, one of the pioneers in the development and application of high performance computers for the solution of large computational problems. It is given for "an outstanding contribution in the application of high performance computers using innovative approaches."
The 1996 winner of the Sid Fernbach Award is Dr. Gary A. Glatzmaier, a distinguished physicist in Geophysical Fluid Mechanics at Los Alamos National Laboratory. Dr. Glatzmaier is being recognized for using innovative computational numerical methods to perform the first realistic computer simulation of the Earth's geodynamo and its resultant time-dependent magnetic field.
10:40
Gordon Bell Award Finalists Presentations
The Gordon Bell Award was established to reward practical use of parallel processors by giving a monetary prize for the best performance improvement in an application. The prize is often given to winners in several categories relating to hardware and software advancement. Following are the three finalist teams competing for this year's Gordon Bell Prize.
Simulation Of The Three-Dimensional Cascade Flow With Numerical Wind Tunnel (NWT)
Takashi Nakamura, Toshiyuki Iwamiya, Masahiro Yoshida, Yuichi Matsuo, and Masahiro Fukada, National Aerospace Laboratory
The NWT was upgraded to a theoretical peak performance of 280 GFLOPS by adding 26 PEs to the original 140. On a CFD simulation of a jet engine compressor, we attained a performance of 111 GFLOPS using 160 PEs.
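A quick derivation of the efficiency these numbers imply, assuming peak performance scales linearly with the number of PEs (an assumption; the abstract quotes only the totals):

    # Implied per-PE peak and efficiency for the NWT result above.
    peak_total_gflops = 280.0
    total_pes = 140 + 26                          # 166 PEs after the upgrade
    peak_per_pe = peak_total_gflops / total_pes   # ~1.69 GFLOPS per PE
    peak_160 = peak_per_pe * 160                  # ~270 GFLOPS peak for the 160 PEs used
    efficiency = 111.0 / peak_160                 # ~41% of peak on the compressor simulation
    print(f"~{peak_160:.0f} GFLOPS peak on 160 PEs, ~{efficiency:.0%} of peak sustained")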
N-Body Simulation Of Galaxy Formation On The Grape-4
Special Purpose Computer
Toshiyuki Fukushige and Junichiro Makino, University of Tokyo
We report on recent N-body simulations of galaxy formation performed on the GRAPE-4 (GRAvity PipE 4) system, a special-purpose computer for astrophysical N-body simulations. We review the astrophysical motivation, the algorithm, the actual performance and the price/performance. The performance obtained is 332 Gflops averaged over 185 hours for a simulation of galaxy formation with 786,400 particles. The price/performance obtained is 4,600 dollars per Gflops. The configuration used for the simulation consists of 1,269 pipeline processors and has a peak speed of 663 Gflops.
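The quoted figures can be combined as follows (a sketch that assumes the price/performance number is defined as total system price divided by sustained Gflops; the abstract does not state the total price directly):

    # Figures quoted in the GRAPE-4 abstract above.
    sustained_gflops = 332.0
    peak_gflops = 663.0
    pipelines = 1269
    price_per_gflops = 4600.0                                   # dollars per sustained Gflops

    implied_system_price = price_per_gflops * sustained_gflops  # ~$1.53 million (assumed definition)
    efficiency = sustained_gflops / peak_gflops                 # ~50% of peak, averaged over 185 hours
    per_pipeline_peak = peak_gflops / pipelines                 # ~0.52 Gflops per pipeline processor
    print(f"implied price ~${implied_system_price/1e6:.2f}M, ~{efficiency:.0%} of peak")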
Electronic Structure Of Materials Using Self-Interaction Corrected Density Functional Theory
Adolfy Hoisie, Cornell Theory Center; Stefan Goedecker and Jurg Hutter, Max Planck Institute for Solid State Research
We have developed a highly efficient electronic structure code for parallel computers using message passing. The algorithm takes advantage of the natural parallelism in quantum chemistry problems to obtain very high performance even on a large number of processors. Most of the terms that scale cubically with the number of atoms have been eliminated, allowing the treatment of very large systems. The code uses one of the most precise versions of Density Functional Theory, namely Self-Interaction Corrected Density Functional Theory. On a 6-processor Silicon Graphics symmetric multiprocessor based on the MIPS R8000 microprocessor, we obtain a performance of 6.3 Gflops per million dollars.
Session Chair: Bill Buzbee
National Center For Atmospheric Research
1:30
Awards Session
The winners of the Best Student Paper, Sid Fernbach Award, Gordon Bell Award and the High Performance Computing Challenge will be presented in this session.
(Following the presentation of awards will be a plenary talk by Erich Bloch.)
The Best Student Paper is determined by a panel of program committee members who read each of the nominated papers and attend their presentations. The students, paper titles and sessions of this year's nominees follow.
Ananth Grama
Parallel Hierarchical Solvers and Preconditioners for Boundary Element Methods (see Algorithms II Session, Wednesday, 3:30 p.m.)
Anna M. del Corral
Reaching the Peak Throughput in Complex Memory Systems of Multivector Processors (see Networking & Architecture Session, Wednesday, 10:00 a.m.)
Subodh Kumar
Scalable Algorithms For Interactive Visualization of Curved Surfaces (see Visualization & Education Session, Tuesday, 1:30 p.m.)
Constantinos Evangelinos
Communication Patterns and Models in Prism: A Spectral Element-Fourier Parallel Navier-Stokes Solver (see Performance II Session, Wednesday, 10:00 a.m.)
Induprakas Kodukula
Transformations for Imperfectly Nested Loops (see Compiler Analysis Session, Tuesday, 1:30 p.m.)
Lei Wang
Performance Evaluation of Code Generation Algorithms for High Performance Fortran (see Performance I Session, Tuesday, 10:30 a.m.)
Ying Chen
Performance Modeling for the Panda Array I/O Library (see Data Mining & Modeling Session, Thursday, 10:00 a.m.)
Session Chair: Ralph Roskies
Pittsburgh Supercomputing Center
3:30
Pittsburgh At Work
Networking And Supercomputing For Science And Engineering At Carnegie Mellon University
Thomas Gross
Carnegie Mellon University
Collaboration between computer scientists and application experts is necessary to realize the potential of current (and future) high-performance networks and parallel supercomputers. In this talk, we focus on two efforts of the Computing, Media and Communication Laboratory (CMCL) at Carnegie Mellon to illustrate how such collaboration can be realized in practice in a university setting. Both projects investigate a "Grand Challenge" problem: an environmental simulation (jointly with researchers of the Mechanical Engineering Department) and the simulation of ground motion in response to earthquakes (jointly with the Department of Civil Engineering). These applications use (or have used) two networking testbeds at CMU (Gigabit Nectar and Credit Net) as well as computational resources at the Pittsburgh Supercomputing Center.
Creating An Information Renaissance For Pittsburgh And Allegheny County
Robert D. Carlitz
Professor of Physics, The University of Pittsburgh; Executive Director, Information Renaissance
Current educational and community networking projects in Pittsburgh have provided a basis for new collaborations to support scalable and sustainable networking infrastructure for the region. The talk will describe these projects and the evolving framework for network planning and implementation.
Developments In High Performance Networking--A Comparison Of The Options
David Drury
FORE Systems
Supercomputer users demand ever more bandwidth for their applications. In the past, this bandwidth has been provided by specialized solutions such as HIPPI, serial HIPPI and Fibre Channel. More recently, ATM has found increasing use in this market. With the flurry of announcements around Gigabit Ethernet, yet another technology presents itself as a solution to this sector's continuing appetite for bandwidth.
This talk will provide an overview of where each of these technologies is today and what future developments are likely to deliver.