10:30 a.m.
Dr. Takeo Kanade
U. A. and Helen Whitaker Professor of Computer Science
Director, Robotics Institute
Carnegie Mellon University
3Dome: Virtualizing Reality into a 3D Model
To be useful for training and rehearsal, a virtual environment must look real, act real and feel real. Instead of the conventional approach, which starts by manually building complex synthetic environments with CAD tools, I propose to let the real world represent itself, creating a representation that faithfully copies its real counterparts.
We have built a 3D virtualizing studio, named 3Dome, whose 5-m diameter space is covered by 51 cameras (at this moment). They constitute many stereo camera pairs from which a full three-dimensional description of the time-varying event inside the dome is recovered and turned into a CAD model. Once the 3D description is built, we can manipulate it, having it interact with other scene descriptions or generating views from a "soft" camera that is placed and moved completely arbitrarily in the space. Thus, we have a new visual medium, "virtualized reality" - not a virtual reality, but a reality that is virtualized.
Currently all of this computation is done off-line: the event is recorded onto video tape by 51 separate VCRs, the tapes are digitized separately, and the stereo computation is performed off-line. Doing all of this in real time presents quite an interesting challenge to supercomputing: video input requires a sustained bandwidth of 500 MB/sec into a computer system, and the computational need for image matching, though highly parallelizable, is estimated on the order of 100 to 150 GFLOPS per camera, or roughly 5 TFLOPS in total for our current setup.
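As a rough illustration of the arithmetic behind these figures, the following minimal sketch reproduces the estimates; the per-camera video format (640x480 pixels, 1 byte per pixel, 30 frames/s) is an assumption for illustration, not a figure from the abstract.

    # Back-of-envelope check of the 3Dome real-time requirements.
    # Assumed video format (not stated in the abstract): 640x480, 1 byte/pixel, 30 frames/s.
    CAMERAS = 51
    WIDTH, HEIGHT, BYTES_PER_PIXEL, FPS = 640, 480, 1, 30

    per_camera_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS / 1e6  # ~9.2 MB/s per camera
    total_mb = per_camera_mb * CAMERAS                            # ~470 MB/s, close to the quoted 500 MB/s

    # Image-matching cost per camera, as quoted in the abstract.
    low, high = 100, 150                                          # GFLOPS per camera
    total_tflops = (low * CAMERAS / 1000, high * CAMERAS / 1000)  # ~5.1 to 7.7 TFLOPS

    print(f"video input: ~{total_mb:.0f} MB/s")
    print(f"image matching: ~{total_tflops[0]:.1f}-{total_tflops[1]:.1f} TFLOPS")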
11:15 a.m.
Dr. Jeremiah Ostriker
Provost; Professor of Astronomy
Princeton University
Cosmology, A Supercomputing Grand Challenge
The study of cosmology, the origin, nature and future evolution of structure in the universe, has been totally transformed in the last decade, and computers have played a major role in the change. New theories have arisen which make the subject, formerly almost a branch of philosophy, into a quantitative science. Initial, semi-quantitative tests of these theories, using either data on galaxy distributions in the local universe or cosmic background radiation fluctuations reaching us from the distant universe, indicate rough agreement with the simplest predictions of the theories. But now that fully three-dimensional, time-dependent numerical simulations can be made on modern, parallel architecture computers, we can examine (using good physical modelling) the detailed quantitative predictions of the various theories that have been proposed to see which, if any, can produce an output consistent with the real world being revealed to us by the latest ground- and space-based instruments.
A decade ago, simulations could address 32^3 = 10^4.5 independent volume elements. Now 512^3 = 10^8.1 is the standard for hydro computations, with 1024^3 = 10^9.0 the current state of the art. Increasingly, adaptive or moving mesh techniques are being used to improve the resolution in the highest density regions. In purely dark-matter (gravitation only) calculations the ratio of volume to resolution element has reached 16,000^3 = 10^12.6. This has enabled detailed computations of phenomena from gravitational lensing to X-ray clusters to be made and compared with observations.
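The quoted exponents follow directly from the grid sizes; a minimal check:

    # Relate the grid sizes quoted above to their approximate powers of ten.
    from math import log10

    for n in (32, 512, 1024, 16000):
        cells = n ** 3
        print(f"{n}^3 = {cells:.3e} ~ 10^{log10(cells):.1f}")
    # Prints 10^4.5, 10^8.1, 10^9.0 and 10^12.6, matching the figures in the abstract.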
Using these tools we have been able to reduce to a small number the currently viable options for the correct cosmological theory.
1:30 p.m.
Dr. Albert Erisman
Director of Technology
Information and Support Services
The Boeing Company
Computing "Grand Challenge" in Airplane Design
We used to formulate "grand challenge" problems in computing: those we were ready to solve if only we had enough computing power. Airplane design was thought to be one of those problems. Experience from the 777 design shows the problem to be much tougher than this. Changing customer needs cause some rethinking of the overall objectives. Formulating mathematical models, getting data, and using good judgment are partners with solving a formulated problem using powerful computers. In this presentation we will illustrate these issues and identify some challenges for the future.
2:15 p.m.
Dr. C. J. Tan
Senior Manager, Parallel System Platforms
IBM T. J. Watson Research Center
Deep Blue: The IBM Chess Machine
Deep Blue, the new IBM Chess Machine, recently made history by becoming the first computer to beat the human World Chess Champion, Garry Kasparov, in a regulation game. Although Kasparov came back to win the 6-game match, Deep Blue demonstrated that the approach of combining special-purpose hardware with the IBM SP2 parallel processor can achieve previously unreachable levels of performance. In this talk we will describe the background of the Deep Blue machine, show video highlights of the match and discuss implications of the event on chess and technology.
3:30 p.m.
Frederick H. Hausheer, M.D.
Chairman & CEO, BioNumerik Pharmaceuticals, Inc.
Supercomputer Guided Pharmaceutical Discovery and Development: from Simulation to Patient
Pharmaceutical discovery and development involves creating and developing novel molecules which address large unmet needs in medicine. During the last 10 years we have used Cray supercomputers and our proprietary software to develop a mechanism-based approach to pharmaceutical discovery involving Grand Challenge simulations of combinatorial chemistry, targeting, delivery, metabolism, formulation and other areas important to pharmaceutical optimization. The best pharmaceuticals must not only be potent against a given disease, but also must be safe and deliverable to the target. Mechanism-based drug discovery focuses on numerically intensive simulations involving the optimization of a new drug's safety, targeting and delivery profile by the application of quantum and statistical mechanics and other methods. Our approach and technology have allowed us to rapidly discover and develop novel chemical entities aimed at potentially curative treatment of cancer and heart disease. Using a mechanism-based approach we have consistently completed our preclinical research and filed Investigational New Drug (IND) applications in less than 24 months; our results contrast sharply with the pharmaceutical industry average of greater than 5 years from the first synthesis of a novel chemical entity to IND filing. Mechanism-based pharmaceutical discovery is distinctly different from, and more efficient than, other pharmaceutical discovery research approaches, including screening, rational drug design and combinatorial chemistry, and is expected to provide the blockbuster drugs of the future.
1:30 p.m.
Donald A. B. Lindberg, MD
Director, National Library of Medicine
Information Technology and Healthcare
There have been a number of remarkable recent achievements in biomedical research and patient health care delivery. These include the development of excellent clinical applications, as well as infrastructural information resources such as the Visible Human, Unified Medical Language System, HL7 and DICOM imaging data standards, Internet Grateful Med, Entrez, and biotechnology inquiry tools. In all these applications, biomedicine has been a happy and grateful beneficiary of Internet and World Wide Web technology that was created for wholly different purposes. But these applications and tools alone are not enough.
It is unfortunate that most medical groups are still happily lodged in an old-fashioned Medicine Wagon, content to remain parked on the edge of the Electronic Highway. To sustain advanced achievements in medical science, and to develop even more powerful clinical interventions, we are greatly in need of improved information systems. We need wise investments in medical informatics to speed further scientific discovery, to help assure both quality and efficiency in patient care, and to encourage lifelong learning both by the medical profession and the public. Currently, telemedicine is the strategy holding the greatest potential for rapid advances in these areas.
Obstacles to optimal use of all these advances include the need for proper medical data availability, assurance of medical data confidentiality, and the inevitable intersection of these requirements in practical day-to-day procedures that are acceptable to all the parties concerned.
2:15 p.m.
Julian Rosenman, MD
University of North Carolina
Supercomputing in the Clinic
Soon after the discovery of the x-ray, scientists noted that some cancerous tumors regressed after exposure to radiation. Permanent tumor control, however, could only be achieved with very high radiation doses that damaged surrounding normal tissue. To minimize this damage, early clinicians experimented with multiple radiation beams that overlapped only over the tumor volume. By the 1930s, this complexity made pre-treatment planning a necessity. Since radiation portal design using external patient anatomic landmarks to locate the tumor had to be done manually, clinicians could consider only a limited number of alternatives. By the 1960s the increased speed of computerized dosimetry enabled the generation of multiple plans.
Computed tomography (CT) scanning has recently been used to build 3D patient models, enabling substantial and realistic "pre-treatment experimentation" with multiple candidate plans and more accurate tumor targeting. The current challenge is to understand and improve the underlying patient/tumor models. This requires image segmentation (dividing the data into logical and sensible parts), which today is performed by humans; computational solutions are needed. Additionally, a real understanding of complex 3D images requires high, preferably real-time, interactivity in order to rapidly manipulate the way data are displayed. These tasks are a challenge for today's fastest commercial workstations and should be considered supercomputing problems.
"Downloading" magnetic resonance scanning, nuclear medicine studies, 3D ultrasound, antibody scans, and nuclear magnetic resonance spectroscopy data onto the planning CT can produce a 3D model of the patient that shows the most information-rich features of all these diagnostics. Automating the spatial registration of the CT with the other 3D studies is an additional supercomputing problem.
3:30 p.m.
Dr. Tom DeFanti
Director, Electronic Visualization Laboratory
Associate Director for Virtual Environments, NCSA
University of Illinois at Chicago, EECS Department
The I-WAY: Lessons Learned and Bridges Burned
The I-WAY experiment at SC'95 was a transient phenomenon designed to find the flaws in the concept of a nationwide ATM network for research. The I-WAY team found them. We have since been working on the "Persistent I-WAY" with plans for solutions to those problems that aren't going away by themselves. This talk will present the case from the applications point of view, with attention to middleware and networking issues.
4:15 p.m.
Dr. George O. Strawn
NCRI Division Director
National Science Foundation
Developing the Second Generation Internet
This abstract was written at the end of July (1996). At that time many strands of high performance networking were changing on a weekly basis, and it was impossible to predict which would be the most interesting subjects four months hence. Some of these strands included: networking support for the Partnerships for Advanced Computational Infrastructure (PACI); development of high performance networking for the research and education community; interconnection of high performance federal research networks; interconnection of global research networks; and expanded public-private partnerships to accelerate the development of high performance networks. The strands that see major developments before November will likely be of interest to SC'96 attendees and will be reported in this talk.
10:00 a.m.
Fernbach Winner Presentation
Established in 1992, the Sid Fernbach Award honors Sidney Fernbach, one of the pioneers in the development and application of high performance computers for the solution of large computational problems. The award is given for "an outstanding contribution in the application of high performance computers using innovative approaches."
The 1996 winner of the Sid Fernbach Award is Dr. Gary A. Glatzmaier, a distinguished physicist in Geophysical Fluid Mechanics at Los Alamos National Laboratory. Dr. Glatzmaier is being recognized for using innovative computational numerical methods to perform the first realistic computer simulation of the Earth's geodynamo and its resultant time-dependent magnetic field.
11:00 a.m.
Bell Finalists Presentations
The Gordon Bell Award was established to reward practical use of parallel processors by giving a monetary prize for the best performance improvement in an application. The prize is often given to winners in several categories relating to hardware and software advancement. The three finalist teams competing for this year's Gordon Bell Prize are:
Simulation of the 3 Dimensional Cascade Flow with Numerical Wind Tunnel (NWT)
The NWT was upgraded to 280 GFLOPS of theoretical peak performance with the addition of 26 PEs to the original 140 PEs. On a CFD simulation of a jet engine compressor, we attained a performance of 111 GFLOPS using 160 PEs.
N-Body Simulation of Galaxy Formation on Grape-4 Special Purpose Computer
We report on recent N-body simulations of galaxy formation performed on the GRAPE-4 (GRAvity PipE 4) system, a special-purpose computer for astrophysical N-body simulations. We review the astrophysical motivation, the algorithm, the actual performance, and the price per performance. The performance obtained is 332 Gflops averaged over 185 hours for a simulation of galaxy formation with 786,400 particles. The price per performance obtained is 4,600 dollars per Gflops. The configuration used for the simulation consists of 1,269 pipeline processors and has a peak speed of 663 Gflops.
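Taken at face value, the quoted figures imply the following (a simple consistency check; the derived numbers are not stated in the abstract):

    # Consistency check of the quoted GRAPE-4 performance figures.
    sustained_gflops = 332          # averaged over 185 hours
    dollars_per_gflops = 4_600
    peak_gflops = 663
    pipelines = 1_269

    implied_cost = sustained_gflops * dollars_per_gflops  # ~ $1.53 million
    peak_per_pipeline = peak_gflops / pipelines           # ~0.52 Gflops per pipeline
    sustained_fraction = sustained_gflops / peak_gflops   # ~50% of peak sustained

    print(f"implied system cost: ${implied_cost:,}")
    print(f"peak per pipeline: {peak_per_pipeline:.2f} Gflops")
    print(f"sustained/peak: {sustained_fraction:.0%}")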
Electronic Structure of Materials Using Self-Interaction Corrected Density Functional Theory
We have developed a highly efficient electronic structure code for parallel computers using message passing. The algorithm takes advantage of the natural parallelism in quantum chemistry problems to obtain very high performance even on a large number of processors. Most of the terms which scale cubically with the number of atoms have been eliminated, allowing the treatment of very large systems. The code uses one of the most precise versions of Density Functional Theory, namely Self-Interaction Corrected Density Functional Theory. On a 6-processor Silicon Graphics Symmetric Multiprocessor based on the MIPS R8000 microprocessor, we obtain a performance of 6.3 Gflops per million dollars.
3:30 p.m.
Pittsburgh at Work
Thomas Gross
Carnegie Mellon University
Collaboration between computer scientists and application experts is necessary to realize the potential of current (and future) high-performance networks and parallel supercomputers. In this talk, we focus on two efforts of the Computing, Media and Communication Laboratory (CMCL) at Carnegie Mellon to illustrate how such collaboration can be realized in practice in a university setting. Both projects investigate "Grand Challenge" problems: an environmental simulation (jointly with researchers in the Mechanical Engineering Department) and the simulation of ground motion in response to earthquakes (jointly with the Department of Civil Engineering). These applications use (or have used) two networking testbeds at CMU (Gigabit Nectar and Credit Net) as well as computational resources at the Pittsburgh Supercomputing Center.
Robert D. Carlitz
Professor of Physics, The University of Pittsburgh
Executive Director, Information Renaissance
Current educational and community networking projects in Pittsburgh have provided a basis for new collaborations to support scalable and sustainable networking infrastructure for the region. The talk will describe these projects and the evolving framework for network planning and implementation.
David Drury
FORE Systems
Supercomputer users demand ever more bandwidth for their applications. In the past, this bandwidth has been provided by specialized solutions such as HIPPI, serial HIPPI and Fibre Channel. More recently, ATM has found increasing use in this market. With the flurry of announcements on Gigabit Ethernet, yet another technology presents itself as a solution to this sector's continuing appetite for bandwidth.
This talk will provide an overview of where each of these technologies is today and what future developments are likely to deliver.