KEYNOTE ADDRESS, PLENARY SPEAKERS, INVITED SPEAKERS, PANELS/ROUNDTABLES



KEYNOTE

TUESDAY 8:30 A.M. - 10:00 A.M.

Scaling Up
Dr. Frances Allen
IBM Fellow, IBM T.J. Watson Research Center

Fran Allen is an IBM Fellow at IBM's T.J. Watson Research Center and was the 1995 President of the IBM Academy of Technology. Dr. Allen specializes in compilers, compiler optimization, programming languages and parallelism. Her early compiler work culminated in algorithms and technologies that are the basis for the theory of program optimization and are widely used throughout the industry. More recently she has led compiler and environment projects resulting in committed products for analyzing and transforming programs, particularly parallel systems.

Dr. Allen is a member of the National Academy of Engineering and a fellow of the IEEE, the Association for Computing Machinery (ACM) and the American Academy of Arts and Sciences. She is currently on the Computer Science and Telecommunications Board, the Computing Research Association (CRA) board and NSF's CISE Advisory Board.

Plenary Speakers

WEDNESDAY 	8:30 A.M. - 9:30 A.M.

Crisis, Innovation, Opportunity: Where Does HPCC Go From Here?

John Toole, Director, National Coordination Office for HPCC

The history of high performance computing and communications has been filled with crises, innovations, and opportunities. A glimpse into the past reveals numerous critical periods that shaped where we are today, as well as the evolving role of Federal activities. Will the future be different?

In the early 1990s, for example, the Government faced a crisis in providing the high performance computing and communications required to better address Federal science and engineering agency mission needs. The Government responded with the Federal High Performance Computing and Communications Program and the passage of the High Performance Computing Act of 1991. Since then the Program's investments have led to innovative new technologies, including scalable parallel processing systems, high performance networking technologies, World Wide Web browsers, and a broad range of applications that use these technologies, such as weather forecasting, environmental modeling, and biomedical imaging. These technologies have become necessary tools of Government and business, and public acceptance of new networking technologies has been astonishingly rapid.

Once again the Federal government faces a critical need for even more advanced computing, information and communications technologies. The HPCC Program is evolving into a focused collection of new CIC R&D programs to develop technologies needed throughout the Federal government and to maintain U.S. technological leadership and competitiveness in the global marketplace. Government-coordinated technology R&D, in partnership with universities and major high-technology corporations, can create fresh opportunities that lead the way to advancing the frontiers of Computing, Information, and Communications.

Given the rapid pace of innovation and the accelerating needs of the Federal sector, this talk will describe a long-term strategy deeply rooted in the fundamentals of computing, information and communications, along with important lessons we can take from our proud history of success.

THURSDAY	8:30 A.M. - 9:30 A.M.

Operational Global Weather Forecasting

David Burridge, Director, European Centre for Medium-Range Weather Forecasts (ECMWF)

Numerical weather prediction is an initial value, boundary condition problem in which the behavior of the atmosphere is to be predicted up to two weeks ahead. In this timeframe, the future state of the atmosphere at any point can be influenced by events at very distant geographical locations. This and other scientific and technical issues dictate that the numerical models employed to carry out predictions beyond a few days (medium-range forecasts) must be global in extent and must describe the atmosphere from the earth's surface to a height of 30 km.

A description will be given of the three main components of the ECMWF global prediction system.

The computational cost of each component is dominated by the cost of integrating global models with a wide range of resolutions. A description will be offered of the approach taken to develop a portable parallel model which has achieved high efficiencies on a wide variety of machines, including the Fujitsu VPP700 that was installed at ECMWF in July 1996.

THURSDAY 1:30 P.M.-3:00 P.M.

Title TBA
Erich Bloch, Distinguished Fellow, Council on Competitiveness
Former Director of NSF
Former Director of Research, IBM

(This talk will occur after the awards presentation. See page 16 for details.)

FRIDAY 8:30 A.M. - 10:00 A.M.

The World of Entertainment and Computers
Alvy Ray Smith, Microsoft Graphics Fellow

Alvy Ray Smith has been a leader in computer graphics for over 20 years. Within the entertainment business, while at Lucasfilm he directed the first use of full computer graphics in a successful major motion picture: the "Genesis Demo" in "Star Trek II: The Wrath of Khan." He hired John Lasseter, a Disney-trained animator, and directed him in his first film, "The Adventures of Andre & Wally B." The team he formed for these pieces proceeded, under Lasseter as artistic director at Pixar, to create "Tin Toy," the first computer animation ever to win an Academy Award, and the first completely computer-generated feature film, "Toy Story," Disney's Christmas '95 release. Smith will discuss some of his concepts and successes and will treat the audience to excerpts from "Toy Story."

INVITED SPEAKERS

TUESDAY 10:30 A.M. - 12 P.M. SESSION

10:30 	Takeo Kanade
       	The Helen Whitaker Professor of Computer Science and Director, Robotics Institute
       	Carnegie Mellon University
3DOME: Virtualizing Reality into a 3D Model
11:15	Jeremiah Ostriker
	Provost; Professor of Astronomy, Princeton University
Cosmology, A Supercomputing Grand Challenge
Abstract not available at this printing.

TUESDAY	1:30 P.M. - 3:00 P.M. SESSION

1:30	Albert M. Erisman
	Director of Technology, Information and Support Services, The Boeing Company
Computing "Grand Challenge" in Airplane Design
We used to formulate "grand challenge" problems in computing: those we were ready to solve if only we had enough computing power. Airplane design was thought to be one of those problems. Experience from the 777 design shows the problem to be much tougher than this. Changing customer needs cause some rethinking of the overall objectives. Formulating mathematical models, getting data and using good judgment are partners with solving a formulated problem using powerful computers. In this presentation we will illustrate these issues and identify some challenges for the future.
2:15	C. J. Tan
	Sr. Manager, Parallel System Platforms
      	IBM T. J. Watson Research Center
Deep Blue:  The IBM Chess Machine
Deep Blue, the new IBM Chess Machine, recently made history by becoming the first computer to beat the human World Chess Champion, Garry Kasparov, in a regulation game. Although Kasparov came back to win the 6-game match, Deep Blue demonstrated that the approach of combining special-purpose hardware together with the IBM SP2 parallel processor can achieve previously unreachable levels of performance. In this talk we will describe the background of the Deep Blue machine, show video highlights of the match and discuss implications of the event on chess and technology.
TUESDAY	3:30 P.M.  - 5:00 P.M. SESSION
3:30	to be announced
4:15	to be announced
WEDNESDAY         1:30 P.M. - 3:00 P.M. SESSION 
1:30	Donald A. B. Lindberg, MD
       	Director, National Library of Medicine
Information Technology and Healthcare
There have been a number of remarkable recent achievements in biomedical research and patient health care delivery. These include the development of excellent clinical applications, as well as infrastructural information resources such as the Visible Human, Unified Medical Language System, HL7 and DICOM imaging data standards, Internet Grateful Med, Entrez and biotechnology inquiry tools. In all these applications, biomedicine has been a happy and grateful beneficiary of Internet and World Wide Web technology that was created for wholly different purposes. But these applications and tools alone are not enough. It is unfortunate that most medical groups are still happily lodged in an old-fashioned Medicine Wagon, content to remain parked on the edge of the Electronic Highway. To sustain advanced achievements in medical science, and to develop even more powerful clinical interventions, we are greatly in need of improved information systems. We need wise investments in medical informatics to speed further scientific discovery, to help assure both quality and efficiency in patient care, and to encourage lifelong learning both by the medical profession and the public. Currently, telemedicine is the strategy holding the greatest potential for rapid advances in these areas.
2:15	Julian Rosenman, MD
       	University of North Carolina
Supercomputing in the Clinic
Soon after the discovery of the x-ray, scientists noted that some cancerous tumors regressed after exposure to radiation. Permanent tumor control, however, could only be achieved with very high radiation doses that damaged surrounding normal tissue. To minimize this, early clinicians experimented with multiple radiation beams that overlapped only over the tumor volume. By the 1930s, this complexity made pre-treatment planning a necessity. Since radiation portal design using external patient anatomic landmarks to locate the tumor had to be done manually, clinicians could consider only a limited number of alternatives. By the 1960s the increased speed of computerized dosimetry enabled the generation of multiple plans. Computed tomography (CT) scanning has recently been used to build 3D patient models, enabling substantial and realistic "pre-treatment experimentation" with multiple candidate plans and more accurate tumor targeting. The current challenge is to understand and improve the underlying patient/tumor models. This requires image segmentation (dividing the data into logical and sensible parts), which today is performed by humans; computational solutions are needed. Additionally, a real understanding of complex 3D images requires high, preferably real-time, interactivity in order to rapidly manipulate the way data are displayed. These tasks are a challenge for today's fastest commercial workstations and should be considered supercomputing problems. "Downloading" magnetic resonance scanning, nuclear medicine studies, 3D ultrasound, antibody scans, and nuclear magnetic resonance spectroscopy data onto the planning CT can produce a 3D model of the patient that shows the most information-rich features of all these diagnostics. Automating the spatial registration of the CT with the other 3D studies is an additional supercomputing problem.
WEDNESDAY         3:30 P.M. - 5:00 P.M. SESSION
3:30  	Thomas A. DeFanti
	Director, Electronic Visualization Lab, University of  Illinois at Chicago; 
	Assoc. Director for Virtual Environments, NCSA
The I-WAY: Lessons Learned and Bridges Burned
The I-WAY experiment at SC'95 was a transient phenomenon designed to find the flaws in the concept of a nationwide ATM network for research. The I-WAY team found them. We have since been working on the "Persistent I-WAY" with plans for solutions to those problems that aren't going away by themselves. This talk will present the case from the applications point of view with attention to middleware and networking issues.
4:15 	George O. Strawn 
      	NCRI Division Director, National Science Foundation   
Developing the Second Generation Internet
This abstract was written at the end of July. At that time many strands of high performance networking were changing on a weekly basis, and it was impossible to predict which would be the most interesting subjects four months hence. Some of these strands included: networking support for the Partnerships for Advanced Computational Infrastructure (PACI); development of high performance networking for the research and education community; interconnection of high performance federal research networks; interconnection of global research networks; and expanded public-private partnerships to accelerate the development of high performance networks. The strands that see major developments before November will likely be of interest to SC'96 attendees and will be reported in this talk.
THURSDAY	     10:00 A.M. - 12:00 P.M. SESSION
10:00	Sid Fernbach Award Presentation
Established in 1992, the Sid Fernbach Award honors Sidney Fernbach, one of the pioneers in the development and application of high performance computers for the solution of large computational problems. It is given for "an outstanding contribution in the application of high performance computers using innovative approaches." The 1996 winner of the Sid Fernbach Award is Dr. Gary A. Glatzmaier, a distinguished physicist in Geophysical Fluid Mechanics at Los Alamos National Laboratory. Dr. Glatzmaier is being recognized for using innovative computational numerical methods to perform the first realistic computer simulation of the Earth's geodynamo and its resultant time-dependent magnetic field.
11:00	Gordon Bell Award Finalists Presentations 
The Gordon Bell Award was established to reward practical use of parallel processors by giving a monetary prize for the best performance improvement in an application. The prize is often given to winners in several categories relating to hardware and software advancement. The three finalist teams competing for this year's Gordon Bell Prize are:

THURSDAY	1:30 P.M.-3:00 P.M.
1:30 	Awards Session
The winners of the Best Student Paper award, the Sid Fernbach Award, the Gordon Bell Award and the High Performance Computing Challenge will be presented in this session.

The Best Student Paper is determined by a panel of program committee members who read and attend the presentations of each of the nominated papers. The students, paper titles, and sessions of this year's nominees are as follows:

Following the presentation of awards, Erich Bloch will present a plenary talk. Title and abstract of talk will be provided in the Final Program.

THURSDAY	       3:30 P.M.  - 5:00 P.M. SESSION
3:30	Pittsburgh at Work
	Thomas Gross, Carnegie Mellon University
	Bob Carlitz, The University of Pittsburgh
	David Drury, FORE Systems
Abstract not available at this printing.

PANELS & ROUNDTABLES

TUESDAY 10:30 A.M.-NOON

Accelerated Strategic Computing Initiative
Moderator:  Alexander Larzelere, Department of Energy
An overview of the Accelerated Strategic Computing Initiative (ASCI) will be given by program managers in the Department of Energy, highlighting how we got to this point in the program, results to date and future plans. Technical presentations on parts of the nuclear stockpile stewardship program will be presented by principal investigators from Sandia, Los Alamos and Lawrence Livermore National Laboratories.
TUESDAY	1:30 P.M. - 3:00 P.M.
Supercomputing Applications in Medicine
Moderator: John Gilbertson, The University of Pittsburgh
Computing systems have become central to medical care and research, and their success has created a demand for systems with ever increasing capabilities. Some applications are reaching computing power, bandwidth and storage requirements associated with supercomputers. The multimedia electronic patient record is such an application, especially at major medical centers where large scale image storage is becoming common. At the University of Pittsburgh Medical Center, a supercomputer-like machine (MARS - Medical Archival System) is now used for this purpose. In addition, current high-end workstations based on the Alpha and other leading-edge microprocessors now have the compute power and memory bandwidth of supercomputers 10 years ago. Assuming this trend continues, we should ask what medical applications the supercomputer community should be developing today that may run on desktop machines in the next decade.
TUESDAY	3:30 P.M. - 5:00 P.M.
Ten Years of K-12 High Performance Computing and Communications:  What have we learned?
What are we building for the future? 
Moderator: Wally Feurzeig, BBN
Programs such as Adventures in Supercomputing (AiS), the Alabama Supercomputing Program to Inspire computational Research in Education (ASPIRE), Earth Vision, the National Education Supercomputer Program (NESP), the PSC High School Initiative, SuperQuest and others have been offering in-depth training and experience with computational science tools and methods to teachers for a decade. This roundtable will offer a forum for those involved to discuss the successes and failures of these efforts and to communicate needs and goals for future programs.
WEDNESDAY 10:00 A.M.-NOON
DOD Modernization Program:  Parallel Software Solving Government Applications
Moderator:  Keith Bromley, NOSC
Presentations will be provided by developers within the Federal Government (e.g., DoD, DoE, NOAA, NASA, NIH) on their experiences in developing scalable parallel application software to solve critical challenges in the areas of computational fluid dynamics, computational structural mechanics, environmental quality modeling and computational electromagnetics. These talks will address the technical challenges of the efforts, the programming methods applied to solve them and the lessons learned in the research so far. After the initial stage-setting overview speakers, the presentations will be given by the working team leaders who are developing the codes.
Computation and Competitiveness-High Performance Computing and Industrial Leadership
Moderator:  Peter R. Bridenbaugh, Alcoa
The effective use of knowledge lies at the heart of industrial leadership. This panel will discuss the role of high-performance computing in creating competitive advantage in the aerospace, automotive, materials and energy industries. The panelists will also offer their view of the future for HPC and their requirements of the HPC community.
WEDNESDAY 1:30 P.M. - 3:00 P.M.
Center Directors Roundtable
Moderator: Andy White, LANL, in coordination with Ken Kliewer, ORNL
This session will include topics on:
   •  transformation of centers
   •  effects of rapid changes in industry
   •  ever-increasing storage capabilities
   •  practical management issues
WEDNESDAY 3:30 P.M. - 5:00 P.M.
Archival Storage Systems Experience for High Speed Computing
Moderator: Alan Powers, NASA Ames
Representatives of four sites, each running a different archiving solution, will form the panel. Each panel member will present information about their archive solution and usage statistics. They will discuss their perspective about the archive solution's features, faults, and future enhancements. Each panel member will provide their software and hardware solutions and costs for three proposed archive requirements (5 TB, 125 TB, and 1000 TB). A lengthy question and answer session will follow the discussions.
THURSDAY	10:00 A.M. - 12:00 P.M.
Opportunities and Barriers to Petaflops Computing
Moderator: Paul Messina, CalTech
Petaflops computing has been explored through a series of federally sponsored, community-led forums. Although the feasibility and application of petaflops systems within the next twenty years are considered a distinct possibility, there are highly conflicting views concerning the means by which this is to be achieved. Such issues as architectural capabilities, system software, user interfaces, exotic technologies, the applications that will benefit, an economic business model in support of industry, and research and development paths are all considered controversial. Also, diverse perspectives are sometimes in direct conflict. For example, one approach dictates the use of commercial off-the-shelf components as the only practical approach to funding the necessary hardware and software technology. Another approach considers alternative structures of processing logic and memory on a single chip for more efficient computing. A third approach proposes to exploit exotic technologies in hybrid organizations, including superconducting logic, optical communications and storage, and semiconductor memories. At the same time, it is quite possible that the first petaflops computer will be a special-purpose device good for only a single class of applications but available in just a few years. Other approaches have been considered as well.

The panel will present an array of such views and invite the audience to contribute their own ideas on possible paths to petaflops computing.

THURSDAY	3:30 P.M. - 5:00 P.M.
Scalable I/O Initiative Roundtable on Portable Programming Interfaces for Parallel Filesystems
Moderator: Jim Zelenka, Carnegie Mellon University
The Scalable I/O (SIO) Initiative intends to release for public review a draft parallel filesystem API suite no later than six weeks before this conference. The goal of this suite is to define a common set of powerful interfaces for application, compiler and toolkit programmers to use for high-performance I/O on a variety of parallel machine architectures. In this roundtable, the SIO Initiative would like to collect, interpret and discuss the community's responses to, and suggestions for, the proposed API suite. Broad-based comments are sought, but we will be especially interested in the programming experiences of I/O-intensive applications, out-of-core compilers/toolkits and parallel file system implementations. This draft will be available no later than October 1, 1996 at: http://www.cs.cmu.edu/Web/Groups/PDL/SIO/SC96.html.
FRIDAY	10:30 A.M. - 12:00 P.M.
Impacts of the Telecommunications Act of 1996 on the HPCC Community
Moderator: Dave Farber, The University of Pennsylvania
The Telecommunications Act of 1996 has made major changes in the rules of engagement for the key stakeholders in the telecommunications field. Key FCC rulings on Interconnection Rules and Universal Service are expected in August and November, respectively. (An Access Reform ruling is expected in January.) The panelists will be drawn from the local, long distance, cable, metropolitan, cellular, FCC and internet communities. A brief summary by each panelist of the key impacts already seen will lead to a lively debate.

TECHNICAL PAPERS

TUESDAY, NOVEMBER 19

Session 1: 10:30 a.m.-Noon
1A  Biology Applications
Parallel Hierarchical Molecular Structure Estimation
Cheng Che Chen and Russ B. Altman, Stanford University;  Jaswinder Pal Singh, Princeton University
A DATA-Parallel Implementation of O(N) Hierarchical N-body Methods
Yu Hu, Harvard University;  S. Lennart Johnsson, University of Houston and Harvard University
The Design of a Portable Scientific Tool: A Case Study Using SnB
Steven M. Gallo and Russ Miller, State University of New York at Buffalo; Charles M. Weeks, Hauptman-Woodward Medical Research Institute
1B  Performance I
Runtime Performance of Parallel Array Assignment: An Empirical Study
Siddhartha Chatterjee and Lei Wang, The University of North Carolina at Chapel Hill
James M. Stichnoth, Carnegie Mellon University
ScaLAPACK: A Portable Linear Algebra Library for Distributed Memory Computers - Design Issues and Performance
Jack Dongarra, Laura Blackford, A. Cleary, S. Hammarling, University of Tennessee, Knoxville; Jaeyoung Choi, Soongsil University, Seoul, Korea; J. Demmel, I. Dhillon, University of California, Berkeley; G. Henry, Intel SSPD
Network Performance Modeling for PVM Clusters
Mark J. Clement, Phyllis E. Crandall, Michael R. Steed
Session 2: 1:30 p.m.-3:00 p.m.
2A  Visualization & Education
Scalable Algorithms for Interactive Visualization of Curved Surfaces	
Dinesh Manocha, Subodh Kumar and Chun-Fa Chang, University of North Carolina
STREN: A Highly Scalable Parallel Stereo Terrain Renderer for Planetary Mission Simulations
Ansel Teng and Meemong Lee, Jet Propulsion Lab
Scott Whitman, Cray Research Inc.
Education in High Performance Computing via the WWW:  Designing and Using Technical 
Materials Effectively
Susan Mehringer, Cornell Theory Center
2B  Compiler Analysis
Compiler-directed Shared-Memory Communication for Iterative Parallel Applications
Guhan Viswanathan and James R. Larus, University of Wisconsin-Madison
Dynamic Data Distribution with Control Flow Analysis
Jordi Garcia, Eduard Ayguade and Jesus Labarta,
Universitat Politecnica de Catalunya
Transformations for Imperfectly Nested Loops
Induprakas Kodukula and Keshav Pingali, Cornell University
Session 3: 3:30 p.m.-5:00 p.m.
3A  Geophysical Applications
Earthquake Ground Motion Modeling on Parallel Computers
Omar Ghattas, Hesheng Bao, Jacobo Bielak, Loukas F. Kallivokas, David R. O'Hallaron, Jonathan R. Shewchuk and Jifeng Xu, Carnegie Mellon University
Performance Analysis and Optimization on the UCLA Parallel Atmospheric General Circulation Model Code
John Lou, California Institute of Technology;  John Farrara, University of California
Climate Data Assimilation on a Massively Parallel Supercomputer
Hong Q. Ding and Robert D. Ferraro, Jet Propulsion Laboratory
3B  Tools
Performance Analysis Using the MIPS R10000 Performance Counters
Marco Zagha, Silicon Graphics, Inc.
Profiling A Parallel Language Based on Fine-Grained Communication
Kaus E. Schauser, Bjoern Haake and Chris Scheiman, University of California at Santa Barbara
Modeling, Evaluation and Testing of Paradyn Instrumentation System
Abdul Waheed, Michigan State University
Wednesday, November 20
Session 4:  10:00 a.m.-Noon
4A  Performance II
An Analytical Model of the HINT Performance Metric
Quinn O. Snell and John L. Gustafson, Ames Laboratory
Communication Patterns and Models in Prism:  A Spectral Element-Fourier Parallel Navier-Stokes Solver
George Em Karniadakis and Constantinos Evangelinos, 
Brown University
The C3I Parallel Benchmark Suite - Introduction and Preliminary Results
Rakesh Jha, Brian VanVoorst, Luiz S. Pires, Wing Au, Minesh Amin, Honeywell Technology Center;  Richard C. Metzger, USAF Rome Laboratory;  David A. Castanon, ALPHATECH, Inc.;  Vipin Kumar, University of Minnesota
The Performance of the NEC SX-4 on the NCAR Benchmark Suite
Steven W. Hammond, Richard D. Loft, NCAR; 
Philip D. Tannenbaum, HNSX Supercomputers, Inc.
4B  Networking & Architecture
Minimal Adaptive Routing with Limited Injection on Toroidal k-ary n-cubes
Fabrizio Petrini and Marco Vanneschi, Universita di Pisa
Low-Latency Communication on the IBM RISC System/6000 SP
Chi-Chao Chang, Grzegorz Czajkowski, Chris Hawblitzell and Thorsten von Eicken, 
Cornell University
Compiled Communication for All-Optical TDM Networks
Xin Yuan, Rami Melhem and Rajiv Gupta, University of Pittsburgh
Increasing the Effective Bandwidth of Complex Memory Systems in Multivector Processors
Anna M. del Corral and Jose M. Llaberia, 
Universitat Politecnica de Catalunya
Session 5: 1:30 p.m.-3:00 p.m.
5A  Hydrodynamics Applications
A Parallel Cosmological Hydrodynamics Code
Paul W. Bode, Univ. of Pennsylvania;  Guohong Xu, University of California at Santa Cruz;  Renyue Cen, Princeton University
Transient Dynamics Simulations:  Parallel Algorithms for Contact Detection and Smoothed Particle Hydrodynamics
Bruce Hendrickson, Steve Plimpton, Steve Attaway, 
Jeff Swegle, Courtenay Vaughan, Dave Gardner, 
Sandia National Labs
Performance of a Computational Fluid Dynamics Code on NEC and CRAY Supercomputers:  
BEYOND 10 GIGAFLOPS
Ferhat F. Hatay, University of Colorado at Boulder
5B  Algorithms
Parallel Preconditioners for Elliptic PDEs
Vivek Sarin and Ahmed Sameh, University of Minnesota
Sparse LU Factorization with Partial Pivoting on Distributed Memory Machines
Cong Fu and Tao Yang, Univ. of California at Santa Barbara
Implementation of Strassen's Algorithm for Matrix Multiplication
Elaine M. Jacobson, Anna Tsao and Thomas Turnbull, Center for Computing Sciences;  Steven Huss-Lederman, University of Wisconsin-Madison;  Jeremy R. Johnson, Drexel University
Session 6: 3:30 p.m.-5:00 p.m.
6A  Algorithms II
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
Rupak Biswas, NASA Ames Research Center;  Leonid Oliker, Research Institute for Advanced Computer Science; Andrew Sohn, Dept. of Computer & Information Science 
Parallel Hierarchical Solvers and Preconditioners for Boundary Element Methods
Ananth Grama, Vipin Kumar, and Ahmed Sameh, 
University of Minnesota
Parallel Multilevel k-way Partitioning Scheme for Irregular Graphs
George Karypis and Vipin Kumar, University of Minnesota
6B  Parallel Programming Support
Double Standards: Bringing Task Parallelism to HPF via the Message Passing Interface
Ian Foster and David R. Kohr, Jr., Argonne National Laboratory; Rakesh Krishnaiyer and Alok Choudhary, Syracuse University
OMPI: Optimizing MPI Programs Using Partial Evaluation
Hirotaka Ogawa and Satoshi Matsuoka, The University of Tokyo
Particle-in-Cell Simulation Codes in High Performance Fortran
Erol Akarsu, Kivanc Dincer, Geoffrey C. Fox and Tomasz Haupt, Northeast Parallel Architectures Center

THURSDAY, NOVEMBER 21

Session 7: 10:00 a.m.-Noon
7A  Scheduling
Application-Level Scheduling on Distributed Heterogeneous Networks
Francine D. Berman, Rich Wolski, Silvia Figueira, Jennifer Schopf and  Gary Shao, University of California at San Diego
NetSolve: A Network Server for Solving Computational Science Problems
Henri Casanova, University of Tennessee, Knoxville;  Jack Dongarra, University of Tennessee, Knoxville and Oak Ridge National Laboratory
Multimethod Communication for High Performance Metacomputing
Ian Foster, Jonathan Geisler, Steven Tuecke, Argonne National Laboratory; Carl Kesselman, California Institute of Technology
Building a World-Wide Virtual Machine Based on Web and HPCC Technologies
Kivanc Dincer and Geoffrey C. Fox, Northeast Parallel Architectures Center
7B  Data Mining & Modeling
Parallel Data Mining for Association Rules on Shared-memory Multi-processors
M.J. Zaki, M. Ogihara, S. Parthasarathy and W. Li, 
University of Rochester
Dynamic Computation Migration in DSM Systems
Wilson C. Hsieh, University of Washington;  M. Frans Kaashoek, MIT Laboratory for 
Computer Science;  
William E. Weihl, DEC Systems Research Center
Performance Modeling for the Panda Array I/O Library
Ying Chen, Marianne Winslett, Szu-wen Kuo, Yong Cho, University of Illinois; Mahesh Subramaniam, Oracle Corporation; Kent Seamons, Transarc Corporation
Striping in Disk Array RM2 Enabling the Tolerance of Double Disk Failures
Chan-Ik Park, POSTECH
Session 8: 3:30 p.m.-5:00 p.m.
8A  Particle Dynamics
Lightweight Computational Steering of Very Large Scale Molecular Dynamics Simulations
David M. Beazley, University of Utah;  Peter S. Lomdahl, Los Alamos National Laboratory
Design of a Large Scale Discrete Element Soil Model for High Performance 
Computing Systems
Alex R. Carrillo, David A. Horner, John F. Peters, John E. West, U.S. Army Engineer Waterways Experiment Station
Molecular Simulation of Rheological Properties Using Massively Parallel Supercomputers
P. T.  Cummings, R. K. Bhupathiraju, S.T. Cui and S. Gupta, University of Tennessee;  H. D. Cochran, Oak Ridge National Laboratory
8B  Data & Scheduling
Virtual Memory Versus File Interfaces for Large, Memory-Intensive Scientific Applications
Yoonho Park and Ridgway Scott, University of Houston;  Stuart Sechrest, University of Michigan
Impact of Job Mix on Optimizations for Space Sharing Schedulers
Jaspal Subhlok, Thomas Gross and Takashi Suzuoka, Carnegie Mellon University