Scalable Computing Lab

Ames Laboratory - U.S. Department of Energy

The Ames Laboratory is located on the campus of Iowa State University. A highlight of this year's exhibit will be the reconstruction of the Atanasoff-Berry Computer (ABC), the first electronic digital computer.

Results of the following research projects will be emphasized.

Internetworking into the Twenty-first Century

Argonne National Laboratory, Mathematics and Computer Science Division

Complex scientific applications require the use of internetworked supercomputing resources. At Argonne National Laboratory we are designing the tools and environments that address these needs, in particular providing a pathway for DOE computing into the next century. Our efforts range from developing tools for distributed and parallel supercomputing to providing collaborative environments for scientists who may be distributed across the country. This research exhibit will highlight ANL efforts to develop parallel processing communication libraries such as MPI and Nexus; to provide a video, audio, and textual collaborative environment; and to design computational tools such as PETSc for parallel scientific computing. Showcased will be the IBM massively parallel SP system and the results of computational simulations visualized on the ImmersaDesk.
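
As a flavor of the message-passing style supported by libraries such as MPI, here is a minimal sketch in C++ using only standard MPI calls; it is a generic illustration, not code from the ANL demonstrations.

    #include <mpi.h>
    #include <cstdio>

    // Minimal message-passing sketch: each process reports its rank, and
    // rank 1 sends a single integer token to rank 0.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        std::printf("process %d of %d\n", rank, size);

        if (size > 1) {
            int token = 0;
            MPI_Status status;
            if (rank == 1) {
                token = 42;
                MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            } else if (rank == 0) {
                MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
                std::printf("rank 0 received %d\n", token);
            }
        }

        MPI_Finalize();
        return 0;
    }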

Army High Performance Computing Research Center

Army High Performance Computing Research Center

The Army High Performance Computing Research Center (AHPCRC), located at the University of Minnesota in Minneapolis, Minnesota, was established by the US Army to maintain leadership in computing technology vital to national security. The research exhibit will demonstrate the results of AHPCRC research projects that utilize the TMC CM-5 and the Cray T3D for large simulations. Highlighted will be basic research projects, collaborative research projects involving both AHPCRC and Army researchers, and educational programs. A heterogeneous challenge application involving CM-5, T3D, and SGI Power Challenge Array systems, and an SGI ONYX system in the research exhibit, will be demonstrated. The AHPCRC research exhibit will utilize several methods of information presentation: high-performance SGI ONYX workstations for research project data visualization demos, a VCR and large-screen monitor for showing narrated videotapes of AHPCRC research project results, and poster-size pictures of selected research project graphics mounted in the exhibit.

ARSC - Bringing Solutions to Arctic Issues

Arctic Region Supercomputing Center, University of Alaska Fairbanks

The Arctic Region Supercomputing Center has been supporting computational needs of academic, industrial and government scientists and engineers with HPCC resources, programming talent, technical support, and training since 1992. Areas of specialty supported by ARSC include ocean modeling, atmospheric sciences, climate/global change, magnetohydrodynamics, satellite remote sensing, and civil, environmental, and petroleum engineering. Much of the computational work at ARSC is related to the study of polar and high latitude regions.

Examples highlighted at the booth include:

ARSC, located at the University of Alaska Fairbanks, operates a CRAY Y-MP M98 and a CRAY T3D along with a network of SGI workstations and a video production laboratory.

MARINER: Metacenter Affiliated Resource In the New England Region

Boston University, Center for Computational Science

Boston University's research exhibit features its NSF-funded project MARINER: Metacenter Affiliated Resource In the New England Region. MARINER extends the University's efforts in advanced scientific computing and networking to organizations throughout the region. Users from both the public and private sectors are eligible to participate in a wide range of programs that offer training in, and access to, advanced computing and communications technologies. Demonstrations of current research and educational projects developed through the Center for Computational Science and the Scientific Computing and Visualization Group will be shown using graphics workstations and video in the exhibit booth. We will feature live demonstrations through high-speed network connections from multiple supercomputing sites across the country, including Boston University's SGI Power Challenge array. In addition, we will present distributed supercomputing applications, video animations of recent research, and 3-dimensional visualizations using a stereoscopic display.

High-Performance Computing and Communication at Carnegie Mellon University

Carnegie Mellon University, School of Computer Science

This research exhibit presents results from a number of related research projects in the School of Computer Science at Carnegie Mellon University in the areas of high-performance computing and communication. The research focuses on programming tools that support the parallelization of both regular and irregular applications and on high-performance networking applied to distributed computing. The projects included are the Archimedes tool supporting unstructured finite element simulations (Dave O'Hallaron and Thomas Gross), the Credit Net high-speed network (Peter Steenkiste and Allan Fisher), the Dome distributed object library (Adam Beguelin), the Fx parallelizing FORTRAN compiler (Thomas Gross and Jaspal Subhlok), and the NESL high-level parallel programming language (Guy Blelloch). The groups work closely with application groups, including the NSF Grand Challenges on Earthquake Modeling and on Large-Scale Environmental Modeling. The exhibit uses a combination of live demos and posters to explain the research.

Caltech and JPL

The Center for Advanced Computing Research at Caltech and Jet Propulsion Laboratory (JPL)

The Center for Advanced Computing Research (CACR) at Caltech, in collaboration with NASA's Jet Propulsion Laboratory (JPL), will produce an exhibition of the various technological and scientific results obtained through the use of high-performance computers and networks.

We plan to use hands-on demonstrations of tools and environments as well as visualizations of scientific data and results obtained using concurrent supercomputers to familiarize attendees with our work.

We will present posters, video, and live demonstrations of computers at work on such problems as battlefield simulation, VLSI simulation, Synthetic Aperture Radar (SAR) world mapping, and quantum chemistry.

In addition, we will present results of our involvement in various consortia including the Center for Research on Parallel Computation (CRPC), The Concurrent Supercomputing Consortium (CSCC), the Scalable I/O Project (SIO), and other collaborative efforts.

CRPC: Making Parallel Computation Truly Usable

Center for Research on Parallel Computation

The Center for Research on Parallel Computation (CRPC) is an NSF Science and Technology Center dedicated to making massively parallel computing truly usable for high-performance computer users. To achieve this, CRPC researchers are developing new programming environments, compilers, numerical algorithms, language extensions, and scientific applications. The center includes researchers at two national laboratories (Argonne and Los Alamos) and five universities (Caltech, Rice, Syracuse, Tennessee, and Texas). The CRPC also has affiliated sites at Boston, Illinois, Indiana, and Maryland Universities and the Institute for Computer Applications in Science and Engineering (ICASE).

The CRPC's exhibit features demonstrations of interactive software systems, descriptions of parallel language extensions, and applications developed by CRPC researchers. These include HPF, Fortran D, the D System, Fortran M, CC++, PVM, HeNCE, parallel templates, ADIFOR, and various applications. Other demonstrations will feature HPCC technologies in kindergarten through graduate education, including a revolutionary approach to interfaces to web-based education for the disabled.
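
Among the software listed above, ADIFOR transforms Fortran source so that it also computes derivatives; the minimal C++ sketch below only illustrates the underlying idea of forward-mode automatic differentiation with dual numbers and is not ADIFOR's mechanism or output.

    #include <cstdio>

    // Forward-mode automatic differentiation: each value carries its
    // derivative, and arithmetic propagates both together.
    struct Dual {
        double v;   // value
        double d;   // derivative with respect to the chosen input
    };

    Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
    Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.v * b.d + a.d * b.v}; }

    int main() {
        Dual x = {3.0, 1.0};          // seed: dx/dx = 1
        Dual y = x * x + x;           // f(x) = x^2 + x
        std::printf("f(3) = %g, f'(3) = %g\n", y.v, y.d);   // 12 and 7
        return 0;
    }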

The exhibit includes posters, videos, and theater presentations. Information about CRPC educational outreach activities, software distribution, technical reports, and knowledge transfer efforts is also available.

Cornell Theory Center

Cornell Theory Center

The Cornell Theory Center (CTC), a national center for high performance computing and communications, will showcase a spectrum of work enabled by its highest-performance, scalable parallel computing capabilities. This spectrum includes:

  • the most challenging computational science applications
  • software for resource management
  • innovative visual insight technologies
  • MultiMATLAB
  • education on parallel computing
  • public outreach

Among the challenging computational science applications to be featured are biomedicine, fracture mechanics, macromolecular structure, computational fluid dynamics, and environmental modeling. Software for resource management is based on the CTC/IBM collaboration that resulted in the development of EASY-LL, currently in use on CTC's 512-processor IBM RS/6000 POWERParallel System (SP). Among the visual insight technologies to be highlighted is WorkSpace, a tool for collaboration and applications development in the context of a virtual environment. The purpose of the MultiMATLAB project is to provide a tool that facilitates the use of a parallel computer, or a network of computers, to solve coarse-grained large-scale problems using MATLAB, by MathWorks. Our educational feature is the Virtual Workshop, an Internet-based course on parallel computation that has already reached over 500 participants from academia, government, business, and industry. Public outreach activities include the premiere of CTC's second Web-based science book, which incorporates the latest Web technologies to communicate computational science to the general public. (Explorations is CTC's first science book.) Also featured will be recently developed PC-based exhibits for museums.

    DoD High Performance Computing Modernization Program

    DoD High Performance Computing Modernization Office

    The DoD High Performance Computing Modernization Program (HPCMP) is a multiyear, $1B+ initiative to modernize HPC capabilities for the DoD's research programs. The research exhibit will highlight various components of the HPCMP. The booth will include four major elements: interactive demonstrations which utilize ATM OC-3 interconnects with computational resources located at remote sites, video reports of large scale numerical modeling efforts, a technology demonstration utilizing "novel" hardware, and a collaboration area containing contributions to DoD's HPC efforts produced by academia, industry, and other federal agencies.

    The EM-X Multithread Distributed-Memory Multiprocessor

    Electrotechnical Laboratory, Japan

    The computer architecture section of the Electrotechnical Laboratory has developed a highly parallel distributed-memory multiprocessor, called EM-X. The machine has been designed to tolerate communication latency by using low latency communication and multithreading.

    The key architectural features of EM-X include (1) low-latency fine-grain communication with direct remote memory access, (2) fine grain synchronization, (3) overlapping computation with communication through multithreading, and (4) distributed shared memory paradigm. The processor of EM-X, which is a gate array LSI, has been designed from scratch. The EM-X offers the capability to scale up to 1,024 processing elements (PEs). Its 80 PE prototype system demonstrates 1.6 Gflops peak performance. The EM-X software system includes a multithreaded C compiler, parallel thread libraries, runtime routines and monitors.
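
    EM-X's own programming interface is not reproduced here; the C++ sketch below only illustrates the general idea of hiding communication latency behind computation, with a hypothetical fetch_remote_block function standing in for a remote memory access.

        #include <thread>
        #include <vector>
        #include <numeric>
        #include <cstdio>

        // Hypothetical stand-in for a remote fetch; on EM-X this would be a
        // low-latency direct remote memory access, not a portable function.
        std::vector<double> fetch_remote_block(int pe) {
            return std::vector<double>(1024, static_cast<double>(pe));
        }

        int main() {
            // Start fetching the next block in a separate thread...
            std::vector<double> next;
            std::thread comm([&] { next = fetch_remote_block(1); });

            // ...while computing on data that is already local.
            std::vector<double> local(1024, 1.0);
            double sum = std::accumulate(local.begin(), local.end(), 0.0);

            comm.join();   // wait only when the remote data is actually needed
            sum += std::accumulate(next.begin(), next.end(), 0.0);
            std::printf("sum = %f\n", sum);
            return 0;
        }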

    This research exhibit gives an overview of the EM-X architecture along with some on-line demonstrations through the Internet. A board containing five processors will be on display for exhibit attendees to examine. A video presentation is also planned to visualize some parallel benchmark results and to demonstrate some real-time graphics applications.

    Berkeley Lab High Performance Computing and Communications

    Ernest Orlando Lawrence Berkeley National Laboratory

    Berkeley Lab will present several demonstrations of its inventions and capabilities in high-performance computing and communications:

    1. Shock Physics Interactive Visualization. Using virtual-reality tools, SC96 attendees will change the parameters of a shock physics simulation that is running on a Cray C90 at the National Energy Research Supercomputing Center (NERSC) and see the results in real time. The demonstration is an example of how the Energy Sciences Network (ESnet) and work supported by the Department of Energy Mathematical, Information, and Computational Sciences Division combine to enable remote access to scientific computing facilities.
    2. Distributed Environment for Health Care Imaging Applications. This demonstration will showcase a health care imaging application that uses the Distributed-Parallel Storage System (DPSS) to collect and play back angiography video sequences. The technology and its principles are applicable not only to other health-care imaging purposes, but also to the general problem of manipulating and providing access to large data objects, such as those generated by particle-physics detectors, electron microscopes, and synchrotron radiation sources.
    3. Visual Servoing for Micro Manipulation. The Internet and conventional tools for using it do not provide the fast, guaranteed delivery of control information and user data needed for some remote scientific experimentation. We will demonstrate how advanced machine-vision algorithms, intelligent local control, and a unique partitioning of functions allow realtime operation of an electron microscope.
    4. New at NERSC: The T3E. This presentation will showcase the capabilities of the Cray T3E computer now being acquired by NERSC.
    5. ESnet: Linking Scientific and Computing Centers. Berkeley Lab now operates ESnet, a nationwide data communications network funded by the U.S. Department of Energy (DOE) Office of Energy Research (ER). ESnet interconnects the transcontinental complex of DOE national laboratories and the U.S. universities funded by DOE to conduct energy research. ESnet facilitates remote access to major ER scientific facilities, provides needed information dissemination among scientific collaborators throughout all ER programs, and provides widespread access to existing ER supercomputer facilities.
    In addition to these live demonstrations, posters will describe recent developments at Berkeley Lab of great interest to the supercomputing community, in particular the move of NERSC and the Energy Sciences Network (ESnet) to Berkeley Lab.

    A Videoconferencing Tool for Managing Large Scientific Collaborations

    Fermi National Accelerator Laboratory

    Many of the fundamental experiments in computationally intensive fields, such as High Energy Physics (HEP), rely on large distributed collaborations to manage them and do the science. Increasingly, scientists are coming to rely on videoconferencing as a critical tool to accomplish the science and complement the computing resources they use. Participation can be difficult for scientists at universities and small labs without access to high speed networks such as the vBNS and ESnet. Members of the Computing Division at Fermi National Accelerator Laboratory and the HEP Network Research Center are refining a MultiSession Bridge for videoconferencing that allows scientists throughout the world to join meetings at low cost and also allows combinations of packet-switched and circuit-switched conferences, unicast or multicast, without the requirement for costly switching gear. It also incorporates security features for privacy. Fermilab will demonstrate this software's use in a variety of its application areas, including HEP, experimental astrophysics, and nuclear medicine.

    Georgia Tech

    Georgia Tech

    High-performance computing at Georgia Tech covers a wide spectrum of projects (including interactive computational steering, programming tools and environments, performance analysis, compiling for parallelism, data distribution, program visualization, parallel discrete event simulation, network and architecture support for multimedia, computer architecture, and distributed operating systems) and application areas (including quantum mechanical phenomena, orbit spectroscopy, applied mathematics, combustion simulation, rotorcraft and automotive aerodynamics, structural dynamics, global atmospheric modeling, nonlinear dynamics, multi-disciplinary optimization, decision support systems, linear and mixed-integer optimization, and production distribution). These projects and applications pursue the joint theme of `Distributed Laboratories'. The aim is to create laboratory environments across distributed systems in which computational instruments and end users can interact as if they were co-located in shared physical spaces.

    Conference attendees will have the opportunity to interactively steer real-world high performance applications, visualize superscalar RISC processor architectures and their bottlenecks, visualize runtime data distributions and message passing programs, observe performance metrics of an executing application, and learn more about research at Georgia Tech through WWW presentations.

    LANL, LLNL, Sandia, and DOE ASCI Coordinated Exhibit

    Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories

    In this coordinated SC'96 exhibit, three Department of Energy Laboratories will present demos, posters, and videos related to work at the individual laboratories, as well as the shared Accelerated Strategic Computing Initiative (ASCI) program for DOE - including the new ASCI platforms and the Problem Solving Environment.

    Sandia has a vision for accomplishing their role in Science-Based Stockpile Stewardship: Modeling- and Simulation-Based Life Cycle Engineering. The exhibit illustrates their approach to life cycle engineering based on scientifically valid high-fidelity models, high performance computing and comprehensively diagnosed experiments.

    LANL's software efforts will be shown in the TeleMed demo, featuring the latest Java technology, and POOMA, an object-oriented framework that will aid in code portability and reuse. Grand Challenge applications, industrial collaborations, and ongoing network research will also be featured in the exhibit.

    LLNL has a specific goal of enabling multiple tera-scale applications in the near future. The exhibit features information about related computational upgrades, presentations from computational and networking research, and applications facing the special challenge of data-intensive 3D calculation.

    MHPCC: Advanced Image and Information Solutions

    Maui High Performance Computing Center

    The Maui High Performance Computing Center (MHPCC) is ranked as one of the most powerful computing centers in the world. Housing one of the world's largest installations of IBM SP systems, the MHPCC provides government, industry, and academic users with a testbed for scalable parallel computing and high performance storage technologies. During 1996 MHPCC has focused on developing and supporting advanced image and signal processing and modeling and simulation projects. During Supercomputing '96, the MHPCC will showcase these efforts, highlighting:

    Mississippi State University

    Mississippi State University

    The NSF Engineering Research Center for Computational Field Simulation at Mississippi State University presents a research exhibit at Supercomputing '96, as it has at Supercomputing '95, '94, '93, and '92: an exhibition of high performance computers at work, solving applications in computational fluid dynamics and embedded high performance computing. The research exhibit comprises three sub-exhibits, each representing a portion of ongoing activities at the Center. The first activity is embedded high performance computing, the rebirth of parallel computing and performance portability in application-oriented environments of interest to industry and defense. The Message Passing Interface is used extensively in these systems, as are parallel libraries based on MPI. The second activity is high performance visualization, a key to understanding solution fields resulting from scientific applications with complex geometries and physics. The third activity is advanced task management, an important tool for computers used in complex heterogeneous working environments.

    NASA High Performance Computing and Communications Program

    NASA High Performance Computing and Communications Program Office

    The NASA High Performance Computing and Communications Program research exhibit will feature a theater presenting graphic videos that demonstrate leading-edge research in high performance computing hardware, system software, software tools, and networks, as well as the many and varied "grand challenge" applications that push and test the capacity and capabilities of these systems while promoting the information technology infrastructure and enhancing national competitiveness. In addition to the theater, static displays will illustrate relevant portions of the Computational Aerosciences, Earth and Space Sciences, Remote Exploration and Experimentation, and Information Infrastructure Technology Applications project areas. Managers and researchers will be present to further explain Program elements and interact with conference attendees.

    High Performance Applications in Aeropropulsion and Hybrid Networking

    NASA Lewis Research Center

    The NASA Lewis Research Center, located in Cleveland, Ohio, is the NASA Center for Excellence in turbomachinery and hybrid data communications. The exhibit features several parallel computational fluid dynamics (CFD) codes, high data rate satellite communications research, and two Lewis facilities that support supercomputing applications.

    The CFD codes, many of which are associated with the national High Performance Computing and Communications Program (HPCCP), include:

    The high data rate (HDR) and hybrid networking research done by the Advanced Communications Technology Satellite project is featured in a virtual reality demonstration using distributed databases and satellite simulation circuitry.

    Also highlighted are two Lewis facilities, the Advanced Computational Concepts Laboratory and the Graphics and Visualization Laboratory.

    NASA Numerical Aerospace Simulation (NAS) Program

    NASA Numerical Aerospace Simulation (NAS) Systems Division

    The Numerical Aerospace Simulation (NAS) Systems Division at NASA Ames Research Center is responsible for implementing the full spectrum of supercomputing in aeronautics for the Agency. This includes research, development, and operation of supercomputing systems targeted at solving problems critical to the aerospace industry. The NAS exhibit illustrates real-life technology developments and applications within the NAS Division. Interactive demos provide an overview of the research and tools development currently underway. New technology areas and outlooks for the future are also discussed.

    Numerical Wind Tunnel(NWT) and HPC Research at the National Aerospace Laboratory, Japan

    National Aerospace Laboratory, Japan

    The National Aerospace Laboratory (NAL) of Japan is the only national research institute for aerospace engineering and services in Japan. NAL facilities include major aerospace R&D and testing facilities, such as wind tunnels ranging from hypersonic to low speed. Its computer facility has been the most powerful in Japan over the past 30 years. NAL has developed supercomputers jointly with computer manufacturers to fulfill the CFD needs of national aerospace projects. Since 1993, NAL has been operating the Numerical Wind Tunnel (NWT), which was developed jointly with Fujitsu. NWT is one of the top high performance computers, consisting of 166 vector processors (140 in 1993) connected in parallel. In the past two years, the Gordon Bell Prize was awarded to NWT applications in CFD and QCD.

    In the 1996 research exhibit, the Numerical Simulator II system, consisting of the enhanced NWT (166 PEs), the Fujitsu NWT-FEP, a VP2100, a CRAY Y-MP M92, and a 336-node Intel PARAGON XP/S25, will be shown along with CFD applications. The exhibit uses workstations, X terminals, and PCs connected to the NAL systems through the Internet. We especially wish to show how well suited the NWT hardware and software (NWT-FORTRAN) are to CFD applications, and to exchange views and opinions with participating organizations and visitors. The '96 Gordon Bell Prize finalist paper will also be demonstrated using CG animation.


    NAL Information on WWW homepage: http://www.nal.go.jp

    Putting Supercomputers to work at the National Center for Atmospheric Research (NCAR)

    National Center for Atmospheric Research

    The National Center for Atmospheric Research (NCAR) proposes a "Computers at Work" exhibit highlighting investigations that use a broad range of supercomputing technologies to solve real-world problems. These problems range from understanding how surface convection of the Sun affects the weather on Earth to aiding firefighters on the ground as they attempt to control or mitigate forest fires.

    Putting supercomputers to work on problems that may result in immediate benefit for humanity is a major objective of the scientific and research staff at NCAR. Providing the high-performance computing support to NCAR and the atmospheric sciences community is the primary mission of NCAR's Scientific Computing Division (SCD).

    This year's proposed research exhibit focuses on four very different types of research efforts under way at NCAR. These efforts put supercomputers to work to achieve more than purely speculative output; they produce important insights and answers that can be used to help us better understand the intricacies of the physical systems that surround us and impact our lives.

    For 1996, NCAR's exhibit will focus on four research projects that required long-running simulations on NCAR's supercomputers:

    Collaborative Partnerships

    National Center for Supercomputing Applications

    The National Center for Supercomputing Applications (NCSA) is committed to enhancing American competitiveness in science, engineering, education, and business. NCSA is a leader in the development and implementation of a national strategy to create, use, and transfer advanced computing and communication tools and information technologies.

    Presentations will showcase the pioneering work of NCSA users and current collaborating partners on a variety of themes, from examining the building blocks of molecules to experiencing colliding galaxies. NCSA's exhibit will feature projects in interdisciplinary research, computer and computational science, Web-based collaborative software, virtual teaming, and virtual reality. In addition to workstation demonstrations, NCSA's booth will feature an ImmersaDesk. The ImmersaDesk is a drafting-table-format virtual reality graphics display device designed as a single-user application development station.

    NCSA, a unit of the University of Illinois at Urbana-Champaign, receives major funding to support its research from the National Science Foundation, the Department of Defense, the Defense Advanced Research Projects Agency, NASA, corporate partners, the State of Illinois, and the University of Illinois.

    National Coordination Office for Computing, Information, and Communications (NCO CIC): Coordinated Federal R&D in Computing, Information, and Communications

    National Coordination Office for Computing, Information, and Communications (NCO CIC)

    The National Coordination Office for Computing, Information, and Communications (NCO CIC) -- formerly the NCO for High Performance Computing and Communications -- coordinates $1.0 billion per year of Federal R&D in computing, information technology, and communications. The Federal High Performance Computing and Communications (HPCC) Program forms the core of the CIC R&D programs, which are organized into five Program Component Areas: High End Computing and Computation; Large Scale Networking; High Confidence Systems; Human Centered Systems; and Education, Training, and Human Resources.

    Our booth will inform visitors about the NCO and about CIC R&D conducted by the participating organizations -- DARPA, NSF, DOE, NASA, NIH, NSA, NIST, ED, VA, NOAA, EPA, AHCPR -- through exhibition of representative results from agency-sponsored projects. It will distribute CIC strategic planning documents and NCO publications, including the HPCC Program's FY 1997 Annual Report to Congress (the Blue Book) and the FY 1997 Implementation Plan, which provides detailed descriptions of HPCC R&D activities.

    National Laboratory for Applied Networking Research

    National Laboratory for Applied Networking Research

    The NLANR project was conceived to provide the ongoing research and support needed to sustain the growth of future high performance network infrastructure. Initial efforts have focused on the NSF-funded vBNS network, which has provided high speed connectivity among the five NSF-funded high performance computing facilities for over a year. Participants in the NLANR project include networking experts from each of these facilities: the Cornell Theory Center, the Pittsburgh Supercomputing Center, the National Center for Supercomputing Applications, the National Center for Atmospheric Research, and the San Diego Supercomputer Center. More recently, several parallel efforts are underway to expand the project to include researchers exploring the overall improvement of Internet infrastructure. The research exhibit will showcase both the traditional vBNS-related efforts of the NLANR project and newer efforts in areas such as WWW caching and Internet statistics and measurement.

    In addition to showcasing NLANR specific research, we will attempt to examine, in real time, traffic on SCInet as well as SC96 usage of the vBNS. The results of these efforts will be shown live in the NLANR booth throughout the show.

    Data Intensive Computing and Data Mining with the NSCP Meta-Cluster

    National Scalable Cluster Project

    We will demonstrate software tools and applications which have been developed by the National Scalable Cluster Project (NSCP) and which combine high performance computing, high performance data management and high performance networking. The NSCP is a collaboration between research groups at the University of Illinois at Chicago, the University of Pennsylvania, and the University of Maryland at College Park, together with partners from other universities and companies.

    We propose to use NSCP-developed software to connect ATM clusters of workstations at the three participating universities with an ATM cluster on the exhibit floor. We will demonstrate PTool, a low-overhead, high-performance persistent object manager we have developed, managing over 250 gigabytes of disk. We will showcase five interactive applications developed for the Meta-Cluster, including the data mining, analysis, and visualization of high energy physics and other scientific data, and several high performance digital libraries built over the Meta-Cluster.

    Real-time Numerical Weather Prediction on HPC Platforms

    NOAA/Forecast Systems Laboratory

    The mission of NOAA's Forecast Systems Laboratory (FSL) is to evaluate and transition technology to the operational weather services. FSL has been evaluating the appropriateness of high performance computers for use in real-time numerical weather prediction (NWP). In order to develop the infrastructure for parallelizing multiple NWP models, FSL produced software known as the Scalable Modeling System (SMS). To date, FSL's software has been used to parallelize five NWP models. FSL will display the technology at Supercomputing '96 by running NWP models on various platforms using live data to produce current numerical weather forecasts. These forecasts will be visualized using workstations in the booth connected to SCinet. We will also replay visualizations of retrospective cases, including FSL's collaborative support of the Atlanta Olympics.
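
    SMS itself is not shown in this description; as background, the sketch below uses plain MPI to illustrate the halo-exchange pattern that typically underlies parallelizing a gridded forecast model by domain decomposition. It is a generic example, not FSL's code.

        #include <mpi.h>
        #include <vector>

        // Generic 1-D halo exchange: each processor owns n interior grid
        // points plus one ghost cell on each side, and trades boundary
        // values with its neighbors before each time step.
        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int n = 100;
            std::vector<double> u(n + 2, static_cast<double>(rank));

            int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
            MPI_Status status;

            // Send rightmost interior point right; receive left ghost cell.
            MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 0,
                         &u[0], 1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &status);
            // Send leftmost interior point left; receive right ghost cell.
            MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  1,
                         &u[n + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &status);

            MPI_Finalize();
            return 0;
        }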

    Regional Training Center for Parallel Processing (RTCPP) and Lesson Objects on Parallel Processing (LOOPS)

    North Carolina State University

    To further and promote the use of parallel processing, North Carolina State University has established, with NSF support, a Regional Training Center for Parallel Processing (RTCPP). RTCPP provides an advanced network-based environment for learning about parallel computing and scientific problem solving, and a facility for construction of customized lessons and courses on parallel processing. RTCPP content is being developed in cooperation with a number of other sites. RTCPP education material ranges from basics of parallel computing to advanced topics on MPI and petaflop computing. As one part of the exhibit we will demonstrate a system called Lesson Objects on Parallel Processing (LOOPS) that we use to construct individual lesson objects, courses, and curricula and then integrate them with the scientific, educational, or business workflows within which users of RTCPP operate. The presentation model we use conforms to the quality-of-service capabilities of the distribution network and the current incarnation of the Internet.

    Interactive Web Tools for Scientists and Engineers

    Northwest Alliance for Computational Science and Engineering; Network for Engineering and Research in Oregon

    Want to learn high-performance computing without becoming a computer scientist? NACSE provides Web-based training materials and tools for learning about computational techniques and parallel programming, aimed at scientists, engineers, and students. This exhibit demonstrates how exciting self-guided Web materials can be.

    Coping with Unix: An Interactive Survival Kit provides a gentle, hands-on introduction to developing computational projects on Unix machines. It covers everything from organizing source files to running programs and visualizing the results. WebTerm (a Java applet emulating a terminal from within an HTML document) provides ``live action'' for trying out Unix commands from your Mac or PC.

    HyperSQL replaces SQL (Structured Query Language), the specialized language normally needed to access Sybase or other relational databases. HyperSQL provides Web-based forms to access text, image, and sound objects from large-scale databases. HyperSQL doesn't need to reside on the database machine; instead, it provides a ``gateway,'' establishing network connections and building SQL queries automatically.
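
    As a toy illustration of the gateway idea, the hypothetical C++ sketch below turns Web-form fields into an SQL query string before it would be forwarded to the database server; the field names and table are invented, and this is not HyperSQL's actual code.

        #include <map>
        #include <string>
        #include <cstdio>

        // Build an SQL query from form fields (hypothetical table and columns).
        // A real gateway would also escape values and open a connection to the
        // database server before submitting the query.
        std::string build_query(const std::map<std::string, std::string>& form) {
            std::string sql = "SELECT name, image FROM specimens WHERE 1=1";
            for (const auto& field : form)
                sql += " AND " + field.first + " = '" + field.second + "'";
            return sql;
        }

        int main() {
            std::map<std::string, std::string> form = {
                {"collector", "Smith"}, {"genus", "Salmo"}};
            std::printf("%s\n", build_query(form).c_str());
            return 0;
        }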

    Oak Ridge National Laboratory

    Oak Ridge National Laboratory

    The ORNL research exhibit features demonstrations of the application of high-performance computing, networking, and storage to real world problems. Leadership in this area involves the capacity to integrate advanced mathematical and computational techniques, data management and analysis methods, software tools, and communications technologies with the outstanding high-performance computing systems and infrastructure of our Center for Computational Sciences (CCS). High-performance computing is a core competency of ORNL. Our strengths range from the ability to develop realistic mathematical models of complex phenomena and scalable algorithms for their solution to the availability of massively parallel processors and storage systems accessed by high-performance computing environments. Grand Challenge applications highlighted will include materials research, groundwater remediation, and climate modeling. Of special interest will be demonstrations of ongoing research in large-scale metacomputing using ATM to interconnect multiple Paragons at OC12 speeds to concurrently work on a single application using PVM.
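
    For readers unfamiliar with PVM, the sketch below shows the master side of a simple master/worker program in the style PVM supports; the task name, message tags, and message layout are hypothetical, and this is not the ORNL application code.

        #include <pvm3.h>
        #include <cstdio>

        // Spawn workers, send each a work item, and collect one integer
        // result from each. Uses standard PVM 3 calls.
        int main() {
            const int nworkers = 4;
            int tids[nworkers];
            char task[] = "worker";   // hypothetical worker executable name
            char where[] = "";        // ignored with PvmTaskDefault
            int spawned = pvm_spawn(task, nullptr, PvmTaskDefault,
                                    where, nworkers, tids);

            for (int i = 0; i < spawned; ++i) {
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&i, 1, 1);                // work item: just an index here
                pvm_send(tids[i], 1);               // message tag 1
            }
            for (int i = 0; i < spawned; ++i) {
                int result = 0;
                pvm_recv(-1, 2);                    // any worker, tag 2
                pvm_upkint(&result, 1, 1);
                std::printf("result: %d\n", result);
            }
            pvm_exit();
            return 0;
        }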

    Ohio Supercomputer Center

    Ohio Supercomputer Center

    The Ohio Supercomputer Center (OSC) is a focus for high performance computing in the state. We will highlight several recent accomplishments of Ohio researchers supported by OSC. Examples of some of the research included in the booth are:

    Other activities coordinated by OSC for Ohio researchers are the PhAROh Metacenter Regional Alliances, DoD Modernization participation, and high speed networking. The OSC is located in Columbus, Ohio and operates a CRAY Y-MP, CRAY T3D, IBM SP/2, SGI Power Challenge and a Convex SPP. The Center also has a workstation cluster and video editing suite for researchers.

    Parallel Tools Consortium

    Parallel Tools Consortium

    Over the past several years, careful examination has exposed the fact that parallel tool use is appallingly low within the high-performance computing community, despite increasing demands for software support. The Parallel Tools Consortium (Ptools) is a collaborative effort including researchers, developers, and users from the federal, industrial, and academic sectors whose mission is to improve the responsiveness of parallel tools to user needs. Ptools takes a leadership role in defining, developing, and promoting parallel tools that meet the specific requirements of users who develop scalable applications on a variety of platforms. Several prototype tools will be demonstrated at the exhibit, including a lightweight core file browser, a distributed array query and visualization package, scalable UNIX tools, and portable timing routines.
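
    As an example of what one of these tool categories provides, the generic C++ sketch below shows a portable wall-clock timing routine; it illustrates the idea only and is not the Ptools interface.

        #include <chrono>
        #include <cmath>
        #include <cstdio>

        // Portable wall-clock timer: seconds since the first call, measured
        // with a monotonic clock so the result is unaffected by clock resets.
        double wall_seconds() {
            using clk = std::chrono::steady_clock;
            static const clk::time_point start = clk::now();
            return std::chrono::duration<double>(clk::now() - start).count();
        }

        int main() {
            double t0 = wall_seconds();
            double x = 0.0;
            for (int i = 1; i <= 1000000; ++i)
                x += std::sqrt(static_cast<double>(i));
            double t1 = wall_seconds();
            std::printf("loop took %.6f s (x = %.1f)\n", t1 - t0, x);
            return 0;
        }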

    The Ptools Consortium Steering Committee includes national laboratory technical staff, university researchers, and software tool developers from several computer hardware vendors. Representatives from these organizations will be available and eager to discuss the Ptools mission and recent accomplishments and to further promote the dialog between tool users and tool developers.

    The MGAP-2: Demonstration of a Massively Parallel Micro-Grained Array Processor

    Pennsylvania State University, MicroSystems Research Laboratory, Department of Computer Science and Engineering

    The Micro-Grain Array Processor (MGAP-2) is an array of 49,152 micro-grain processors, implemented as a planar mesh, operating at 50 MHz, and capable of computing 4.9 teraops per second. Each processor has 32 bits of local dual-port RAM, computes two three-input boolean functions per clock, and has a dynamically reconfigurable interconnect to each of its four neighbors. This communication flexibility allows algorithms to be mapped onto the array efficiently and the processors to be dynamically grouped into larger computational units.
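
    Because a three-input boolean function is fully described by an eight-entry truth table, a single byte per function suffices. The conceptual C++ sketch below simulates one such micro-grain operation in software; it illustrates the idea only and is not the MGAP-2 hardware or its *C++ tools.

        #include <bitset>
        #include <cstdio>

        // One micro-grain operation: look up a three-input boolean function
        // in its 8-entry truth table.
        bool eval3(std::bitset<8> truth_table, bool a, bool b, bool c) {
            int index = (a << 2) | (b << 1) | static_cast<int>(c);
            return truth_table[index];
        }

        int main() {
            // Truth table 0x96 (1001 0110) is three-input XOR (odd parity).
            std::bitset<8> xor3(0x96);
            for (int v = 0; v < 8; ++v) {
                bool a = (v & 4) != 0, b = (v & 2) != 0, c = (v & 1) != 0;
                std::printf("%d%d%d -> %d\n", (int)a, (int)b, (int)c,
                            (int)eval3(xor3, a, b, c));
            }
            return 0;
        }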

    We have developed a high-level language, *C++, for programming the MGAP-2 and have targeted efficient systolic, low-communication-complexity algorithms for applications such as basic arithmetic and image processing operations, motion estimation, speech recognition, computational molecular biology, simulation of physical phenomena using a cellular automaton model, the Hough transform, the discrete wavelet transform, the discrete cosine transform, and singular value decomposition.

    The entire MGAP-2 system fits onto a single 9Ux400 mm VME board. This significantly reduces the cost of the system and allows the array to be used as a co-processing component in a standard workstation.

    Pittsburgh Supercomputing Center

    Pittsburgh Supercomputing Center

    The Pittsburgh Supercomputing Center (PSC) is one of four national supercomputing centers funded by the National Science Foundation. In accord with the PSC's mission to foster the use of the latest hardware and software techniques in high performance computing, our research exhibit will showcase current projects that utilize the Center's various resources. We will feature a wide variety of demonstrations in areas of computational science including seismology, space science, and pathology. We will also demonstrate an interesting distributed computing experiment in collaboration with a federal laboratory. The educational and outreach mission of the PSC will be demonstrated through the presentation of research projects carried out by students of the Pennsylvania Governor's School for the Sciences under the auspices of Carnegie Mellon University and the PSC.

    Bringing High-Performance Parallel Computing Into The Mainstream

    Purdue University, School of Electrical and Computer Engineering

    The Purdue University School of Electrical and Computer Engineering is bringing parallel supercomputer technology into the mainstream. To achieve this goal, three things are needed: low-cost high-performance hardware, software support, and educational tools. Most of the work presented is in the public domain.

    PAPERS, Purdue's Adapter for Parallel Execution and Rapid Synchronization, is simple hardware that allows a cluster of commodity PCs/workstations to perform as a tightly-coupled parallel machine. Multiple PAPERS clusters and VGA video walls will be demonstrated. The support software centers on AFAPI, the Aggregate Function Application Program Interface, for clusters and SMPs (and even uniprocessors), but will include information on compilers and Linux parallel processing. CASLE, the Compiler/Architecture Simulation for Learning and Experimenting, is a WWW-based tool designed to allow undergraduate students to interactively change various parameters of a simple simulated compiler and architecture, directly observing the integrated impact on performance.
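
    AFAPI's own calls are not reproduced here; the flavor of an aggregate function, in which every node contributes a value and every node receives the combined result in a single step, is illustrated below with standard MPI as a stand-in.

        #include <mpi.h>
        #include <cstdio>

        // Aggregate-function idea: all nodes contribute, all nodes receive
        // the combined minimum and sum in one collective step.
        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            int local = rank + 1;                 // each node's contribution
            int global_min = 0, global_sum = 0;
            MPI_Allreduce(&local, &global_min, 1, MPI_INT, MPI_MIN, MPI_COMM_WORLD);
            MPI_Allreduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

            std::printf("rank %d: min = %d, sum = %d\n", rank, global_min, global_sum);
            MPI_Finalize();
            return 0;
        }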

    The Parallel Object-Oriented Language OCore and the Gleaner GC

    Real World Computing Partnership

    We will present an overview of the parallel object-oriented language OCore. To reduce the complexity of writing efficient parallel programs for multicomputers, OCore introduces the notion of a community, a structured set of objects that supports data parallel computation as well as multi-access data. We will show application examples using communities as well.

    We will also present a global on-the-fly garbage collection algorithm, called Gleaner, designed for OCore. In most programs that exhibit reference locality, the Gleaner GC can reclaim global cyclic garbage with fewer inter-processor messages than existing global garbage collection algorithms. The Gleaner GC also enables the local garbage collector on each processor to be executed on-the-fly.

    Parallel Programming Environment on Workstation Cluster

    Real World Computing Partnership

    We will demonstrate a parallel programming environment on our workstation and PC clusters using the Myricom Myrinet network; the clusters consist of 36 Sun SPARCstation 20s and 32 Pentium processors, respectively. An extended C++ language called MPC++ and an operating system called SCore-D have been designed and are running on the workstation cluster.

    The MPC++ version 2 programming language defines parallel description primitives called the Multi-Thread Template Library (MTTL). Parallel programming constructs such as data parallel statements are implemented using the MPC++ metalevel architecture. These unique features are presented with some programming examples.

    SCore-D is an operating system implemented as a set of daemon processes on top of a Unix operating system, without any kernel modification. Time Space Sharing Scheduling (TSSS), a time-sharing scheduling scheme for dynamically partitionable parallel machines, is implemented to achieve shorter job response times and better CPU utilization. We will demonstrate the impact of TSSS on the workstation and PC clusters.

    SDSC: A National Laboratory for Computational Science and Engineering

    SDSC: A National Laboratory for Computational Science and Engineering

    SDSC's exhibit will feature a projection video display, a Helisys LOM solid object fabrication device, a Fakespace Boom, computers, and a 60"x40" autostereogram 3D image.

    Presentations include:

    • Molecular Science Exploratorium -- An interactive environment for exploring 3D molecular structures.

    • Distributed Object Computational Testbed -- An environment for handling complex documents on geographically distributed data archives and computing platforms.

    • Data Mining and Visualization for the Distributed Climate Simulation Laboratory (joint presentation with NCAR).

    • OC-12 on the vBNS (joint presentation with NLANR).

    • HPCC in Education -- with Smithsonian Medalist Kris Stewart.

    • Large Image Databases for KidSat and EarthRISE -- Live and videotaped presentation of Sally Ride's CalSpace K-12 educational programs.

    • SDSC Science Discovery -- Science, technology, art, and humor in a virtual tour.

    • Immersive Video -- Using supercomputers to convert multi-camera video to VRML for 3D environments.

    • Visual Data Mining of Multidimensional Information using Parallel Coordinates.

    • Managing Multiple HPF Invocations Using KeLP.

    Supercomputing'96 Education Program

    Supercomputing'96 Education Committee

    The SC'96 Education Program exhibit booth will provide a focal point for disseminating information about K-12 educational applications and outreach efforts throughout the HPCC community. The presence of the education booth on the exhibit floor is also intended to facilitate interaction between academics and professionals and teachers attending the conference.

    A web-connected Macintosh will be available in the booth with links set to the home pages of SC'96 attending teachers' and presenters' schools, enabling visitors to the booth to find out who from their area is present at the conference. A web form will be available so professionals who are willing to help teachers can enter their name and contact information in a database for post-conference follow-up.

    Educational applications of computational science and Internet/WWW technologies, especially recent developments that were not ready in time to include in the education program, will be demonstrated in the booth. Information and promotional materials will be available for K-12 programs sponsored by supercomputing centers, universities, corporations, and other organizations.

    The booth will be staffed by members of the SC'96 Education Committee and teacher/presenters in the education program.

    Tokyo Institute of Technology

    Tokyo Institute of Technology, Computer Center

    Tokyo Institute of Technology (Titech) is one of the leading universities in scientific computation and computer science in Japan. The Titech research exhibit demonstrates current research results from fluid dynamics, computational biochemistry, and parallel computation research. The Titech computer center introduces its CRAY C916 supercomputer, several graphics workstations used to visualize results computed on the supercomputer, and an ultra-broadband network (600 Mbps optical fiber and 600 Mbps ATM). Titech has opened the supercomputer to high school students since 1995, and the results of the Supercomputer Contest for high school students (SuperCon) are also introduced.

    Polaris at Work

    University of Illinois at Urbana-Champaign and Purdue University

    Polaris is an advanced infrastructure for compiler and software tool research, development, and teaching. It is being developed at the University of Illinois and Purdue University with support from the Advanced Research Projects Agency.

    Polaris is freely available and is at work at many sites nationally and internationally. This multi-university exhibit presents several of these projects. The presenters are from Cornell University, Purdue University, the University of Illinois, the Universitat Politecnica de Catalunya (Spain), the University of Rochester, and the University of Toronto. The exhibits cover the following topics:

  • The Polaris program analysis and manipulation infrastructure,
  • Polaris at work on workstations and massively parallel computers,
  • Teaching compilers with Polaris,
  • Tools for research on instruction level parallelism,
  • Locality enhancement technology, and
  • A Data Access Visualization Environment.

    The High Performance Systems Software Lab at Maryland

    University of Maryland, Department of Computer Science and Institute for Advanced Computer Studies

    This exhibit will demonstrate several of the ongoing research projects in the High Performance Systems Software Lab at the University of Maryland. The projects include a protocol construction library and parallelizing compiler software for distributed shared memory systems, performance measurement tools for parallel systems, compiler and runtime infrastructure software for high performance computing, systems software for flexible distributed computing utilizing mobile agents, and a high performance Earth Science database system for accessing and processing large quantities of satellite imagery. The exhibit will consist of multiple demonstrations of software tools, and posters showing many aspects of the ongoing research in the Lab. For more information about the Lab, see http://www.cs.umd.edu/projects/hpssl.

    Legion: The Next Step Toward the World-Wide Virtual Computer

    University of Virginia

    The coming of gigabit networks makes possible the realization of a single nationwide virtual computer composed of a variety of geographically distributed high-performance machines and workstations. To realize the potential that the physical infrastructure provides, software must be developed that is easy to use, supports large degrees of parallelism in application code, and manages the complexity of the underlying physical system for the user. Legion is a metasystem project at the University of Virginia designed to provide users with a transparent interface to the available resources, both at the programming interface level and at the user level. Legion addresses issues such as parallelism, fault tolerance, security, autonomy, heterogeneity, resource management, software component interoperability, and access transparency in a multi-language environment.

    The Legion team has established collaborations with three of the supercomputer centers, two DOE labs, Caltech, and several universities. In the case of the supercomputer centers, we are working with them as partners in the ongoing supercomputer center re-competition under the PACI program. Legion will be one of the metacomputing tools used to create the image of a single partnership-wide resource. Further, deployment of Legion onto SDSC and NCSA resources has already begun under the DARPA-funded Distributed Object Computation Testbed (DOCT). In the case of the DOE labs, we are working with researchers at LANL and SNL under the auspices of the ASCI program.

    Legion is not vaporware. During Supercomputing '95 in San Diego, Legion won the HPC Challenge award for ``Overall System Interface and Functionality''.

    During Supercomputing '96 we will demonstrate both the current prototype version of Legion, running at SDSC, NCSA, Virginia, and SAIC Arlington, and the full-featured version under development.

    For more information on Legion see our web page at: http://www.cs.virginia.edu/~legion

    Parallelizing and Distributing Compilers for Meta-Computer Systems

    WASEDA University

    The Waseda University exhibit demonstrates system software for high-performance computing using a network of workstation clusters. At the heart of our exhibit is the Meta-computer system, which provides parallelization, distribution, and execution of programs written in traditional serial languages. The system is middleware that brings distributed heterogeneous computation resources (workstations, MPPs, vector processors, ...) to the desktop and provides a solution for users who want to share those resources with colleagues on the network. The leading technologies supporting the Meta-computer system are in the fields of parallelizing compilers, high performance networks, distributed systems, performance evaluation, and so on.

    In traditional terms, this system can be viewed as a parallelizing compiler plus a runtime system. On today's complex systems, however, the compiler and the runtime system are indivisible: the compiler, in particular its decomposer and scheduler, must know the runtime environment. Our compiler therefore interacts with the runtime system.

    More information is available online.