SOFTWARE DEVELOPMENT PLAN
For
Scalable FAST3D CFD Model with VCE Grid Generator
for Complex Geometry Using a Globally Structured Grid
(CFD-1 2/98)



SECTION 1. SCOPE

1.1 IDENTIFICATION.

Computational Technology Area (CTA):
Computational Fluid Dynamics (CFD)

Common High Performance Computing (HPC) Software Support Initiative (CHSSI) Project:
Scalable FAST3D CFD Model with VCE Grid Generator for Complex Geometry Using a Globally Structured Grid

CHSSI Project Number:
CFD-1

Languages Used:
FORTRAN, C

Software Version/Release Date:
Alpha Release 7/97; Beta Release 2/98

1.2 COMPUTATIONAL TECHNOLOGY AREA PROJECT OVERVIEW.

The goals of this project are to: 1) develop a suite of scalable FAST3D CFD software using advanced numerical methods for modeling multidimensional, reactive flow fluid dynamics in complex geometries, 2) provide an integrated, easy-to-use methodology so users can program their own complex geometry, spatially varying initial conditions, time dependent source terms, and in situ diagnostics directly into FAST3D without sacrificing parallel performance, 3) demonstrate that the scalable CFD software performs efficiently on a range of DoD scalable high performance computing platforms, 4) demonstrate the utility of the scalable software with selected simulations of grand challenge or service priority fluid dynamics applications, and 5) help integrate the scalable FAST3D software into DoD RDT&E programs.

This software implementation project focuses on providing a scalable software suite for modeling multidimensional, reactive flow fluid dynamics in complex geometries. The generic requirement is for time-dependent, compressible flow solutions in three dimensions with moderate flow speeds in the Mach number range 0.2 to 5.0 for problems with complex body geometry, complicated spatially varying initial flow conditions, user-specified, time-dependent source terms, and chemical reactions in the flow. FAST3D is a general purpose Computational Fluid Dynamics (CFD) capability based on the high resolution Flux-Corrected Transport (FCT) algorithms invented and developed at the Naval Research Laboratory. The conservation equations for mass, momentum, and energy density, with time-dependent chemical reaction mechanisms, are solved by the monotone, high-resolution, Eulerian, finite-volume algorithms developed in the FCT methodology. The NRL-developed Virtual-Cell Embedding (VCE) algorithms are used to represent complex geometries efficiently, including effects of smoothly rounded bodies, through a grid generator program called GRIDVCE. The VCE algorithms are implemented in a structured orthogonal grid using a new compact data structure particularly well-suited for optimization and parallelization. Scalar and vector versions of precursor FCT codes are now in use at hundreds of sites in the U.S. and abroad. The FAST3D CHSSI project described here will optimize, package, and document the VCE grid generator and the scalable, integrated SMDM (shared-memory/distributed memory) version of the FAST3D flow solver for general DoD use. The support software for this capability, i.e. the graphical user interface, the station data recovery program, and the VOYEUR asynchronous, online graphics package, will also be provided and documented.
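
For reference, the conservation-law system summarized above can be written compactly as follows. This is a generic statement for orientation only, not an excerpt from the FAST3D documentation; the momentum and energy source terms and the species production rates stand in for the application-dependent, user-specified sources and chemistry mentioned above.

\begin{align*}
  \partial_t \rho + \nabla \cdot (\rho \mathbf{v}) &= 0 \\
  \partial_t (\rho \mathbf{v}) + \nabla \cdot (\rho \mathbf{v} \mathbf{v}) + \nabla P &= \mathbf{S}_m \\
  \partial_t E + \nabla \cdot \left[ (E + P)\, \mathbf{v} \right] &= S_e \\
  \partial_t n_i + \nabla \cdot (n_i \mathbf{v}) &= Q_i
\end{align*}

Here E is the total energy density and the stiff production rates Q_i are the terms integrated by the CHEMEQ methodology [9]; each equation is a generalized continuity equation of the form solved by LCPFCT [3].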

The overall objective of this project is to accelerate the exploitation of scalable parallel high performance computing systems to solve fluid dynamics problems of critical importance to DoD. The deliverable goal is a robust, flexible, accurate, portable, well-documented, and well-supported scalable software suite consisting of a flow solver and associated grid generator. This project will demonstrate that the software runs efficiently on a range of HPC and scalable DoD platforms. The applications goal is to solve, in the course of the project, three or four specific Science and Technology (S&T) problems of DoD or national challenge-scale priority, clearly demonstrating the applicability, flexibility, scalability, accuracy, and robustness of the delivered software. The vertically integrated team assembled through this project plans to guarantee the continuity of the capability beyond the duration of the project. The CFD-1 project members will train and assist a wide range of other DoD scientists and engineers in the application of the FAST3D software suite.

Initially, the suite of scalable FAST3D software will be ported to Major Shared Resource Centers (MSRC) that focus on the CFD CTA and selected Distributed Centers (DC).

SECTION 2. REFERENCED DOCUMENTS

1. A Fluid Transport Algorithm That Works, J.P. Boris, Proceedings, Seminar Course in Computing as a Language of Physics, International Centre for Theoretical Physics, Trieste, Italy, IAEA SMR-9/18, pp. 171-189, 2-20 August 1971.

2. Flux-Corrected Transport I. SHASTA, A Fluid Transport Algorithm That Works. J.P. Boris and D.L. Book, J. Comput. Phys. 11, 38 (1973); also Chapter 11 in Methods in Computational Physics, Academic Press, New York, 85 (1976).

3. LCPFCT - A Flux-Corrected Transport Algorithm for Solving Generalized Continuity Equations. J.P. Boris, A.M. Landsberg, E.S. Oran and J.H. Gardner, NRL Memorandum Report 93-7192 (1993).

4. Implementation of the Full 3-D FAST3D (FCT) Code Including Complex Geometry on the Intel iPSC/860 Parallel Computer. T.R. Young, Jr., A.M. Landsberg, and J.P. Boris, Proceedings of the SCS Simulator Multi-conference, San Diego, CA (1993).

5. An Efficient, Parallel Method for Solving Flows in Complex Three Dimensional Geometries. A.M. Landsberg, T.R. Young, Jr. and J.P. Boris, 32nd Aerospace Sciences Meeting, Reno, NV, AIAA Paper 94-0413 (1994).

6. Analysis of the Nonlinear Coupling Effects of a Helicopter Downwash with an Unsteady Ship Airwake. A.M. Landsberg, J.P. Boris, W.C. Sandberg, and T.R. Young, Jr., 33rd Aerospace Sciences Meeting, Reno, NV, AIAA Paper 95-0047 (1995).

7. Three Dimensional Flow Simulations of Chemical Vapor Deposition Reactors, Carolyn R. Kaplan, C. Richard DeVore, and Jay P. Boris, NRL Memorandum Report 95-7760 (1995).

8. Dynamics of Oblique Detonations in Ram Accelerators, C. Li, K. Kailasanath, E.S. Oran, A.M. Landsberg, and J.P. Boris, International Journal of Shock Waves 5, 97-101, June 1995.

9. CHEMEQ - A Subroutine for Solving Stiff Ordinary Differential Equations, T.R. Young, U.S. Naval Research Laboratory Memorandum Report 4091, February 26, 1980.

SECTION 3. OVERVIEW OF REQUIRED WORK


This software development plan concentrates on the development, validation, documentation, visualization, and demonstration of scalable fluid dynamics algorithms. The intrinsically parallel nature of the FAST3D flow solver algorithms and data structure ensures that the code can be ported to current state-of-the-art supercomputers and parallel computers with different architectures. FAST3D will also be retrofitted to high performance uniprocessors and to multiprocessors to ensure availability of this capability to the widest range of possible users. The standard LCPFCT flow algorithm [3] and stiff chemical equation integrator [9] will be used in their published form to ensure compatibility with the greatest number of external users. An MPI interface, possibly layered over vendor-specific message passing libraries (MPL, NX, PVM, etc.), will be implemented to handle the communications requirements and should accommodate most current platforms. This communications software already exists in the high performance computing community. Because the geometrical feature data are packed in a compressed format, native word sizes and formats are expected to require machine-specific handling.
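
As an illustration of the intended communications layering (a minimal sketch: the routine and variable names below are hypothetical, not taken from the FAST3D source), all interprocessor traffic can be funneled through one thin FORTRAN wrapper so that an MPI build and a vendor-library build differ only inside that wrapper:

      SUBROUTINE SWAPBD (BUFS, BUFR, N, IDEST, ISRC, ITAG, IERR)
C     Hypothetical boundary-data exchange wrapper.  All interprocessor
C     communication passes through this routine, so porting to a vendor
C     message passing library (MPL, NX, ...) means rewriting only its body.
      INCLUDE 'mpif.h'
      INTEGER N, IDEST, ISRC, ITAG, IERR
      REAL BUFS(N), BUFR(N)
      INTEGER ISTAT(MPI_STATUS_SIZE)
C     A combined send/receive avoids deadlock when all nodes exchange
C     boundary data simultaneously.
      CALL MPI_SENDRECV (BUFS, N, MPI_REAL, IDEST, ITAG,
     &                   BUFR, N, MPI_REAL, ISRC,  ITAG,
     &                   MPI_COMM_WORLD, ISTAT, IERR)
      RETURN
      END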

The risk involved in developing the scalable FAST3D code is very low. A scalable version of the model with most of the promised capabilities is now operational on a number of the DoD HPC systems, and an earlier version was used as one of the DoD's CFD benchmark programs, constituting in effect the Software Acceptance Test (SAT). The FAST3D software does require considerable further refinement: documented re-verification against existing numerical and empirical data sets, improved user documentation, and coupling with advanced visualization methods. The risk in obtaining and maintaining portability and robustness with the desired level of user convenience is much higher. Our experience to date has been that different system managers and supporting organizations make different choices about required file structures, storage, and backup; about mounting new but relatively untested software; about the use of interprocess and interprocessor communications packages; and about security and intercomputer connectivity. These decisions and their implementations change on a weekly basis, and an explicit or de facto decision in any one of these areas can temporarily or permanently incapacitate the Graphical User Interface and even the VOYEUR asynchronous graphics package. To lessen the risk and to keep project efforts on cost and schedule, these packages will be offered to prospective users in the form we use them, with some help in conversion to their systems, but without the guaranteed level of portability provided for the flow solver and grid generator.

Benchmark calibration or validation simulations, as well as scalable demonstration problems targeted for grand challenge or service priority applications, are critical components of the project. Metrics for benchmark problems are known numerical and/or experimental data. Benchmark problems include, but are not limited to, the following: idealized muzzle blast problems; bluff body vortex shedding computations; shock flows over wedges and cones with analytic solutions; idealized shock and blast wave propagation problems; flow through complex, porous geometry; and an industry standard arc airfoil test problem. The scalable demonstration simulations are outlined in the Developmental Test Phases section of this Software Development Plan. This suite of test problem definitions, test data sets, and their corresponding results, will be placed on the World Wide Web where all users and potential users can assess FAST3D's performance and verify their implementations of the model. These test problems ensure and demonstrate the accuracy and scalability, as well as evaluate the design utility, of the scalable FAST3D software.

Technology transfer to the DoD user community is an integral component of the research project and will be accomplished through a series of meetings, training workshops, and frequent dialogue between the software developers and applications experts. Given the wide distribution of the vector version of the FCT algorithm, the project represents a major advancement toward the DoD's exploitation of scalable HPC platforms for solving critical DoD applications in fluid dynamics.

SECTION 4. PLANS FOR PERFORMING GENERAL SOFTWARE DEVELOPMENT ACTIVITIES

4.1 SOFTWARE DEVELOPMENT METHODS.

This software will be developed manually using high-level programming languages (FORTRAN and C) with explicit standard message passing interfaces (e.g., PVM or MPI) for distributed memory high performance computers and compiler directives for shared memory high performance computers. We will also use the Revision Control System (RCS), which runs under UNIX, to track the history and versions of the code.
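
As a minimal sketch of this dual approach (the routine, the variable names, and the directive below are illustrative assumptions, not excerpts from FAST3D; the OpenMP directive form is shown as one example of the vendor-dependent syntax), a cell-update loop can carry a shared-memory directive while remaining an ordinary serial loop within each distributed-memory node:

      SUBROUTINE UPDATE (RHO, RHON, DIVFL, DT, NCELL)
C     Hypothetical cell-update loop.  On shared-memory machines the
C     directive splits the loop across processors; on distributed-memory
C     machines the same loop runs unchanged on each node over its own
C     subdomain, with boundary data exchanged by message passing.
      INTEGER NCELL, I
      REAL RHO(NCELL), RHON(NCELL), DIVFL(NCELL), DT
C$OMP PARALLEL DO PRIVATE(I)
      DO 10 I = 1, NCELL
         RHON(I) = RHO(I) - DT*DIVFL(I)
   10 CONTINUE
      RETURN
      END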

4.2 STANDARDS FOR SOFTWARE PRODUCTS.
FORTRAN subroutine header comments will describe the function of the routine and will define all input/output data. Subroutine function will be described in the User and Programmer's Reference Manual. Routine history and version information can be obtained from RCS. Code block indentation, code spacing, and the ordering of routine information will follow accepted practices for readability. Variable, routine, and file naming conventions will reflect the data and the routine function. Comments regarding the algorithm will be provided as necessary to allow a knowledgeable reader to understand the purpose of the routine. Since this project will re-use thousands of lines of FORTRAN in close to one hundred routines of the serial code, we plan to implement the above standards in all new routines developed for this project and to upgrade existing routines that need major modification.
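
For illustration, a header of the following general form would satisfy these standards. The routine name, argument list, and file contents here are hypothetical, not taken from the FAST3D source:

      SUBROUTINE RDGRID (FNAME, NX, NY, NZ, IERR)
C-----------------------------------------------------------------------
C     RDGRID - reads the header of a grid-description file.
C
C     Input:    FNAME     character   name of the grid file to open
C     Output:   NX,NY,NZ  integer     global grid dimensions read in
C               IERR      integer     0 on success, nonzero on error
C
C     Full description: see the User and Programmer's Reference Manual.
C-----------------------------------------------------------------------
      CHARACTER*(*) FNAME
      INTEGER NX, NY, NZ, IERR
      OPEN (UNIT=10, FILE=FNAME, STATUS='OLD', IOSTAT=IERR)
      IF (IERR .NE. 0) RETURN
      READ (10, *, IOSTAT=IERR) NX, NY, NZ
      CLOSE (10)
      RETURN
      END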

This FAST3D scalable CFD software will contain a statement restricting its export and transfer to other users. Copies may be made available only to those who have followed the procedures established by the High Performance Computing Modernization Program Office (HPCMPO).

4.3 REUSABLE SOFTWARE PRODUCTS.

4.3.1 INCORPORATING REUSABLE SOFTWARE PRODUCTS.

The identification of potential reusable scalable software products will be accomplished primarily by participation in CHSSI principal investigators' (PI) workshops, literature surveys and monitoring HPC activities within and outside of the DoD. The Programming Environment and Training (PET) component of the DoD HPC Modernization Program (HPCMP) represents another avenue for identification and incorporation of reusable scalable software products. Of particular interest to this project will be grid generators and GUI capabilities developed by the CFD components of the PET program. No candidate reusable software products for incorporation into this project have been identified at this time as the grid generation methodology and associated parallel data structures are unique.

4.3.2 DEVELOPING REUSABLE SOFTWARE PRODUCTS.
Reporting potentially reusable software products developed here will be accomplished through participation in CHSSI workshops and interaction with peers. Where appropriate these components will be made available on the World Wide Web.

4.4 COMPUTER HARDWARE RESOURCE UTILIZATION.
In addition to the in-house parallel machines which will be used to develop the codes and run a number of the test cases, we will use a number of the high performance machines deployed by the HPCMP. Allocation of HPC resources will be in accordance with the DoD HPCMP allocation policy. A DoD Challenge Grant was issued in 1997 for FAST3D computations on three of the DoD HPC platforms for one of the demonstration problems identified in the CFD-1 TEMP Addendum.

SECTION 5. PLANS FOR PERFORMING DETAILED SOFTWARE DEVELOPMENT ACTIVITIES

5.1 ESTABLISHING A SOFTWARE DEVELOPMENT ENVIRONMENT.

Through RCS, the current and past versions of the source codes and libraries will be maintained in a separate directory from the routines under development. Each developer will keep a separate directory for routines under development, along with the associated test data. The libraries based on the current and past controlled RCS source code versions will constitute the software distribution library. Additionally, any UNIX system libraries and architecture-dependent libraries will be assumed to be available. Each developer will maintain their own makefiles and scripts based on the current FAST3D version prior to merging and testing for the next RCS version update. A script program has been developed to interactively select and package ('tar') a version of the FAST3D code directly through RCS for user distribution.

5.2 SOFTWARE REQUIREMENTS ANALYSIS.
The generic, technical CFD requirement is for time-dependent, compressible flow solutions in three dimensions for problems with complex geometry, chemical reactions in the flow, and moderate flow speeds in the Mach number range 0.2 to 5.0. The deliverable goal, as established for the DoD HPCMP CHSSI program, is a robust, flexible, accurate, portable, well-documented, and well-supported scalable software suite. This software must run efficiently on a wide range of HPC and scalable DoD platforms. Software requirements analysis is based on informal but extensive technical interactions between code developers and applications experts culminating in the development of the CHSSI CTA Plan.

5.3 SOFTWARE DESIGN.
The software architectural design consists of two fully-supported pieces: the grid generator, GRIDVCE, and the flow solver, FAST3D. The suite of FAST3D software also includes a GUI and the LCP&FD VOYEUR system for on-line, real-time flow visualization. These two additional capabilities will be available to aid FAST3D users, but their complete portability/suitability for all systems on which FAST3D and the grid generator GRIDVCE work cannot be guaranteed for the reasons specified in Section 3 above. Each program will consist of a main driving routine for data and/or process initialization and several libraries to perform the necessary work. In some cases the libraries may overlap due to common data or functional requirements between two or more programs. All of the programs will interact through common data interfaces, which initially will be data files but in the future could be some other form of data transmission.

Over the course of the project, the code development team will meet at periodic intervals to review the current library design and implement any necessary changes. Furthermore, additional input toward the software design in this project will be obtained through periodic CTA meetings and reviews.

The grid generator will read input files which include parameters to describe the computational domain and the type and/or format of the complex configuration. It will then produce a file containing a description of the volume mesh and boundary conditions. FAST3D's separation of the geometry of the problem from the grid used to represent it is unique at this time.
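
As a purely hypothetical illustration (the actual GRIDVCE keywords and file format are defined in its documentation; every name below is an assumption), a namelist-style input deck of the kind described might specify the domain extents, the grid resolution, and the geometry source:

 &DOMAIN XMIN=0.0, XMAX=10.0, YMIN=0.0, YMAX=4.0, ZMIN=0.0, ZMAX=4.0,
         NX=256, NY=128, NZ=128 /
 &BODY   GEOMFILE='hull.tri', GEOMTYPE='facets' /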

The flow solver can either self-initialize (without VCE capabilities) or read the output of the grid generator along with other data files that describe the time advancement procedure and the desired diagnostic output. In addition, the flow solver may read a file containing the flow variables. The flow solver will also provide a mechanism by which the user may prescribe a flow field through a user-written routine and may modify that flow field during the course of program execution.
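
A user-written routine of the kind described might take the following general form. This is a sketch under assumed conventions; the routine name, argument list, and units are illustrative, not the actual FAST3D interface:

      SUBROUTINE USRFLO (RHO, RVX, RVY, RVZ, ERG, NCELL, TIME)
C     Hypothetical user flow-field routine, called at startup and
C     optionally at later times so the user may prescribe or modify
C     the flow field during execution.
      INTEGER NCELL, I
      REAL RHO(NCELL), RVX(NCELL), RVY(NCELL), RVZ(NCELL), ERG(NCELL)
      REAL TIME
      IF (TIME .GT. 0.0) RETURN
C     Example: quiescent air at standard conditions (CGS units assumed).
      DO 10 I = 1, NCELL
         RHO(I) = 1.29E-3
         RVX(I) = 0.0
         RVY(I) = 0.0
         RVZ(I) = 0.0
         ERG(I) = 2.5E6
   10 CONTINUE
      RETURN
      END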

Real-time flow visualization will use the VOYEUR package via X-windows. The Graphical User Interface can be used to initiate any of the main programs or to manipulate the various input data for the grid generator, the flow solver or the visualization package.

5.4 SOFTWARE IMPLEMENTATION AND UNIT TESTING.
The initial software implementation will be done by the code developer in a separate directory. Once the developer is satisfied that the new routines work properly, the design team must give its approval before the new routines are entered into the RCS directory. The design team gives its approval when a reasonable amount of evidence can be provided that the new routines perform as designed for the range of situations and conditions required.

5.5 DEVELOPMENTAL TEST PHASES.
Alpha Test and Beta Test focus on a set of demonstration simulations to: 1) illustrate the applicability, robustness, and flexibility of the scalable software to meet DoD user requirements for DoD fluid dynamics applications and 2) demonstrate engineering design utility for problems of grand challenge or service priority nature. The details of the developmental test computations are described in the CFD-1 Addendum to the Test and Evaluation Master Plan (TEMP). In summary, the developmental tests will consist of scalable demonstration simulations addressing three of the following eight important DoD fluid dynamics application areas:

  • design of an open-air detonation facility to destroy demilitarized munitions in an environmentally sound manner;
  • contaminant transport simulations for Chemical/Biological Defense (CBD) in complex urban environments;
  • ship-board and building-confined fire-suppression and explosion safety simulations;
  • reactive flow interior ballistics computations for ram accelerators and conventional munitions;
  • ship superstructure studies to reduce IR and stack gas plume signatures while quantifying the on-deck turbulence that defines a hazard for landing aircraft;
  • basic science studies of flow through porous media;
  • basic science studies of detonation structure and deflagration-to-detonation transition; and
  • large eddy simulations of jet transition to turbulence.

The T&E objectives of this suite of simulations are to demonstrate the accuracy, scalability, and design utility of scalable fluid dynamics algorithms for advanced design studies.


5.6 PREPARING SOFTWARE FOR USE.

5.6.1 PREPARING THE EXECUTABLE CODE.

Script file(s) maintained in a separate utility directory will be employed to compile and build binary libraries as well as build the executable code. These script files will be included in each FAST3D software release for the user's convenience. The users may have to modify one or more of these scripts to execute on their own system.

5.6.2 PREPARING USER MANUALS.

User documentation will be maintained in MS Word files (Version 6.0 or higher), from which PostScript, HTML, and PDF versions can be prepared directly. It is anticipated that users will get their documentation directly from the Web.

5.6.3 PREPARING VERSION DESCRIPTION DOCUMENTS FOR USER SITES.
Version description documents include software summaries, highlight notices, user manuals and technical reports. We do not plan to distribute software requiring different documentation at different sites. Generally the User and Programmer Reference Manual will be updated to the latest composite version at or shortly after testing and release but we expect that continual modifications will have to be made as questions are asked and user problems with the manual are discovered. We intend to keep past copies of the manual archived in case a user has adapted a previous release of the software to his needs and requires the prior documentation.

5.6.4 INSTALLING AT USER SITES.
Installation procedures will be documented in a README.install file. Script file(s) maintained in a separate utility directory will be used for installing the software at user sites. Because of the factors enumerated in the risk discussion in Section 3 above, this installation process cannot be made completely foolproof; help will therefore always be available.

5.7 SOFTWARE CONFIGURATION MANAGEMENT.
RCS, makefiles, and UNIX scripts will be used to manage the software configuration.

5.8 CORRECTIVE ACTION.
The GNU Problem Report Management System (GNATS) has been implemented at the developers' site (NRL) to track problems and report them to the developers. Code developers will log and prioritize "bug" reports. Corrective action(s) will be forwarded to user sites as interim corrections and/or incorporated in future releases of the software. To the extent possible this facility will be made available to users over the Web. Our plan is to implement the X-windows-based or Web-based version of this fault-reporting system for off-site users. GNATS includes a process for notifying users of corrective actions. Any code modifications required by a corrective action will be incorporated in the distributed code as soon as practical.

SECTION 6. SCHEDULES AND ACTIVITIES

The scheduled deliverable from this software development project is a suite of advanced numerical methods and advanced scientific visualization methods for modeling fluid dynamics problems on scalable architectures. Additional deliverables include technical publications in refereed archival journals and conference proceedings, validation simulations for a broad range of fluid dynamics problems, demonstration problems relevant to critical DoD fluids problems, documentation, and participation in workshops to foster technology transfer.

For a complete schedule of activities and milestones, see Section 2 of the CFD-1 TEMP Addendum. Below is a summary of the yearly project deliverables, activities, and milestones. In all cases the alpha release of a software product is to FAST3D team members only, the beta release is available to other CHSSI participants, while the final release will be available to the DoD and the DoD's US-based contractors.

YEAR 1 Deliverables (FY97):

  • Alpha release of the scalable flow solver, FAST3D, including self-initialization without VCE, the ability to read VCE grid generator files, a standard set of boundary conditions, data storage/retrieval utilities, and an MPI implementation
  • Alpha release of the grid generator, GRIDVCE
  • Alpha release of 2D and Axisymmetric Benchmark Test Cases
  • Alpha release of File/Directory Structure, Makefiles and scripts
  • Chemistry models (H2-O2) and Lumped Parameter Model
  • "Alpha" release of VOYEUR visualization package (support not guaranteed)

YEAR 2 Deliverables (FY98):

  • Beta release of scalable flow solver FAST3D and grid generator GRIDVCE
  • Documentation on FAST3D flow solver, GRIDVCE grid generator, User I/O data structures, and VOYEUR visualization package (partial documentation).
  • Optimized Running Transpose Algorithm for increased efficiency
  • Dynamic flow field modifications and custom boundary condition capability
  • Additional Reactive Flow Species including CHEMEQ rate integrator release
  • 3D Benchmark Test Cases delivered with Beta release
  • Access to progress on 3D demonstration calculations, i.e. open-air detonation, ram accelerator dynamics, ultra-complex geometry
  • VOYEUR software made available "as is" on the web so other sites may adapt it to different CFD codes, other applications, etc.

YEAR 3 Deliverables (FY99):

  • Parallel Disk I/O
  • Enhanced user ability to identify distinct and curved surfaces during execution
  • Enhanced user ability to specify boundary conditions, as well as source terms, internal to the flow
  • Final release of GRIDVCE and FAST3D
  • All demonstration calculations: open-air detonation, ram accelerator dynamics and ultra-complex geometry
  • Completed documentation of GRIDVCE grid generator and FAST3D flow solver

YEAR 4/5 Deliverables:

  • GUI for entire FAST3D suite with documentation
  • Parallel version of the GRIDVCE grid generator for refining portions of the grid
  • FAST3D version including initial dynamic grid capability
  • VOYEUR visualization package ported to major FAST3D platforms and integrated with local systems requirements and restrictions where practical
  • Full documentation of VOYEUR software
  • Expanded diagnostics, i.e. interface to alternate visualization packages (AVS, TecPlot, etc.), full documentation of diagnostic interfaces
  • Continuing user support, consulting and assistance to MSRC integration

SECTION 7. PROJECT ORGANIZATION AND RESOURCES

The technical personnel collaborating on this scalable software project include scientists, engineers, parallel code developers, and applications end-users. Background information for current project participants is presented below. Points of contact for the key DoD project participants are also provided.

PRINCIPAL INVESTIGATORS.

Naval Research Laboratory:

Jay Boris - Chief Scientist, Laboratory for Computational Physics and Fluid Dynamics, NRL Code 6400 - holds the NRL Chair of Science in Computational Physics and is an internationally recognized leader in Computational Fluid Dynamics. Dr. Boris invented the Flux-Corrected Transport (FCT) algorithms, now in use throughout the world, developed a number of the plasma simulation techniques now in use, and is co-author of the widely used book, "Numerical Simulation of Reactive Flow". He has authored or co-authored nearly 300 publications. Since 1976 he has been the Director of the Laboratory for Computational Physics and Fluid Dynamics, an interdisciplinary group of physicists and engineers who have been leaders in developing Computational Physics and Reactive Flow techniques to exploit rapidly developing computer technology for Navy and DoD problems. Dr. Boris has been a leader in developing and using parallel processing technology for over a decade. In 1983 the LCP&FD, led by Dr. Boris, Theodore Young, and Robert Scott, began assembly of the MIMD Graphical and Array Processing System (GAPS). The 2D and 3D FCT models for GAPS became the prototypes for the FAST3D model being developed in this project. Dr. Boris programmed extensively on the Cray-scale GAPS and has continued parallel and high performance computing through to the present. With Alexandra Landsberg, he developed the VCE algorithms that retain the simplicity, accuracy, and efficiency of the FCT algorithms while treating extremely complex geometry. Dr. Boris is a fellow of the APS and the AIAA. He has received the Arthur S. Fleming Award, the Navy Award for Distinguished Achievement in Science, and the Navy's Captain Robert Dexter Conrad Award.

U.S. Naval Research Laboratory   Phone: 202-767-3055
Code 6400, LCP&FD                Fax: 202-767-6260
4555 Overlook Ave, SW            Email: boris@lcp.nrl.navy.mil
Washington DC 20375-5344

Alexandra Landsberg - Research Engineer, LCP&FD, NRL Code 6410 - is the co-developer of the VCE algorithms and has extensive knowledge of the FCT algorithms, the FAST3D model, and the VOYEUR graphics facility. In addition to the algorithm development work, she has been involved in all aspects of the 3D unsteady ship airwake project. These include converting the CAD geometry model to a usable grid generator model; developing new algorithms specific to ship airwake calculations; modifying, running, and porting the ship airwake code on the parallel Intel systems; and visualization and analysis of the computational results. With her familiarity with the VCE grid generator and the FAST3D model, she also helped with the modeling and problem initialization on both vector and parallel processing computers for the CVD reactor and the ram accelerator.

US Naval Research Laboratory   Phone: 202-767-1975
Code 6410, LCP&FD              Fax: 202-767-4798
4555 Overlook Ave, SW          Email: landsberg@lcp.nrl.navy.mil
Washington DC 20375-5344

Charles Lind - Research Engineer, LCP&FD, NRL Code 6440 - In addition to algorithm development, Dr. Lind has been involved with all aspects of the design of a partially confined detonation facility. These include writing new algorithms and pre- and post-processing software specifically for partially confined detonations, and the visualization and analysis of the computational results. Other responsibilities include supporting the development of FAST3D, including modifying and porting the post-processing and visualization software and writing and maintaining the software scripts. Other work includes the study of steady and unsteady incompressible and compressible fluid dynamics problems about complex geometries and the porting of FAST3D to a variety of platforms.

US Naval Research Laboratory   Phone: 202-767-1975
Code 6440, LCP&FD              Fax: 202-767-4798
4555 Overlook Ave, SW          Email: lind@lcp.nrl.navy.mil
Washington DC 20375-5344

Robert Scott - Research Scientist, LCP&FD, NRL Code 6440 - has been extensively involved in MIMD parallel computing, beginning with the development of GAPS, then with the installation and maintenance of the Intel iPSC/860 within the LCP&FD, and is currently involved in the procurement of a new MPP system for NRL under the DoD HPC Modernization plan. His involvement has been both in software, particularly graphics and visualization software, and in hardware. In addition, Rob Scott has extensive experience in communications systems, networking, and computer systems operation. With his experience in graphics and video animation, he brought visualization capabilities to the LCP&FD years before the NRL visualization lab became a reality. Rob Scott has been instrumental in porting the FAST3D code, as well as other LCP&FD codes, to new parallel systems as they become available.

US Naval Research Laboratory   Phone: 202-767-6593
Code 6440, LCP&FD              Fax: 202-767-6598
4555 Overlook Ave, SW          Email: rob@lcp.nrl.navy.mil
Washington DC 20375-5344

Theodore Young - Research Scientist, LCP&FD, NRL Code 6400 - has recently implemented the full 3D time-dependent scalable model FAST3D, including complex geometry, in parallel on the Intel iPSC/860, Touchstone Delta, and Paragon computers. He served as chief architect and principal scientist in the development of the Graphical and Array Processor System (GAPS), a coarse-grain MIMD system on the order of a Cray. He developed efficient numerical algorithms and ported large application codes to the GAPS. Mr. Young also developed the vector/parallel version of CHEMEQ, the stiff chemistry ODE solver. He currently serves on several technical advisory committees concerning current and future issues and procurements pertaining to NRL's central computer facilities and network. He is also responsible for the development and application of numerical techniques and models to study reactive flow problems associated with combustion, laser pellet hydrodynamics, and flows around complex objects.

US Naval Research Laboratory   Phone: 202-767-3214
Code 6440, LCP&FD              Fax: 202-767-4798
4555 Overlook Ave, SW          Email: young@lcp.nrl.navy.mil
Washington DC 20375-5344

Other DoD Program Participants and Major Users:

Almadena Chtchelkanova - Berkeley Research Associates - Phone: 202-767-3611
Works on developing the GUI xfast for the FAST3D program suite. Dr. Chtchelkanova has extensive experience in the development of parallel linear algebra libraries and the object-oriented approach to system libraries, has worked in software testing, and has industry SQA experience. Her 1988 Ph.D. thesis focused on the physics of gaseous nebulae.

Bohdan Cybyk - Research Engineer, Wright Laboratory, USAF - has concentrated on the application of CFD algorithms to transonic compressors. His Ph.D. thesis, performed in the LCP&FD, centered on combining the Monotonic Lagrangian Grid with Direct Simulation Monte Carlo and applying the new capability to rarefied hypersonic flows. During this work he learned the LCP&FD computer systems and software as well as important aspects of parallel processing for MIMD systems. NOTE: Dr. Cybyk has now joined the NRL staff on a permanent basis and is concentrating on the application of the FAST3D suite of programs to contaminant transport problems.

Michael Nusca - Research Engineer, Army Research Laboratory - has concentrated on the development of CFD codes for high-speed/compressible shock-induced combustion and low-speed/compressible multi-phase chemically reacting flows. He has also published extensively on the subjects of incompressible flows, multiple-body aerodynamic interactions, solid-fuel ramjets, hypersonic aerodynamics, ram accelerators, interior ballistics flow, and CFD algorithms.

Propulsion Branch, Weapons and   Phone: 410-278-6108
Materials Research Directorate   Fax: 410-278-6094
AMSRL-WM-BW                      Email: nusca@arl.army.mil
Aberdeen Proving Ground MD 21005-5066

Other DoD and Contract Users:

LCP&FD, NRL Code 6400 - Dr. Mark Emery, Research Scientist - Phone: 202-767-3196
Mark is the principal investigator of a DoD CHSSI effort in the CSM CTA to combine fluid dynamics (FAST3D modified for dynamic geometry) with a structural dynamics code (LSDYNA3D from Livermore Software Technology Corporation). He has had extensive experience with FAST3D and its predecessor codes for inertial confinement fusion and underwater explosion computations.

Livermore Software - Dr. Fort F. Felker - Phone: 510-449-2500
Senior member of the LSTC technical team with extensive experience in fluid-structure interaction problems and structural dynamics. Dr. Felker is developing the interface between DYNA3D and FAST3D for fluid-structure interaction problems.

Livermore Software - Dr. John O. Hallquist - Phone: 510-449-2500
Developer of the DYNA and NIKE suites of structural dynamics codes and founder of LSTC. Dr. Hallquist participates in the development of the coupled codes and leads scalability efforts for DYNA3D.

Geocenters Incorporated - Dr. Carl Dyka - Phone: 202-404-7183
A senior researcher with Geocenters, Dr. Dyka has an extensive background with explicit and implicit structural codes. He is developing diagnostics for coupling FAST3D and LSDYNA3D and performing benchmark computations.

University of Colorado - Dr. Carolyn Kaplan - Phone: 303-931-6432
Dr. Kaplan, formerly a research engineer at NRL, now resides in Colorado and teaches at the University of Colorado, Boulder.

LCP&FD, NRL Code 6410 - Dr. Chiping Li, Research Scientist - Phone: 202-767-3251
Dr. Li has used FAST3D for several years to study the physics of ram accelerators for Air Force and Navy programs.

Kaman Sciences - Dr. Dennis Jones - Phone: 703-329-7166
Dr. Jones' group has begun working on the incorporation of FAST3D into the Army's Chem/Bio warfare simulation at CBDCOM in Aberdeen. Several members of his staff participated in one of the first FAST3D one-day training sessions, and they are currently working with a FAST3D data set to develop software to couple FAST3D output to downwind Gaussian puff models.

Army Research Laboratory - Dr. James Despirito, Research Engineer - Phone: 410-278-6104
Under Dr. Nusca's direction, Dr. Despirito will begin using FAST3D at ARL this year on problems in the Weapons and Materials Research Directorate.

Air Force Research Laboratory, Edwards - per conversation with Jay Levine (research engineer(s) TBD to apply FAST3D to 3D rocket plume prediction)

NAWC PAX River - per conversation with Dr. Steve Kern (research engineer(s) TBD to apply FAST3D to computation of ship airwakes)

Some Users of Previous Versions of FAST3D/FCT Software:

NRL LCP&FD, Code 6404 - Dr. Elaine Oran, NRL Senior Scientist for Reactive Flow Physics - Phone: 202-767-2960
Plasma Physics Division, NRL Code 6730 - Dr. Jill Dahlburg, Research Scientist - Phone: 202-767-5398
Wright Laboratory, USAF - Dr. Yvette Weber, Research Engineer - Phone: 937-255-6207
Wright Laboratory, USAF - Dr. James Weber, Research Engineer - Phone: 937-255-1237
Aeronautical and Marine Research Laboratory, Australia - Dr. David Jones - Phone: 61-3-9-626-8520
Craft Tech - Nicolas Tonello, Research Engineer - Phone: 215-249-9780

SECTION 8. NOTES

This section lists acronyms, terms, or definitions required for this SDP.

AIAA      American Institute of Aeronautics and Astronautics
ANSI      American National Standards Institute
APS       American Physical Society
ASCII     American Standard Code for Information Interchange
CBD       Chemical/Biological Defense
CFD       Computational Fluid Dynamics
CHSSI     Common HPC Software Support Initiative
CPU       Central Processing Unit
CTA       Computational Technology Area
CVD       Chemical Vapor Deposition
DC        Distributed Center
DoD       Department of Defense
ECI       Export Controlled Information
FCT       Flux-Corrected Transport
GAPS      Graphical and Array Processing System
GNATS     GNU Problem Report Management System
HPC       High Performance Computing
HPCMPO    High Performance Computing Modernization Program Office
IR        Infra-red
LCP&FD    Laboratory for Computational Physics and Fluid Dynamics
MIMD      Multiple Instruction/Multiple Data
MPI       Message Passing Interface
MPP       Massively Parallel Processing
MSRC      Major Shared Resource Center
NAWC      Naval Air Warfare Center
NRL       Naval Research Laboratory
ODE       Ordinary Differential Equation
PET       Programming Environment and Training
PI        Principal Investigator
PVM       Parallel Virtual Machine
RCS       Revision Control System
RDT&E     Research, Development, Test & Evaluation
S&T       Science and Technology
SDP       Software Development Plan
TBD       To Be Determined
TEMP      Test and Evaluation Master Plan
USAF      United States Air Force
VCE       Virtual Cell Embedding