Approach: There are three main paradigms for solving computational fluid dynamics (CFD) problems: spectral/pseudospectral (PS), finite-volume/finite-difference (FD), and particle approaches. Likewise, there are two main paradigms for massively parallel processor (MPP) computer architectures: Single Instruction Multiple Data (SIMD) and Multiple Instruction Multiple Data (MIMD). SIMD architectures favor codes in which many processors execute identical instructions in lockstep, whereas MIMD architectures favor codes with distinct blocks that can execute independently and simultaneously. In this project we started with a suite of codes optimized for single vector-processor machines such as the Cray X-MP and Y-MP, and rewrote them as necessary to exploit the SIMD or MIMD architecture as appropriate. The goal was to develop codes whose execution speeds scale linearly with the number of processors, so as to take advantage of larger machines as they become available. Because the new MPP machines are potentially much faster than their predecessors, we have been able to address more ambitious scientific problems and to include increasingly complex physics packages as the codes are developed.
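The goal of linear scaling is demanding because any residual serial work bounds the achievable speedup. A minimal sketch of this constraint (Amdahl's law; the serial fractions below are illustrative, not measurements from the project's codes):

```python
def amdahl_speedup(p, serial_fraction):
    """Amdahl's-law speedup on p processors for a code in which
    serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# Even a small serial fraction caps speedup well below linear:
# with 1% serial work, 256 processors give roughly 72x, not 256x.
for p in (16, 64, 256):
    print(p, round(amdahl_speedup(p, 0.01), 1))
```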
As the codes and machines became available, we concurrently applied them to a variety of scientifically important questions concerning the dynamics of the solar atmosphere, in particular to advance our understanding of coronal heating, chromospheric dynamical events, and flux-tube interactions in the corona.
Accomplishments: Four codes were developed for this project, and we utilized five computers: the Thinking Machines (TMC) CM-200 (SIMD) and CM-5 (SIMD/MIMD), the Intel iPSC/860 (MIMD) and Paragon (MIMD), and the Cray T3D (SIMD/MIMD). We have increased the resolution of our 3D magnetohydrodynamics (MHD) code from 64x64x64 at the start of the project to a current maximum of 256x256x256, an increase of over two orders of magnitude in computational work. This has enabled us to resolve magnetic reconnection at the dissipative scale. Figure 1 shows the remarkable result of one such simulation, in which two orthogonal flux tubes reconnect and pass through one another to recover the original configuration. This phenomenon was not observed in previous simulations because of the greater dissipation inherent in coarser grids.
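The jump from two orders of magnitude in work for a fourfold resolution increase follows from a simple cost estimate, sketched below. It assumes an explicit time-stepping scheme whose stability-limited (CFL) timestep shrinks linearly with the grid spacing; that assumption is ours, not stated in the text:

```python
# Cost of refining a 3D explicit MHD simulation from 64^3 to 256^3,
# assuming a CFL-limited timestep proportional to the grid spacing.
n_old, n_new = 64, 256
refine = n_new // n_old              # 4x finer in each direction
memory_factor = refine ** 3          # 64x more grid cells to store
work_factor = refine ** 4            # 64x cells times 4x more timesteps
print(memory_factor, work_factor)    # 64 256
```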
Increased computing power has permitted order-of-magnitude longer simulations and more than double the resolution in our 2.5-dimensional flux-corrected transport magnetohydrodynamics (FCT-MHD) code. We have used this code to simulate driven magnetic reconnection in low-lying chromospheric magnetic structures. The increased resolution has permitted the study of asymmetric cases, in which plasma jets along field lines, reminiscent of spicules and surges, and filamentary current sheets form that fill a substantial part of the volume and resemble coronal loops in profile. Figure 2 shows an example of such a simulated reconnection event.
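To illustrate the FCT idea underlying that code, the sketch below applies Boris-and-Book-style flux correction to 1D linear advection: a diffusive low-order step is corrected by a limited antidiffusive flux so that no new extrema are created. This is a minimal one-equation illustration, not the project's 2.5-dimensional MHD implementation:

```python
import numpy as np

def fct_advect_step(u, c, dt, dx):
    """One FCT step for 1D linear advection u_t + c u_x = 0
    (c > 0, periodic grid): donor-cell transport plus a limited
    antidiffusive correction in the style of Boris & Book."""
    nu = c * dt / dx                       # Courant number, must be <= 1
    up1 = np.roll(u, -1)                   # u[i+1]
    # Fluxes at the i+1/2 interfaces.
    f_lo = c * u                           # low-order donor-cell flux
    f_hi = 0.5 * c * (u + up1) - 0.5 * c * nu * (up1 - u)  # Lax-Wendroff
    # Transported-diffused (low-order) solution.
    utd = u - nu * (u - np.roll(u, 1))
    # Limit the antidiffusive flux so the correction creates no new extrema.
    a = f_hi - f_lo
    s = np.sign(a)
    d_up = (np.roll(utd, -2) - np.roll(utd, -1)) * dx / dt  # utd[i+2]-utd[i+1]
    d_dn = (utd - np.roll(utd, 1)) * dx / dt                # utd[i]-utd[i-1]
    a_c = s * np.maximum(0.0,
                         np.minimum.reduce([np.abs(a), s * d_up, s * d_dn]))
    return utd - (dt / dx) * (a_c - np.roll(a_c, 1))
```

Advecting a square wave with this step conserves the total and keeps the solution between its initial bounds, which is the defining property of flux-corrected transport.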
These codes have demonstrated linear scaling, with a peak performance of over 4 GFlops on the 256-node NRL CM-5. We have also developed a new particle-particle particle-mesh (PPPM) code combining the Monotonic Lagrangian Grid (MLG) and particle-in-cell (PIC) methods, to exploit the efficiency of the MLG on parallel machines and the long-range accuracy of PIC codes. This code scales linearly, has achieved 2.6 GFlops on the 256-node CM-5, and has been used to investigate heretofore unknown quasistatic equilibria in 3D.
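The PIC half of such a hybrid rests on depositing particle charge onto a mesh, from which long-range fields are computed. A minimal 1D sketch of first-order (cloud-in-cell) deposition is shown below; it illustrates the standard PIC building block only, not the project's PPPM/MLG implementation:

```python
import numpy as np

def deposit_cic(x, q, n_cells, dx):
    """Cloud-in-cell (first-order PIC) charge deposition on a periodic
    1D grid: each particle's charge is shared linearly between the two
    nearest grid points, giving a charge density rho."""
    rho = np.zeros(n_cells)
    s = x / dx                              # position in units of cells
    i = np.floor(s).astype(int) % n_cells   # left grid point
    w = s - np.floor(s)                     # fractional offset from it
    np.add.at(rho, i, q * (1.0 - w))        # unbuffered scatter-add so
    np.add.at(rho, (i + 1) % n_cells, q * w)  # coincident particles sum
    return rho / dx
```

The `np.add.at` calls matter: a plain fancy-indexed `+=` silently drops contributions when several particles land in the same cell.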
Significance: Modeling the solar corona involves phenomena on many different scales, from the very small resistive dissipation scales to the very large ideal-MHD scales, and therefore requires very high-resolution codes in three dimensions. Present-day computers do not have the speed or memory to resolve all the relevant scales. MPP architectures, with potential Teraflop speeds, will provide for the first time sufficient power to perform these simulations, and our codes are being prepared to take advantage of those speeds. In the meantime, the rapidly advancing computing power already available allows us to perform less ambitious but scientifically relevant studies.
Status/Plans: Most of our effort to date has centered on the TMC CM-5. We have begun work on the Cray T3D and IBM SP2, and are now focusing our efforts on these MIMD machines. We plan to continue code development while using the available codes to perform scientifically exciting simulations of solar activity and heliospheric dynamics.
Further Information: More detailed information about our work is available through the following links:
Progress Toward Metrics