|Title:||Design and analysis of two highly scalable sparse grid combination algorithms|
|Citation:||Journal of Computational Science, 2016; 17:547-561|
|Author:||Peter E. Strazdins; Md Mohsin Ali; Brendan Harding|
|Abstract:||Many large-scale scientific simulations involve the time evolution of systems modelled as Partial Differential Equations (PDEs). The sparse grid combination technique (SGCT) is a cost-effective method for solving time-evolving PDEs, especially for higher-dimensional problems. It consists of evolving the PDE over a set of grids of differing resolution in each dimension, and then combining the results to approximate the solution of the PDE on a grid of high resolution in all dimensions. It can also be extended to support algorithm-based fault tolerance, which is also important for computations at this scale. In this paper, we present two new parallel algorithms for the SGCT that support full distributed-memory parallelization over the dimensions of the component grids, as well as across the grids themselves. The direct algorithm is so called because it directly implements an SGCT combination formula. We give details of the design and implementation of a ‘partial’ sparse grid data structure, which is needed for its efficient implementation. The second algorithm converts each component grid into its hierarchical surpluses, and then applies the direct algorithm to each of the hierarchical surpluses. The conversion to/from the hierarchical surpluses is an important algorithm in its own right. It requires a technique called sub-gridding in order to correctly deal with the combination of very small surpluses. An analysis of both indicates that the direct algorithm minimizes the number of messages, whereas the hierarchical surplus algorithm minimizes memory consumption and offers a reduction in bandwidth by a factor of 1 − 2^−d, where d is the dimensionality of the SGCT. However, this is offset by its incomplete parallelism (70–80%) and a factor of 2^d load imbalance in practical scenarios. Our analysis also indicates that both are suitable in a bandwidth-limited regime and that the direct algorithm is scalable with respect to d.
Experimental results, including the strong and weak scalability of the algorithms, indicate that, for scenarios of practical interest, both are sufficiently scalable to support large-scale SGCT, but that the direct algorithm generally performs better, by at least a factor of 2 in most cases. Hierarchical surplus formation is much less communication-intensive, but shows less scalability with increasing core counts. Altering the layout of processes in the process grids and the mapping of processes affects the performance of the 2D SGCT by less than 10%, and has an even smaller effect on the application part of an SGCT advection application.|
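As background to the abstract, the classical SGCT combination formula referred to above takes a weighted sum of solutions on anisotropic component grids. A minimal sketch of how the component grids and their coefficients are enumerated (this illustrates the standard combination formula, not the paper's direct algorithm or its partial sparse grid data structure; the function name and interface are illustrative assumptions):

```python
from itertools import product
from math import comb

def combination_coefficients(n, d):
    """Enumerate the component grids of the classical d-dimensional SGCT
    at level n: all level vectors l with |l|_1 = n - q for q = 0..d-1,
    each weighted by (-1)^q * C(d-1, q).  Returns {level_vector: coeff}."""
    grids = {}
    for q in range(d):
        coeff = (-1) ** q * comb(d - 1, q)
        target = n - q
        # All level vectors l = (l_1, ..., l_d) with l_i >= 1 and sum(l) = target;
        # each corresponds to a grid with 2^{l_i} points in dimension i.
        for l in product(range(1, target + 1), repeat=d):
            if sum(l) == target:
                grids[l] = coeff
    return grids
```

For example, in 2D at level n = 4 this yields three grids with coefficient +1 (level sums equal to 4) and two with coefficient −1 (level sums equal to 3); the coefficients always sum to 1, so constant fields are reproduced exactly by the combination.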
|Keywords:||High performance computing; parallel computing; PDE solvers; sparse grid combination technique; algorithm-based fault tolerance|
|Rights:||© 2016 Elsevier B.V. All rights reserved.|
|Appears in Collections:||Mathematical Sciences publications|