
Porting workflows based on small and medium parallelism applications to the Italian Grid Infrastructure / Cesini, D.; Costantini, A.; Alfieri, R.; Giorgio, E.; Boccia, V.; De Pietri, R.; Gaido, L.; Venturini, A.; Ottani, S.; Buzzi, A.; Malguzzi, P.; Bencivenni, M.; Mastrangelo, D.; La Rocca, G. - In: POS PROCEEDINGS OF SCIENCE. - ISSN 1824-8039. - 179:(2013), pp. PoS(ISGC 2013)024-1-PoS(ISGC 2013)024-12. (Paper presented at the International Symposium on Grids and Clouds (ISGC) 2013, held in Taipei, Taiwan, March 17-22, 2013).

Porting workflows based on small and medium parallelism applications to the Italian Grid Infrastructure

ALFIERI, Roberto; DE PIETRI, Roberto
2013-01-01

Abstract

The Italian Grid Infrastructure (IGI) is one of the National Grid Initiatives (NGIs) composing the European Grid Infrastructure (EGI) and provides computational and storage resources to scientific communities from a variety of domains and disciplines. Starting in the early 2000s, the infrastructure was originally shaped around the needs of the Large Hadron Collider community, which were mainly addressed by the High Throughput Computing (HTC) paradigm, able to fully exploit geographically distributed resources by running sequential applications. Support for parallel codes was therefore initially limited. In the last decade, however, facilitated by European projects such as the EGEE series (Enabling Grids for E-sciencE) and EGI-InSPIRE, the usage of the infrastructure was significantly extended to other scientific communities, requiring new applications and new types of workflows to be ported to the Grid. As a consequence, requests to support non-embarrassingly-parallel applications increased. Moreover, recent hardware developments, in particular the widespread adoption of multicore processors, boosted the demand for High Performance Computing (HPC) support on the Grid. This support has now greatly improved, also taking advantage of new Grid middleware capabilities fostered by dedicated working groups within both EGI and the NGIs, including IGI. In this paper we present the results of recent collaborations between IGI (its Training and User Support unit in particular) and several user communities in exploiting IGI resources to run a set of case studies, provided by those communities, whose workflows require the execution of parallel applications.
Within these collaborations, IGI provided not only the necessary resources but also the know-how to modify the applications appropriately and make them suitable for effective and efficient use of the distributed computing environment. The work done and the solutions adopted are of broad interest across scientific domains, ranging from Computational Chemistry to Geophysics and Bioinformatics. Moreover, the parallel applications were selected to cover a wide range of technological challenges in terms of resource usage and software requirements. The paper shows to what extent some types of HPC are now feasible on the Italian Grid by comparing the speed-up factors obtainable with parallel runs. Since not all Grid sites support parallel jobs, for some use cases effort was spent combining HTC (scalar) and HPC (parallel) runs of the same application to maximize the exploitation of the infrastructure. Attention is also given to the issues encountered during the porting process, in particular those concerning resource discovery and the tuning of executables to the underlying hardware infrastructure. Possible further improvements from a user's point of view, such as better support for accelerators (many-core processors and GPUs) and interoperability with supercomputing facilities, as in the case of the Computational Chemistry domain, are also discussed.

Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Licence (http://pos.sissa.it). PoS(ISGC 2013)024
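For readers unfamiliar with how parallel jobs were described on the gLite/EMI middleware used by Grid infrastructures of this era, a minimal JDL fragment following the standard MPI-Start submission pattern might look like the sketch below. This is illustrative only and not taken from the paper: the wrapper script, application name, and core count are assumptions.

```
// Illustrative JDL sketch for an MPI-Start job (names and paths are hypothetical)
Type         = "Job";
JobType      = "Normal";
CpuNumber    = 16;                       // cores requested for the parallel run
Executable   = "mpi-start-wrapper.sh";   // hypothetical MPI-Start wrapper script
Arguments    = "my_app OPENMPI";         // application binary and MPI flavour
InputSandbox = {"mpi-start-wrapper.sh", "my_app"};
StdOutput    = "std.out";
StdError     = "std.err";
OutputSandbox = {"std.out", "std.err"};
// Match only sites publishing MPI-Start support in the GLUE information system;
// this is one way the resource-discovery issue mentioned above manifests
Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment);
```

A scalar (HTC) run of the same application would omit `CpuNumber` and the MPI-related requirement, which is what makes the combined HTC/HPC strategy described in the abstract possible from a single job-description template.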
2013
Files in this item:

File: 2013_POS_ISGC 2013_024.pdf (not available)
Type: Post-print document
License: Creative Commons
Size: 803.35 kB
Format: Adobe PDF
Access: restricted; a copy can be requested

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/2698884
Citations:
  • Scopus: 0