I’m Zuhair AlSader (زهير الصدر). My pronouns are he/him. I primarily work with storage and distributed systems, most recently at Davzero.io. I am open to work!
Cloud infrastructures are increasingly being adopted as a platform for high performance computing (HPC) science and engineering applications. For HPC applications, the Message-Passing Interface (MPI) is widely used. Among MPI operations, collective operations are the most I/O-intensive and performance-critical. However, classical MPI implementations are inefficient on cloud infrastructures because they are implemented at the application layer using network-oblivious communication patterns. These patterns do not differentiate between local and cross-rack communication, and hence do not exploit the inherent locality between processes collocated on the same node or the same rack of nodes. Consequently, they can suffer from high network overheads when communicating across racks. In this thesis, we present COOL, a simple and generic approach for MPI collective operations. COOL enables highly efficient designs for collective operations in the cloud. We then present a system design based on COOL that shows how to implement frequently used collective operations. Our design efficiently uses the intra-rack network while significantly reducing cross-rack communication, thus improving application performance and scalability. We use software-defined networking capabilities to build more efficient network paths for I/O-intensive collective operations. Our analytic evaluation shows that our design significantly reduces the network overhead across racks. Furthermore, when compared with OpenMPI and MPICH, our design reduces the latency of collective operations by a factor of log N, where N is the total number of processes, decreases the number of exchanged messages by a factor of N, and reduces the network load by up to an order of magnitude. These significant improvements come at the cost of a small increase in the computation load on a few processes.
@mastersthesis{AlSaderZuhair2020, author={AlSader, Zuhair}, title={Optimizing MPI Collective Operations for Cloud Deployments}, school={University of Waterloo}, year={2020}, publisher={UWSpace}, url={http://hdl.handle.net/10012/15581}}
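To make the locality idea concrete, here is a minimal sketch of a rack-aware allreduce written with plain MPI primitives. This is not COOL itself (COOL additionally builds network paths via SDN); it only illustrates the aggregate/exchange/redistribute pattern the abstract describes. The rack mapping (rack_of, RANKS_PER_RACK) is a hypothetical placeholder I introduce for illustration; a real deployment would query the topology.

#include <mpi.h>
#include <stdio.h>

#define RANKS_PER_RACK 8  /* assumed rack size, for illustration only */

/* Hypothetical rack mapping; a real deployment would query the
 * topology or an SDN controller instead. */
static int rack_of(int world_rank) { return world_rank / RANKS_PER_RACK; }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Per-rack communicator: all processes sharing a rack. */
    MPI_Comm rack_comm;
    MPI_Comm_split(MPI_COMM_WORLD, rack_of(rank), rank, &rack_comm);
    int rack_rank;
    MPI_Comm_rank(rack_comm, &rack_rank);

    /* Leader communicator: one representative (rack_rank 0) per rack;
     * everyone else passes MPI_UNDEFINED and gets MPI_COMM_NULL. */
    MPI_Comm leader_comm;
    MPI_Comm_split(MPI_COMM_WORLD, rack_rank == 0 ? 0 : MPI_UNDEFINED,
                   rank, &leader_comm);

    double local = (double)rank, rack_sum = 0.0, global_sum = 0.0;

    /* Step 1: aggregate inside the rack over cheap intra-rack links. */
    MPI_Reduce(&local, &rack_sum, 1, MPI_DOUBLE, MPI_SUM, 0, rack_comm);

    /* Step 2: only rack leaders talk across racks, so the expensive
     * links carry one message per rack instead of one per process. */
    if (leader_comm != MPI_COMM_NULL) {
        MPI_Allreduce(&rack_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                      leader_comm);
        MPI_Comm_free(&leader_comm);
    }

    /* Step 3: redistribute the result inside each rack. */
    MPI_Bcast(&global_sum, 1, MPI_DOUBLE, 0, rack_comm);

    printf("rank %d of %d: global sum = %.0f\n", rank, size, global_sum);
    MPI_Comm_free(&rack_comm);
    MPI_Finalize();
    return 0;
}

Built and run as, say, mpicc rack_allreduce.c -o rack_allreduce && mpirun -np 16 ./rack_allreduce (the filename is mine), only one message per rack crosses the rack boundary in step 2 instead of one per process, which is where the cross-rack savings in the abstracts come from.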
COOL: A Cloud-Optimized Structure for MPI Collective Operations
Mohammed Alfatafta, Zuhair AlSader, and Samer Al-Kiswany
In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), Jul 2018
We present COOL, a simple and generic structure for MPI collective operations. COOL enables highly efficient designs for all collective operations in the cloud. We then present a system design based on COOL that implements frequently used collective operations. Our design efficiently uses the intra-rack network while minimizing cross-rack communication, thus improving application performance and scalability. We use recent software-defined networking capabilities to build optimal network paths for I/O-intensive collective operations. Our analytical evaluation shows that our design imposes the least possible network overhead across racks. Furthermore, when compared with OpenMPI and MPICH, our design reduces the number of steps to only three, decreases the number of exchanged messages by a factor of N (the total number of processes), and reduces the network load by up to an order of magnitude. These significant improvements come at the cost of a modest increase in the computation load on a few processes.
@inproceedings{8457871, author={Alfatafta, Mohammed and AlSader, Zuhair and Al-Kiswany, Samer}, booktitle={2018 IEEE 11th International Conference on Cloud Computing (CLOUD)}, title={COOL: A Cloud-Optimized Structure for MPI Collective Operations}, year={2018}, pages={746-753}, doi={10.1109/CLOUD.2018.00102}, issn={2159-6190}, month=jul}
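For readers wondering where the claimed factors come from, here is a back-of-envelope reading under standard cost-model assumptions of my own (per-step latency α, N processes spread over R racks with R much smaller than N); this is my interpretation of the abstracts, not the paper's exact analysis.

% My reading of the claimed factors, not the paper's cost model.
% Latency: three constant steps vs. a log2(N)-step tree:
\[
  \frac{T_{\text{classical}}}{T_{\text{COOL}}}
  \approx \frac{\alpha \,\lceil \log_2 N \rceil}{3\,\alpha}
  = \Theta(\log N)
\]
% Messages for an allgather-style operation over N processes in R racks:
\[
  M_{\text{COOL}}
  = \underbrace{N}_{\text{intra-rack aggregate}}
  + \underbrace{R(R-1)}_{\text{leader exchange}}
  + \underbrace{N}_{\text{redistribute}}
  = \Theta(N),
  \qquad
  M_{\text{classical}} = \Theta(N^{2})
\]

Three constant steps against a tree of depth log2(N) gives the factor-of-log N latency claim, and Θ(N) against Θ(N²) messages gives the factor-of-N message claim; the papers themselves give the precise model.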