Optimizing computational costs of Spark for SARS-CoV-2 sequence comparisons on a commercial cloud
Abstract
Cloud computing is currently one of the prime choices in the computing infrastructure landscape. In addition to advantages such as the pay-per-use billing model and resource elasticity, it offers technical benefits regarding heterogeneity and large-scale configuration. Alongside classical performance concerns such as time, space, and energy, there is growing interest in monetary cost, which often stems from budget constraints. Given scalability considerations and the pricing model of traditional public clouds, a reasonable output of an optimization strategy is the most suitable configuration of virtual machines for running a specific workload. Targeting both runtime and monetary cost optimization, we adapt an execution cost model for Hadoop applications from the literature to Spark applications modeled with the MapReduce paradigm. We evaluate our optimizer model by executing an improved version of the Diff Sequences Spark application, which performs SARS-CoV-2 coronavirus pairwise sequence comparisons on AWS EC2 virtual machine instances. In our experiments, the configurations selected by the model outperformed 80% of the random resource selection scenarios. By employing only spot worker nodes, which are subject to revocation, rather than on-demand workers, we reduced the average monetary cost by 35.66% with a slight runtime increase of 3.36%.