Big Data is transforming many industrial domains by providing decision support through the analysis of large data volumes. Big Data testing aims to ensure that Big Data systems run …
S Hsaini, S Azzouzi, MEH Charaf - International Journal of Parallel …, 2021 - Taylor & Francis
Over the last few years, there has been a rising trend towards the field of distributed testing, where the implementation under test (IUT) has physically distributed ports. However, running …
New processing models are being adopted in Big Data engineering to overcome the limitations of traditional technology. Among them, MapReduce stands out by allowing for the …
J Morán, C Riva, J Tuya - Proceedings of the 6th International Workshop …, 2015 - dl.acm.org
MapReduce is a parallel data-processing paradigm designed to process large volumes of information in data-intensive applications, such as Big Data environments. A characteristic of …
Big Data programs are those that process large data exceeding the capabilities of traditional technologies. Among newly proposed processing models, MapReduce stands out as it …
Among the current technologies for analysing large data volumes, the MapReduce processing model stands out in Big Data. MapReduce is implemented in frameworks such as Hadoop, Spark …
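The MapReduce model described in these excerpts can be sketched in plain Python. This is a single-process illustration of the map, shuffle, and reduce phases only, not the distributed Hadoop or Spark implementation; all function names here are ours:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user-supplied mapper to each record, emitting (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user-supplied reducer to each key and its grouped values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word-count job expressed as a mapper/reducer pair.
def word_count_mapper(line):
    for word in line.split():
        yield word, 1

def word_count_reducer(word, counts):
    return sum(counts)

lines = ["big data big systems", "data systems"]
result = reduce_phase(shuffle(map_phase(lines, word_count_mapper)),
                      word_count_reducer)
# result == {"big": 2, "data": 2, "systems": 2}
```

In a real framework the shuffle step also partitions keys across machines, so mappers and reducers run in parallel on disjoint slices of the data.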
JB de Souza Neto, A Martins Moreira… - Software Testing …, 2022 - Wiley Online Library
This paper proposes TRANSMUT-Spark for automating mutation testing of Big Data processing code within Spark programs. Apache Spark is an engine for Big Data analytics/processing …
Programs that process large volumes of data generally run on distributed and parallel architectures, such as programs implemented with the MapReduce processing model. In …
We propose TRANSMUT-Spark, a tool that automates the mutation testing process of Big Data processing code within Spark programs. Apache Spark is an engine for Big Data …
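The mutation-testing process these excerpts refer to can be illustrated with a minimal sketch: small faulty variants (mutants) of a data transformation are generated, and the test suite is judged by how many mutants it "kills". This is not the actual TRANSMUT-Spark tool or its operator set; the program, mutants, and tests below are hypothetical:

```python
def original(values):
    # Program under test: keep positive values and double them.
    return [v * 2 for v in values if v > 0]

# Hand-written mutants, each applying one small change that a mutation
# tool would generate automatically (operator or predicate swaps).
mutants = {
    "relational_op": lambda values: [v * 2 for v in values if v >= 0],
    "arith_op":      lambda values: [v + 2 for v in values if v > 0],
    "drop_filter":   lambda values: [v * 2 for v in values],
}

def survives(program):
    """A mutant survives if every test assertion still passes on it."""
    try:
        assert program([1, -1, 3]) == [2, 6]
        assert program([0]) == []
        return True
    except AssertionError:
        return False  # the suite killed this mutant

killed = [name for name, mutant in mutants.items() if not survives(mutant)]
mutation_score = len(killed) / len(mutants)
# All three mutants are killed here, so mutation_score == 1.0
```

A surviving mutant points at behaviour the tests never check, which is exactly the feedback mutation testing gives a test-suite author.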