Accelerate Big Data Computing With Apache Spark


Part 1

Author: Sara Stanford


A big boost to Big Data: Apache Spark is a powerful, open-source engine for large-scale data processing. Originally developed in 2009 and open sourced in 2010, Spark has won over numerous enterprises with its salient features, and its community has become one of the largest contributing to Big Data. According to IBM, more than 90% of the world's data was generated in the last two years. With data mounting and the need for Big Data activities growing, Spark supports rapid application development and enables code reuse across different kinds of applications, be they batch, interactive, or streaming. Its advanced DAG (directed acyclic graph) execution engine also speeds up application performance. Spark has been used by a wide variety of organizations to process large data sets, and 400+ developers have contributed to its development.

If you already know Java or Python, Spark is easy to learn. It offers a well-designed, expressive development API that lets developers iterate over data using data science methodologies that require fast in-memory processing. Running on YARN, Spark can also operate alongside other data workloads in parallel, with all of them sharing the same data sets.


What does Apache Spark offer?

• Faster execution: Spark runs applications up to 100x faster than Hadoop MapReduce in memory and 10x faster on disk. Owing to its advanced DAG execution engine, it also supports cyclic data flow and in-memory computing.

• Simplicity and generality: Apache Spark lets you write applications with ease in Java, Scala, or Python. Over 80 high-level operators are available, and data can be queried interactively from the shell. Spark combines SQL, streaming, and complex analytics, with high-level libraries such as Spark SQL, MLlib (machine learning), GraphX, and Spark Streaming that can be seamlessly combined in the same application.

• Powerful analytics: As noted above, there is support for SQL queries, streaming, and complex analytics. Combining them yields a single workflow that delivers sophisticated analytics.

• Real-time processing: Spark manages real-time streams and can manipulate live data using Spark Streaming, bringing stream processing to Hadoop and the other frameworks alongside it.

With the advent of Spark, Hadoop has been somewhat overshadowed: Spark offers many facilities beyond Hadoop while also helping Hadoop function better. Watch this space for a comparison of the two Apache products, Spark and Hadoop.

