Apache Spark Bitesize: What is an RDD?
As the title suggests, this is meant to be a quick post clarifying the core abstraction in Apache Spark: the RDD, short for Resilient Distributed Dataset. The RDD is the fundamental data structure in Spark: an immutable, distributed collection of elements. In simple terms, it is the way Spark represents a set of data spread across multiple machines. As per the formal definition from the original paper:
RDDs are fault-tolerant, parallel data structures that let users explicitly persist intermediate results in memory, control their partitioning to optimize data placement, and manipulate them using a rich set of operators.
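To make that definition a little more concrete, here is a minimal sketch (assuming a SparkContext named sc is already available, as in the examples below) of persisting an intermediate result in memory and controlling the number of partitions:
[code language="scala"]
val numbers = sc.parallelize(1 to 100, 4) // explicitly request 4 partitions
val squares = numbers.map(n => n * n)     // RDDs are immutable: this builds a new RDD
squares.persist()                         // keep the intermediate result in memory for reuse
println(squares.getNumPartitions)         // prints 4
[/code]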
It is possible to create RDDs in two different ways: 1) by calling the parallelize method of the SparkContext class (JavaSparkContext in the Java API) in the driver program; 2) by referencing a dataset that resides on an external storage system.
Here is an example of how to create a parallelized collection via the Scala API:
[code language="scala"]
val myCollection = List(1, 3, 6, 8, 9)                      // a local collection in the driver program
val myDistributedCollection = sc.parallelize(myCollection)  // distributed across the cluster as an RDD
[/code]
This, instead, is an example of how to reference external datasets (Scala API):
[code language="scala"]
val distFile = sc.textFile("myFile.csv") // each line of the file becomes an element of the RDD
[/code]
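textFile also accepts directories, compressed files and wildcards, plus an optional minimum number of partitions. A quick sketch (the path here is hypothetical):
[code language="scala"]
// read every CSV file in a directory, asking Spark for at least 8 partitions
val manyFiles = sc.textFile("data/*.csv", 8)
[/code]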
In the next Apache Spark Bitesize, I will be covering RDD operations: transformations and actions.
Further resources:
- M. Zaharia et al., Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing, NSDI 2012. Available at: https://people.eecs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf
- Apache Spark – Quick Start: http://spark.apache.org/docs/latest/quick-start.html
- SparkHub: https://sparkhub.databricks.com/