A list of resources for getting started with Scala. Become a (better) Scala developer with this list of resources, tutorials, and best practices.
This set of resources is fundamental for anyone starting to develop for WordPress who has decided to use the hackable Atom.io IDE. Here are some of the most appealing WordPress Atom packages, which ease development with support for functions, filters, and action hooks.
Looking for the best Eclipse shortcuts? Here are the top 10, for all the Eclipse aficionados who want to use their favourite IDE at its best.
As the title suggests, this is meant to be a quick post clarifying the core abstraction in Apache Spark: the RDD, or Resilient Distributed Dataset. The RDD is the fundamental data structure in Spark: an immutable, distributed collection of elements. In simple terms, it is essentially the way Spark represents a set of data spread across multiple machines. As per the formal definition:
RDDs are fault-tolerant, parallel data structures that let users explicitly persist intermediate results in memory, control their partitioning to optimize data placement, and manipulate them using a rich set of operators.
It is possible to create RDDs in two different ways: 1) by calling the parallelize method of the SparkContext class (JavaSparkContext in the Java API) in the driver program; 2) by referencing a dataset that resides on an external storage system.
Here is an example of how to create a parallelized collection via the Scala API:
val myCollection = List(1, 3, 6, 8, 9)
val myDistributedCollection = sc.parallelize(myCollection)
This, instead, is an example of how to reference external datasets (Scala API):
val distFile = sc.textFile("myFile.csv")
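To build intuition for what parallelize does, here is a rough local analogy in plain Scala, with no Spark dependency. The names (numSlices, partitions) and the partitioning scheme are illustrative assumptions, not Spark's actual implementation; the point is simply that the data is split into chunks and the original collection is never mutated:

```scala
// Toy, non-distributed sketch of the RDD idea: the collection is
// split into partitions, and the source data itself stays immutable.
val myCollection = List(1, 3, 6, 8, 9)

// Split the list into roughly equal chunks, loosely mimicking what
// sc.parallelize(myCollection, numSlices) does across worker nodes.
val numSlices = 2
val chunkSize = (myCollection.size + numSlices - 1) / numSlices
val partitions: List[List[Int]] = myCollection.grouped(chunkSize).toList

println(partitions)   // List(List(1, 3, 6), List(8, 9))
println(myCollection) // unchanged: List(1, 3, 6, 8, 9)
```

In real Spark, the number of partitions controls how many tasks can run in parallel, which is why parallelize accepts it as a tuning knob.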
In the next Apache Spark Bitesize, I will be covering RDD operations: transformations and actions.
- M. Zaharia et al. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing, 2012. Available at: https://people.eecs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf
- Apache Spark – Quick Start: http://spark.apache.org/docs/latest/quick-start.html
- SparkHub: https://sparkhub.databricks.com/
The technologist speaks about an ambitious plan to build a powerful artificial intelligence.
Being a long-term Semantic Web aficionado, I have used Apache Jena with Eclipse several times for manipulating and parsing RDF models. I originally installed it by downloading the Windows-compatible distro from http://jena.apache.org/download/index.cgi and importing the libs into my Eclipse project. I recently found an Eclipse Library Plugin for Jena 2.0 and was rather excited at the prospect of working with Jena without fetching the latest Jena distro and adding it to my Build Path, but the excitement did not last long. The plugin is, unfortunately, built for Jena 2.0 and does not seem to be compatible with the most recent Eclipse versions (e.g. Mars 4.5). Oh well, back to the known route, then! :)
(Related) Interesting resources: