In the last part of this tutorial, we looked at filtering, joining and sorting data. Today, we will look at more Spark transformations on RDDs. Our focus topics are: distinct, groupByKey and union. Let's start with the "distinct" transformation.
Distinct
Distinct returns each item exactly once, removing any duplicate entries from the dataset. A sample would be the following array, which contains the value 1 twice:
[1, 2, 3, 4, 1]
We only want to return each number exactly once, which is exactly what the distinct transformation does. The following example illustrates this:
ds_distinct = sc.parallelize([1, 2, 3, 4, 1]).distinct().collect()
ds_distinct
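The result should contain each value only once, for example [1, 2, 3, 4] (the order may vary). Distinct also works on key-value pairs; here is a small sketch, assuming the same SparkContext "sc" as above and using made-up sample data:

# Sketch: on key-value pairs, distinct removes only entries
# where the whole tuple is identical.
ds_pairs = sc.parallelize([("Mark", 1984), ("Mark", 1984), ("Mark", 2015)])
ds_pairs.distinct().collect()
# e.g. [('Mark', 1984), ('Mark', 2015)] - order may vary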
GroupBy
A very important task when working with data is grouping. In Spark, the transformation for this on key-value RDDs is "groupByKey". It groups all values that share the same key, and you then specify how to handle the grouped values by calling the "mapValues" function. You can pass any function to it, for example the Python built-ins "len" to count the number of occurrences per key or "list" to return the actual values. The following sample illustrates this with the dataset introduced in the previous tutorial:
ds_set = sc.parallelize([("Mark", 1984), ("Lisa", 1985), ("Mark", 2015)])
ds_grp = ds_set.groupByKey().mapValues(list).collect()
ds_grp
If you want to have a count instead, simply use “len” for it:
ds_set.groupByKey().mapValues(len).collect()
The output should look like this now (the order of the keys may vary):
[('Mark', 2), ('Lisa', 1)]
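Besides "len" and "list", you can pass any function that works on an iterable of values to "mapValues". As a small sketch, assuming the same ds_set as above, the following keeps only the earliest year per person:

# Sketch: use a custom aggregation instead of counting or listing the values.
ds_set.groupByKey().mapValues(min).collect()
# e.g. [('Mark', 1984), ('Lisa', 1985)] - order may vary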
Union
A union combines two datasets into one. In contrast to the "join" transformation that we looked at in the last tutorial, it doesn't match on keys; it simply appends one dataset to the other, keeping any duplicates. The syntax is straightforward: "dsone.union(dstwo)". Let's have a look at it:
ds_one = sc.parallelize([("Mark", 1984), ("Lisa", 1985)])
ds_two = sc.parallelize([("Luke", 2015), ("Anastasia", 2017)])
sorted(ds_one.union(ds_two).collect())
Now, the output of this should look like the following:
[('Anastasia', 2017), ('Lisa', 1985), ('Luke', 2015), ('Mark', 1984)]
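Because union keeps duplicates, you can chain it with the distinct transformation from above if you want each entry only once. A small sketch, using the hypothetical RDDs ds_a and ds_b (which share one entry) and the same SparkContext "sc":

# Sketch: union appends everything, distinct removes the duplicate entries.
ds_a = sc.parallelize([("Mark", 1984), ("Lisa", 1985)])
ds_b = sc.parallelize([("Mark", 1984), ("Luke", 2015)])
sorted(ds_a.union(ds_b).collect())
# [('Lisa', 1985), ('Luke', 2015), ('Mark', 1984), ('Mark', 1984)]
sorted(ds_a.union(ds_b).distinct().collect())
# [('Lisa', 1985), ('Luke', 2015), ('Mark', 1984)]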
Today, we learned a lot about Spark data transformations. In our next tutorial – part 3 – we will have a look at even more of them.
There is of course much more to learn about Spark, so make sure to read the entire Apache Spark Tutorial. I regularly update this tutorial with new content. I have also created several other tutorials, such as the Machine Learning Tutorial and the Python for Spark Tutorial. The official Apache Spark page is another great resource to deepen your knowledge.