In the previous tutorial, we learned about data cleaning in Spark. Today, we will look at different options to work with columns and rows in Spark. First, we will start with renaming columns. We have done this several times already, and it is a frequent task in data engineering. In the following sample, we rename a column that was produced by an expression:
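The samples below operate on the “clean” DataFrame (and the “dirtyset” DataFrame) from the previous tutorial. If you want to follow along without that data, here is a minimal sketch that recreates a similar DataFrame; the schema matches the outputs shown below, but Sue’s age and language are made-up placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("columns-and-rows").getOrCreate()

# Minimal stand-in for the "clean" DataFrame; Sue's age and language
# are placeholders, the other values match the outputs shown below.
clean = spark.createDataFrame(
    [(1, "Mario", 35, "DE-at"),
     (2, "Max", 46, "DE-de"),
     (3, "Sue", 62, "SE-se"),
     (4, "Tom", 34, "DE-ch")],
    ["nid", "name", "age", "lang"])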
thirties = clean.select(clean.name, clean.age.between(30, 39)).withColumnRenamed("((age >= 30) AND (age <= 39))", "goodage")
thirties.show()
As you can see, we took the old, auto-generated name, which was rather unwieldy, and renamed it to “goodage”. The output should be the following:
+-----+-------+
| name|goodage|
+-----+-------+
|  Max|  false|
|  Tom|   true|
|  Sue|  false|
|Mario|   true|
+-----+-------+
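Relying on the auto-generated column name is somewhat brittle, since it may change between Spark versions. As a side note (this goes beyond the original sample), the same result can be achieved by aliasing the expression directly when selecting it:

# Same result without referencing the generated column name:
thirties = clean.select(clean.name, clean.age.between(30, 39).alias("goodage"))
thirties.show()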
In the next sample, we want to filter rows on a string expression. This can be done with the “endswith” method, applied to the column that should be filtered. In the following sample, we want to filter all contacts that are from Austria (language code ending in “at”):
austrian = clean.filter(clean.lang.endswith("at"))
austrian.show()
As you can see, only one result is returned (as expected):
+---+-----+---+-----+
|nid| name|age| lang|
+---+-----+---+-----+
|  1|Mario| 35|DE-at|
+---+-----+---+-----+
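Note that “endswith” is case-sensitive. If the suffix may appear in different cases, one option is to lower-case the column before comparing; a small sketch (this goes beyond the original sample):

from pyspark.sql.functions import lower

# Case-insensitive variant: lower-case the column before comparing.
austrian = clean.filter(lower(clean.lang).endswith("at"))
austrian.show()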
Removing Null-Values in Spark
In our next sample, we want to filter all rows that contain null values in a specific column. This is useful to get a glimpse of the null values in a dataset. It can easily be done by applying the “isNull” method to a column:
nullvalues = dirtyset.filter(dirtyset.age.isNull())
nullvalues.show()
Here, we get the two results containing these null values:
+---+----+----+-----+
|nid|name| age| lang|
+---+----+----+-----+
|  4| Tom|null|AT-ch|
|  5| Tom|null|AT-ch|
+---+----+----+-----+
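The sample above only shows the rows containing nulls. If you actually want to remove them, as the heading suggests, DataFrames offer “na.drop” (also available as “dropna”); a short sketch, restricted to the “age” column:

# Drop all rows where "age" is null; without the subset argument,
# rows with a null in any column would be removed.
withoutnulls = dirtyset.na.drop(subset=["age"])
withoutnulls.show()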
Another useful function in Spark is “like”. If you are familiar with SQL, it should be easy to apply; if not: it checks whether the text in a column matches a pattern made up of literals and wildcards. The “%” wildcard matches any sequence of characters, so the following sample filters all people whose language value starts with “DE”, independent of what follows afterwards:
langde = clean.filter(clean.lang.like("DE%"))
langde.show()
Here, we get all rows whose language value starts with “DE”:
+---+-----+---+-----+
|nid| name|age| lang|
+---+-----+---+-----+
|  2|  Max| 46|DE-de|
|  4|  Tom| 34|DE-ch|
|  1|Mario| 35|DE-at|
+---+-----+---+-----+
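Besides “%”, the pattern may also contain “_”, which matches exactly one character. A small sketch (not part of the original sample) that matches language codes of the form “DE-xx”:

# "_" matches a single character, so "DE-__" matches DE-at, DE-de and DE-ch.
langde2 = clean.filter(clean.lang.like("DE-__"))
langde2.show()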
Shorten Strings in a Column in Spark
Often, we want to shorten string values. The following sample takes the first two letters with the “substr” method on the column. Afterwards, we apply the “alias” function, which renames the resulting column (similar to the “withColumnRenamed” function above).
shortnames = clean.select(clean.name.substr(0,2).alias("sn")).collect()
shortnames
Here too, we get the expected output; please note that the values aren’t unique anymore (two names share the same first letters):
[Row(sn='Ma'), Row(sn='To'), Row(sn='Su'), Row(sn='Ma')]
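As a side note, substr positions are 1-based, so substr(1, 2) is the more idiomatic way to take the first two characters. Column expressions like this can also be combined with other functions; a small sketch (the variable names are my own) that upper-cases the shortened values:

from pyspark.sql.functions import upper

# Take the first two letters (1-based positions) and upper-case them.
shortupper = clean.select(upper(clean.name.substr(1, 2)).alias("sn"))
shortupper.show()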
Spark offers much more functionality to manipulate columns, so just play with the API :). In the next tutorial, we will have a look at how to build cubes and rollups in Spark.
There is of course much more to learn about Spark, so make sure to read the entire Apache Spark Tutorial. I regularly update this tutorial with new content. Also, I created several other tutorials, such as the Machine Learning Tutorial and the Python for Spark Tutorial. The official Apache Spark page can deepen your knowledge further. Your learning journey can still continue.