One of the reasons why Python is so popular for data science is that it has a very rich set of functionality for mathematics and statistics. In this tutorial, I will show the very basic functions; don't expect too much, since they really are basic. For real data science work, you will rather want to learn scikit-learn, PyTorch or Spark ML. Today's tutorial, however, focuses on the building blocks before we move on to the more complex tutorials.

Basic Mathematics in Python from the math Library

The math library in Python provides most of the relevant functionality you might want when working with numbers. The following samples give an overview:

import math
vone = 1.2367
print(math.ceil(vone))

First, we import "math" from the standard library and then we create a value. The first function we use is ceil(), which rounds a number up to the next integer. In the following sample, we calculate the greatest common divisor of two numbers.

math.gcd(44,77)

Other functions include logarithms, powers, cosine and many more. Some of them are shown in the following sample:

math.log(5)
math.pow(2,3)
math.cos(4)
math.pi

Basic statistics in Python from the statistics library

The standard library also offers some elementary statistical functions. We will first import the library and then calculate the mean of five values:

from statistics import *
values = [1,2,3,4,5]
mean(values)

Some other possible functions are:

median(values)
stdev(values)
variance(values)
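
For the values list above, these functions return the following results (a quick check, assuming the same values list as before; note that statistics uses the sample variance):

print(median(values))    # 3
print(stdev(values))     # 1.5811388300841898
print(variance(values))  # 2.5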

Have a look at those two libraries – there is quite a lot to explore.

What’s next?

Now the tutorial series for Python is over. You should be ready to move on to PySpark. If you are not yet familiar with Spark, have a look at the Spark tutorial I created here. I will also create more tutorials on Python and Machine Learning in the future, so make sure to check back often on the Big Data & Data Science tutorial overview. I hope you liked this tutorial. If you have any suggestions on what to improve, please feel free to get in touch with me! If you want to learn more about Python, I also recommend the official page.

Python has a really great standard library. In the next two tutorial sessions, we will take a first look at it, focusing mainly on what is relevant for Spark developers in the long run. Today we will look at functools and itertools in Python; the next tutorial will deal with some mathematical functions. But first, let's start with "reduce".

The reduce() function from functools in Python

Basically, the reduce function takes an iterable and applies a function to its elements cumulatively. In most cases this will be a lambda function, but it could also be a normal function. In our sample, we take some values and build their sum, moving from left to right:

from functools import reduce
values = [1,4,5,3,2]
reduce(lambda x,y: x+y, values)

And we get the expected output:

15
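
The same result can be achieved without a lambda by passing an existing function such as operator.add; a small variation on the sample above (reduce also accepts an optional start value as its third argument):

from functools import reduce
from operator import add
values = [1,4,5,3,2]
print(reduce(add, values))       # 15
print(reduce(add, values, 100))  # 115, starting from 100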

The sorted() function

Another very useful function is "sorted". Basically, it sorts values or tuples in a list. The easiest way to apply it is on our previous values (which were unsorted!):

print(sorted(values))

The output is now in the expected sorting:

[1, 2, 3, 4, 5]

However, sorted() can also handle complex objects. It takes a key to sort on, which is passed as a lambda expression; here we state that we want to sort by age. Make sure that you still have the "Person" class from our previous tutorial:

perli = [Person("Mario", "Meir-Huber", 35, 1.0), Person("Helena", "Meir-Huber", 5, 1.0)]
print(perli)
print(sorted(perli, key=lambda p: p.age))

As you can see, our values are now sorted based on the age member.

[Person(firstname='Mario', lastname='Meir-Huber', age=35, score=1.0), Person(firstname='Helena', lastname='Meir-Huber', age=5, score=1.0)]
[Person(firstname='Helena', lastname='Meir-Huber', age=5, score=1.0), Person(firstname='Mario', lastname='Meir-Huber', age=35, score=1.0)]
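
sorted() also accepts a reverse flag if you want descending order; a one-line variation of the call above:

print(sorted(perli, key=lambda p: p.age, reverse=True))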

The chain() function

The chain() function is very helpful if you want to concatenate two lists that contain the same kind of objects. Basically, we take the Person class again and create a new instance, then chain the two lists together:

import itertools
perstwo = [Person("Some", "Other", 46, 1.0)]
persons = itertools.chain(perli, perstwo)
for pers in persons:
    print(pers.firstname)

Also here, we get the expected output:

Mario
Helena
Some

The groupby() function

Another great feature when working with data is grouping. Python allows us to do this as well. The groupby() function takes two parameters: the iterable to group and the key as a lambda expression. We create a new list of tuple pairs and group by the family name:

from itertools import groupby
pl = [("Meir-Huber", "Mario"), ("Meir-Huber", "Helena"), ("Some", "Other")]
for k,v in groupby(pl, lambda p: p[0]):
    print("Family {}".format(k))
    for p in v:
        print("\tFamily member: {}".format(p[1]))

Basically, groupby() yields the key (as the value type) and an iterator over the objects in that group. This means that another iteration is necessary in order to access the elements of the group. The output of the above sample looks like this:

Family Meir-Huber
	Family member: Mario
	Family member: Helena
Family Some
	Family member: Other
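
One thing to be aware of: groupby() only groups consecutive elements with the same key, so the input usually needs to be sorted by that key first. A minimal sketch, assuming a shuffled version of the list above:

from itertools import groupby
pl = [("Meir-Huber", "Mario"), ("Some", "Other"), ("Meir-Huber", "Helena")]
# sort by the same key first, otherwise "Meir-Huber" would show up as two groups
for k, v in groupby(sorted(pl, key=lambda p: p[0]), lambda p: p[0]):
    print(k, [p[1] for p in v])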

The repeat() function

A nice function is repeat(). Basically, it repeats an element several times. For instance, if we want to repeat our person list four times, this can be done like this:

lst = itertools.repeat(perstwo, 4)
for p in lst:
    print(p)

And also the output is just as expected:

[Person(firstname='Some', lastname='Other', age=46, score=1.0)]
[Person(firstname='Some', lastname='Other', age=46, score=1.0)]
[Person(firstname='Some', lastname='Other', age=46, score=1.0)]
[Person(firstname='Some', lastname='Other', age=46, score=1.0)]
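
If you leave out the count, repeat() yields the element forever, which is handy in combination with zip(), since zip() stops at the shortest iterable. A tiny sketch of that pattern:

import itertools
for name, num in zip(["Mario", "Helena"], itertools.repeat(1)):
    print(name, num)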

The takewhile() and the dropwhile() function in IterTools in Python

Two functions – takewhile and dropwhile – are also very helpful in Python. Basically, they are very similar, but their results are the opposite of each other. takewhile yields elements as long as a condition is true; dropwhile discards elements as long as the condition is true and yields everything from the first failing element onwards. With the predicate "lower than 20", takewhile only returns elements as long as they are below 20, while dropwhile removes the elements below 20 and returns the rest. The following sample shows this:

vals = range(1,40)
for v in itertools.takewhile(lambda vl: vl<20, vals):
    print(v)
print("######")
for v in itertools.dropwhile(lambda vl: vl<20, vals):
    print(v)

And also here, the output is as expected:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
######
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39

As you can see, these are quite helpful functions. In our last Python tutorial, we will have a look at some basic mathematical and statistical functions.


One thing everyone who deals with data has to do is write classes that make the data accessible to the code as objects. In all cases, and Python is no different here, wrapper classes and O/R mappers have to be written. However, Python has a powerful decorator at hand that allows us to ease up our work. This decorator is called "dataclass".

The dataclass in Python

The nice thing about the dataclass decorator is that it enables us to add a great set of functionality to an object containing data without having to rewrite it every time. Basically, this decorator adds the following functionality:

  • __init__: the constructor with all defined member variables. In order to use this, the member variables must be annotated with their types, which is otherwise rather uncommon in Python
  • __repr__: pretty-prints the class with all its member variables as a string
  • __eq__: a function to compare two instances for equality, field by field
  • order functions: with order=True, this creates several comparison functions such as __lt__ (less than), __gt__ (greater than), __le__ (less than or equal) and __ge__ (greater than or equal)
  • __hash__: adds a hash function to the class (e.g. when frozen=True or unsafe_hash=True is set)
  • frozen: with frozen=True, instances become immutable, so member variables cannot be reassigned after creation

The definition for a dataclass in Python is easy:

@dataclass
class Classname():
    CLASS-BLOCK

You can also switch each of the behaviours described above on or off individually by passing parameters to the decorator, e.g. frozen=True or order=True.
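
As a small illustration (a sketch, not part of the original sample): order=True generates the comparison methods and frozen=True makes instances immutable:

from dataclasses import dataclass
@dataclass(order=True, frozen=True)
class Point:
    x: int
    y: int
p1 = Point(1, 2)
p2 = Point(3, 4)
print(p1 < p2)   # True, the fields are compared like a tuple
# p1.x = 10 would now raise a FrozenInstanceError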

In the following sample, we will create a Person-Dataclass.

from dataclasses import dataclass
@dataclass
class Person:
    firstname: str
    lastname: str
    age: int
    score: float
p = Person("Mario", "Meir-Huber", 35, 1.0)
print(p)

Please note how the member variables are annotated with their types. There is no need to write a constructor anymore, since this is already done for you. When you print the instance, the __repr__() function is called. The output should look like the following:

Person(firstname='Mario', lastname='Meir-Huber', age=35, score=1.0)

As you can see, the dataclass decorator takes a lot of boilerplate off our hands. In the next tutorial we will have a look at itertools and functools.


Once you put your applications into production, you won't be able to debug them any more. This creates some issues, since you won't know what is going on in the background. Imagine a user does something and an error occurs; maybe you don't even know that this behaviour can lead to an error. To overcome this obstacle, we have a powerful tool available in almost any programming environment: logging in Python.

How to do logging in Python

Basically, the logger is imported from the "logging" module and used as a singleton. This means that you don't need to create any classes or alike. First, you need to configure the logger with some information, such as the path to store the logs in and the format to be used. In our sample, we will use these parameters:

  • filename: The name of the file to write to
  • filemode: how the file should be created or appended
  • format: how the log lines should be written into the file (using format placeholders such as %(asctime)s)

Then, you can log at different levels. This is done by simply typing "logging" and calling the respective action:

logging.<<ACTION>>

Basically, we use these actions:

  • debug: a debug message that something was executed, …
  • info: some information that a new routine or alike is started
  • warning: something didn’t work as expected, but no error occurred
  • error: a severe error occurred that lead to wrong behaviour of the program
  • exception: an exception occurred. It is logged as “error” but in addition it includes the error message

Now, let’s start with the logging configuration:

import logging
logging.basicConfig(filename="../data/logs/log.log", filemode="w", format="%(asctime)s - %(levelname)s - %(message)s")

We store the log itself in a directory that first needs to exist. Then, we provide a format with the time, the name of the level (e.g. INFO) and the message itself. Now, we can write the log itself:

logging.debug("Application started")
logging.warning("The user did an unexpected click")
logging.info("Ok, all is fine (still!)")
logging.error("Now it has crashed ... ")
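
Note that basicConfig uses the WARNING level by default, so the debug and info calls above will not end up in the file. If you want them as well, pass a level when configuring the logger; a small variation of the configuration above (the path is the same assumption as before):

import logging
logging.basicConfig(filename="../data/logs/log.log", filemode="w",
                    format="%(asctime)s - %(levelname)s - %(message)s",
                    level=logging.DEBUG)  # write everything from DEBUG upwards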

The calls above write some log information into the file. Now, let's see how this works with exceptions. Basically, we "provoke" an exception and log it with "exception". We also set the parameter "exc_info" to True, which includes the exception information without passing it on explicitly (Python handles that for us :))

Logging exceptions in Python

try:
    4/0
except ZeroDivisionError as ze:
    logging.exception("oh no!", exc_info=True)

Now, we can review our file and the output should be like this:

2019-08-13 16:21:04,329 - WARNING - The user did an unexpected click
2019-08-13 16:21:04,889 - ERROR - Now it has crashed ...
2019-08-13 16:21:05,461 - ERROR - oh no!
Traceback (most recent call last):
  File "<ipython-input-9-5d33bb8d3dd6>", line 2, in <module>
    4/0
ZeroDivisionError: division by zero

As you can see, logging is really straightforward and easy to use in Python. So, no more excuses not to do it :). Have fun logging!


In our previous tutorial, we had a look at how to (de)serialise objects from and to JSON in Python. Now, let's have a look at how to dynamically create and extend classes in Python. Basically, we are using the same mechanism that Python itself uses: the dynamic type() function. This function takes several parameters; we will only focus on the three that are relevant for our sample.

How to use the dynamic type function in Python

Basically, this function takes several parameters; we utilise three of them:

type(CLASS_NAME, INHERITS, PARAMETERS)

These parameters have the following meaning:

  • CLASS_NAME: the name of the new class
  • INHERITS: the base classes the new type should inherit from, passed as a tuple
  • PARAMETERS: new attributes or methods added to the class

In our following example, we want to extend the Person class with a new attribute called "location". We call our new class "PersonNew" and instruct Python to inherit from "Person", which we created a few tutorials earlier. Note that the base classes are passed as a tuple, since Python supports inheriting from more than one class. Last, we specify the attribute "location" as a key-value pair. Our sample looks like the following:

pn = type("PersonNew", (Person,), {"location": "Vienna"})
pn.age = 35
pn.name = "Mario"

If you test the code, it will just work as expected. All other attributes such as age and name can also be retrieved. Now, let's make it a bit more complex. We extend our previous sample with the JSON serialisation to be capable of dynamically creating a class from a JSON string.

Dynamically creating a class in Python from JSON

We therefore create a new function that takes the object to serialise and extracts all values from it. In addition, we add one more key-value pair, which we call "__class__", in order to store the name of the class. Getting the class name is a bit more involved, since the object prints as something like "<class '__main__.PersonNew'>". Therefore, we first split this string at the ".", take the last entry and split it again at the single quote, taking the first part. There are more elegant ways to do this, but I want to keep it simple. Once we have the class name, we store it in the dictionary and return the dictionary. The sample is here:

def map_proxy(obj):
    dict = {}
    for k in obj.__dict__.keys():
        dict.update({k : obj.__dict__.get(k)})
    cls_name = str(obj).split(".")[1].split("'")[0]
    dict.update({"__class__" : cls_name})
    return dict

We can now use the json.dumps method and call the map_proxy function to return the JSON string:

import json
st_pn = json.dumps(map_proxy(pn))
print(st_pn)

Now, we are ready to dynamically create a new class with the type() function. We name the class after the class name that was provided above, which can be retrieved via "__class__". We let it inherit from Person and pass the parameters of the entire object into it, since it is already a key/value dictionary:

def dyn_create(obj):
    return type(obj["__class__"], (Person, ), obj)

We can now also invoke the json.loads method to dynamically create the class:

obj = json.loads(st_pn, object_hook=dyn_create)
print(obj)
print(obj.location)

And the output should be like that:

{"location": "Vienna", "__module__": "__main__", "__doc__": null, "age": 35, "name": "Mario", "__class__": "PersonNew"}
<class '__main__.PersonNew'>
Vienna

As you can see, it is very easy to dynamically create new classes in Python. We could largely improve this code, but I've created this tutorial for explanatory reasons rather than usability ;).

In our next tutorial, we will have a look at logging.


One important aspect of working with data is serialisation. Basically, this means that objects can be persisted to storage (e.g. the file system, HDFS or S3). With Spark, a lot of file formats are possible. However, in this tutorial we will have a look at how to deal with JSON, a very popular file format that is often used with Spark. Now we will have a look at Python serialisation.

What is it and how does Python serialization work?

JSON stands for "JavaScript Object Notation" and was originally developed for client-server applications with JavaScript as its main consumer. It was built to have less overhead than XML.

First, let's start with copying objects. Basically, Python knows two ways: shallow copies and deep copies. The difference is that a shallow copy only copies references to the objects contained within the copied object. This is relevant when members are themselves objects. A deep copy builds no references; every value is copied to the new object, so you can use it independently of the original.

To copy one object into another, you only need to import copy and call the copy() or deepcopy() function. The following code shows how this works.

import copy
ps1 = Person("Mario", 35)
pss = copy.copy(ps1)
psd = copy.deepcopy(ps1)
ps1.name = "Meir-Huber"
print(ps1.name)
print(pss.name)
print(psd.name)

And the output should be this:

Meir-Huber
Mario
Mario

JSON serialization in Python

Now, let's look at how we can serialise an object to JSON. Basically, you need to import "json". An object that you want to serialise needs to be serialisable, and a lot of classes in Python already are. However, when we want to serialise our own object (e.g. the "Person" class that we created in this tutorial), we would need to implement a custom serialiser. Luckily, Python gives us access to all member variables of an object via the "__dict__" dictionary. This means that we don't have to write our own serialiser and can simply call "dumps" of "json":

import json
js = json.dumps(ps1.__dict__)
print(js)

The above call creates a JSON representation of the object's members:

{"name": "Meir-Huber", "age": 35}

We might want to add more information to the JSON string, e.g. the name of the class it was originally stored in. We can do this by passing a custom function to the "dumps" method. This function receives the object to be serialised as its only parameter. We then simply pass the original object (Person) and the function we want to execute. We name this function "make_nice". In the function, we create a dictionary and add the name of the class as the first entry, under the key "obj_name". We then merge the dictionary of the object into the new dictionary and return it.

Finishing the serialization

Another parameter added to the "dumps" function is "indent". The only thing it does is pretty-printing, by adding line breaks and indentation for improved readability. The method and call look like this:

def make_nice(obj):
    dict = {
        "obj_name": obj.__class__.__name__
    }
    dict.update(obj.__dict__)
    return dict
js_pretty = json.dumps(ps1, default=make_nice,indent=3)
print(js_pretty)

And the result should now look like the following:

{
   "obj_name": "Person",
   "name": "Meir-Huber",
   "age": 35
}

Now we know how to serialise an object to a JSON string. Basically, you can store this string in a file or as an object on S3. The only thing we haven't discussed yet is how to get an object back from a string. We therefore take the JSON string we produced with "dumps" before. Our goal now is to create a Person object from it. This can be done via the "loads" call of the json module. We also define a method to do the conversion via the "object_hook" parameter. This object_hook method has one argument: the decoded JSON object itself. We access each of its parameters with named indexers and return the new object.

str_json = "{\"name\": \"Meir-Huber\", \"age\": 35}"
def create(obj):
    print(obj)
    return Person(obj["name"], obj["age"])
obj = json.loads(str_json, object_hook=create)
print(obj)

The output should now look like this.

{'name': 'Meir-Huber', 'age': 35}
<__main__.Person object at 0x7fb84831ddd8>

Now we know how to create JSON serialisers and how to get objects back from a string value. In the next tutorial, we will have a look at how to improve this and make it more dynamic, by dynamic class creation in Python.


In the last tutorials, we already worked a lot with strings and even manipulated some of them. Now it is about time to have a look at the theory behind it. Basically, formatting strings is very easy. The only thing you need is the "format" method appended to a string, with a variable number of arguments. If you pass numbers, str() is applied to them automatically, so there is no need to convert them. This tutorial is about string manipulation in Python.

String manipulations in Python

Basically, the notation is very similar to other string formatters you may be used to. One really nice thing is that you don't need to provide positional arguments: Python assumes that the placeholders are in line with the parameters you provide. An easy sample is this:

str01 = "This is my string {} and the value is {}".format("Test", 11)
print(str01)

And the output should look like this:

This is my string Test and the value is 11
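
If you do want to be explicit, the placeholders can also reference the arguments by position or by name; a small sketch along the same lines:

str01b = "The value is {1} and my string is {0}".format("Test", 11)
str01c = "This is my string {text} and the value is {value}".format(text="Test", value=11)
print(str01b)
print(str01c)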

You can also use classes for this. Therefore, we define a class “Person”:

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age
p = Person("Mario Meir-Huber", 35)
str02 = "The author \"{}\" is {} years old".format(p.name, p.age)
print(p.name)
print(str02)

The output for this should look like this:

Mario Meir-Huber
The author "Mario Meir-Huber" is 35 years old

The difflib in Python

One nice thing in Python is difflib. This library enables us to easily check two arrays of strings for differences. One use case would be to check my last name for differences. Note that my last name is one of the most frequent last-name combinations in the German-speaking countries and is therefore written in several different ways.

To work with difflib, simply import it and call the context_diff function. The differences detected are marked with "!".

import difflib
arr01 = ["Mario", "Meir", "Huber"]
arr02 = ["Mario", "Meier", "Huber"]
for line in difflib.context_diff(arr01, arr02):
    print(line)

Below you can see the output. One difference was spotted. You can easily use this for spotting differences in datasets and creating golden records from it.

***
---
***************
*** 1,3 ****
  Mario
! Meir
  Huber
--- 1,3 ----
  Mario
! Meier
  Huber
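
If you need a similarity score rather than a line-by-line diff, difflib also offers SequenceMatcher; a small sketch:

import difflib
ratio = difflib.SequenceMatcher(None, "Meir-Huber", "Meier-Huber").ratio()
print(ratio)  # a value between 0 and 1, the closer to 1 the more similar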

Textwrap in Python

Another nice feature in Python is textwrap. This library has some basic functions for text prettifying. In the following sample, we use five of them:

  • Indent: adds a prefix to a text, e.g. a tab before the text
  • Wrap: wraps the text into a list of strings in case it is longer than the maximum width. This is useful for splitting text into lines of a maximum length
  • Fill: does the same as Wrap, but returns a single string with new lines in it
  • Shorten: shortens the text to a specified maximum width and ends it with "[...]"; you might use it to add a "read more" link
  • Dedent: removes leading whitespace that is common to all lines of the text

The functions are used in simple statements:

from textwrap import *
print(indent("Mario Meir-Huber", "\t"))
print(wrap("Mario Meir-Huber", width=10))
print(fill("Mario Meir-Huber", width=10))
print(shorten("Mario Meir-Huber Another", width=15))
print(dedent(" Mario Meir-Huber "))

And the output should look like this:

	Mario Meir-Huber
['Mario', 'Meir-Huber']
Mario
Meir-Huber
Mario [...]
Mario Meir-Huber 

Today's tutorial was more of a "housekeeping" one, since we have used much of this already. In the next tutorial, I will write about object serialisation with JSON, as this is also very useful.


In the last tutorials, we had a look at methods, classes and decorators. Now, let's have a brief look at asynchronous operations in Python. Most of the time this is abstracted away for us by Spark, but it is nevertheless relevant to have some basic understanding of it. In this tutorial, we will look at Python's async and await functionality.

Python Async and await functionality

Basically, you define a method as asynchronous by simply adding the "async" keyword ahead of the method definition. This is written like this:

async def FUNCTION_NAME():
    FUNCTION-BLOCK

Another keyword in that context is "await". Basically, every function that does something asynchronous is awaitable. When adding "await", nothing else happens until the awaited function has finished. This means that you might lose some of the benefit of asynchronous execution, but you get simpler handling, for instance when working with web data. In the following code, we create an async function that sleeps for some seconds (between 1 and 10). We call the function twice with the "await" operator.

import asyncio
import random
async def func():
    tim = random.randint(1,10)
    await asyncio.sleep(tim)
    print(f"Function finished after {tim} seconds")
await func()
await func()

In the output, you can see that the program first waited for the first function to finish and only then executed the second one. Basically, all of the execution happened sequentially, not in parallel.

Function finished after 9 seconds
Function finished after 9 seconds
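
A side note: awaiting at the top level like this only works in an environment such as a Jupyter/IPython notebook, which is what this tutorial assumes. In a plain Python script, you would wrap the calls in a coroutine and start it with asyncio.run(); a minimal sketch, reusing func() from above:

import asyncio
async def main():
    await func()
    await func()
asyncio.run(main())  # entry point when running as a normal script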

Python also knows parallel execution. This is done via tasks. We use the create_task method from the asyncio library in order to execute a function in parallel. In order to see how this works, we invoke the function several times and add a print statement at the end of the code.

Parallel execution in Python async

asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
print("doing something else ...")

This now looks very different to the previous sample. The print statement is the first to show up, and all tasks finish after at most 10 seconds. This is due to the fact that (A) the shortest execution finishes after 1 second, while the print statement is executed immediately, and (B) everything is executed in parallel and the maximum sleep interval is 10 seconds.

doing something else ...
Function finished after 1 seconds
Function finished after 1 seconds
Function finished after 3 seconds
Function finished after 4 seconds
Function finished after 5 seconds
Function finished after 7 seconds
Function finished after 7 seconds
Function finished after 7 seconds
Function finished after 8 seconds
Function finished after 10 seconds
Function finished after 10 seconds
Function finished after 10 seconds
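
If you want the code to pause until all of those tasks have completed, for instance before processing their results, asyncio.gather is the usual tool; a small sketch under the same notebook assumptions:

# run three calls concurrently and continue only once all of them are done
await asyncio.gather(func(), func(), func())
print("all three calls finished")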

However, there are also some issues with async operations. You can never say how long a task will take to execute. It could finish fast, or it could take forever due to a weak network connection or an overloaded server. Therefore, you might want to specify a timeout, i.e. the maximum time an operation should be waited for. In Python, this is done via the "wait_for" method. It basically takes the awaitable to execute and the timeout in seconds. In case the call runs into a timeout, a "TimeoutError" is raised. This allows us to surround it with a try block.

Dealing with TimeoutError in Python

try:
    await asyncio.wait_for(func(), timeout=3.0)
except asyncio.TimeoutError:
    print("Timeout occurred")

In most cases (whenever the random sleep exceeds three seconds), our function will run into a timeout. It should then print this:

Timeout occurred

Each task that is executed can also be controlled. Whenever you call the create_task function, it returns a Task object. A task can either be done, be cancelled or contain an error. In the next sample, we create a new task and wait for its completion. We then check whether the task was done or cancelled. You could also check for an error and retrieve the error message from it.

create_task in Python

task = asyncio.create_task(func())
print("running task")
await task
if task.done():
    print("Task was done")
elif task.cancelled():
    print("Task was cancelled")

In our case, no error should have occurred and thus the output should be the following:

running task
Function finished after 8 seconds
Task was done

Now we know how to work with async operations in Python. In our next tutorial, we will have a deeper look into how to work with Strings.


Decorators are powerful things in most programming languages. They help us make code more readable and add functionality to a method or class. Basically, decorators are added above the method or class declaration in order to create some behaviour. We differentiate between two kinds of decorators: method decorators and class decorators. In this tutorial we will look at class decorators and how a Python decorator works.

The Python decorator for a class

Class decorators are used to add some behaviour to a class. Normally, you would use this when you want to add behaviour that sits outside of the class's inheritance structure, e.g. something that is too abstract to put into the inheritance hierarchy itself.

The definition of that is very similar to the method decorators:

@DECORATORNAME
class CLASSNAME():
    CLASS-BLOCK

The decorator definition is very similar to the last tutorial's sample. We create a function that takes a class. Within it, we define the new function that we want to "append" to the class: a method "fly" that simply prints "Now flying ..." to the console. To add this function to the class, we call Python's "setattr" function and then return the class itself.

def altitude(cls):
    def fly(self):
        print("Now flying ... ")
    # attach the new method to the class and hand the class back
    setattr(cls, "fly", fly)
    return cls

How to use the Python decorator

Now, our decorator is ready to be used. We first need a class to decorate. Therefore, we re-use the vehicle sample, but simplify it a bit. We create a class "Vehicle" that has a function "accelerate" and two subclasses "Car" and "Plane" that both inherit from "Vehicle". The only difference now is that we add the decorator to the class "Plane", because we want to add the ability to fly to the Plane.

class Vehicle:
    speed = 0
    def accelerate(self, speed):
        self.speed = speed
class Car(Vehicle):
    pass
@altitude
class Plane(Vehicle):
    pass

Now, we want to test our output:

c = Car()
p = Plane()
c.accelerate(100)
print(c.speed)
p.fly()

Output:

100
Now flying ... 

Basically, there are a lot of scenarios where you would use class decorators. For instance, you can add functionality to classes that contain data in order to convert them into a more readable table or the like.

In our next tutorial, we will look at the await-operator.


Decorators are powerful things in most programming languages. They help us make code more readable and add functionality to a method or class. Basically, decorators are added above the method or class declaration in order to create some behaviour. We differentiate between two kinds of decorators: method decorators and class decorators. In this tutorial, we will have a look at Python decorators for methods.

Python decorators for Methods

Method decorators are used to perform some kind of behaviour on a method. For instance, you could add a stopwatch to check performance, configure logging or run some checks on the method itself. All of that is done by "wrapping" the method into a decorator function, which basically means that the decorated method is executed inside the decorator function. This, for instance, would allow us to surround a method with a try/except block and thus feed all exceptions that occur in a method into a global error-handling tool.

The definition of that is very easy:

@DECORATORNAME
def METHODNAME():
    METHOD-BLOCK

Basically, the only thing you need is the "@" and the decorator name. There are several decorators available, but now we will create our own. We start with a performance counter whose goal is to measure how long a method takes to execute. We therefore create the decorator from scratch.

Basically, I stated that the decorator takes the function and executes it inside the decorator function. We start by defining our performance counter as a function that takes one argument: the function to wrap. Within this function, we add another function (yes, we can do this in Python – creating inner functions!); typically it is called either "wrapper" or "inner". I call it "inner". The inner function should be able to pass on arguments; typically, a function call can have 0 to n arguments. In order to do this, we declare "*args" and "**kwargs". Both mean that a variable number of arguments can be passed; the only difference is that kwargs are named arguments (e.g. person="Pete").

The inner function of a Python decorator

In this inner function, we create a start variable holding the time at which the performance measurement starts. After that, we call the wrapped function, passing on all the *args and **kwargs. Then we measure the time again and do the math. Simple, isn't it? However, we haven't decorated anything yet. This is done by creating a function that sleeps and prints text afterwards. The code for this is shown below.

import time
def perfcounter(func):
    def inner(*args, **kwargs):
        start = time.perf_counter()
        func(*args, **kwargs)  # this is the invocation of the wrapped function
        print(time.perf_counter() - start)
    return inner
@perfcounter
def printText(text):
    time.sleep(0.3)
    print(text)
printText("Hello Decorator")

Output:

Hello Decorator
0.3019062000021222
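
One small refinement worth knowing (not needed for the sample above): wrapping "inner" with functools.wraps keeps the decorated function's name and docstring intact, and returning the result of the wrapped call makes the decorator usable on functions that return something. A sketch of the same decorator with both changes:

import functools
import time
def perfcounter(func):
    @functools.wraps(func)  # preserves func.__name__ and func.__doc__
    def inner(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(time.perf_counter() - start)
        return result  # hand back whatever the wrapped function returned
    return inner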

As you can see, we are now capable of adding this perfcounter decorator to any kind of function we like. Normally, it makes sense to add it to functions which take rather long – e.g. Spark jobs or web requests. In the next sample, I create a type checker decorator. Basically, this type checker should validate that all parameters passed to a function are of a specific type, e.g. that all parameters passed to a multiplication function are of type integer, and that all parameters passed to a string concatenation function are of type string.

Basically, you could also do this check inline, but it is much easier to write the function once and simply apply it to other functions as a decorator. It also greatly decreases the number of code lines and thus increases the readability of your code. The decorator should be used like this:

@typechecker(int)

For integer values and

@typechecker(str)

for string values.

The only difference now is that the decorator itself takes parameters as well, so we need to wrap the function into another function – compared to the previous sample, another level is added.

What are the steps necessary?

  1. Create the method to get the parameter: def typechecker(type)
  2. Code the outer function that takes the function and holds the inner function
  3. Create the function block that holds the inner function and a type checker:
    1. We add a function called “isInt(arg)” that checks if the argument passed is of a specific type. We can use “isinstance” to check if an argument is of a specific type – e.g. int or str. If it isn’t of the expected type, we raise an error
    2. We add the inner function with args and kwargs. In this function, we iterate over all args and kwargs passed and check it against the above function (isInt). If all checks succeed, we invoke the wrapped function.
Sounds a bit complex? Don't worry, it isn't that complex at all. Let's have a look at the code:

def typechecker(type):
    def check(func):
        def isInt(arg):
            if not isinstance(arg, type):
                raise TypeError("Only full numbers permitted. Please check")
        def inner(*args, **kwargs):
            for arg in args:
                isInt(arg)
            for kwarg in kwargs:
                isInt(kwarg)
            return func(*args, **kwargs)
        return inner
    return check

Now that we are done with the decorator itself, let's decorate some functions. We create two functions: the first one multiplies all values passed to it (the number of values can vary), and the second one concatenates all strings passed to it. We decorate the two functions with the typechecker decorator defined above.

@typechecker(int)
def mulall(*args):
    res = 0
    for arg in args:
        if res == 0: res = arg
        else: res *= arg
    return res
@typechecker(str)
def concat(*args):
    res = ""
    for arg in args:
        res += arg
    return res

The benefit of the Python decorator on methods

I guess you can now see the benefit of decorators. We can influence the behaviour of a function and create code snippets that are reusable. But now, let's call the functions to see whether our decorator works as expected. Note: the third invocation should produce an error 🙂

print(mulall(1,2,3))
print(concat("a", "b", "c"))
print(mulall(1,2,"a"))

Output:

6
abc

… and the error message:

TypeErrorTraceback (most recent call last)
<ipython-input-6-cd2213a0d884> in <module>
     35 print(mulall(1,2,3))
     36 print(concat("a", "b", "c"))
---> 37 print(mulall(1,2,"a"))
<ipython-input-6-cd2213a0d884> in inner(*args, **kwargs)
      7         def inner(*args, **kwargs):
      8             for arg in args:
----> 9                 isInt(arg)
     10
     11             for kwarg in kwargs:
<ipython-input-6-cd2213a0d884> in isInt(arg)
      3         def isInt(arg):
      4             if not isinstance(arg, type):
----> 5                 raise TypeError("Only full numbers permitted. Please check")
      6
      7         def inner(*args, **kwargs):
TypeError: Only full numbers permitted. Please check

I hope you like Python decorators. In my opinion, they are very helpful and provide great value. In the next tutorial, I will show how class decorators work.
