In my previous posts, I explained linear regression and stated that it comes with some errors. There is the error of prediction (for individual predictions) and there is also a standard error. A prediction is good if the individual errors of prediction and the standard error are small. Let's start by examining the error of prediction.

Error of Prediction in Linear Regression

Let’s recall the table from the previous tutorial:

Year | Ad Spend (X) | Revenue (Y) | Prediction (Y')
2013 | € 345.126,00 | € 41.235.645,00 | € 48.538.859,48
2014 | € 534.678,00 | € 62.354.984,00 | € 65.813.163,80
2015 | € 754.738,00 | € 82.731.657,00 | € 85.867.731,47
2016 | € 986.453,00 | € 112.674.539,00 | € 106.984.445,76
2017 | € 1.348.754,00 | € 156.544.387,00 | € 140.001.758,86
2018 | € 1.678.943,00 | € 176.543.726,00 | € 170.092.632,46
2019 | € 2.165.478,00 | € 199.645.326,00 | € 214.431.672,17

We can see that there is a clear difference between the prediction and the actual numbers. We calculate the error of each prediction by subtracting the predicted value from the real value:

Year | Error (Y - Y')
2013 | -€ 7.303.214,48
2014 | -€ 3.458.179,80
2015 | -€ 3.136.074,47
2016 | € 5.690.093,24
2017 | € 16.542.628,14
2018 | € 6.451.093,54
2019 | -€ 14.786.346,17

In the above table, we can see how each prediction differs from the real value: this is our prediction error on the actual values.

Calculating the Standard Error

Now, we want to calculate the standard error. First, let's have a look at the formula:

SE = √( Σ(Y - Y')² / N )

Basically, we square each individual error, sum the squares up, divide the sum by the number of observations and take the square root of the result. We already have Y - Y' calculated, so we only need to square it:

Y - Y' | (Y - Y')²
-€ 7.303.214,48 | € 53.336.941.686.734,40
-€ 3.458.179,80 | € 11.959.007.558.032,20
-€ 3.136.074,47 | € 9.834.963.088.101,32
€ 5.690.093,24 | € 32.377.161.053.416,10
€ 16.542.628,14 | € 273.658.545.777.043,00
€ 6.451.093,54 | € 41.616.607.923.053,70
-€ 14.786.346,17 | € 218.636.033.083.835,00

The sum of these squares is € 641.419.260.170.216,00.

N is 7, since we have 7 observations. Divided by 7, we get € 91.631.322.881.459,50.

The last step is to take the square root, which results in the standard error of  € 9.572.425,13 for our linear regression.
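If you want to verify the calculation, here is a minimal sketch in plain Python that reproduces the standard error from the values in the tables above:

import math

actual = [41235645.00, 62354984.00, 82731657.00, 112674539.00,
          156544387.00, 176543726.00, 199645326.00]
predicted = [48538859.48, 65813163.80, 85867731.47, 106984445.76,
             140001758.86, 170092632.46, 214431672.17]

errors = [y - yp for y, yp in zip(actual, predicted)]  # Y - Y'
sse = sum(e ** 2 for e in errors)                      # sum of the squared errors
standard_error = math.sqrt(sse / len(errors))          # sqrt(SSE / N)
print(round(standard_error, 2))                        # ≈ 9572425.13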

Now, we have most items cleared for our linear regression and can move on to the logistic regression in our next tutorial.

In our previous tutorial, we had a look at how to (de)serialise objects from and to JSON in Python. Now, let's have a look at how to dynamically create and extend classes in Python. Basically, we are using the mechanism that Python itself uses: the dynamic type function. This function takes several parameters; we will only focus on the three relevant ones for our sample.

How to use the dynamic type function in Python

We utilize three of its parameters. These are:

type(CLASS_NAME, INHERITS, PARAMETERS)

These parameters have the following meaning:

  • CLASS_NAME: the name of the new class
  • INHERITS: a tuple of base classes from which the new type should inherit
  • PARAMETERS: a dictionary of new attributes or methods added to the class

In our following example, we want to extend the Person class with a new attribute called "location". We call our new class "PersonNew" and instruct Python to inherit from "Person", which we created some tutorials earlier. Note that the base classes are passed as a tuple, since Python supports inheriting from more than one class. Last, we specify the attribute "location" as a key-value pair. Our sample looks like the following:

pn = type("PersonNew", (Person,), {"location": "Vienna"})
pn.age = 35
pn.name = "Mario"

As you can see, it just works as expected; the attributes age and name set afterwards can also be retrieved. Now, let's make it a bit more complex: we extend our previous sample with the JSON serialisation to be capable of dynamically creating a class from a JSON string.

Dynamically creating a class in Python from JSON

We therefore create a new function that takes the object to serialise and reads all values out of it. In addition, we add one more key-value pair, which we call "__class__", in order to store the name of the class. Getting the class name is a bit more complex, since the string representation of the class looks like "<class '__main__.PersonNew'>". Therefore, we first split this string at the ".", take the last entry and split it again at the "'" character, taking the first part. There are more elegant ways to do this, but I want to keep it simple. Once we have the class name, we store it in the dictionary and return the dictionary. The complete sample is here:

def map_proxy(obj):
    d = {}

    # copy all attributes of the object into a plain dictionary
    for k in obj.__dict__.keys():
        d.update({k: obj.__dict__.get(k)})

    # extract the class name from "<class '__main__.PersonNew'>"
    cls_name = str(obj).split(".")[1].split("'")[0]
    d.update({"__class__": cls_name})

    return d

We can now use the json.dumps method and call the map_proxy function to return the JSON string:

st_pn = json.dumps(map_proxy(pn))
print(st_pn)

Now, we are ready to dynamically create a new class with the "type" function. We name the new class after the class name that was stored above, which we retrieve via "__class__". We let it inherit from Person and pass the entire object in as the parameters, since it is already a dictionary of key-value pairs:

def dyn_create(obj):
    # obj is the dictionary parsed from JSON
    return type(obj["__class__"], (Person,), obj)

We can now also invoke the json.loads method to dynamically create the class:

obj = json.loads(st_pn, object_hook=dyn_create)
print(obj)
print(obj.location)

And the output should look like this:

{"location": "Vienna", "__module__": "__main__", "__doc__": null, "age": 35, "name": "Mario", "__class__": "PersonNew"}
<class '__main__.PersonNew'>
Vienna

As you can see, it is very easy to dynamically create new classes in Python. We could largely improve this code, but I've created this tutorial for explanatory reasons rather than usability ;).

In our next tutorial, we will have a look at logging.

Here you can go to the overview of the Python tutorial. If you want to dig deeper into the language, have a look at the official Python documentation.

In my previous posts, I introduced the basics of machine learning. Today, I want to focus on two elementary algorithms: linear and logistic regression. You typically learn them at the very beginning of your machine learning journey and might not use them much later on, but they are very helpful for understanding the underlying concepts.

Linear Regression

A linear regression is the simplest model in data science. Linear regression is a supervised learning method and is used in trend analysis, time-series analysis, risk models in banking and many more.

In a linear regression, the relationship between a dependent variable y and a set of independent variables x is linear. This basically means that if the data follows a specific trend, a future value can be predicted. Let's assume that there is a significant relation between ad spend and sales. We would have the following table:

Year | Ad Spend | Revenue
2013 | € 345.126,00 | € 41.235.645,00
2014 | € 534.678,00 | € 62.354.984,00
2015 | € 754.738,00 | € 82.731.657,00
2016 | € 986.453,00 | € 112.674.539,00
2017 | € 1.348.754,00 | € 156.544.387,00
2018 | € 1.678.943,00 | € 176.543.726,00
2019 | € 2.165.478,00 | € 199.645.326,00

If you look at the data, it is very easy to figure out that there is some kind of relation between how much money you spend on the ads and the revenue you get; the ratio ranges roughly from 1:92 to 1:119. Please note that I totally made up the numbers. However, based on these numbers, you could predict what revenue to expect when spending a certain amount on ads. The relation between them is therefore linear and we can easily plot it on a line chart:

[Figure: line chart of the linear regression between ad spend and revenue]

As you can see, some of the values are above the line and others below. Let's now manually calculate the linear function. There are a few steps necessary that eventually lead to the predicted values. Let's assume we want to know what revenue we can expect if we spend a specific amount of money on ads, say 1 million. The linear regression function for this is:

predicted score (Y') = bX + intercept (A)

This means that we need to calculate two values: the slope (our "b") and the intercept (our "A"). X is the only value we know – our 1 million spend. Let's first calculate the slope.

Calculating the Slope

The first thing we need to do is calculate the slope. For this, we need the squared deviations of X (the ad spend) and the products of the X and Y deviations. Let's first start with X. The deviation is calculated for each value individually. There are some steps involved:

  • Creating the average of the values
  • Subtracting each individual value from the average
  • Building the square

The first step is to create the averages of both columns. The average of the revenues should be € 118.818.609,14 and the average of the ad spend should be € 1.116.310,00.

Next, we create the squared deviation of each item. For the ad spend, we do this by subtracting each individual ad spend from the average and squaring the result. The table for this should look like the following:

The formula is: (average ad spend – ad spend)²

Year | Ad Spend | Squared Deviation (X)
2013 | € 345.126,00 | € 594.724.761.856,00
2014 | € 534.678,00 | € 338.295.783.424,00
2015 | € 754.738,00 | € 130.734.311.184,00
2016 | € 986.453,00 | € 16.862.840.449,00
2017 | € 1.348.754,00 | € 54.030.213.136,00
2018 | € 1.678.943,00 | € 316.555.892.689,00
2019 | € 2.165.478,00 | € 1.100.753.492.224,00

Quite huge numbers already, right? Now, let's create the deviation products for the revenues. This is done by taking (average ad spend – ad spend) and multiplying it with (average revenue – revenue). This should result in:

Year | Revenue | Deviation Product (X·Y)
2013 | € 41.235.645,00 | € 59.830.740.619.545,10
2014 | € 62.354.984,00 | € 32.841.051.219.090,30
2015 | € 82.731.657,00 | € 13.048.031.460.197,10
2016 | € 112.674.539,00 | € 797.850.516.541,00
2017 | € 156.544.387,00 | € 8.769.130.708.225,71
2018 | € 176.543.726,00 | € 32.478.055.672.684,90
2019 | € 199.645.326,00 | € 84.800.804.871.574,80

Now, we only need to sum up the two columns. The sums should be:

€ 2.551.957.294.962,00 for the X column
€ 232.565.665.067.859,00 for the Y column

Now, we divide the Y sum by the X sum and get the slope: 91,1322715.
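This division is exactly the least-squares formula for the slope:

b = Σ(x̄ − x_i)(ȳ − y_i) / Σ(x̄ − x_i)²

Since the two sign flips cancel out, this is the same as the more common form b = Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)².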

Calculating the Intercept

The intercept is somewhat easier. The formula for it is: average(y) – slope * average(x). We already calculated all relevant values in our previous steps. Our intercept should equal € 17.086.743,14.

Predicting the value with the Linear Regression

Now, we can build our function. This is: Y' = 91,1322715 * X + 17.086.743,14

As stated in the beginning, our X is 1 million, and the predicted revenue is € 108.219.014,64.
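To double-check the manual calculation, here is a minimal sketch in plain Python that reproduces the slope, the intercept and the prediction from the raw numbers:

ad_spend = [345126, 534678, 754738, 986453, 1348754, 1678943, 2165478]
revenue = [41235645, 62354984, 82731657, 112674539,
           156544387, 176543726, 199645326]

n = len(ad_spend)
mean_x = sum(ad_spend) / n  # 1116310.0
mean_y = sum(revenue) / n   # ≈ 118818609.14

# least-squares slope: deviation products divided by squared X deviations
slope = (sum((mean_x - x) * (mean_y - y) for x, y in zip(ad_spend, revenue))
         / sum((mean_x - x) ** 2 for x in ad_spend))
intercept = mean_y - slope * mean_x

print(slope)                        # ≈ 91.1322715
print(intercept)                    # ≈ 17086743.14
print(slope * 1000000 + intercept)  # ≈ 108219014.64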

The prediction is actually lower than the nearby actual values (2016 and 2017). If you change the input, e.g. to 2 million or 400k, the prediction again gets closer to the actual values. Predictions always produce some errors, and they are normally reported. In our case, the error table would look like the following:

Year | Ad Spend | Real Revenue (Y) | Prediction (Y') | Error
2013 | € 345.126,00 | € 41.235.645,00 | € 48.538.859,48 | -€ 7.303.214,48
2014 | € 534.678,00 | € 62.354.984,00 | € 65.813.163,80 | -€ 3.458.179,80
2015 | € 754.738,00 | € 82.731.657,00 | € 85.867.731,47 | -€ 3.136.074,47
2016 | € 986.453,00 | € 112.674.539,00 | € 106.984.445,76 | € 5.690.093,24
2017 | € 1.348.754,00 | € 156.544.387,00 | € 140.001.758,86 | € 16.542.628,14
2018 | € 1.678.943,00 | € 176.543.726,00 | € 170.092.632,46 | € 6.451.093,54
2019 | € 2.165.478,00 | € 199.645.326,00 | € 214.431.672,17 | -€ 14.786.346,17

The error is calculated by taking the real value and deducting the predicted value from it – and voilà, you have your error. One common goal in machine learning is to reduce this error and make predictions more accurate.

One important aspect of working with data is serialisation. Basically, this means that objects can be persisted to storage (e.g. the file system, HDFS or S3). With Spark, a lot of file formats are possible. However, in this tutorial we will have a look at how to deal with JSON, a very popular file format that is often used with Spark.

JSON stands for "JavaScript Object Notation" and was originally developed for client-server applications, with JavaScript as its main consumer. It was built to have less overhead than XML.

First, let's start with copying objects. Basically, Python knows two ways: shallow (normal) copies and deep copies. The difference is that a shallow copy keeps references to the objects contained within the copied object. This is relevant when attributes are themselves objects. In a deep copy, no references are kept; every value is recursively copied to the new object. This means that you can use it fully independently from the previous one.

To copy an object, you only need to import copy and call its copy or deepcopy function. The following code shows how this works.

import copy

# Person class from the earlier tutorial on string formatting
ps1 = Person("Mario", 35)
pss = copy.copy(ps1)      # shallow copy
psd = copy.deepcopy(ps1)  # deep copy
ps1.name = "Meir-Huber"   # change the original only
print(ps1.name)
print(pss.name)
print(psd.name)

And the output should be this:

Meir-Huber
Mario
Mario
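Note that the difference between copy and deepcopy only becomes visible when the copied object contains other mutable objects, such as a list. A small illustration with a plain dictionary instead of the Person class:

import copy

team = {"members": ["Mario"]}
shallow = copy.copy(team)   # copies the dict, but shares the inner list
deep = copy.deepcopy(team)  # recursively copies the inner list as well

team["members"].append("Pete")
print(shallow["members"])  # ['Mario', 'Pete'] - the change shows through
print(deep["members"])     # ['Mario'] - fully independent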

Now, let's look at how we can serialise an object with the use of JSON. Basically, you need to import "json". An object that you want to serialise needs to be serialisable. A lot of classes in Python already implement that. However, when we want to serialise our own object (e.g. the "Person" class that we created in this tutorial), we need to implement a serialise function or a custom serialiser. However, Python is great and provides us the possibility to access all variables of an object via the "__dict__" dictionary. This means that we don't have to write our own serialiser and can do this via an easy call to the "dumps" function of "json":

import json
js = json.dumps(ps1.__dict__)
print(js)

The above call creates a JSON representation of the object's attributes:

{"name": "Meir-Huber", "age": 35}

We might want to add more information to the JSON string – e.g. the class name it was originally stored in. We can do this by passing a custom function to the "dumps" method. This function gets the object to be serialised as its only parameter. We then only pass the original object (Person) and the function we want to execute. We name this function "make_nice". In the function, we create a dictionary and add the name of the class as first parameter. We give this the key "obj_name". We then join the dictionary of the object into the new dictionary and return it.

Another parameter added to the "dumps" function is "indent". The only thing it does is pretty-print the output by adding line breaks and indentation. This is just for improved readability. The method and the call look like this:

def make_nice(obj):
    # start with the class name, then merge in all attributes
    d = {
        "obj_name": obj.__class__.__name__
    }
    d.update(obj.__dict__)
    return d

js_pretty = json.dumps(ps1, default=make_nice, indent=3)
print(js_pretty)

And the result should now look like the following:

{
   "obj_name": "Person",
   "name": "Meir-Huber",
   "age": 35
}

Now we know how to serialise an object to a JSON string. Basically, you can store this string in a file or an object on S3. The only thing that we haven't discussed yet is how to get an object back from a string. We therefore take a JSON string like the one we created with "dumps" before. Our goal now is to create a Person object from it. This can be done via the "loads" call of the json module. We also define a method that does the casting via the "object_hook" parameter. This object_hook method has one argument – the parsed JSON object itself. We access each of the parameters of the object with named indexers and return the new object.

str_json = "{\"name\": \"Meir-Huber\", \"age\": 35}"
def create(obj):
    
    print(obj)
    
    return Person(obj["name"], obj["age"])
    
obj = json.loads(str_json, object_hook=create)
print(obj)

The output should now look like this.

{'name': 'Meir-Huber', 'age': 35}
<__main__.Person object at 0x7fb84831ddd8>

Now we know how to create JSON serialisers and how to get them back from a string value. In the next tutorial, we will have a look on how to improve this and make it more dynamic – by dynamic class creation in Python.

One of the frequent statements vendors make is "Agile Analytics". In pitches towards business units, they often claim that it would only take them some weeks to do agile analytics. However, this isn't necessarily true, since they can easily abstract away the hardest parts of "agile" analytics: data access, retrieval and preparation. On the one hand, this creates "bad blood" within a company: business units might ask why it takes their internal department so long (and there has most likely been some history to get the emotions going). On the other hand, it is necessary to solve this problem, as agile analytics is still possible – if done right.

In my opinion, several aspects are necessary for agile analytics. First, it is about culture. Second, it is about organization, and third, it is about technology. Let's start with culture.

Culture

The company must be silo-free. Sounds easy, but in fact it is very difficult. Different business units use data as a "weapon", which could easily go thermo-nuclear. If you own the data, you can easily create your own truth: marketing could create their own view of the market in terms of reach, sales could tweak the numbers (until the overall performance is measured by controlling), and so on. So business units might fight giving away data and will try to keep it in their ownership. However, data should be a company-wide good that is available to all units (of course, on a need-to-know basis and adhering to legal and regulatory standards). This can only be achieved if the data unit is close to the CEO or any other powerful board member. Once this is achieved, it is easier to go for self-service analytics.

Organisation

Similar to culture, it is necessary to organize yourself for agile analytics. This is more focused on the internal structure of an organization (e.g. the data unit). There is no silver bullet for this; it very much depends on the overall culture of a company. However, certain aspects have to be fulfilled:

  • BizDevOps: I outlined it in one of my previous posts and I insist on this approach being necessary for many things around data. One of them is agile analytics, since handover of tasks is always complicated. End-to-end responsibility is really crucial for agile analytics
  • Data Governance: There is no way around it; either do it or forget about anything close to agile analytics. It is necessary to have security and privacy under control and to allow users to access data easily but securely. Also, it is very important to log what is going on (SOX!)
  • Self-Service Tools: Have tools available that enable you to access data without complex processes. I will write about this in “Technology”.

Technology

Last but not least, agile analytics is done via technology. Technology is just an enabler, so if you don't get the previous two right, you will most likely fail here – even if you invest millions into it. You will need different tools that handle security and privacy, but also a clear and easy-to-use metadata repository (let's face it – a data catalog!). Also, you need tools that allow easy access to data via a data science workbench, a fully functional data lake and a data abstraction layer. That sounds like quite a lot – and it is. The good news, though, is that most of that comes for free, as these are mainly open source tools. At some point, you might need an enterprise license, but cost-wise it is still manageable. And remember one thing: technology comes last. If you don't fix culture and organization, you won't be capable of delivering.

In the last tutorials, we already worked a lot with strings and even manipulated some of them. Now it is about time to have a look at the theory behind it. Basically, formatting strings is very easy. The only thing you need is the "format" method appended to a string, with a variable number of arguments. If you pass numbers, the str() function is applied to them automatically, so there is no need to convert them.

Basically, the annotation is very similar to other string formatters you might be used to. One really nice thing is that you don't need to provide the positional arguments: Python assumes that the positions are in line with the parameters you provide. An easy sample is this:

str01 = "This is my string {} and the value is {}".format("Test", 11)
print(str01)

And the output should look like this:

This is my string Test and the value is 11
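You can also address the arguments explicitly, either by position or by name (a small additional sample):

str_pos = "{1} comes after {0}".format("first", "second")
str_named = "{name} is {age} years old".format(name="Mario", age=35)
print(str_pos)    # second comes after first
print(str_named)  # Mario is 35 years old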

You can also use classes for this. Therefore, we define a class “Person”:

class Person:    
    def __init__(self, name, age):
        self.name = name
        self.age = age
    
p = Person("Mario Meir-Huber", 35)
str02 = "The author \"{}\" is {} years old".format(p.name, p.age)
print(p.name)
print(str02)

The output for this should look like this:

Mario Meir-Huber
The author "Mario Meir-Huber" is 35 years old

One nice thing in Python is difflib. This library enables us to easily check two arrays of strings for differences. One use case would be to check my last name for differences. Note that my last name is one of the most frequent last-name combinations in the German-speaking countries and is thus written in several different ways.

To work with difflib, simply import it and call the context_diff function. Lines that differ are marked with "!":

import difflib
arr01 = ["Mario", "Meir", "Huber"]
arr02 = ["Mario", "Meier", "Huber"]
for line in difflib.context_diff(arr01, arr02):
    print(line)

Below you can see the output. One difference was spotted. You can easily use this for spotting differences in datasets and creating golden records from it.

*** 
--- 
***************
*** 1,3 ****
  Mario
! Meir
  Huber
--- 1,3 ----
  Mario
! Meier
  Huber

Another nice feature in Python is textwrap. This library provides some basic features for text "prettifying". In the following sample, we use five of its functions:

  • indent: adds a prefix to a text, e.g. a tab before the text
  • wrap: wraps the text into an array of strings in case it is longer than the maximum width. This is useful to split text into lines of a maximum length
  • fill: does the same as wrap, but returns a single string with line breaks in it
  • shorten: truncates the text to a specified maximum width. The cut is shown as "[...]" and you might use it to add a "read more" around it
  • dedent: removes any common leading whitespace from the lines of the text

The functions are used in simple statements:

from textwrap import indent, wrap, fill, shorten, dedent

print(indent("Mario Meir-Huber", "\t"))
print(wrap("Mario Meir-Huber", width=10))
print(fill("Mario Meir-Huber", width=10))
print(shorten("Mario Meir-Huber Another", width=15))
print(dedent(" Mario Meir-Huber "))

And the output should look like this:

	Mario Meir-Huber
['Mario', 'Meir-Huber']
Mario
Meir-Huber
Mario [...]
Mario Meir-Huber 

Today’s tutorial was more of a “housekeeping” since we used it already. In the next tutorial, I will write about object serialisation with JSON, as this is also very useful.

In the last tutorials, we had a look at methods, classes and decorators. Now, let's have a brief look at asynchronous operations in Python. Most of the time this is abstracted away for us, e.g. by Spark, but it is nevertheless relevant to have some basic understanding of it. Basically, you define a method as asynchronous by simply adding the "async" keyword ahead of the method definition. This is written like that:

async def FUNCTION_NAME():
    FUNCTION-BLOCK

Another keyword in this context is "await". Basically, every function that does something asynchronous is awaitable. When adding "await", nothing else happens until the asynchronous function has finished. This means that you might lose the benefit of asynchronous execution, but get simpler handling when working with web data. In the following code, we create an async function that sleeps some seconds (between 1 and 10). We call the function twice with the "await" operator.

import asyncio
import random
async def func():
    tim = random.randint(1,10)
    await asyncio.sleep(tim)
    print(f"Function finished after {tim} seconds")
    
await func()
await func()

In the output, you can see that execution waited for the first function to finish before the second one started. Basically, everything happened sequentially, not in parallel.

Function finished after 9 seconds
Function finished after 9 seconds
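Note that top-level "await" like above works in a Jupyter notebook. If you run the code as a plain Python script, you need to wrap the calls in a coroutine and start the event loop yourself (a minimal sketch):

import asyncio
import random

async def func():
    tim = random.randint(1, 10)
    await asyncio.sleep(tim)
    print(f"Function finished after {tim} seconds")

async def main():
    await func()
    await func()

asyncio.run(main())  # starts the event loop and runs main() to completion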

Python also knows concurrent execution. This is done via tasks. We use the method "create_task" from the asyncio library in order to schedule a function concurrently. In order to see how this works, we invoke the function several times and add a print statement at the end of the code.

asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
asyncio.create_task(func())
print("doing something else ...")

This now looks very different to the previous sample. The print statement is the first to show up, and all code paths finish after 10 seconds at most. This is due to the fact that (a) the print statement is executed immediately, since the tasks run in the background, and (b) everything is executed concurrently and the maximum sleep interval is 10 seconds.

doing something else ...
Function finished after 1 seconds
Function finished after 1 seconds
Function finished after 3 seconds
Function finished after 4 seconds
Function finished after 5 seconds
Function finished after 7 seconds
Function finished after 7 seconds
Function finished after 7 seconds
Function finished after 8 seconds
Function finished after 10 seconds
Function finished after 10 seconds
Function finished after 10 seconds

However, there are also some issues with async operations. You can never say how long a task will take to execute. It could finish fast, or it could take forever due to a weak network connection or an overloaded server. Therefore, you might want to specify a timeout, which is the maximum time an operation should be waited for. In Python, this is done via the "wait_for" method. It takes the function to execute and the timeout in seconds. In case the call runs into a timeout, a "TimeoutError" is raised. This allows us to surround it with a try block.

try:
    await asyncio.wait_for(func(), timeout=3.0)
except asyncio.TimeoutError:
    print("Timeout occurred")

In roughly two thirds of the cases, our function will run into the timeout. The output should then be:

Timeout occurred

Each task that is executed can also be controlled. Whenever you call the "create_task" function, it returns a Task object. A task can either be done, cancelled or contain an error. In the next sample, we create a new task and wait for its completion. We then check if the task was done or cancelled. You could also check for an error and retrieve it from the task, as shown below.

task = asyncio.create_task(func())
print("running task")
await task
if task.done():
    print("Task was done")
elif task.cancelled():
    print("Task was cancelled")

In our case, no error should have occurred and thus the output should be the following:

running task
Function finished after 8 seconds
Task was done
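Checking for an error works along the same lines. Awaiting a failed task re-raises its exception, so we can catch it there (a small sketch; our func never raises, so the except branch is only for illustration):

task = asyncio.create_task(func())
try:
    await task
except Exception as err:
    # the exception raised inside the task surfaces here
    print("Task raised an error:", err)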

Now we know how to work with async operations in Python. In our next tutorial, we will have a deeper look into how to work with Strings.

Agility is almost everywhere, and it is also spreading into other hyped domains – such as data science. One thing I like in this respect is the combination with DevOps, as this eases the process and creates end-to-end responsibility. However, I strongly believe that it doesn't make much sense to exclude the business. In the case of analytics, I would argue that it should be BizDevOps.

Basically, data science needs a lot of business integration and works across different domains and functions. I have outlined several times in different posts here that data science isn't a job that is done by data scientists alone. It is team work and thus needs different people. With the concept of BizDevOps, this can be easily explained. Let's have a look at the following picture; I will outline the interdependencies afterwards:

[Figure: BizDevOps for Data Science]

Basically, there must be exactly one person who takes the end-to-end responsibility – ranging from business alignment to translation into an algorithm and finally to making it productive by operating it. This is the typical workflow for BizDevOps. The person taking the end-to-end responsibility is typically a project or program manager working in the data domain. The three steps were outlined in the above figure; let's now have a look at each of them.

Biz

The program manager for data (you could also call this person the "Analytics Translator") works closely with the business – marketing, fraud, risk, shop floor, … – on getting their business requirements and needs. This person also has a great understanding of what is feasible with the internal data, in order to be capable of "translating a business problem into an algorithm". This step is mainly about the use case, not so much about tools and technologies – that happens in the next step. Until here, data scientists aren't necessarily involved yet.

Dev

In this phase, it is all about implementing the algorithm and working with the Data. The program manager mentioned above already aligned with the business and did a detailed description. Also, Data Scientists and Data Engineers are integrated now. Data Engineers start to prepare and fetch the data. Also, they work with Data Scientists in finding and retrieving the answer for the business question. There are several iterations and feedback loops back to the business, once more and more answers arrive. Anyway, this process should only take a few weeks – ideally 3-6 weeks. Once the results are satisfying, it goes over to the next phase – bringing it into operation.

Ops

This phase is about operating the algorithms that were developed. The data engineer is in charge of integrating them into the live systems. The business unit typically wants to see the results as (continuously) calculated KPIs or any other action that could create some sort of impact. Continuous improvement of the models also happens here, since the business might come up with new ideas. In this phase, the data scientist isn't involved anymore; it is the data engineer or a dedicated DevOps engineer alongside the program manager.

Eventually, once the project is done (I dislike “done” because in my opinion a project is never done), this entire process moves into a CI process.

Decorators are powerful things in most programming languages. They help us make code more readable and add functionality to a method or class. Basically, decorators are added above the method or class declaration in order to create some behaviour. We differentiate between two kinds of decorators: method decorators and class decorators. In this tutorial, we will have a look at class decorators.

Class decorators

Class decorators are used to add some behaviour to a class. Normally, you would use this when you want to add behaviour to a class that is outside of its inheritance structure – e.g. something that is too abstract to put into the inheritance hierarchy itself.

The definition of that is very similar to the method decorators:

@DECORATORNAME
class CLASSNAME:
    CLASS-BLOCK

The decorator definition is similar to the last tutorial's sample. We create a method that takes a class. Within it, we define the new function that we want to "append" to the class: a method "fly" that simply prints "Now flying …" to the console. To add this function to the class, we call Python's "setattr" function and then return the class.

def altitude(cls):
    def fly(self):
        print("Now flying ... ")
    # attach the new method to the decorated class
    setattr(cls, "fly", fly)
    return cls

Now, our decorator is ready to be used. We first need to create a class. Therefore, we re-use the sample of the vehicles, but simplify it a bit. We create a class “Vehicle” that has a function “accelerate” and create two sub classes “Car” and “Plane” that both inherit from “Vehicle”. The only difference now is that we add a decorator to the class “Plane”. We want to add the possibility to fly to the Plane.

class Vehicle:
    speed = 0

    def accelerate(self, speed):
        self.speed = speed

class Car(Vehicle):
    pass

@altitude
class Plane(Vehicle):
    pass

Now, we want to test our output:

c = Car()
p = Plane()
c.accelerate(100)
print(c.speed)
p.fly()

Output:

100
Now flying ... 

Basically, there are a lot of scenarios where you would use class decorators. For instance, you can add functionality to classes that contain data in order to convert them into a more readable table or the like; see the sketch below.
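A hypothetical example in the same style: the decorator attaches a readable row representation to any class that carries data.

def tabular(cls):
    def as_row(self):
        # join all instance attributes into one readable line
        return " | ".join(f"{k}: {v}" for k, v in self.__dict__.items())
    setattr(cls, "as_row", as_row)
    return cls

@tabular
class Measurement:
    def __init__(self, sensor, value):
        self.sensor = sensor
        self.value = value

print(Measurement("temperature", 21.5).as_row())  # sensor: temperature | value: 21.5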

In our next tutorial, we will look at the await-operator.

Decorators are powerful things in most programming languages. They help us make code more readable and add functionality to a method or class. Basically, decorators are added above the method or class declaration in order to create some behaviour. We differentiate between two kinds of decorators: method decorators and class decorators. In this tutorial, we will have a look at method decorators.

Method decorators

Method decorators are used to perform some kind of behaviour around a method. For instance, you could add a stopwatch to check performance, configure logging or run some checks on the method itself. All of that is done by "wrapping" the method into a decorator method. This basically means that the decorated method is executed inside the decorator method. This would, for instance, allow us to surround a method with a try-except block and thus feed all exceptions that occur in a method into a global error handling tool.
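As a quick illustration of that last idea, here is a minimal sketch of such an error-handling decorator (printing stands in for whatever global error-handling tool you would use):

def catch_errors(func):
    def inner(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as err:
            # forward the exception to a central place; here we just print it
            print(f"Error in {func.__name__}: {err}")
    return inner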

The general syntax is very easy:

@DECORATORNAME
def METHODNAME():
    METHOD-BLOCK

Basically, the only thing that you need is the “@” and the decorator name. There are several decorators available, but now we will create our own decorator. We start by creating a performance counter. The goal of that is to measure how long it takes a method to execute. We therefore create the decorator from scratch.

Basically, I stated that the decorator takes the function and executes it inside the decorator function. We start by defining our performance counter as a function that takes one argument – the function to wrap. Within this function, we add another function (yes, we can do this in Python – creating inline functions!) – typically called "wrapper" or "inner". I call it "inner". The inner function should provide the capability to pass on arguments; typically, a function call can have 0 to n arguments. In order to do this, we provide "*args" and "**kwargs". Both mean that there is a variable number of arguments available. The only difference between args and kwargs is that kwargs are named arguments (e.g. person="Pete").

In this inner function, we first store the current time in a start variable. After that, we call the wrapped function (whatever function we decorate), passing on all the *args and **kwargs. Then we measure the time again and do the math. Simple, isn't it? However, we haven't decorated anything yet. This is done by creating a function that sleeps and prints text afterwards. The code for this is shown below.

import time
def perfcounter(func):
    def inner(*args, **kwargs):
        start = time.perf_counter()
        func(*args, **kwargs) # this is the invocation of the wrapped function
        print(time.perf_counter() - start)
    return inner
    
@perfcounter
def printText(text):
    time.sleep(0.3)
    print(text)
    
printText("Hello Decorator")

Output:

Hello Decorator
0.3019062000021222

As you can see, we are now capable of adding this perfcounter decorator to any kind of function we like. Normally, it makes sense to add it to functions which take rather long – e.g. Spark jobs or web requests. In the next sample, I create a type checker decorator. Basically, this type checker should validate that all parameters passed to a function are of a specific type. E.g. we want to ensure that all parameters passed to a multiplication function are of type integer, and all parameters passed to a concatenation function are of type string. Basically, you could also do this check inline, but it is much easier if you write the function once and simply apply it to each function as a decorator. Also, it greatly decreases the number of code lines and thus increases the readability of your code. The decorator for that should look like the following:

@typechecker(int)

For integer values and

@typechecker(str)

for string values.

The only difference now is that the decorator itself takes parameters as well, so we need to wrap the function into another function – compared to the previous sample, another level is added. What are the steps necessary?

  1. Create the method to get the parameter: def typechecker(type)
  2. Create the outer function that takes the function and holds the inner function
  3. Create the function block that holds the inner function and a type checker:
    1. We add a function called “isInt(arg)” that checks if the argument passed is of a specific type. We can use “isinstance” to check if an argument is of a specific type – e.g. int or str. If it isn’t of the expected type, we raise an error
    2. We add the inner function with args and kwargs. In this function, we iterate over all args and kwargs passed and check it against the above function (isInt). If all checks succeed, we invoke the wrapped function.
Sounds a bit complex? Don't worry, it isn't that complex at all. Let's have a look at the code:

def typechecker(type):
    def check(func):
        def isInt(arg):
            if not isinstance(arg, type):
                raise TypeError("Only full numbers permitted. Please check")
        def inner(*args, **kwargs):
            for arg in args:
                isInt(arg)
            for kwarg in kwargs:
                isInt(kwarg)
            return func(*args, **kwargs)
        return inner
    return check

Now, since we are done with the decorator itself, let’s decorate some functions. We create two functions. The first one multiplies all values passed to the function. The values can be of variable length. The second function prints all strings passed to the function. We decorate the two functions with the typechecker-decorator defined above.

@typechecker(int)
def mulall(*args):
    res = 1
    for arg in args:
        res *= arg
    return res

@typechecker(str)
def concat(*args):
    res = ""
    for arg in args:
        res += arg
    return res

I guess you can now see the benefit of decorators. We can influence the behaviour of a function and create code snippets that are re-usable. But now, let's call the functions to see if our decorator works as expected. Note: the third invocation should produce an error 🙂

print(mulall(1,2,3))
print(concat("a", "b", "c"))
print(mulall(1,2,"a"))

Output:

6
abc

… and the error message:

TypeErrorTraceback (most recent call last)
<ipython-input-6-cd2213a0d884> in <module>
     35 print(mulall(1,2,3))
     36 print(concat("a", "b", "c"))
---> 37 print(mulall(1,2,"a"))

<ipython-input-6-cd2213a0d884> in inner(*args, **kwargs)
      7         def inner(*args, **kwargs):
      8             for arg in args:
----> 9                 isInt(arg)
     10 
     11             for kwarg in kwargs:

<ipython-input-6-cd2213a0d884> in isInt(arg)
      3         def isInt(arg):
      4             if not isinstance(arg, type):
----> 5                 raise TypeError("Only full numbers permitted. Please check")
      6 
      7         def inner(*args, **kwargs):

TypeError: Only full numbers permitted. Please check

I hope you like decorators. In my opinion, they are very helpful and provide great value. In the next tutorial, I will show how class decorators work.