A current trend in AI is not so much a technical one as a societal one. Technologies around AI, Machine Learning, and Deep Learning are getting more and more complex, which makes it ever harder for humans to understand what is happening and why a model predicts what it does. The current approach of "throwing data in, getting a prediction out" does not work here. It is dangerous to build knowledge and make decisions based on algorithms that we don't understand. To solve this problem, we need explainable AI.

What is explainable AI?

Explainable AI is becoming even more important with new developments in the AI space such as AutoML. With AutoML, the system takes over most of the data scientist's work, so it needs to be ensured that everyone understands what the algorithms are doing and why a prediction comes out exactly the way it does. So far (and without AutoML), data scientists were basically in charge of the algorithms; at least there was someone who could explain an algorithm. Note: that didn't prevent bias in the models, nor will AutoML. With AutoML, where tuning and algorithm selection happen more or less automatically, we need to ensure that vital and relevant documentation of the predictions is available.
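To make this concrete, here is a minimal sketch of one common explainability technique, permutation importance, using scikit-learn. This is an illustrative example of the general idea, not a method prescribed by this post; the dataset and model are placeholders, and the same approach applies to a model produced by an AutoML pipeline.

```python
# A minimal sketch of model explainability via permutation importance.
# Assumes scikit-learn is installed; dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a toy dataset and train an arbitrary "black box" model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance measures how much the test score drops when a
# single feature's values are shuffled. It is model-agnostic, so it can
# explain which inputs drive predictions even when the model itself was
# selected and tuned automatically.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, mean_drop in ranked[:5]:
    print(f"{name}: {mean_drop:.3f}")
```

Reports like this one, generated alongside every automated model, are the kind of "documentation of the predictions" the paragraph above calls for.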

And one last note: this isn't an argument against AutoML and the tools that provide it. I believe that the democratisation of AI is an absolute must and a good thing. However, we need to ensure that it stays explainable!

This post is part of the “Big Data for Business” tutorial, in which I explain various aspects of handling data right within a company. A comprehensive article about explainable AI can also be found on Wikipedia.
