A current trend in AI is not so much a technical one – it is a societal one. Technologies around AI, Machine Learning, and Deep Learning are getting more and more complex – making it ever harder for humans to understand what is happening and why a particular prediction is being made. The current approach of "throwing data in, getting a prediction out" does not help with that. It is somewhat dangerous to build knowledge and make decisions based on algorithms that we don't understand.

Explainable AI is becoming even more important with new developments in the AI space such as AutoML, where the system takes over most of the data scientist's work. It needs to be ensured that everyone understands what is going on with the algorithms and why a prediction comes out exactly the way it does. So far (and without AutoML), data scientists were basically in charge of the algorithms, so at least there was someone who could explain an algorithm (note: that didn't prevent bias in it, nor will AutoML). With AutoML, where tuning and algorithm selection happen more or less automatically, we need to ensure that vital, relevant documentation of the predictions is available.
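To make the idea of an explainable prediction concrete, here is a minimal sketch of per-feature attribution for a simple linear scoring model. The feature names, weights, and sample values are purely illustrative assumptions, not from any real system: because the model is linear, each feature's contribution (weight × value) plus the bias reproduces the prediction exactly, so a human can see why the score is what it is.

```python
# Illustrative linear model: weights and feature names are made up.
weights = {"income": 0.8, "age": -0.3, "debt": -0.5}
bias = 0.1

def predict(sample):
    """Return the model's score for one sample (dict of feature -> value)."""
    return bias + sum(weights[f] * v for f, v in sample.items())

def explain(sample):
    """Attribute the score to each feature: contribution = weight * value.

    The contributions plus the bias sum exactly to the prediction,
    so nothing about the decision stays hidden.
    """
    return {f: weights[f] * v for f, v in sample.items()}

sample = {"income": 1.2, "age": 0.5, "debt": 0.4}
score = predict(sample)
contributions = explain(sample)
```

Complex models (deep networks, AutoML ensembles) do not decompose this cleanly, which is exactly why dedicated explanation techniques exist for them.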

And one last note: this isn't an argument against AutoML and the tools that provide it – I believe that the democratisation of AI is an absolute must and a good thing. However, we need to ensure that it stays – explainable!
