
Abstract

AI has grown phenomenally and established itself in every industry domain, driven by advances in deep learning (CNNs, RNNs, GANs, variational auto-encoders). As models are applied to increasingly complex problems, their architectures grow correspondingly complex, turning them into black boxes that do not allow humans or end users to understand why a model produced a particular output. This has created a need for models that are trustworthy and whose outputs can be relied upon, which is possible only when the black box is converted into a white box, letting humans understand what the model learns in the intermediate layers of its architecture. Explainable AI is the emerging field of study that makes models interpretable. While numerous methods have been identified, this article presents a detailed classification and grouping of methods by domain of use and mode of explanation, together with detailed explanations of 10 methods.


