INTRODUCTION TO EXPLAINABLE AI
Interpreting neural networks matters because, in every setting where deep learning is adopted, predictions must be both accurate and justifiable. A deep neural network by itself offers no justification; Explainable AI (XAI) aims to provide one by bridging accuracy and interpretability without requiring the model itself to be directly interpretable.
What is XAI
Explainable AI is the application of artificial intelligence (AI) methods and techniques in such a way that the results of a solution can be understood by a human expert. It can also be seen as an implementation of the social right to explanation.
XAI faces the technological problems of interpretability and information overload ("info-obesity"), which mean that full transparency may not always be possible. Still, by examining test data, a human can audit the rules an XAI method surfaces and gain insight into how the system would generalize to future real-world data outside the test set.
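Such an audit is easiest when the model's rules are human-readable. A minimal sketch of the idea, assuming scikit-learn is available (the iris dataset, feature names, and shallow tree are illustrative choices, not anything prescribed by the text):

```python
# Sketch: auditing learned rules on held-out test data.
# Dataset, depth limit, and feature names are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree keeps the rule set small enough for a human to read.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# export_text renders the learned decision rules as plain text, so an
# auditor can judge whether they would plausibly generalize beyond this test set.
rules = export_text(
    model, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
)
print(rules)
print("held-out accuracy:", model.score(X_test, y_test))
```

Reading the printed rules alongside the held-out accuracy is exactly the kind of audit described above: the human checks whether the rules are sensible, not just whether the score is high.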
A Brief History of XAI
It all started with the following:
• Symbolic reasoning systems such as MYCIN and SOPHIE, which were explored for their ability to represent knowledge, reason over it, and explain their conclusions.
• Truth maintenance systems (TMS), developed to provide explanations by tracking lines of reasoning and justifying conclusions through rule operations or logical inferences.
• Research on extracting meaningful rules from opaque neural networks, which gave birth to neural-network-powered decision support.
These early efforts were followed by new methods for modern, complex AI techniques such as deep learning and genetic algorithms, aimed at making models more explainable, interpretable, and transparent; models built this way fall under the banner of Explainable Artificial Intelligence.
XAI Principles
As more people come to depend on AI-based dynamic systems, the need for clearer accountability has led to global conferences on AI aimed at ensuring trust and transparency in decision-making processes. Hence, to deal with potential problems stemming from the importance of algorithmic implementations, regulation exists to secure a right to explanation with regard to AI.
Divisions of XAI
XAI research has covered the following sectors where AI is applied:
• Neural network tank imaging
• Antenna design (evolved antennas)
• Algorithmic trading (high-frequency trading)
• Medical diagnosis
• Autonomous vehicles
• Designing feature detectors from optimal computer designs (computer vision)
• Text analytics
It is worth noting that Explainable AI does not fully explain how a neural network reaches a prediction: even when the impact of each feature on the model's predictions is explained, the decision process itself is not. Decision trees, whose step-by-step rules make the process explicit, can help here. Invariably, this suggests combining neural networks and decision trees, addressing both the network's failure to justify its decisions and the tree's need for high accuracy.
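One common way to realize this combination is to distill a trained network into a decision-tree "surrogate" that mimics its predictions. A minimal sketch, assuming scikit-learn (the dataset, network size, and tree depth are illustrative assumptions, not a prescribed recipe):

```python
# Sketch: a global decision-tree surrogate for a neural network.
# All model and dataset choices below are illustrative.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# The "black box": a small neural network trained on the true labels.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# The surrogate: a shallow tree trained to reproduce the *network's*
# predictions, so its rules explain what the network has learned.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, net.predict(X))

# Fidelity: how often the tree agrees with the network it approximates.
fidelity = accuracy_score(net.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

High fidelity means the readable tree is a trustworthy stand-in for the network; low fidelity is itself informative, signaling that the network's decision process is too complex for a tree of that depth to justify.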