AI should not be a black box. To engender trust, it is imperative that consumers of machine learning predictions understand how the AI reached its decision.
Whether used for decision support or as the decision maker itself, an AI application must be able to explain itself and be held accountable.
Explainable AI (XAI) is a burgeoning field of machine learning that tackles this challenge, so that we, as the ultimate end users, can place justified trust in machine intelligence and strive for its continuous improvement.