Data Science Strategy: Explainability in AI


Securing Explainability in AI

Explainable AI (XAI), also referred to as transparent AI, involves the ability to explain how an algorithm has reached a particular insight or conclusion that results in a decision to take action. Though it is an important aspect to consider as part of the evolution of AI, it isn't easy to solve technically, especially if the AI is acting in real time and thus using streaming data that hasn't been stored. To bring this point home, imagine that you cannot explain to your customer why the machine made a certain decision, a decision you would not have made based on your own experience. What do you tell the customer then?

Addressing explainable AI is becoming increasingly important in terms of our human ability to understand more about why and how the AI is performing in a certain way. In other words, what can be understood by studying how the machine learns by processing huge amounts of data across many dimensions, looking for certain patterns or deviations? What does the machine detect and understand that you missed, interpreted differently, or simply were not capable of detecting? Which conclusions can be drawn from that?

Ethically, AI explainability will become even more important when data scientists start building more advanced artificial intelligence, where many different algorithms work together. It will be the key to understanding exactly what machines interpret as well as how the machine's decision-making process is carried out. Knowing this information is crucial to staying on top of the policy framework needed to set the boundaries for what the machine shall and shall not do, as well as how these policies need to be expanded, or perhaps restricted, going forward.

From a purely existential perspective on the one hand, and the need for humans to remain in control of the intelligent machines being built on the other, you cannot simply view AI as a black box. (The black box challenge in AI refers to the need to ensure that, when an algorithm takes a decision based on the techniques used to train it, that decision-making process is transparent to humans.) Algorithm transparency is possible when many of the more basic ML techniques, supervised learning, for example, are being used, but so far nobody has found a way to gain transparency when it comes to algorithms based on deep learning techniques. There must be a way to explain why a certain decision was taken when something went wrong. A pertinent example is the self-driving car, where a number of algorithms are in play, working together and (hopefully) following policies predefined for how to act in certain circumstances. All works according to plan, but then a totally unknown and unexpected event occurs and the car takes an unexpected action that causes an accident. In such situations, people would naturally expect that there is some way to extract information from the self-driving car on why this specific decision was made; in other words, they expect explainability in AI.

Apart from the technical, ethical, and existential reasons for ensuring the explainability of AI, there is now also a legal reason. The EU's General Data Protection Regulation (GDPR) has a clause that requests algorithmic interpretability. Right now, these demands aren't too strict, but over time this will likely change dramatically.
GDPR now requires the ability to explain how an algorithm functions, based on the following questions (illustrated in the sketch after the list):

» Which data is used?
» Which logic is used in the algorithm?
» What process is used?
» What is the impact of the decision made by the algorithm?
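As a minimal sketch of the transparency that the more basic supervised techniques allow, the example below trains an inherently interpretable model and prints the complete decision logic it has learned, which is one way to begin answering the "which data" and "which logic" questions above. It assumes scikit-learn is available; the iris dataset, the shallow decision tree, and all parameter choices are illustrative only, not anything mandated by GDPR, and no comparable extraction is currently possible for deep learning models.

# Illustrative sketch only: a shallow decision tree is an inherently
# interpretable supervised model, so its learned decision logic can be
# printed and inspected. Dataset and parameters are arbitrary choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# Which data is used? The feature names document the model's inputs.
print("Features used:", list(data.feature_names))

# Train a small, transparent model.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Which logic is used? The complete set of learned rules can be exported
# as readable threshold comparisons.
print(export_text(tree, feature_names=list(data.feature_names)))

# What is the impact of a decision? A single prediction can be traced
# back to the exact path of rules that produced it.
sample = data.data[:1]
print("Predicted class:", data.target_names[tree.predict(sample)[0]])

Running the sketch prints the tree's rules as a series of if/else threshold tests, so a specific decision can be walked through step by step; this is exactly the kind of trace that remains out of reach for deep learning models today.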