How understanding your model can lead to trust, knowledge and better performance in production
We seem to be in the golden era of AI. Every week there is a new service that can do anything from creating short stories to original images. These innovations are powered by machine learning. We use powerful computers and vast amounts of data to train these models. The problem is, this process leaves us with a poor understanding of how they actually work.
Ever-increasing abilities? No idea how they work? Sounds like we want a robot uprising! Don’t worry, there is a parallel effort being made to get under the hood of these beasts. This comes from the field of interpretable machine learning (IML). This research is driven by the many benefits a better understanding of our models can bring.
No, IML won’t stop an AI apocalypse. It can, however, help increase trust in machine learning and lead to greater adoption in other fields. You can also gain knowledge of your dataset and tell better stories about your results. You can even improve accuracy and performance in production. We will discuss these 6 benefits in depth. To end, we will touch on the limitations of IML.
In a previous article, we discuss IML in depth. To summarise, it is the field of research aimed at building machine learning models that can be understood by humans. This also involves developing tools that can help us understand complex models. The two main approaches to doing this are:
- Intrinsically interpretable models — modelling methodologies to build models that are easy to interpret
- Model agnostic methods — applied to any black-box models after they have been trained
The exact benefits depend on which approach you take. We will focus on the latter. Because model-agnostic methods can be applied to any model after it has been trained, they give us flexibility in our model choice. That is, we can use complicated models while still gaining insight into how they work.
The obvious benefit is the aim of IML itself — an understanding of a model: how it makes individual predictions or how it behaves over a group of predictions. From this flow many other benefits.
Increase accuracy
The first is that IML can improve the accuracy of machine learning. Without model-agnostic methods, we were faced with a trade-off:
- Option 1 — use an accurate black-box model that we do not understand.
- Option 2 — build a less accurate model that is intrinsically interpretable.
Now we can model our cake and predict it too. By applying methods like SHAP, LIME or PDPs after the model is trained, we can interpret our black-box models. We no longer have to trade accuracy for interpretability. In other words, through increased flexibility in model choice, IML can improve accuracy.
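To make this concrete, here is a minimal sketch of what the model-agnostic workflow can look like with SHAP. The data is synthetic and the model choice (XGBoost) is just an example; the point is that the explanation step happens after training, whatever the model.

```python
# A minimal sketch (synthetic data, illustrative model) of the model-agnostic workflow:
# train an accurate black-box model first, then interpret it with SHAP afterwards.
import shap
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=8, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An accurate but hard-to-interpret model
model = xgb.XGBRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# The explanation step is separate from training; the method is model agnostic
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)

shap.plots.beeswarm(shap_values)      # general behaviour over a group of predictions
shap.plots.waterfall(shap_values[0])  # why one individual prediction was made
```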
More directly, model-agnostic methods can also improve the accuracy of black-box models themselves. By understanding how a model makes predictions, we can understand why it makes incorrect predictions. Using this knowledge, we can improve our data collection process or build better features.
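Building on the sketch above, one simple way to do this is to explain only the predictions the model got most wrong. The snippet below reuses `model`, `X_test`, `y_test` and `shap_values` from the previous sketch.

```python
# Continue the sketch above: explain only the worst predictions to see
# which features are driving the largest errors.
import numpy as np

errors = np.abs(model.predict(X_test) - y_test)
worst = np.argsort(errors)[-20:]   # indices of the 20 largest absolute errors

# SHAP values for the poorly predicted instances only
shap.plots.beeswarm(shap_values[worst])
```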
Improve performance in production
We can take this idea one step further. Accuracy on a training dataset is not the same as accuracy on new data in production. Bias and proxy variables can lead to unforeseen issues. IML methods can help us identify these issues. In other words, they can be used to debug models and build more robust ones.
An example comes from a model used to power an automated car. It makes predictions to turn left or right based on images of a track. It performed well on both the training and validation sets. Yet, when we moved to a new room, the automated car went horribly wrong. The SHAP plots in Figure 1 can help us understand why. Notice that the pixels in the background have high SHAP values.
What this means is that the model is using background information to make predictions. It was trained on data from only one room and the same objects and background are present in all the images. As a result, the model associates these with left and right turns. When we moved to a new location, the background changed and our predictions became unreliable.
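For image models like this one, SHAP can attribute a prediction to individual pixels. Below is a rough, hypothetical sketch of how plots like Figure 1 could be produced; the tiny CNN and random images are stand-ins so the snippet runs on its own, not the actual driving model.

```python
# Hypothetical sketch of pixel-level SHAP values for an image model.
# The tiny CNN and random "track images" are placeholders so the code is
# self-contained; in practice you would pass the trained driving model and real frames.
import numpy as np
import shap
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),  # left / right
])

background = np.random.rand(50, 64, 64, 3).astype("float32")   # images the explainer samples from
test_images = np.random.rand(4, 64, 64, 3).astype("float32")   # images we want to explain

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images)

# Pixels with high SHAP values are the ones the model relies on.
# If those sit in the background rather than on the track, we have a problem.
shap.image_plot(shap_values, test_images)
```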
The solution is to collect more data. We can continue to use SHAP to understand if this has led to a more robust model. In fact, we do this in the article below. Check it out if you want to learn more about this application. Otherwise, if you want the basics, you can do my Python SHAP course. Get free access if you sign up for my Newsletter.
Decrease harm and increase trust
Debugging is not only about making predictions correctly. It also means ensuring they are made ethically. Scott Lundberg (the creator of SHAP) discusses an example in this presentation. A screenshot is shown in Figure 2. Using SHAP, he shows that the model is using months of credit history to predict default. This is a proxy for age — a protected variable.
What this shows is that retired customers were more likely to be denied loans. This was because of their age and not true risk drivers (e.g. existing debt). In other words, the model was discriminating against customers based on age.
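As a rough illustration of how such an audit might look in code, the sketch below compares how much a suspected proxy feature contributes to predicted default for retired versus other customers. All the names here (`shap_values`, `X_test`, `months_credit_history`, `age`) are hypothetical placeholders, not taken from Lundberg's example.

```python
# Hypothetical fairness check: does the proxy feature push predictions towards
# "default" more strongly for retired customers? Assumes `shap_values` is a SHAP
# Explanation for a credit model, `X_test` is its feature DataFrame (age is NOT a
# model feature) and `age` is a separate pandas Series kept aside for auditing.
import pandas as pd

contrib = pd.DataFrame(shap_values.values, columns=X_test.columns)
retired = (age >= 65).to_numpy()

# Average contribution of the suspected proxy, split by group.
# A large gap is a red flag worth investigating.
print(contrib.loc[retired, "months_credit_history"].mean())
print(contrib.loc[~retired, "months_credit_history"].mean())
```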
If we blindly trust black-box models, these types of problems will go unnoticed. IML can be used in your fairness analysis to ensure models are not making decisions that harm users. This can help build trust in our AI systems.
Another way IML can build trust is by providing the basis for human-friendly explanations. We can explain why you were denied a loan or why a product recommendation was made. Users will be more likely to accept these decisions if they are given a reason. The same goes for professionals making use of machine learning tools.
Extend the reach of ML
Machine learning is everywhere. It is improving or replacing processes in finance, law and even farming. An interesting application is immediately assessing the quality of grass used to feed dairy cows, a process that used to be both invasive and lengthy.
You would not expect your average farmer to have an understanding of neural networks. The black-box nature of these models would make it difficult for them to accept predictions. Even in more technical fields, there can be mistrust of deep learning methods.
Many scientists in hydrology remote sensing, atmospheric remote sensing, and ocean remote sensing etc. even do not believe the prediction results from deep learning, since these communities are more inclined to believe models with a clear physical meaning. — Prof. Dr. Lizhe Wang
IML can be seen as a bridge between computer science and other industries and scientific fields. Providing a lens into the black box will make practitioners more likely to accept results. This will increase the adoption of machine learning methods.
Improve your ability to tell stories
The previous two benefits have been about building trust: the trust of customers and professionals. You may still need to build trust even in environments where ML is readily adopted. That is, you need to convince your colleagues that a model will do its job.
Data scientists do this through data storytelling. That is, relating results found in the data to the experience of less technical colleagues. By providing a link between data exploration and modelling results, IML can help with this.
Take the scatter plot below. When an employee has a degree (degree = 1), their annual bonus tends to increase with their years of experience. However, when they do not have a degree, their bonus is stable. In other words, there is an interaction between degree and experience.
Now take the ICE plot below. It comes from a model used to predict bonuses using a set of features that includes experience and degree. We can see that the model captures the interaction. It is using the relationship we observed in the data to make predictions.
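The sketch below shows how an ICE plot like this can be produced with scikit-learn. The bonus data is simulated to mimic the example; the article's actual dataset and model are not reproduced here.

```python
# Sketch of an ICE plot for the bonus example, on simulated data that contains
# the same degree/experience interaction described above.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
n = 500
experience = rng.uniform(0, 20, n)
degree = rng.integers(0, 2, n)
# Bonus only grows with experience when the employee has a degree
bonus = 10_000 + 2_000 * experience * degree + rng.normal(0, 1_000, n)

X = pd.DataFrame({"experience": experience, "degree": degree})
model = GradientBoostingRegressor(random_state=0).fit(X, bonus)

# ICE: one curve per employee. Two bundles of curves (flat vs increasing)
# show that the model has captured the interaction.
PartialDependenceDisplay.from_estimator(
    model, X, features=["experience"], kind="individual"
)
```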
With IML we go from saying, “We think the model is using this relationship we observed in the data” to “Look! See!! The model is using this relationship.” We can also compare model results to our colleagues' experience. This allows them to use their domain knowledge to validate trends captured by the model. Sometimes we can even learn something completely new.
Gain knowledge
Black-box models can automatically model interactions and non-linear relationships in data. Using IML, we can analyze the model to reveal these relationships in our dataset. This knowledge can be used to:
- Inform feature engineering for non-linear models.
- Help when making decisions that go beyond models.
Ultimately, IML helps machine learning to become a tool for data exploration and knowledge generation. If nothing else, it can be fascinating to dive into a model to understand how it works.
The limitations of IML
With all these benefits, IML still has its limitations. We need to consider these when drawing conclusions with the methods. The most important are the assumptions they make. For example, both SHAP and PDPs assume there are no feature dependencies (i.e. model features are uncorrelated). If this assumption does not hold, the methods can be unreliable.
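A simple sanity check before leaning on SHAP or PDP results is to look at how correlated the model's features are. This is only a rough check, not a full test of dependence; `X` below is assumed to be a pandas DataFrame of the model's features.

```python
# Rough check of the feature-independence assumption: flag strongly correlated
# feature pairs before trusting SHAP or PDP results. `X` is a placeholder for
# the model's feature DataFrame.
corr = X.corr().abs()

# Pairs with high absolute correlation (each pair will appear twice)
high = (corr > 0.8) & (corr < 1.0)
print(corr.where(high).stack())
```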
Another limitation is that the methods can be abused. It is up to us to interpret results, and we can force stories onto the analysis. We can do this unconsciously as a result of confirmation bias. It can also be done maliciously to support a conclusion that benefits someone. This is similar to p-hacking — we torture the data until it gives us the results we want.
The last thing to consider is that these methods only provide technical interpretations. They are useful for a data scientist to understand and debug a model. Yet, we cannot use them to explain a model to a lay customer or colleague. Doing that requires a new set of skills and a different approach, one we discuss in this article:
You can also find introductory articles for some of the IML methods mentioned in this article:
I hope you enjoyed this article! You can support me by becoming one of my referred members :)
Twitter | YouTube | Newsletter — sign up for FREE access to a Python SHAP course