Interpretable Multivariate Forecasting with Deep Learning
The first part of the session underscores the importance of machine learning interpretation. Fundamentally, it is needed because machine learning on its own is an incomplete solution: the problems it solves are not deterministic, and because a model is an optimized approximation, it cannot cover every case. The machine learning interpretability toolkit helps us first learn from our models, and then leverage what was learned, together with our domain knowledge, to place guardrails, mitigate bias, and enhance model reliability, making models safe to use even in rare and unexpected situations and free from discrimination.

The second part is hands-on. It tackles a multivariate weather forecasting problem with a deep learning model. We will leverage integrated gradients and SHAP to interpret the model and understand which features are more important to it and why, recognizing that even when a model performs well, it might be doing so for the wrong reasons. Forecasting and uncertainty are intrinsically linked, and sensitivity analysis is a family of methods designed to measure the uncertainty of a model's output in relation to its input, so we will leverage two of these methods. Lastly, taking what was learned, we will look at how to improve the model.
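To make the attribution idea concrete, here is a minimal sketch of integrated gradients: average the model's gradient along a straight path from a baseline input to the actual input, then scale by the input difference. In the session this would be done with a library such as Captum on the trained forecasting network; the toy NumPy "model" below (a fixed linear layer plus ReLU) and its weights are purely hypothetical stand-ins, and the gradient is taken by finite differences rather than autodiff.

```python
import numpy as np

# Hypothetical stand-in for a trained forecasting network:
# one linear layer followed by ReLU, summed to a scalar forecast.
W = np.array([[0.5, -1.2, 0.8],
              [0.3,  0.9, -0.4]])
b = np.array([0.1, -0.2])

def model(x):
    return np.maximum(W @ x + b, 0.0).sum()

def numerical_grad(f, x, eps=1e-6):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        up, down = x.copy(), x.copy()
        up[i] += eps
        down[i] -= eps
        g[i] = (f(up) - f(down)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline, steps=200):
    """Average the gradient along the straight line from baseline to x
    (midpoint rule), then scale by (x - baseline)."""
    total = np.zeros_like(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        total += numerical_grad(f, baseline + alpha * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 2.0, 0.5])        # hypothetical feature values
baseline = np.zeros_like(x)          # all-zero reference input
attr = integrated_gradients(model, x, baseline)

# Completeness axiom: attributions sum (approximately) to
# model(x) - model(baseline), so each feature's share of the
# prediction change can be read off directly.
print(attr, attr.sum(), model(x) - model(baseline))
```

The completeness check at the end is what makes integrated gradients attractive for diagnosing a model that "performs well for the wrong reasons": the attributions decompose the prediction change exactly across features, so an unexpectedly large share assigned to an implausible feature is a visible red flag.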