Local Explanation Methods for Time Series Classification Models
Abstract
This paper investigates and proposes perturbation-based, model-agnostic
methods for explaining time series classification models. The main objective of this study is to explain
the model's predictions, that is, to give the reasons why it assigns a time series to
a particular label from a set of labels. In this work, we aim to assess the reliability of the decision
and the importance of the features the model relies on. Moreover, in real-world time series, variations in the
speed or scale of a particular action can determine the class, so perturbations that modify this type of feature
can lead to arbitrary explanations of the time series. To achieve these objectives, we present two methods,
each with its own strategy and advantages: a LIME-based method and a SHAP-based method, the
novelty being their use in combination with data perturbation techniques, especially those that
affect the above-mentioned characteristics of the time series.
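
To make the perturbation-based approach concrete, the following is a minimal, illustrative sketch of a LIME-style segment-perturbation explainer for a black-box time series classifier. It is a simplification under stated assumptions, not the exact algorithm of this paper: the segment count, the mean-value replacement perturbation, the exponential proximity kernel, and the predict_proba interface are all hypothetical choices made for illustration.

import numpy as np
from sklearn.linear_model import Ridge

def explain_series(series, predict_proba, label, n_segments=10, n_samples=500, seed=0):
    # Estimate per-segment importance for `label` by randomly masking segments
    # of a 1-D numpy array `series` and fitting a weighted linear surrogate.
    rng = np.random.default_rng(seed)
    bounds = np.linspace(0, len(series), n_segments + 1, dtype=int)
    masks = rng.integers(0, 2, size=(n_samples, n_segments))   # 1 = keep, 0 = perturb
    baseline = series.mean()   # hypothetical perturbation: replace a segment with the series mean

    perturbed = np.tile(series, (n_samples, 1)).astype(float)
    for i in range(n_segments):
        lo, hi = bounds[i], bounds[i + 1]
        perturbed[masks[:, i] == 0, lo:hi] = baseline

    probs = predict_proba(perturbed)[:, label]            # black-box queries only
    weights = np.exp(-((1 - masks.mean(axis=1)) ** 2))    # favor lightly perturbed samples
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    return surrogate.coef_   # higher coefficient => segment supports the label

# Usage with any model exposing predict_proba over fixed-length series (assumed interface):
# importances = explain_series(x, model.predict_proba, label=1)

A perturbation tailored to the speed or scale of the series, for example locally time-warping or rescaling a segment instead of replacing it with its mean, would slot into the same masking loop.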