Interpretable Discrete Representation Learning on Time Series

SOM-VAE: an interpretable, probabilistic approach to representation learning based on deep variational autoencoders

Effective and efficient representation learning for time series is an important topic for a wide range of applications, e.g. clustering. However, many currently used approaches are difficult to interpret. In many areas it is important that intermediate learned representations are easy to interpret, so that downstream processing stays efficient. Fortuin et al. proposed a novel variational-autoencoder-based framework for time series representation learning, called SOM-VAE, that makes use of discrete dimensionality reduction and deep generative modeling. Their approach overcomes the non-differentiability inherent in discrete representation learning by introducing a gradient-based version of the self-organizing map ([SOM – Kohonen 1990]) that performs better than the classical variant, while simultaneously allowing for a probabilistic interpretation through a Markov model in the representation space.
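To get an intuition for how the otherwise non-differentiable nearest-node lookup can still be trained with gradient descent, consider the following minimal PyTorch-style sketch of a gradient-copying trick. The function and variable names are illustrative assumptions, not taken from the authors' implementation, and the actual SOM-VAE gradient handling is more involved (for instance, it also reconstructs from the continuous encoding):

```python
import torch

def quantize(z_e, embeddings):
    """Map each continuous latent z_e to its nearest SOM node (illustrative sketch).

    z_e:        (batch, latent_dim) encoder outputs
    embeddings: (n_nodes, latent_dim) SOM node vectors
    """
    dists = torch.cdist(z_e, embeddings)   # (batch, n_nodes) pairwise distances
    idx = dists.argmin(dim=1)              # index of the winner node per sample
    z_q = embeddings[idx]                  # discrete representation
    # Gradient copying: the forward pass uses z_q, while the backward pass
    # treats the lookup as the identity, so gradients flow back into z_e.
    z_q = z_e + (z_q - z_e).detach()
    return z_q, idx
```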

Fortuin et al. evaluate SOM-VAE on the static MNIST and Fashion-MNIST (Zalando Research) datasets, on a chaotic Lorenz attractor system with two macro states, and on medical time series data from the eICU dataset.

What Are The Benefits?

Learning lower-dimensional representations of high-dimensional data often enables and facilitates downstream tasks by reducing their computational and memory complexity. Furthermore, the availability of human-interpretable, lower-dimensional time series representations makes the data itself easier to interpret and reduces human error in the process, which in turn facilitates research and business applications. SOM-VAE achieves this by incorporating well-understood model components: a self-organizing map that maps "states from an uninterpretable continuous space to a lower-dimensional space with a predefined topologically interpretable structure…" [Fortuin et al.], in conjunction with a Markov component that addresses the temporal context of a time series.

How Does It Work?

Essentially, Fortuin et al. take a generative deep learning model, the variational autoencoder (VAE), and extend it to capture temporal smoothness in a discrete representation space by combining it with a self-organizing map (SOM), which introduces topological neighborhood relationships. Since the classical SOM formulation has no notion of time, they further extend the SOM with a probabilistic Markov transition model, such that the representation of each time point is enriched with information from adjacent time points in the series.
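Concretely, training minimizes a composite loss. In the paper's notation it takes roughly the following form, with weighting coefficients $\alpha$, $\beta$, $\gamma$, and $\tau$ (see the paper for the precise definitions of the individual terms):

$$\mathcal{L}_{\text{SOM-VAE}}(x, \hat{x}_q, \hat{x}_e) = \mathcal{L}_{\text{reconstruction}} + \alpha\,\mathcal{L}_{\text{commitment}} + \beta\,\mathcal{L}_{\text{SOM}}$$

$$\mathcal{L}(x^{t-1}, x^t) = \mathcal{L}_{\text{SOM-VAE}} + \gamma\,\mathcal{L}_{\text{transitions}} + \tau\,\mathcal{L}_{\text{smoothness}}$$

Intuitively, the commitment term keeps encodings close to their assigned SOM nodes, the SOM term pulls the grid neighbors of the winner node toward the encoding, the transition term rewards discrete transitions that are probable under the Markov model, and the smoothness term penalizes jumps to nodes with low transition probability.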

Schematic overview of the SOM-VAE model architecture. Image: Fortuin et al., SOM-VAE: Interpretable Discrete Representation Learning on Time Series

The schematic overview depicts time series from the data space [green] being encoded time-point-wise into the latent space by a neural network [black]. A self-organizing map [red] approximates the data manifold in the latent space. The discrete representation is obtained by mapping every latent point $z_e$ to its closest node $z_q$ in the self-organizing map. Subsequently, a Markov transition model [blue] is trained to predict the next discrete representation $z_q^{t+1}$ from the current representation $z_q^t$. Finally, the learned discrete representations can be decoded back into the data space by another neural network.
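The following PyTorch-style sketch mirrors that data flow for a single time step. It is a simplified illustration under assumed module names and shapes, not the authors' implementation (which, among other things, also decodes the continuous encoding $z_e$):

```python
import torch
import torch.nn as nn

class SOMVAESketch(nn.Module):
    """Illustrative SOM-VAE-style forward pass: encode, quantize on a SOM grid, decode."""

    def __init__(self, input_dim=28 * 28, latent_dim=64, som_rows=8, som_cols=8):
        super().__init__()
        n_nodes = som_rows * som_cols
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim))
        # SOM nodes live on a rows x cols grid, stored as a flat embedding matrix
        self.som = nn.Parameter(torch.randn(n_nodes, latent_dim))
        # Logits of a learned Markov transition matrix between discrete states
        self.transition_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))

    def forward(self, x):
        z_e = self.encoder(x)                  # continuous latent representation
        dists = torch.cdist(z_e, self.som)     # distance to every SOM node
        idx = dists.argmin(dim=1)              # discrete state per sample
        z_q = self.som[idx]                    # nearest SOM node z_q
        x_hat = self.decoder(z_q)              # reconstruction from z_q
        return x_hat, z_e, z_q, idx

    def transition_probs(self, idx_t):
        # Distribution over the next discrete state given the current one
        return torch.softmax(self.transition_logits[idx_t], dim=-1)
```

A call like `x_hat, z_e, z_q, idx = model(x)` then yields the continuous and discrete representations for one time step, and `model.transition_probs(idx)` gives the Markov model's distribution over the next discrete state.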

Make sure to have a look at their research paper, as it is quite fascinating! Fortuin et al.'s original TensorFlow-based SOM-VAE implementation, as well as a PyTorch version, are available on GitHub.

Kind regards,

Henrik Hain

This article was first published at:
https://henrikhain.io/post/interpretable-discrete-representation-learning-on-time-series/
