Exploiting efficient representations in large-scale tensor decompositions

Nico Vervliet, Otto Debals, Lieven De Lathauwer

Abstract

Decomposing tensors into simple terms is often an essential step toward discovering and understanding underlying processes or toward compressing data. However, storing the tensor and computing its decomposition is challenging in a large-scale setting. In many cases, though, a tensor is structured, i.e., it can be represented using few parameters: a sparse tensor is determined by the positions and values of its nonzeros, a polyadic decomposition by its factor matrices, a Tensor Train by its core tensors, a Hankel tensor by its generating vector, etc. The complexity of tensor decomposition algorithms can be reduced significantly in terms of time and memory if these efficient representations are exploited directly. Only a few core operations such as norms and inner products need to be specialized to achieve this, thereby avoiding the explicit construction of multiway arrays. To improve the interpretability of tensor models, constraints are often imposed or multiple datasets are fused through joint factorizations. While imposing such constraints prohibits the use of traditional compression techniques, our framework allows constraints and compression, as well as other efficient representations, to be handled trivially, as the underlying optimization variables do not change. To illustrate this, large-scale nonnegative tensor factorization is performed using MLSVD and Tensor Train compression. We also show how vector and matrix data can be analyzed using tensorization, while keeping a vector or matrix complexity, through the concept of implicit tensorization, as illustrated for Hankelization and Löwnerization. The concepts and numerical properties are investigated extensively through experiments.
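As a concrete illustration of the specialized core operations mentioned in the abstract, the following plain-MATLAB sketch computes the Frobenius norm of, and an inner product between, tensors given in polyadic representation, using only their factor matrices; the sizes, rank, and variable names are illustrative, and the commented check uses Tensorlab's cpdgen and frob.

    % Rank-R polyadic representations of 200 x 300 x 400 tensors:
    % (200+300+400)*R parameters each instead of 200*300*400 entries.
    R = 5;
    U = {randn(200,R), randn(300,R), randn(400,R)};
    V = {randn(200,R), randn(300,R), randn(400,R)};

    % Squared Frobenius norm: <T,T> equals the sum of all entries of the
    % Hadamard product of the Gramians U{n}'*U{n}, which costs
    % O((I1+I2+I3)R^2) instead of O(I1*I2*I3).
    G = (U{1}'*U{1}) .* (U{2}'*U{2}) .* (U{3}'*U{3});
    normT = sqrt(sum(G(:)));

    % Inner product between two polyadic representations, analogously:
    W = (U{1}'*V{1}) .* (U{2}'*V{2}) .* (U{3}'*V{3});
    inprodTS = sum(W(:));

    % Check against the explicitly constructed arrays (feasible only
    % because this example is small; needs Tensorlab on the path):
    % T = cpdgen(U); S = cpdgen(V);
    % abs(normT - frob(T)), abs(inprodTS - T(:)'*S(:))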

Code description

This paper discusses the computation of tensor decompositions starting from a structured tensor, i.e., a tensor that can be represented using fewer parameters than its number of entries. The MATLAB code for the developed algorithms is available in Tensorlab 3.0, and the experiments, as well as some additional files, are given here.
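A minimal usage sketch of implicit Hankelization, assuming Tensorlab 3.0 is on the MATLAB path; the 'Order' and 'Full' options of hankelize follow the Tensorlab tensorization interface as documented, but treat the exact option names and the chosen signal as indicative rather than authoritative.

    % Signal with exponential structure: its Hankel tensor has low rank.
    N = 1000;
    t = (0:N-1).';
    x = 2*exp(-0.01*t) + exp(-0.05*t);

    % Implicit third-order Hankelization: with 'Full' set to false,
    % hankelize returns an efficient representation holding essentially
    % the length-N generating vector instead of the roughly (N/3)^3
    % entries (here over 3.7e7) of the full Hankel tensor.
    H = hankelize(x, 'Order', 3, 'Full', false);

    % Core operations and decompositions act on the representation
    % directly; the full multiway array is never constructed.
    nrm  = frob(H);   % Frobenius norm from the generating vector
    Uhat = cpd(H, 2); % rank-2 CPD: one rank-1 term per exponential in x

    % For moderate sizes the representation can still be expanded:
    % Hfull = ful(H);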

Reference

N. Vervliet, O. Debals, and L. De Lathauwer, "Exploiting efficient representations in large-scale tensor decompositions," SIAM Journal on Scientific Computing, vol. 41, no. 2, pp. A789–A815, Mar. 2019.

Download code



This repository can be cited as:
S. Hendrikx, M. Boussé, N. Vervliet, M. Vandecappelle, R. Kenis, and L. De Lathauwer, Tensorlab⁺, available online, version of Dec. 2022, downloaded from https://www.tensorlabplus.net.