Tensor Rank Visualization

Martin J. Mohlenkamp

Copyright (c) 2009, 2013 Martin J. Mohlenkamp. This documentation is distributed under the GNU Free Documentation License.

Introduction

Since the notation and terminology are inconsistent in the literature, we first have to define what we are talking about. A tensor is an object with some number of indices that yields a real or complex number. We denote it

T(i_1,i_2,\ldots,i_d).

The dimension d of the tensor is the number of indices. Thus an ordinary vector is a tensor in dimension one, and an ordinary matrix is a tensor in dimension two. Each index has some range of allowed values. The resolution M_k in an index i_k is the number of allowed values, so

i_k=1,2,\ldots,M_k\,.

For simplicity we will often assume the resolution is the same in all indices and denote it M.

A tensor T in dimension d with resolution M contains M^d numbers. For even moderate d and M this is an astronomical number, and there is no way to directly store, compute, or manipulate such a tensor. This effect is known as the Curse of Dimensionality [BELLMA1961].

A separable tensor is one that can be written as a product

T(i_1,i_2,\ldots,i_d)=V_1(i_1)V_2(i_2)\cdots V_d(i_d)
=\prod_{k=1}^d V_k(i_k)
\,.

To write T without explicitly listing its indices, we use the tensor product

T=V_1\otimes V_2\otimes\cdots \otimes V_d
=\bigotimes_{k=1}^d V_k
\,.
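As a concrete illustration (a sketch only, assuming NumPy is available; the variable names are ours, not part of the software), a separable tensor in dimension three can be built as the tensor product of three vectors:

```python
import numpy as np

# A separable tensor in dimension d = 3: the tensor product of three
# vectors, so T[i, j, k] = V1[i] * V2[j] * V3[k].
V1 = np.array([1.0, 2.0])
V2 = np.array([3.0, 4.0])
V3 = np.array([5.0, 6.0])

# np.einsum constructs the tensor product directly from its index definition.
T = np.einsum('i,j,k->ijk', V1, V2, V3)

# Check one entry against the defining product: 1 * 4 * 6 = 24.
print(T[0, 1, 1])  # 24.0
```

Every entry of `T` factors as a product of one value from each vector, which is exactly the defining property of a separable tensor.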

A tensor can be expressed with separation rank r if it can be written

T(i_1,i_2,\ldots,i_d)=V^1_1(i_1)V^1_2(i_2)\cdots V^1_d(i_d)+
V^2_1(i_1)V^2_2(i_2)\cdots V^2_d(i_d)+\cdots
+V^r_1(i_1)V^r_2(i_2)\cdots V^r_d(i_d)
=\sum_{l=1}^r \prod_{k=1}^d  V^l_k(i_k)
\,,

or equivalently

T=\sum_{l=1}^r \bigotimes_{k=1}^d  V^l_k
\,,

where the superscript in V^l_k is an index rather than a power. Such a tensor can be stored using only rdM numbers (d vectors of length M for each of the r terms) instead of M^d. Any tensor can be written in this form with r=M^{d-1}, but the representation is useful only when r is small.
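The storage savings can be sketched in a few lines (an illustration assuming NumPy; `factors` and `entry` are our own hypothetical names). We store only the r*d factor vectors and evaluate individual entries of T on demand, without ever forming the full tensor:

```python
import numpy as np

# Separated storage: r*d vectors of length M (r*d*M numbers) instead of
# the M^d entries of the full tensor.
M, d, r = 10, 6, 3
rng = np.random.default_rng(0)
factors = rng.standard_normal((r, d, M))  # factors[l, k] plays the role of V^l_k

def entry(factors, idx):
    """Evaluate T(i_1,...,i_d) = sum_l prod_k V^l_k(i_k) on demand."""
    r, d, _ = factors.shape
    return sum(np.prod([factors[l, k, idx[k]] for k in range(d)])
               for l in range(r))

print(r * d * M, "separated numbers vs", M ** d, "full entries")  # 180 vs 1000000
```

Even at this modest size the separated form needs 180 numbers where the full tensor needs a million; the gap widens exponentially as d grows.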

Given a tensor with separation rank r, it may or may not be possible to express it with a smaller r. There are several examples where tensors have been written or approximated with much smaller r than expected (see e.g. [BEY-MOH2005], [MOH-MON2005]). In each of these examples a proof can be given, but there is no apparent connection between the proofs. Overall, there is very little understanding of why approximations with small r seem to be effective. Our understanding is hampered by the following:

  • The set of tensors with separation rank at most r is not a subspace, since in general the sum of two such tensors has separation rank 2r. Thus this set is not flat, but instead has nontrivial geometry and topology.
  • The smallest interesting example is 2\times 2 \times 2, which has 8 entries and so would need to be plotted in R^8.
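The first point can be checked numerically (a sketch assuming NumPy; `matricization_rank` is our own hypothetical helper). A separable tensor has rank-1 matricizations, but the sum of two generic separable tensors does not, so the set of tensors with separation rank at most 1 is not closed under addition:

```python
import numpy as np

# Two generic separable (separation rank 1) 2x2x2 tensors.
rng = np.random.default_rng(1)
A = np.einsum('i,j,k->ijk', *rng.standard_normal((3, 2)))
B = np.einsum('i,j,k->ijk', *rng.standard_normal((3, 2)))

def matricization_rank(T):
    """Rank of the 2x4 matrix obtained by flattening the last two indices.
    For a separable tensor this rank is 1."""
    return np.linalg.matrix_rank(T.reshape(2, 4))

print(matricization_rank(A), matricization_rank(B))  # 1 1
print(matricization_rank(A + B))  # generically 2, so A + B is not separable
```

Since A + B generically has a rank-2 matricization, it cannot be written as a single product of vectors, confirming that the set is not a subspace.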

The goal of this work is to provide a way to visually explore the set of tensors with a given r. We hope that this exploration will yield unexpected structure and inspire better understanding of such tensors. This understanding may then lead to proofs on why approximations with small r are unexpectedly effective.

See [KOL-BAD2009] for a general review of tensor decompositions.

Software (including download)

In this section we explain what the software does to produce the visualizations and how to use it.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant DMS-0545895.

I would like to thank Sruthi Devarachetty, Nam Nguyen, Krishna Chaitanya Palavarpu, and Robert Vanyo, who were research assistants on this project while students at Ohio University.

I would also like to thank Shmuel Friedland, Lek-Heng Lim, Fernando Perez, Alwin Stegeman, and Jos ten Berge for their comments, corrections, and suggestions.

Bibliography

[BELLMA1961] Richard Bellman, Adaptive Control Processes: A Guided Tour, Princeton Univ. Press, Princeton, New Jersey, 1961.
[BEY-MOH2005] G. Beylkin and M.J. Mohlenkamp, Algorithms for numerical analysis in high dimensions, SIAM J. Sci. Comput., 26 (2005), pp. 2133-2159.
[DES-LIM2008] V. De Silva and L.-H. Lim, Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl., 30(3), September 2008.
[FRI-MEH] S. Friedland and V. Mehrmann, Best subspace tensor approximations, http://arxiv.org/abs/0805.4220
[KOL-BAD2009] T.G. Kolda and B.W. Bader, Tensor decompositions and applications, SIAM Review, 51(3) (2009), pp. 455-500.
[KRUSKA1989] J.B. Kruskal, Rank, decomposition, and uniqueness for 3-way and N-way arrays, in Multiway Data Analysis, R. Coppi and S. Bolasco, eds., North-Holland, Amsterdam, 1989, pp. 7-18.
[MOH-MON2005] M.J. Mohlenkamp and L. Monzón, Trigonometric identities and sums of separable functions, The Mathematical Intelligencer, 27 (2005), pp. 65-69. http://www.ohio.edu/people/mohlenka/research/sine.pdf
