International Conference on the Spectral Theory of Tensors
Abstract
A tensor singular value decomposition based upon the *-product defined on third-order tensors by Kilmer, Martin and Perrone provides an effective means of generalizing Principal Component Analysis. This talk will focus on tensor eigendecompositions and generalized eigendecompositions under the *-product. Discussion will include an overview of the theoretical underpinnings, some computational concerns, and the use of tensor eigendecompositions in a generalization of Fisher's Linear Discriminant Analysis as applied to object recognition.

The number of eigenvalues of a tensor
Dustin Cartwright (Yale University / MPI)
Abstract
I will discuss the computation of the number of eigenpairs of a generic tensor. I will also discuss what may happen for special tensors, for which eigenpairs may coincide or their number may become infinite. These questions are related to the structure of the characteristic polynomial of a tensor.

On the Z-eigenvalues of the signless Laplacian tensor for an even uniform hypergraph
An Chang (Fuzhou University)
Abstract
In the last two decades, spectral graph theory has become one of the most active branches of graph theory due to its various applications in many disciplines. It is the branch of mathematics that studies graphs by using algebraic properties of associated matrices, such as the adjacency matrix or the Laplacian matrix. Just as graphs are related to matrices, hypergraphs are in a natural way related to tensors, which can reveal more higher-order structure than matrices can.
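The hypergraph-to-tensor correspondence just mentioned can be made concrete. The sketch below (an illustration, not part of the talk) builds the order-m adjacency tensor of an m-uniform hypergraph, with each edge contributing 1/(m-1)! at every permutation of its vertices — a normalization common in the spectral hypergraph literature.

```python
import math
from itertools import permutations
import numpy as np

def adjacency_tensor(n, edges, m=3):
    """Order-m adjacency tensor of an m-uniform hypergraph on n vertices:
    each edge contributes 1/(m-1)! at every permutation of its vertices,
    so that slice sums recover vertex degrees."""
    A = np.zeros((n,) * m)
    w = 1.0 / math.factorial(m - 1)
    for e in edges:
        for p in permutations(e):
            A[p] = w
    return A

# 3-uniform hypergraph on 4 vertices with edges {0,1,2} and {1,2,3}
A = adjacency_tensor(4, [(0, 1, 2), (1, 2, 3)])
degrees = A.sum(axis=(1, 2))   # vertex degrees: [1, 2, 2, 1]
```

With this normalization the sum over the last m-1 indices of each slice gives the degree of the corresponding vertex, mirroring the row sums of a graph's adjacency matrix.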
In this talk, we generalize the signless Laplacian matrices of graphs to the signless Laplacian tensors of even uniform hypergraphs and establish some fundamental properties for a spectral hypergraph theory based upon them. In particular, the smallest and the largest Z-eigenvalues of the signless Laplacian tensor of an even uniform hypergraph are studied, and their connections with hypergraph bipartition, maximum degree and edge cut are discussed.

Eigenvalues for stochastic tensors
Kungching Chang (Peking University)
Abstract
Basic properties of transition probability tensors, introduced by Ng et al., are studied. The uniqueness of the positive eigenvector is not necessarily true in general. Sufficient conditions on this class of tensors are investigated to ensure the uniqueness of the positive Z-eigenvector. Three different methods are applied in this study: contraction mappings, monotone operators, and the Brouwer index of fixed points. This is joint work with T. Zhang.

Spectra of Hypergraphs
Joshua Cooper (University of South Carolina)
Abstract
We present a spectral theory of hypergraphs that closely parallels graph spectral theory. Classic work by Gel'fand-Kapranov-Zelevinsky and Canny, as well as more recent developments by Chang, Lim, Pearson, Qi, Zhang, and others, has led to a rich understanding of "hyperdeterminants" of hypermatrices, a.k.a. multidimensional arrays. Hyperdeterminants share many properties with determinants, but the context of multilinear algebra is substantially more complicated than the linear algebra required to understand spectral graph theory (i.e., ordinary matrices). Nonetheless, it is possible to define eigenvalues of a tensor via its characteristic polynomial and variationally. We apply this notion to the "adjacency hypermatrix" of a uniform hypergraph, and prove a number of natural analogues of graph-theoretic results. Computations are particularly cumbersome with hyperdeterminants, so we discuss software developed in Sage which can perform basic calculations on small hypergraphs. Open problems abound, and we present a few directions for further research. Joint work with Aaron Dutle of the University of South Carolina.
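One of the basic small-hypergraph computations alluded to above — the largest H-eigenvalue of a nonnegative tensor — can be sketched with a power iteration in the style of the Ng-Qi-Zhou method (a rough illustration, not the talk's Sage software; the diagonal shift and stopping rule are our choices):

```python
import numpy as np

def largest_h_eigenvalue(A, shift=1.0, iters=1000, tol=1e-10):
    """NQZ-style power iteration for the largest H-eigenvalue of a
    nonnegative order-3 tensor: x <- ((A x^2)_i + shift*x_i^2)^(1/2),
    normalized. The Collatz-Wielandt ratios y_i / x_i^2 bracket the
    eigenvalue; for weakly irreducible tensors they converge to it."""
    n = A.shape[0]
    x = np.ones(n) / n
    ratios = np.zeros(n)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', A, x, x) + shift * x**2
        ratios = y / x**2
        if ratios.max() - ratios.min() < tol:
            break
        x = np.sqrt(y)
        x /= x.sum()
    return ratios.max() - shift

# all-ones 3x3x3 tensor: the largest H-eigenvalue is n^(m-1) = 9
lam = largest_h_eigenvalue(np.ones((3, 3, 3)))
```

For the all-ones tensor the Perron eigenvector is the uniform vector, so the iteration terminates essentially immediately.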
A subspace projection method for finding the extreme Z-eigenvalues of a supersymmetric positive definite tensor
Yu-Hong Dai (AMSS, Chinese Academy of Sciences)
Abstract
A subspace projection method is presented for finding the extreme Z-eigenvalues of an m-th order n-dimensional supersymmetric positive definite tensor $A$. The idea is based on the geometric property of Z-eigenvalues and Z-eigenvectors that the shortest distance from $S=\{x\in\mathbb{R}^n \mid Ax^m=c,\ c>0\}$ to the origin is $\sigma_{\min}=(c/\lambda_{\max})^{1/m}$. This shortest distance can be obtained by projecting the original problem onto a two-dimensional subspace at each iteration, where the Z-eigenvalue of the reduced tensor can be computed directly. A subspace projection algorithm is proposed. Preliminary numerical results show that the algorithm performs very well. Some extensions are also considered. This is joint work with Chunlin Hao.
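The geometric fact the method rests on can be checked numerically in the simple diagonal case (our illustrative sketch, not the authors' code): for a diagonal positive definite fourth-order tensor, every point of S = {x : A x^4 = c} has norm at least (c/λ_max)^(1/4), with equality along the coordinate of the largest diagonal entry.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, c = 4, 5, 2.0
a = rng.uniform(0.5, 3.0, size=n)   # diagonal order-4 tensor: A x^4 = sum_i a_i x_i^4
lam_max = a.max()                   # largest Z-eigenvalue of a positive diagonal tensor
sigma_min = (c / lam_max) ** (1.0 / m)

# every point of S = {x : A x^4 = c} has norm >= sigma_min ...
for _ in range(1000):
    u = rng.normal(size=n)
    t = (c / (a * u**4).sum()) ** (1.0 / m)   # scale u onto the level set S
    assert np.linalg.norm(t * u) >= sigma_min - 1e-12

# ... with equality along the coordinate of the largest diagonal entry
x_star = np.zeros(n)
x_star[a.argmax()] = (c / lam_max) ** (1.0 / m)
assert abs(np.linalg.norm(x_star) - sigma_min) < 1e-12
```

The inequality follows from sum_i a_i u_i^4 <= λ_max * ||u||^4, which is exactly the variational characterization of the largest Z-eigenvalue restricted to this diagonal example.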
Canonical Polyadic Decomposition and Block Term Decompositions: Uniqueness and Signal Separation
Lieven De Lathauwer (KU Leuven)
Abstract
Canonical Polyadic Decomposition (CPD) writes a higher-order tensor as a minimal sum of rank-1 tensors. Kruskal derived the following uniqueness result: let A, B, C denote the factor matrices in a PD of a tensor T, and suppose kA + kB + kC ≥ 2R+2, where kA, kB, kC denote the k-ranks of A, B, C, respectively; then R is the rank of T, i.e., the PD is canonical, and the decomposition is unique. Other uniqueness conditions have been derived by Jiang and Sidiropoulos, and by De Lathauwer. These conditions are on one hand more restrictive than Kruskal's, in the sense that one of the factor matrices, say C, needs to have full column rank, which implies that the rank R may not exceed all tensor dimensions. On the other hand, the results are an order of magnitude more relaxed than Kruskal's in terms of the conditions on A and B. In this talk we present new CPD uniqueness conditions, connecting the Jiang-Sidiropoulos-De Lathauwer results with Kruskal's result. We relax the condition on C, no longer requiring that it have full column rank, while making the conditions on A and B more restrictive. Part of the study leads to new Kruskal-type conditions that are more relaxed than the original.
For instance, we explain that if kA + rB + rC ≥ 2R+2, rA + kB + rC ≥ 2R+2, and rA + rB + kC ≥ 2R+2, where rA, rB, rC denote the ranks of A, B, C, respectively, then the PD is canonical and the decomposition is unique. The recently introduced Block Term Decompositions (BTD) write a higher-order tensor as a minimal sum of tensors of certain low multilinear rank. We present generalizations of some of the CPD uniqueness conditions. The uniqueness properties of CPD and BTDs make them basic tools for signal separation. Time permitting, we explain their use and show application examples. Joint work with Ignat Domanov (KU Leuven).
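Kruskal's sufficient condition is straightforward to check numerically for small factor matrices. Below is a brute-force sketch (the helper names are ours, not the talk's) that computes k-ranks by testing every column subset:

```python
from itertools import combinations
import numpy as np

def k_rank(M, tol=1e-10):
    """k-rank (Kruskal rank): the largest k such that EVERY set of k
    columns of M is linearly independent."""
    n_cols = M.shape[1]
    for k in range(n_cols, 0, -1):
        if all(np.linalg.matrix_rank(M[:, list(cols)], tol=tol) == k
               for cols in combinations(range(n_cols), k)):
            return k
    return 0

def kruskal_condition(A, B, C):
    """Kruskal's sufficient condition for CPD uniqueness:
    kA + kB + kC >= 2R + 2, where R is the number of rank-1 terms."""
    R = A.shape[1]
    return k_rank(A) + k_rank(B) + k_rank(C) >= 2 * R + 2

rng = np.random.default_rng(1)
A, B, C = (rng.normal(size=(4, 3)) for _ in range(3))
# generic 4x3 factors have k-rank 3, so 3 + 3 + 3 = 9 >= 2*3 + 2 = 8
assert kruskal_condition(A, B, C)
```

Note the k-rank differs from the ordinary rank: a matrix with two equal nonzero columns has rank up to n-1 but k-rank 1, which is exactly why the relaxed conditions in the talk (mixing k-ranks and ranks) can be strictly weaker than Kruskal's.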
From nonnegative matrices to nonnegative tensors
Shmuel Friedland (University of Illinois at Chicago)
Abstract
In this talk we will discuss a number of generalizations of results on nonnegative matrices to nonnegative tensors, such as: irreducibility and weak irreducibility, the Perron-Frobenius theorem, the Collatz-Wielandt characterization, Kingman's inequality, the Karlin-Ost and Friedland theorems, the tropical spectral radius, diagonal scaling, the Friedland-Karlin inequality, and nonnegative multilinear forms.

A Framework for Tensor Spectral Decomposition
Edinah K. Gnang (Rutgers University)
Abstract
We propose a general framework for tensor spectral decomposition. Our proposed factorization decomposes a tensor into a product of orthogonal and scaling tensors. At the same time, it yields an expansion of a tensor as a summation of outer products of lower-order tensors. We show the relationship between the eigen-objects and the generalized characteristic polynomials.
Our framework is based on a consistent multilinear algebra, which suggests how to generalize the notions of matrix Hermiticity, matrix transpose, and, most importantly, orthogonality. Our proposed factorization of a tensor in terms of lower-order tensors can be applied recursively so as to yield a tensor spectral hierarchy.

The Complex Optimal Step Size for Tensor Decompositions
Deren Han (Nanjing Normal University)
Abstract
In signal processing, data analysis and scientific computing, one often encounters the problem of decomposing a tensor into a sum of contributions. To solve such problems, the search direction and the step size are two crucial elements in numerical algorithms, such as the alternating least squares (ALS) algorithm. Owing to the nonlinearity of the problem, the often-used linear search direction is not always powerful enough. In this paper, we propose two higher-order search directions. The first, the geometric search direction, is constructed via a combination of two successive linear directions. The second, the algebraic search direction, is constructed via a quadratic approximation of three successive iterates.
Then, in an enhanced line search along these directions, the optimal complex step size contains two arguments: modulus and phase. A current strategy, ELSCS, finds these two arguments alternately, so it may suffer from a local optimum. We propose a direct method which determines these two arguments simultaneously, so as to obtain the global optimum. Finally, numerical comparisons of various search direction and step size schemes are reported in the context of blind separation-equalization of convolutive DS-CDMA mixtures. The results show that the new search directions greatly improve the efficiency of ALS and that the new step size strategy is competitive.
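For reference, the baseline ALS scheme that these search directions and line searches accelerate can be sketched as follows (a generic textbook implementation, not the paper's code):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker (Khatri-Rao) product."""
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def als_cpd(T, R, iters=200, seed=0):
    """Plain alternating least squares for a rank-R CPD of a 3rd-order
    tensor: each factor is the exact least-squares solution given the
    other two (no line search or higher-order directions)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.normal(size=(I, R))
    B = rng.normal(size=(J, R))
    C = rng.normal(size=(K, R))
    T1 = T.reshape(I, -1)                      # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)   # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(iters):
        A = np.linalg.lstsq(khatri_rao(B, C), T1.T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T2.T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T3.T, rcond=None)[0].T
    return A, B, C

# recover an exactly rank-2 tensor
rng = np.random.default_rng(42)
A0, B0, C0 = (rng.normal(size=(4, 2)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = als_cpd(T, 2)
err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
```

On hard instances this plain scheme exhibits the slow "swamp" behavior that motivates enhanced line searches such as the one in the talk.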
An unconstrained optimization approach for finding eigenvalues of even order symmetric tensors
Lixing Han (University of Michigan-Flint)

Approximation Schemes for Polynomial and Tensor Optimization
Simai He (City University of Hong Kong)
Abstract
Both polynomial and tensor optimization problems have been challenging research topics, with a wide range of applications. We first construct a constant-ratio transformation from polynomial optimization problems to tensor optimization problems. For a wide class of tensor optimization problems, we discuss various approximation schemes with the assistance of randomized algorithms. We also discuss some recent developments on probability bounds which contribute to fast algorithms for the approximation of tensor problems.

The geometric measure of entanglement of pure states with nonnegative amplitudes and the spectral theory of nonnegative tensors
Shenglong Hu (The Hong Kong Polytechnic University)
Abstract
The geometric measure of entanglement for a symmetric pure state with nonnegative amplitudes has attracted much attention, and the spectral theory of nonnegative tensors (hypermatrices) has developed rapidly.
In this talk, we show how the spectral theory of nonnegative tensors can be applied to the study of the geometric measure of entanglement for a pure state with nonnegative amplitudes. In particular, an elimination method is given for computing the geometric measure of entanglement for symmetric pure multipartite qubit or qutrit states with nonnegative amplitudes. For symmetric pure multipartite qudit states with nonnegative amplitudes, a numerical algorithm with randomization is presented and proven to be convergent. We also show that, for the geometric measure of entanglement for pure states with nonnegative amplitudes, the nonsymmetric case can be converted to the symmetric one.

Eigenvector fields on m-th root Finsler metrics
Benling Li (Ningbo University)
Abstract
In this talk, we will introduce the m-th root Finsler metric, which can be regarded as an m-tensor. Using the definition of eigenvalues of tensors, we define and study these eigenvalues and the corresponding eigenvector fields on m-th root metrics.
Some recent results will be discussed.

Maximum Eigenvalues of a Symmetric Tensor: a Variational Analysis Approach
Guoyin Li (University of New South Wales, Australia)
Abstract
Determining the maximum eigenvalue of a symmetric tensor is of great importance in applied mathematics and engineering, and is an intrinsically hard problem. The problem arises in various important engineering applications, such as the stability of nonlinear autonomous systems in automatic control, and provides a rich and fruitful interaction between multilinear algebra and modern variational analysis. We establish some new theoretical results on the maximum eigenvalue function of an even-order symmetric tensor from the variational analysis point of view. In particular, for an $m$th-order $n$-dimensional symmetric tensor $\mathcal{A}$, we establish that the maximum $Z$-eigenvalue and maximum $H$-eigenvalue functions are $\rho$th-order semismooth at $\mathcal{A}$ under suitable regularity conditions, and provide explicit estimates (in terms of the order $m$ and dimension $n$) of the exponent $\rho$. Finally, we present some preliminary studies on applications to spectral hypergraph theory. This is joint work with Prof. Liqun Qi and Dr. Gaohang Yu.
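For context, one standard iterative scheme for approximating a maximum Z-eigenpair of a symmetric tensor is the shifted symmetric higher-order power method of Kolda and Mayo — sketched here purely as background, not as the talk's method:

```python
import numpy as np

def ss_hopm(A, x0, shift=1.0, iters=500, tol=1e-12):
    """Shifted symmetric higher-order power method (Kolda-Mayo) for a
    Z-eigenpair of a symmetric order-3 tensor:
        x <- normalize(A x^2 + shift * x).
    A sufficiently large positive shift makes the iteration monotone;
    the limit is a Z-eigenpair, often the maximal one from a good start."""
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', A, x, x) + shift * x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # Rayleigh-type quotient
    return lam, x
```

For the symmetric rank-1 tensor A = a⊗a⊗a with ||a|| = 1, the pair (1, a) is a Z-eigenpair, and the iteration recovers it from any start with positive overlap with a.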
Perturbation analysis for the largest eigenvalue of nonnegative tensors
Wen Li (South China Normal University)
Abstract
In this talk, we present some perturbation bounds for the largest eigenvalue of an m-th order n-dimensional nonnegative tensor. A computable bound is also given. Our result can be applied to compute the largest eigenvalue of a general nonnegative tensor to a given precision. We also prove that the Ng-Qi-Zhou algorithm for computing the largest eigenvalue is backward stable. Numerical examples are presented to illustrate the theoretical results of our perturbation analysis. This is joint work with Michael Ng.

New eigenvalue inclusion sets for tensors and their applications
Yaotang Li, Chao-Qian Li (Yunnan University)
Abstract
Two new eigenvalue inclusion sets for tensors are established.
It is proved that the new eigenvalue inclusion sets are tighter than that in [Qi L. Eigenvalues of a real supersymmetric tensor. Journal of Symbolic Computation 2005; 40:1302-1324]. As applications, upper bounds for the spectral radius of a nonnegative tensor are obtained, and it is proved that these upper bounds are sharper than that in [Yang Y, Yang Q. Further results for Perron-Frobenius Theorem for nonnegative tensors. SIAM Journal on Matrix Analysis and Applications 2010; 31:2517-2530]. Some sufficient conditions for the positive definiteness of an even-order real supersymmetric tensor are also given.

New method for polynomial and tensor optimization
Zhening Li (Shanghai University)
Abstract
We propose an efficient method for solving polynomial optimization and tensor optimization problems. The new approach has three main ingredients. First, we establish a block coordinate descent type search method for nonlinear optimization, with the novelty being that we accept only a block update that achieves the maximum improvement, hence the name of the new search method: maximum block improvement (MBI). Convergence of the sequence produced by the MBI method to a stationary point is proved.
Second, we establish that maximizing a homogeneous polynomial over a sphere is equivalent to its tensor relaxation problem; thus we can maximize a homogeneous polynomial over a sphere via its tensor relaxation using the MBI approach. Third, we propose a scheme to reach a KKT point of the polynomial optimization problem, provided that a stationary solution for the relaxed tensor problem is available. Numerical experiments show that the new method works very efficiently: for a majority of the test instances we have experimented with, the method finds the global optimal solution at a low computational cost.
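The MBI update just described can be sketched for the spherical rank-one case, maximizing the multilinear form ⟨T, x∘y∘z⟩ over unit spheres (a minimal illustration with our own naming, not the paper's code):

```python
import numpy as np

def mbi_rank1(T, iters=200, seed=0):
    """Maximum Block Improvement sketch for maximizing <T, x o y o z>
    over unit spheres: compute the best single-block update for each of
    x, y, z (a normalized contraction), then apply ONLY the update with
    the largest resulting objective value."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    x, y, z = (v / np.linalg.norm(v)
               for v in (rng.normal(size=I), rng.normal(size=J), rng.normal(size=K)))
    f = np.einsum('ijk,i,j,k->', T, x, y, z)
    for _ in range(iters):
        gx = np.einsum('ijk,j,k->i', T, y, z)   # best x is gx/||gx||, value ||gx||
        gy = np.einsum('ijk,i,k->j', T, x, z)
        gz = np.einsum('ijk,i,j->k', T, x, y)
        best_f, idx = max((np.linalg.norm(g), i) for i, g in enumerate((gx, gy, gz)))
        if best_f <= f + 1e-14:
            break                                # no block improves: stationary
        g = (gx, gy, gz)[idx]
        if idx == 0:
            x = g / best_f
        elif idx == 1:
            y = g / best_f
        else:
            z = g / best_f
        f = best_f
    return f, (x, y, z)
```

The contrast with plain block coordinate descent is the `max` step: cyclic BCD would apply all three updates in turn, whereas MBI greedily keeps only the most improving block, which is what the convergence proof exploits.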
Decidability, Complexity, and Approximability of Tensor Eigenvalues and Singular Values
Lek-Heng Lim (University of Chicago)
Abstract
We will discuss difficulties associated with determining various spectral attributes of a 3-tensor: (i) over the rational numbers, deciding eigenvalues or singular values of a 3-tensor is undecidable; (ii) over the real or complex numbers, deciding eigenvalues or singular values of a 3-tensor or a symmetric 3-tensor is NP-hard; (iii) over the real numbers, enumerating eigenvectors of a 3-tensor is #P-complete; (iv) over the real numbers, eigenvalues, eigenvectors, singular values, and singular vectors of a 3-tensor are inapproximable in polynomial time. This is joint work with Chris Hillar.

A tensor's singular values and its symmetric embedding eigenvalues
Linzhang Lu (Xiamen University)
Abstract
We study the connection between a tensor's singular values and the eigenvalues of its symmetric embedding, extending the results of S. Ragnarsson and C. F. Van Loan (Linear Algebra and its Applications, in press, June 2011). The proofs of several of their lemmas are simplified, since tensor-vector multiplication is adopted in our analysis.
Furthermore, based on this connection, an iterative algorithm is proposed for finding the largest singular value of a nonnegative tensor, and numerical experiments are given to demonstrate the viability of the algorithm.

A Unified Convergence Analysis of Block Successive Upper-bound Minimization Methods for Nonsmooth Optimization
Zhi-Quan Luo (University of Minnesota)
Abstract
A popular approach to solving a large-scale optimization problem under independent constraints is to cyclically update a subset of variables by minimizing a locally tight convex upper bound of the original (possibly nonsmooth) cost function. This approach includes the well-known block coordinate descent (BCD) method, the block coordinate proximal point method, and the expectation maximization (EM) method, among others. In this work, we establish the convergence of the method under mild assumptions on the convex upper bound used at each iteration.
Our work unifies, extends and strengthens the existing convergence analysis of the BCD and EM methods, and can be used to derive the convergence of block successive upper-bound minimization methods for tensor decomposition, linear transceiver design in wireless networks, and DC (difference of convex functions) programming, among others.

On Projectively Flat Finsler Metrics
Xiaohuan Mo (Peking University)
Abstract
In this lecture, we discuss some basic theory of projectively flat Finsler metrics. We describe non-trivial examples of projective Finsler metrics satisfying different curvature conditions. We review local and global results in projective Finsler geometry.

An algebraic view on tensor decomposition
Bernard Mourrain (INRIA GALAAD)
Abstract
Tensors are used to collect data according to different "modes" or dimensions.
Decomposing them is often used to extract intrinsic information hidden in the data. This problem appears in many domains, such as signal processing, data analysis, complexity analysis, and phylogenetics. In such domains, the input data, coming from measurements, is known with some uncertainty. Revisiting the approach of J. J. Sylvester for the decomposition of binary forms from a dual point of view, we will describe how it extends to general forms and relate it to some recent developments on the decomposition problem for symmetric and multi-homogeneous tensors. We will see how solving truncated moment problems related to Hankel matrices helps compute such decompositions. An algorithm based on an extension of Sylvester's approach will be described. This also leads us to a reformulation of the approximate decomposition problem in terms of structured low-rank approximation for approximate input data. Examples will illustrate the approach.

New Algorithms for Tensor Decomposition based on a Reduced Functional
Carmeliza Navasca (University of Alabama)
Abstract
We study the least-squares functional of the canonical polyadic decomposition by elimination of one factor matrix, which leads to a reduced functional. An analysis of the reduced functional gives several equivalent optimization problems, such as a Rayleigh quotient or a projection. These formulations are the basis of several new algorithms: the centroid projection method for efficient computation of suboptimal solutions, and two fixed-point iterations for approximating the best rank-one and the best rank-R decomposition under certain non-degeneracy conditions. (This is joint work with S.
Kindermann.)

Sparse Non-negative Tensor Equations: Algorithms and Applications
Michael Ng (Hong Kong Baptist University)
Abstract
The main aim of this talk is to develop iterative methods for solving a set of sparse non-negative tensor equations arising from the information sciences, such as network analysis. Based on the structure of non-negative tensors, we develop Jacobi and Gauss-Seidel methods for solving such non-negative tensor equations. The advantage of the proposed methods is that only multiplications of tensors with vectors are required at each iteration. Thus we have only sparse tensor-vector operations, and the set of non-negative tensor equations can be solved very efficiently. Experimental results on information retrieval by query search and community discovery in networks are reported to demonstrate the effectiveness and efficiency of the proposed methods.
A quadratically convergent algorithm for finding the largest eigenvalue of a nonnegative homogeneous polynomial map
Qin Ni (Nanjing University of Aeronautics and Astronautics)
Abstract
In this talk we propose a quadratically convergent algorithm for finding the largest eigenvalue of a nonnegative homogeneous polynomial map, in which the Newton method is used to solve an equivalent system of nonlinear equations. The semi-symmetric tensor is introduced to reveal the relation between a homogeneous polynomial map and its associated semi-symmetric tensor. Based on this relation, a globally and quadratically convergent algorithm is established, in which a line search is inserted. Some numerical results of this method are reported.

Eigenvectors and tensor decomposition
Giorgio Ottaviani (Università di Firenze)
Abstract
Let S^pV be the p-th symmetric power of a complex vector space V. A vector v in V is called a (generalized) eigenvector of a linear map M: S^pV --> V if M(v^p) = av for some scalar a, called the eigenvalue. When a = 1 this notion coincides with the usual definition of an eigenvector. We show how eigenvectors are useful in the problem of tensor decomposition (how to decompose a tensor into a sum of decomposable ones) and we sketch the connection with the complexity of the matrix multiplication algorithm.
Tensor Analysis and Its Applications in Image and Video Processing
Lizhong Peng (Peking University)
Abstract
The concept of a tensor is a higher-order generalization of vectors and matrices. Because tensor methods take into account the inherent high-dimensional structure of the data, they are considered to have more potential than traditional matrix methods such as the SVD. In recent years, interest in tensor methods has expanded to signal and image processing and other areas, initiating new ideas and methods for image and video processing. Tensor eigenvalues and decompositions are promising tools for processing and analyzing high-dimensional data. Illumination detection is a key technology for image analysis and video surveillance. We propose the definition of the D-eigenvalue of an arbitrary-order tensor relative to a second-order tensor D, and introduce the gradient skewness tensor, a third-order tensor derived from the skewness statistic of gradient images. Since the skewness of oriented gradients can measure the directional characteristic of illumination in an image, the local illumination detection problem for an image can be cast as computing the largest D-eigenvalue of gradient skewness tensors. Numerical experiments show its effective application in illumination detection. The method also gives excellent results in a class of image authenticity verification problems, namely distinguishing artificial flat objects in a photograph. Tensor methods can also effectively explore the native form and features of dynamic textures. Therefore, a series of algorithms are proposed to describe and analyze the nature of dynamic textures.
Experiments show that the methods are very effective in many applications, such as dynamic background subtraction, modeling, coding and classification of dynamic textures, and crowd density estimation.
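The D-eigenvalue computations above belong to the family of tensor eigenvalue problems that are typically attacked by higher-order power iterations. As a hedged illustration (not the speakers' own code), here is a minimal shifted power iteration for a Z-eigenpair of a symmetric third-order tensor; the function name `z_eigenpair` and the shift parameter `alpha` are assumptions of this sketch:

```python
import numpy as np

def z_eigenpair(T, alpha=1.0, iters=500, tol=1e-10):
    """Shifted power iteration for a Z-eigenpair (T x x = lam * x, ||x|| = 1)
    of a symmetric third-order tensor T of shape (n, n, n).  A sketch only;
    convergence is to *some* Z-eigenpair, not necessarily the largest one."""
    n = T.shape[0]
    x = np.ones(n) / np.sqrt(n)                    # generic starting vector
    for _ in range(iters):
        Tx2 = np.einsum('ijk,j,k->i', T, x, x)     # the map x -> T x x
        y = Tx2 + alpha * x                        # positive shift stabilizes the iteration
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    lam = x @ np.einsum('ijk,j,k->i', T, x, x)     # Rayleigh-type quotient
    return lam, x
```

For a symmetric rank-one tensor a⊗a⊗a with a of unit norm, the iteration recovers the eigenpair (1, a).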
From Sparsity to Rank, and Beyond: algebra, geometry, and convexity
Pablo A. Parrilo (MIT)
Abstract
Optimization problems involving sparse vectors or low-rank matrices are of great importance in applied mathematics and engineering. They provide a rich and fruitful interaction between algebraic-geometric concepts and convex optimization, with strong synergies with popular techniques like L1 and nuclear norm minimization. In this lecture we will provide a gentle introduction to this exciting research area, highlighting key algebraic-geometric ideas as well as a survey of recent developments, including extensions to very general families of parsimonious models, such as sums of a few permutation matrices, low-rank tensors, orthogonal matrices, and atomic measures, as well as the corresponding structure-inducing norms. Based on joint work with Venkat Chandrasekaran, Maryam Fazel, Ben Recht, Sujay Sanghavi, and Alan Willsky.

The Quantum Eigenvalue Problem
Liqun Qi (The Hong Kong Polytechnic University)
Abstract
The quantum entanglement problem is a central problem in quantum physics.
A geometric measure of the entanglement of a general m-partite state can be described as the minimum distance to the set of separable states. The optimality conditions for the square of this minimum distance form the quantum eigenvalue problem. The quantum eigenvalues are all real. The largest quantum eigenvalue, called the entanglement eigenvalue, corresponds to the nearest separable state. The quantum eigenvalue problem has a close link with the Z-eigenvalue problem. In this talk, we will discuss the minimum Hartree value of the quantum entanglement problem, the geometric measure of entanglement of pure states with nonnegative amplitudes, the geometric measure of entanglement of mixed states, and the relations between the quantum eigenvalue problem and the Z-eigenvalue problem.

A Tensor Decomposition Framework for Mapping the Human Brain Connectome
Thomas Schultz (MPI for Intelligent Systems)
Abstract
Diffusion MRI (dMRI) is a modern imaging technique that plays a central role in an ongoing effort to map the human connectome, the entirety of the nerve connections in the human brain, at a macro scale.
In this talk, I will explain how symmetric low-rank tensor approximations can be used to overcome the crossing fiber problem, which is one of the fundamental challenges in producing maps of neural fibers from dMRI data. I will carefully motivate the tensor model and formally show how another popular fiber model can be reduced to it. I will cover theoretical and algorithmic aspects of the approximation and touch upon the issue of geometric representations for visualization. Finally, I will discuss a principled yet pragmatic, machine-learning-based way to determine a suitable approximation rank. This is joint work with Lek-Heng Lim (UChicago).

Approximating Norm-Constrained Polynomial Optimization Problems via the Algorithmic Theory of Convex Bodies
Anthony Man-Cho So (The Chinese University of Hong Kong)
Abstract
In recent years, norm-constrained polynomial optimization has found applications in many different areas, including the spectral theory of tensors, signal processing, data analysis and quantum physics. Given their generality, norm-constrained polynomial optimization problems are typically intractable, which leads to the question of their approximability. In this talk, we will discuss the close connection between norm-constrained polynomial optimization and the algorithmic theory of convex bodies.
Then, we will demonstrate how techniques from the latter can be used to prove the best approximation results known to date for various classes of norm-constrained polynomial optimization problems.

Positive Semidefinite Generalized Diffusion Tensor Imaging via Quadratic Semidefinite Programming
Wenyu Sun (Nanjing Normal University)
Abstract
Keeping a diffusion tensor positive definite is important in magnetic resonance imaging (MRI) because it reflects the phenomenon of water molecular diffusion in the complicated environment of biological tissues. To preserve this property, we represent it as an explicit positive semidefinite (PSD) matrix constraint together with some linear matrix equalities. The objective function is the regularized linear least-squares fit to the log-linearized Stejskal-Tanner equation. The regularization term is the nuclear norm of the PSD matrix. We establish a convex quadratic semidefinite programming (SDP) model, whose global solution exists. For the primal problem, there are two state-of-the-art solvers: SDPT3 and QSDP. In this paper, we propose to use the augmented-Lagrangian-based alternating direction method (ADM) for the dual problem.
Some sensitivity analysis of the coefficients of the optimal diffusion tensor and the optimal objective value with respect to noise-corrupted signals is presented. Experiments on synthetic data show that the new method is robust to Rician noise and competitive with many existing methods. Using some human brain data, we illustrate that the new method is efficient.

Numerical methods with tensor representations of data
Eugene Tyrtyshnikov (Institute of Numerical Mathematics, Russian Academy of Sciences)
Abstract
Most standard objects in numerical algorithms are vectors and matrices. However, they can often be considered as d-dimensional arrays, or tensors.
Therefore, one may use various tensor decompositions for exact or approximate representation of those arrays. The advantage is that the data is then determined by a small number of representation parameters instead of the full number of elements of the corresponding array. It is also necessary that all operations on vectors and matrices use only the decompositions, and never the full set of elements of the corresponding arrays. The choice of tensor decomposition is a nontrivial problem. A suitable solution has been found only in the last three years and is based on the tensor-train (TT) and hierarchical Tucker (HT) decompositions, both being implementations of the same approach to the reduction of dimensionality. We consider these new representation formats for tensors, the key achievements, applications, and research challenges.
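The TT format mentioned above is commonly constructed by sequential truncated SVDs of matrix unfoldings. The following is a minimal sketch of that construction; the function names and the truncation rule are assumptions of this example, not the speaker's code:

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Decompose a d-dimensional array A into tensor-train (TT) cores
    G_k of shape (r_{k-1}, n_k, r_k) by sequential truncated SVDs."""
    dims = A.shape
    d = len(dims)
    cores = []
    r = 1
    M = A.reshape(r * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))       # drop tiny singular values
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full d-dimensional array."""
    G = cores[0]
    for C in cores[1:]:
        G = np.tensordot(G, C, axes=([G.ndim - 1], [0]))
    return G.reshape([c.shape[1] for c in cores])
```

With a tight tolerance the reconstruction is exact up to floating point, and for data of low TT rank the cores hold far fewer parameters than the full array.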
Minimum rank solutions to matrix approximation problems in spectral norm
Musheng Wei (Shanghai Normal University)
Abstract
In this talk, we discuss the following minimum-rank matrix approximation problem in spectral norm: $\min_X \operatorname{rank}(X)$, subject to $\|A-BXC\|_2 = \min$. By applying the norm-preserving dilation theorem, the restricted singular value decomposition (R-SVD), the H-SVD, and the S-H-SVD, we characterize the expression of the minimum rank and derive a general form of the minimum rank solutions to the matrix approximation problems.

Backward Error and Perturbation Bound for High Order Sylvester Tensor Equation
Yimin Wei (Fudan University)
Abstract
In this talk, we analyze the backward error and perturbation bounds for the high-order Sylvester tensor equation (STE). We present bounds on the backward error and three types of upper bounds for the perturbed STE, with or without dropping the second-order terms. We extend several classic perturbation results for the Sylvester equation to the high-order case.

Some properties of H-eigenvalues of nonnegative tensors
Qingzhi Yang (Nankai University)
Abstract
In this talk I will introduce some properties of the H-eigenvalues of nonnegative tensors.
A method is presented to determine whether a nonnegative tensor is irreducible. We then state the generalized Perron-Frobenius theorem for irreducible tensors and its weak version for nonnegative tensors. We also show the real geometric simplicity of the spectral radius under certain conditions and characterize the distribution of eigenvalues on the spectral circle for irreducible tensors. Based on the minimax theorem, we show that the spectral radius can be expressed as the optimal objective value of a special convex optimization problem. This is joint work with Yuning Yang.

Reformulation of the homogeneous polynomial optimization over unit spheres and duality
Yuning Yang (Nankai University)
Abstract
In this talk, we show that minimizing a quartic form over a unit sphere is equivalent to minimizing a convex quadratic function over the intersection of a unit sphere, a hyperplane and the semidefinite cone, which is a nonconvex SDP. We then study the Lagrangian dual of the nonconvex SDP and show that there is no duality gap between the primal and dual problems. This result can be extended to homogeneous polynomials of even degree. We implement an alternating direction method to solve the nonconvex SDP.
Although convergence is not guaranteed in theory, preliminary numerical results show that, given a good initial point, the ADM can reach the global optimal solution with high probability.

Theory of Semidefinite Programming for a Quartic Polynomial Minimization Problem: Sensor Network Localization
Yinyu Ye (Stanford University)
Abstract
Graph realization is to determine the locations/positions of a set of points under incomplete pairwise Euclidean distance information, which can be formulated as a quartic polynomial minimization problem.
Often, such a problem is complicated by the presence of points whose positions cannot be uniquely determined. Most existing work uses the notion of global rigidity from rigidity theory to address the non-uniqueness issue for a given framework (G, P), where G is the problem graph and P is a given position matrix. However, such notions are not entirely satisfactory, as it has been shown that even if an instance is known to be globally rigid, the problem of determining the point positions is still intractable in general. In this talk, we analyze the notion of universal rigidity to bridge this disconnect. Although the notion of universal rigidity is more restrictive than that of global rigidity, it captures a large class of graphs and is much more relevant to the efficient solvability of the problem. Specifically, we show that both the problem of deciding whether a given graph realization instance is universally rigid and the problem of determining the point positions of a universally rigid instance can be solved efficiently in polynomial time using semidefinite programming (SDP). We then give various constructions of universally rigid instances. In particular, we show that trilateration graphs with points in general position are always universally rigid, and triangulation graphs in general position are also universally rigid with a suitable objective function to maximize in the SDP formulation.

On algorithms for computing the spectral radius of a nonnegative tensor
Liping Zhang (Tsinghua University)
Abstract
In this talk, we report on some existing algorithms for computing the spectral radius of an irreducible nonnegative tensor. We establish the linear convergence of these algorithms for some special classes of nonnegative tensors. Finally, we propose an always linearly convergent algorithm for computing the spectral radius of any nonnegative tensor.
Furthermore, we apply the proposed algorithm to study the positive definiteness of a multivariate form.

A Study of Nonnegative Polynomial Functions
Shuzhong Zhang (University of Minnesota)
Abstract
In this talk we shall discuss the notion of nonnegative polynomial functions. Unlike their quadratic counterparts, nonnegative polynomial functions are not uniquely defined, nor are they easily computable. We will discuss six different cones generated from nonnegative quartic polynomial functions. These convex cones are in decreasing order, much like Russian Matryoshka dolls, with varying computational complexities. We discuss the modeling power and applications of these convex cones.
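Several of the abstracts above concern power-type iterations for the spectral radius of a nonnegative tensor. As a hedged illustration, in the spirit of the Ng-Qi-Zhou scheme (the function name, normalization, and stopping rule here are assumptions of this sketch, not any speaker's code):

```python
import numpy as np

def spectral_radius(A, iters=200, tol=1e-12):
    """Power-type iteration for the spectral radius of a nonnegative
    third-order tensor A (H-eigenvalue problem A x^2 = lam * x^[2]).
    Convergence assumes A is irreducible."""
    n = A.shape[0]
    x = np.ones(n)                                  # positive starting vector
    lo, hi = 0.0, np.inf
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', A, x, x)        # y = A x^{m-1} with m = 3
        ratios = y / x**2                           # componentwise y_i / x_i^{m-1}
        lo, hi = ratios.min(), ratios.max()         # lower/upper bounds on rho(A)
        if hi - lo < tol:
            break
        x = y ** 0.5                                # x <- (A x^{m-1})^{1/(m-1)}
        x /= x.max()                                # keep the iterate bounded
    return 0.5 * (lo + hi)
```

For the all-ones tensor of order m on n indices the spectral radius is n^{m-1}, which the iteration reproduces immediately.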
Some recent developments on Z-eigenvalues of nonnegative tensors
Tan Zhang (Murray State University)
Abstract
Many important spectral properties of nonnegative matrices have recently been successfully extended to higher-order nonnegative tensors based on the H-eigenvalues introduced by Qi. The main purpose of this talk is to reveal some similarities, as well as differences, between the Z-eigenvalues and H-eigenvalues of a nonnegative tensor.

The cubic spherical optimization problems
Xinzhen Zhang (Tianjin University)
Abstract
In this talk, the cubic spherical optimization problems, including the cubic one-spherical, two-spherical and three-spherical optimization problems, are discussed. We first show that the two-spherical optimization problem is a special case of the three-spherical optimization problem. Then we show that the one-spherical and two-spherical optimization problems have the same optimal value when the tensor is symmetric. In addition, their NP-hardness is established. For the cubic three-spherical optimization problem, we discuss the conditions under which the problem is polynomial-time solvable and a polynomial-time approximation scheme (PTAS) exists. Then we present a relative quality bound obtained by finding the largest singular values of matrices. Finally, a practical method for solving the cubic three-spherical optimization problem is proposed and preliminary numerical results are reported.
We first show that the two-spherical optimization problem is a special case of the three-spherical optimization problem. Then we show that the one-spherical optimization problem and the two-spherical optimization problem have the same optimal value when the tensor is symmetric. In addition, NP-hardness of them are established. For the cubic three-spherical optimization problem, we discuss the conditions under which the problem is polynomial time solvable and polynomial time approximation scheme (PTAS) exists. Then we present a relative quality bound by finding the largest singular values of matrices. Finally, a practical method for solving the cubic three-spherical optimization problem is proposed and preliminary numerical results are reported.
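The cubic three-spherical problem above maximizes the trilinear form sum_{ijk} A_{ijk} x_i y_j z_k over three unit spheres. One common practical approach of this flavor is alternating maximization over the three factors; the sketch below is our own illustration of that idea, not the method proposed in the talk, and all names are ours.

```python
import numpy as np

def als_trilinear_max(A, iters=200, seed=0):
    """Alternating maximization of <A, x (x) y (x) z> over unit spheres.
    Each subproblem in one factor (with the other two fixed) is linear,
    so its maximizer is the normalized contraction of A. A sketch only:
    converges to a stationary point, not necessarily the global optimum."""
    rng = np.random.default_rng(seed)
    n1, n2, n3 = A.shape
    y = rng.standard_normal(n2); y /= np.linalg.norm(y)
    z = rng.standard_normal(n3); z /= np.linalg.norm(z)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', A, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', A, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', A, x, y); z /= np.linalg.norm(z)
    return np.einsum('ijk,i,j,k->', A, x, y, z), (x, y, z)
```

On a rank-one tensor A = c * u (x) v (x) w with unit vectors u, v, w, the alternating updates align the factors with u, v, w after one sweep, so the returned value is |c|.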
Similar resources
The Sign-Real Spectral Radius for Real Tensors
In this paper a new quantity for real tensors, the sign-real spectral radius, is defined and investigated. Various characterizations, bounds and some properties are derived. In certain aspects our quantity shows similar behavior to the spectral radius of a nonnegative tensor. In fact, we generalize the Perron-Frobenius theorem for nonnegative tensors to the class of real tensors.
Tensor Spectral Clustering for Partitioning Higher-order Network Structures
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) ...
On the Exponent of Triple Tensor Product of p-Groups
The non-abelian tensor product of groups, which has its origins in algebraic K-theory as well as in homotopy theory, was introduced by Brown and Loday in 1987. Group-theoretical aspects of non-abelian tensor products have been studied extensively. In particular, some studies focused on the relationship between the exponent of a group and the exponent of its tensor square. On the other hand, com...
On tensor product $L$-functions and Langlands functoriality
In the spirit of the Langlands proposal on Beyond Endoscopy we discuss the explicit relation between the Langlands functorial transfers and automorphic $L$-functions. It is well known that the poles of the $L$-functions have a deep impact on the Langlands functoriality. Our discussion also includes the meaning of the central value of the tensor product $L$-functions in terms of the Langl...
On Some Properties of the Max Algebra System Over Tensors
Recently we generalized the max algebra system to the class of nonnegative tensors. In this paper we give some basic properties for the left (right) inverse, under the new system. The existence of order 2 left (right) inverse of tensors is characterized. Also we generalize the direct product of matrices to the direct product of tensors (of the same order, but may be different dimensions) and i...
The second International Conference on Holy Prophet Mohammad's Tradition (Sireye Nabavi) in Medicine