Riemannian Geometry & Manifolds

Domain Adaptation

We currently place a special focus on the problem of domain adaptation. In recent work, we proposed to view the data through the lens of covariance matrices and presented a method for domain adaptation using parallel transport on the cone manifold of symmetric positive-definite (SPD) matrices. We showed that this is a natural and efficient solution for domain adaptation, with several important benefits. First, the solution is specially designed for SPD matrices, which have proven to be good features of data in a gamut of previous work. Second, the analytic form of parallel transport on the cone manifold circumvents approximations. Third, parallel transport can be implemented efficiently, in contrast, for example, to the computationally demanding Schild's ladder approximation. We established the mathematical foundation of the proposed domain adaptation method, providing new results on the geometry of SPD matrices. In addition, we demonstrated applications to both simulated and real recorded data, obtaining improved performance compared to competing methods.
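For concreteness, here is a minimal NumPy sketch of this closed-form transport; the helper names (spd_power, parallel_transport) are illustrative and not taken from our released code. Under the affine-invariant metric, transporting an SPD matrix S along the geodesic from A to B amounts to Gamma(S) = E S E^T with E = (B A^{-1})^{1/2}.

```python
import numpy as np

def spd_power(C, p):
    """Raise a symmetric positive-definite matrix to a real power
    via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * w**p) @ V.T

def parallel_transport(S, A, B):
    """Parallel transport of the SPD matrix S along the geodesic from A to B
    on the SPD cone with the affine-invariant metric:
        Gamma(S) = E S E^T,  E = (B A^{-1})^{1/2}.
    E is computed in an equivalent, real-valued symmetric form:
        E = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{-1/2}."""
    A_half = spd_power(A, 0.5)
    A_ihalf = spd_power(A, -0.5)
    E = A_half @ spd_power(A_ihalf @ B @ A_ihalf, 0.5) @ A_ihalf
    return E @ S @ E.T
```

Note that when B is the identity, E reduces to A^{-1/2}, so the transport becomes the simple congruence Gamma(S) = A^{-1/2} S A^{-1/2}, which is the workhorse of the adaptation scheme described below.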

In particular, we showed that in a Brain Computer Interface (BCI) experiment, the covariance matrices of data acquired from a single subject in a specific session capture well the overall geometric structure of the data. In contrast, when the data consist of measurements from several subjects or several sessions, the covariance matrices do not live in the same region of the manifold. Such multi-domain data often pose significant challenges to learning approaches. In this BCI setting, our theory led to the development of a domain adaptation method that allows a classifier trained on data from one subject (domain) to be applied to data from another subject (domain).
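A hedged sketch of such a pipeline, assuming uniform-weight domains and using the log-Euclidean mean as an inexpensive stand-in for the Riemannian (Frechet) mean: each domain is re-centered at the identity by the transport above, after which tangent-space features from different subjects become directly comparable. The function names here are ours, chosen for illustration.

```python
import numpy as np

def _spd_fun(C, f):
    """Apply a scalar function f to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(C)
    return (V * f(w)) @ V.T

def log_euclidean_mean(covs):
    """Log-Euclidean mean of SPD matrices, a cheap surrogate for the
    Riemannian (Frechet) mean."""
    mean_log = np.mean([_spd_fun(C, np.log) for C in covs], axis=0)
    return _spd_fun(mean_log, np.exp)

def center_domain(covs):
    """Transport every covariance of a domain to the identity: with the
    domain mean M as source and I as target, E = M^{-1/2}, so
    Gamma(C) = M^{-1/2} C M^{-1/2}."""
    M_ihalf = _spd_fun(log_euclidean_mean(covs), lambda w: w**-0.5)
    return [M_ihalf @ C @ M_ihalf for C in covs]

def tangent_features(C):
    """Tangent-space features at the identity: the vectorized upper
    triangle of the matrix logarithm."""
    L = _spd_fun(C, np.log)
    return L[np.triu_indices_from(L)]
```

A classifier fit on tangent_features of the centered source-domain covariances can then be applied, unchanged, to the centered target domain.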

Riemannian Geometry of Diffusion Operators

We study the manifold of diffusion operators, on which we can define geometric, differential, and probabilistic structures. This research direction entails a fresh approach to manifold learning, departing from the traditional use of spectral decomposition of diffusion operators for embedding. Diffusion operators are typically positive (semi-)definite, and as such, have the manifold structure of a cone embedded within the ambient space of all symmetric operators. Consequently, discrete and finite diffusion operators can be projected onto a Euclidean space with explicit consideration of the geometry of the space of symmetric positive-definite matrices to which they belong.
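As an illustrative sketch (the kernel bandwidth and regularization choices are ours, not prescribed by this description), one can build a symmetric diffusion operator from data and project it onto a Euclidean tangent space via the matrix logarithm:

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_operator(X, eps):
    """Symmetric diffusion operator for data X (n_samples x n_features):
    Gaussian affinities K_ij = exp(-||x_i - x_j||^2 / eps), followed by the
    symmetric normalization D^{-1/2} K D^{-1/2}. The result is positive
    semi-definite, being congruent to the PSD Gaussian kernel matrix."""
    K = np.exp(-cdist(X, X, 'sqeuclidean') / eps)
    d = K.sum(axis=1)
    return K / np.sqrt(np.outer(d, d))

def project_to_tangent(A, reg=1e-10):
    """Project a (regularized) diffusion operator onto the Euclidean tangent
    space at the identity via the matrix logarithm (log-Euclidean view)."""
    w, V = np.linalg.eigh(A + reg * np.eye(len(A)))
    return (V * np.log(w)) @ V.T
```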

By pursuing this research path, we benefit from the power of individual diffusion operators as well as from their "induced" Riemannian geometry as an ensemble. For example, while each individual diffusion operator extracts the manifold structure of its own data set, transporting diffusion operators on the associated Riemannian manifold enables us to appropriately merge and compare entire data sets.
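For instance, assuming the operators have been regularized to be strictly positive-definite (as in the sketch above), two data sets can be compared through the affine-invariant geodesic distance between their diffusion operators:

```python
import numpy as np
from scipy.linalg import eigvalsh

def affine_invariant_distance(A, B):
    """Geodesic distance between SPD operators under the affine-invariant
    metric: ||logm(A^{-1/2} B A^{-1/2})||_F. The spectrum of
    A^{-1/2} B A^{-1/2} equals the generalized eigenvalues of (B, A)."""
    w = eigvalsh(B, A)  # solves B x = w A x
    return np.sqrt(np.sum(np.log(w) ** 2))
```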

Optimal Transport on Manifolds

Classical transformative results, e.g., the theory of optimal transportation on Riemannian manifolds, enable us to broaden the scope well beyond the SPD manifold. Recently, optimal transport schemes have gained considerable attention in image analysis and retrieval, computer vision, computer graphics and shape analysis, as well as domain adaptation and transfer learning, establishing connections with efficient computation schemes and metrics (e.g., the earth mover's distance).

We develop new Optimal Transport (OT) techniques on manifolds. In particular, we devise a distance between data sets and distributions that is induced by OT on manifolds. Our new tools, built on diffusion operators and manifold learning, are employed in a broad variety of applications, such as combating patient effects in medical recordings and batch effects in bioinformatics.
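As a minimal, hedged illustration of such a distance (the names spd_geodesic_cost and ot_dataset_distance are ours): take the affine-invariant geodesic distance as the ground cost between covariance matrices and solve the resulting discrete OT problem. For two equally-sized sets with uniform weights, the optimal plan is a permutation, so the exact OT cost can be obtained with an assignment solver:

```python
import numpy as np
from scipy.linalg import eigvalsh
from scipy.optimize import linear_sum_assignment

def spd_geodesic_cost(A, B):
    """Affine-invariant geodesic distance between SPD matrices."""
    w = eigvalsh(B, A)  # generalized eigenvalues of (B, A)
    return np.sqrt(np.sum(np.log(w) ** 2))

def ot_dataset_distance(covs_a, covs_b):
    """OT-induced distance between two equally-sized sets of SPD matrices.
    With uniform weights, the Kantorovich optimum is attained at a
    permutation matrix (an extreme point of the Birkhoff polytope), so the
    Hungarian algorithm yields the exact OT cost."""
    C = np.array([[spd_geodesic_cost(A, B) for B in covs_b] for A in covs_a])
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].mean()
```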