Wasserstein gradient flows define evolution equations in the space of measures that play a fundamental role in PDEs, probability theory, and machine learning. But what happens when entropic optimal transport is used instead of the classical optimal transport that defines the Wasserstein geometry? I will explain why it may be relevant to use Sinkhorn divergences, built on entropic optimal transport, since they allow the regularization parameter to remain fixed. This approach leads to studying the Riemannian geometry induced by Sinkhorn divergences, which retains some characteristics of optimal transport geometry while being smoother. The gradient flows of potential energies in this geometry exhibit intriguing features that I will discuss. This is joint work with Mathis Hardion, Jonas Luckhardt, Gilles Mordant, Bernhard Schmitzer, and Luca Tamanini.
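To fix ideas, the Sinkhorn divergence in question is the debiased quantity S_eps(a, b) = OT_eps(a, b) - (1/2) OT_eps(a, a) - (1/2) OT_eps(b, b), where OT_eps is the entropy-regularized transport cost. The following is a minimal NumPy sketch of this construction for discrete measures (illustrative only, not code from the talk; the function names and the specific primal convention for OT_eps are my choices):

```python
import numpy as np

def entropic_ot(a, b, C, eps, n_iter=500):
    """Entropic OT cost OT_eps(a, b) = <P, C> + eps * KL(P | a x b),
    computed via plain Sinkhorn iterations on the Gibbs kernel."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):       # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # entropic transport plan
    return np.sum(P * C) + eps * np.sum(P * np.log(P / np.outer(a, b)))

def sinkhorn_divergence(a, b, C, eps, n_iter=500):
    """Debiased Sinkhorn divergence S_eps(a, b); assumes a and b share
    the same support, so one cost matrix C serves all three terms."""
    return (entropic_ot(a, b, C, eps, n_iter)
            - 0.5 * entropic_ot(a, a, C, eps, n_iter)
            - 0.5 * entropic_ot(b, b, C, eps, n_iter))

# Toy usage: two measures on three points of the real line, squared cost.
x = np.array([0.0, 1.0, 2.0])
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = (x[:, None] - x[None, :]) ** 2
print(sinkhorn_divergence(a, b, C, eps=0.5))   # positive, since a != b
print(sinkhorn_divergence(a, a, C, eps=0.5))   # zero by debiasing
```

The debiasing terms are what make it sensible to keep the regularization parameter eps fixed: S_eps vanishes exactly on the diagonal and stays nonnegative, unlike the raw entropic cost OT_eps.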