Motivated by their applications in machine learning and statistics, I consider variational problems on geometric graphs and study their continuum limits. For us a geometric graph is a graph whose nodes $x_1, \dots, x_n$ are i.i.d. samples from a distribution supported on some manifold embedded in Euclidean space, and where an edge between two nodes is present if the distance between them is below a certain length-scale. The first type of problem that I will consider in my talk is the minimization of balanced cut functionals such as Cheeger cuts, as well as relaxations of these functionals closely related to spectral clustering. The second type of problem that I will present appears in the context of Bayesian inverse problems; one specific example is the Bayesian formulation of semi-supervised learning.
The main question that I will attempt to answer in this talk is: how and when do these variational problems on geometric graphs converge, as $n \rightarrow \infty$, towards meaningful variational problems in the continuum? Besides presenting an overarching mathematical framework that allows us to study the continuum limits of these problems, I will also discuss the qualitative and quantitative implications that our results entail. This talk is based on joint works with Dejan Slepcev and with Daniel Sanz-Alonso.
Prager Assistant Professor