UFTDA2024 Talks

Hannah Santa Cruz Baur

Hodge Laplacians on Sequences

Hodge Laplacians have previously been proposed as a natural tool for understanding higher-order interactions in networks and directed graphs. In this talk, we will introduce a Hodge-theoretic approach to spectral theory and dimensionality reduction for probability distributions on sequences and simplicial complexes. We will demonstrate that this Hodge theory has desirable properties with respect to natural null models in which the underlying vertices are independent. In particular, we will see that for independent vertices in simplicial complexes, the appropriate Laplacians are multiples of the identity and thus have no meaningful Fourier modes. For the null model of independent vertices in sequences, we will see that the appropriate Hodge Laplacian has an integer spectrum with high multiplicities, and we will describe its eigenspaces.
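
As background for readers meeting these operators for the first time, here is a minimal numerical sketch of the standard combinatorial Hodge Laplacian L_k = B_k^T B_k + B_{k+1} B_{k+1}^T on a toy complex; the sequence-specific and null-model Laplacians of the talk are not reproduced here.

```python
# Minimal sketch: combinatorial Hodge Laplacians of a hollow triangle
# (vertices {0,1,2}, edges {01, 02, 12}, no filled 2-simplices).
import numpy as np

# Boundary matrix B1: rows index vertices, columns index oriented edges.
B1 = np.array([
    [-1, -1,  0],   # vertex 0 appears in edges 01 and 02
    [ 1,  0, -1],   # vertex 1 appears in edges 01 and 12
    [ 0,  1,  1],   # vertex 2 appears in edges 02 and 12
])
B2 = np.zeros((3, 0))  # no triangles filled in

L0 = B1 @ B1.T               # graph Laplacian (k = 0)
L1 = B1.T @ B1 + B2 @ B2.T   # Hodge 1-Laplacian

# dim ker L1 equals the first Betti number: 1 for the hollow triangle.
eigvals = np.linalg.eigvalsh(L1)
print("spectrum of L1:", np.round(eigvals, 6))
print("b1 =", int(np.sum(np.isclose(eigvals, 0))))
```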

Michael Catanzaro

Topological Parallax: A Geometric Specification for Deep Perception Models

In typical deep learning applications, models like deep neural networks can be trained to 100% accuracy, with exceptional performance on validation sets yet poor generalization on unseen data. We characterize this misalignment between model and data as the failure of a geometric property that is essential to trustworthy interpolation and perturbation. We introduce topological parallax to quantify the topological mismatch between model and data, and, using multiparameter persistence, we show that parallax is stable under interpolation and noise perturbation. Parallax can indicate whether a model shares stable multiscale geometric features with its training dataset, and can thus provide better generalization guarantees. We will assume little background in topology or deep learning and will explore these ideas through a variety of examples.

Nate Clause

The discriminating power of the generalized rank invariant

In topological data analysis, the rank invariant is one of the best-known invariants of a persistence module M over a poset P: it is the map sending each comparable pair p<=q in P to the rank of the linear map M(p<=q). The recently introduced generalized rank invariant acquires more discriminating power than the rank invariant at the expense of enlarging its domain to a collection I of subsets of P that contains all segments [p, q]={r in P: p<=r<=q} of P. We study the tension between computational efficiency and the discriminating power of the generalized rank invariant as the size of the collection I varies. To do so, we make use of the Möbius inversion formula from combinatorics, which allows us to quantify precisely how the strength of the generalized rank invariant changes as its domain I changes. Notably, we find that Möbius inversion is useful even in cases where the domain I is not locally finite. Along the way, we establish a few sufficient conditions for the generalized rank invariant of M over a non-locally-finite I to be fully encoded by a multiset of signed intervals of P.
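
For orientation, here is the classical special case over a finite linear order, where Möbius inversion of the rank invariant recovers barcode multiplicities by inclusion-exclusion; this standard identity is included only as an anchor, while the talk concerns much more general domains I.

```latex
% Multiplicity of the bar [b,d] in the barcode of M over P = \{1 < \cdots < n\},
% recovered from the rank invariant by Mobius inversion (inclusion-exclusion);
% by convention rk_M = 0 when an argument falls outside P.
\mathrm{dgm}_M(b,d) \;=\; \mathrm{rk}_M(b,d) - \mathrm{rk}_M(b-1,d)
  - \mathrm{rk}_M(b,d+1) + \mathrm{rk}_M(b-1,d+1),
\qquad \mathrm{rk}_M(p,q) = \mathrm{rank}\, M(p \le q).
```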

Parker Edwards

A computational viewpoint on distance functions and applications

Algorithms that deal with point cloud input in computational geometry and topology often provide some form of the following guarantee: if X is a compact subspace of R^n and P is a finite point cloud whose Hausdorff distance to X is “small enough”, then the algorithm with input P correctly computes information about X. “Small enough” is determined by geometric feature sizes such as the reach of X (Federer 1957) and the weak feature size of X (Grove and Shiohama 1977; Chazal and Lieutier 2004).
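
A minimal sketch of how the hypothesis in such guarantees can be checked in a controlled setting: estimate d_H(P, X) by comparing a noisy sample P against a dense reference sample of a known X (here the unit circle; purely illustrative).

```python
# Estimate the Hausdorff distance between a noisy sample P and a dense
# reference sample standing in for the compact set X (the unit circle).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)

theta_dense = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
X = np.column_stack([np.cos(theta_dense), np.sin(theta_dense)])

theta_P = rng.uniform(0, 2 * np.pi, 50)
P = np.column_stack([np.cos(theta_P), np.sin(theta_P)])
P += rng.normal(scale=0.02, size=P.shape)  # sampling noise

d_H = max(directed_hausdorff(P, X)[0], directed_hausdorff(X, P)[0])
print(f"estimated d_H(P, X) = {d_H:.4f}")
# For the unit circle, reach(X) = 1; guarantees of the kind described
# above require d_H(P, X) to be small relative to such feature sizes.
```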

The problem of computing these feature sizes is only partially understood, even in the “easy” case where one has as input a list of polynomial equations whose real solutions define an algebraic subspace of R^n. I will discuss this problem, as well as new theory and computational results which show that the weak feature size is well-behaved and computable with numerical algebraic geometry methods for generic algebraic manifolds.

This is joint work with Sandra Di Rocco, David Eklund, Oliver Gäfvert, and Jonathan D. Hauenstein.

Alex Elchesen

Möbius Homology as a Derived Functor

Möbius homology is a categorification of the Möbius inversion formula recently introduced by Patel and Skraba. In this talk, we’ll introduce Möbius homology and then reinterpret it as a derived functor on the (derived) category of persistence modules. We’ll then relate this perspective to the Betti signed barcode introduced by Oudot and Scoccola. This is joint work with Amit Patel.

Aziz Burak Gülen

Orthogonal Möbius Inversion and Grassmannian Persistence Diagrams

We introduce the notion of Orthogonal Möbius Inversion, which can be applied to functions that take inner product spaces as values. When applied to the birth-death spaces, or to the space of harmonic persistent cycles (i.e., the kernel of the persistent Laplacian), we prove that this inversion produces canonical representatives for each bar in the barcode of a filtration. Furthermore, we establish that these representatives are stable with respect to the edit distance.

Iryna Hartsock

Learning on Persistence Diagrams as Radon Measures 

Persistence diagrams are common descriptors of the topological structure of data appearing in various classification and regression tasks. They can be generalized to Radon measures supported on the upper half plane and endowed with an optimal transport distance. Examples of such measures are expectations of probability distributions on the space of persistence diagrams. In my talk, I will focus on the task of learning continuous functions on the space of Radon measures supported on the upper half plane. First, I will describe subsets of the space of continuous functions on such measures that are dense with respect to the compact-open topology. Second, I will discuss complications in characterizing the relatively compact subsets of the space of Radon measures. Lastly, I will discuss how to implement learning tasks on Radon measures in practice.

This is joint work with Alex Elchesen, Jose Perea, and Tatum Rask.

Michael Hull

Gaussian Persistence Curves

Gaussian persistence curves are a family of smooth functional summaries of persistence diagrams. We will discuss some theoretical properties of these summaries as well as applications to texture classification of grey-scale images.

Hengrui Luo

Contrastive Learning in Multi-Scale Shape and Topological Data Analysis

In this ongoing work, we delve into contrastive learning within the realms of multi-scale shape and topological data analysis, transitioning from local to global contrasts. We begin by presenting a multi-scale contrastive landmark learning method for planar curves, adept at identifying features across various scales and capturing both local and global shape details. Progressing to global contrasts, we introduce a topological contrastive dimension reduction technique that accentuates differences in topological features within reduced-dimensional spaces, employing persistence diagrams to highlight multi-scale topological variations. The current work focuses on methodological advances in multi-scale contrastive learning, aimed at unveiling new perspectives on shape and topological data when there are intrinsic differences between two or more datasets.

Nikola Milicevic

Homotopy and singular homology groups of finite (di)graphs

We extend classical results in algebraic topology to higher homotopy groups and singular homology groups of pseudotopological spaces. Pseudotopological spaces are a generalization of topological spaces that also includes graphs and directed graphs. More specifically, we show the existence of a long exact sequence for homotopy groups of pairs of closure spaces, and that a weak homotopy equivalence induces isomorphisms on homology groups.

Our main result is the construction of weak homotopy equivalences between the geometric realizations of (directed) clique complexes and their underlying (directed) graphs. This implies that singular homology groups of finite graphs can be efficiently calculated from finite combinatorial structures, despite their associated chain groups being infinite-dimensional, as the sketch below illustrates. This work is analogous to what McCord did for finite topological spaces, but in the context of pseudotopological spaces. Our results also give a novel approach for studying (higher) homotopy groups of discrete mathematical structures such as digital images.
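
A minimal illustration of the resulting finite computation, assuming the networkx and gudhi libraries are available: the (singular) homology of a finite graph is read off from the simplicial homology of its clique complex.

```python
# Betti numbers of the clique complex of a finite graph: a finite
# combinatorial computation standing in for singular homology.
import networkx as nx
import gudhi

G = nx.cycle_graph(6)  # hollow hexagon: expect b0 = 1, b1 = 1
st = gudhi.SimplexTree()
for clique in nx.enumerate_all_cliques(G):  # vertices, edges, triangles, ...
    st.insert(clique)

st.compute_persistence(persistence_dim_max=True)
print("Betti numbers:", st.betti_numbers())
```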

Rachel Neville

A TDA Approach to Snowpack Roughness 

Roughness of the snowpack surface exhibits a high degree of spatiotemporal variance. Surface roughness is a parameter of fundamental importance for estimating turbulent fluxes and is an input to all existing numerical models of surface-atmosphere interactions. However, it proves difficult to estimate numerically from real data. We’ll describe geometric and topological techniques for estimating roughness from airborne LIDAR measurements and share some advantages of taking a topological perspective.

Matt Piekenbrock

Spectral relaxations of persistent rank invariants

Using the fact that the persistent rank invariant determines the persistence diagram and vice versa, we introduce a framework for constructing families of continuous relaxations of the persistent rank invariant for persistence modules indexed over the real line. Like the rank invariant, these families obey inclusion-exclusion, are derived from simplicial boundary operators, and encode all the information needed to construct a persistence diagram. Unlike the rank invariant, these spectrally derived families enjoy a number of stability and continuity properties typically reserved for persistence diagrams, such as smoothness and differentiability. By leveraging its relationship with combinatorial Laplacian operators, we find that the non-harmonic spectra of our proposed relaxation encode valuable geometric information about the underlying space, prompting several avenues for geometric data analysis. As these Laplacian operators are trace-class operators, we also find that the corresponding relaxation can be efficiently approximated with a randomized algorithm based on the stochastic Lanczos quadrature method. We investigate the utility of our relaxation with applications in topological data analysis and machine learning, such as parameter optimization and shape classification.
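
A sketch of generic stochastic Lanczos quadrature for estimating tr(f(L)) from matrix-vector products alone; the smooth rank relaxation f below is one plausible choice for illustration, not necessarily the paper's.

```python
# Stochastic Lanczos quadrature (SLQ): estimate tr(f(A)) for symmetric
# PSD A using Rademacher probes and k-step Lanczos tridiagonalization.
import numpy as np

def lanczos(A, v, k):
    """k-step Lanczos: the tridiagonal T whose eigenpairs drive the quadrature."""
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q = v / np.linalg.norm(v)
    q_prev = np.zeros_like(q)
    for j in range(k):
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def slq_trace(A, f, k=20, n_samples=30, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe, ||v||^2 = n
        evals, evecs = np.linalg.eigh(lanczos(A, v, k))
        # Gauss quadrature: weights are squared first components of T's eigvecs.
        total += n * np.sum(evecs[0, :] ** 2 * f(evals))
    return total / n_samples

# Toy check: approximate the rank of a PSD matrix via a smoothed indicator.
rng = np.random.default_rng(1)
B = rng.normal(size=(60, 25))
L = B @ B.T                       # PSD, rank 25
f = lambda x: x / (x + 1e-2)      # smooth relaxation of the rank indicator
print("SLQ rank estimate:", round(slq_trace(L, f), 2), "(true rank 25)")
```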

Tatum Rask

Poincaré Duality for Generalized Persistence Diagrams of (co)Filtrations 

We dualize previous work on generalized persistence diagrams for filtrations to cofiltrations. When the underlying space is a manifold, we express this duality as a Poincaré duality between their generalized persistence diagrams. A heavy emphasis is placed on the recent discovery of functoriality of the generalized persistence diagram and its connection to Rota’s Galois Connection Theorem.

Chunyin Siu

Homology and Homotopy Properties of Scale-Free Networks

Many real-world networks are believed to be scale-free. We study the random model of preferential attachment for such networks. Our results show that preferential attachment favors higher-order connectivity, in the sense that it drives the growth of Betti numbers in the finite-graph setting and annihilates homotopy groups in the infinite-graph setting. More precisely, we determine the high-probability growth rates of the Betti numbers of the clique complexes of finite preferential attachment graphs (see the sketch below), as well as the sharp threshold at which the infinite clique complex becomes homotopy-connected almost surely. This is joint work with Gennady Samorodnitsky, Christina Lee Yu, Rongyi He, and Avhan Misra. The talk is based on the preprint https://arxiv.org/abs/2305.11259 and a work in preparation.
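
The finite-graph side of these results can be explored empirically with the same clique-complex pipeline sketched earlier, assuming networkx and gudhi; the graph sizes here are purely illustrative and far below the asymptotic regime.

```python
# Betti numbers of clique complexes of preferential attachment graphs
# at increasing sizes, to eyeball the growth the theorem quantifies.
import networkx as nx
import gudhi

for n in (100, 200, 400):
    G = nx.barabasi_albert_graph(n, m=3, seed=0)
    st = gudhi.SimplexTree()
    for clique in nx.enumerate_all_cliques(G):
        st.insert(clique)
    st.compute_persistence(persistence_dim_max=True)
    print(n, st.betti_numbers())
```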

Brantley Vose

Harmonic Representatives for Homology over Finite Fields

Practitioners of persistent homology often prefer to perform their calculations over finite fields due to the computational benefits that such fields afford. The real coefficient setting, on the other hand, furnishes a beautiful constellation of results that one might call the “theory of harmonicity”: discrete Laplacians, harmonic representatives, and Hodge decompositions. In this talk, we will attempt to bridge the gap between the real and finite field versions of homology by transporting some aspects of the theory of harmonicity into the setting of finite fields. We will explore the feasibility of Hodge decompositions over finite fields and the existence and uniqueness of harmonic representatives.
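
The basic obstruction alluded to above can be seen in one line: over a finite field the standard bilinear form admits nonzero self-orthogonal vectors, so orthogonal projections onto harmonic subspaces need not exist. A minimal example over GF(5):

```python
# Over GF(5) the vector x = (1, 2) is orthogonal to itself under the
# standard bilinear form, since 1^2 + 2^2 = 5 = 0 (mod 5). Such vectors
# obstruct naive Hodge-type orthogonal decompositions over finite fields.
p = 5
x = (1, 2)
print(sum(c * c for c in x) % p)  # prints 0: x is self-orthogonal
```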

Alex Wagner

On-Manifold Projected Gradient Descent

This work provides a computable, direct, and mathematically rigorous approximation to the differential geometry of class manifolds for high-dimensional data, along with nonlinear projections from input space onto these class manifolds. The tools are applied to the setting of neural network image classifiers, where we generate novel, on-manifold data samples and implement a projected gradient descent algorithm for on-manifold adversarial training.

The susceptibility of neural networks (NNs) to adversarial attack highlights the brittle nature of NN decision boundaries in input space. Introducing adversarial examples during training has been shown to reduce this susceptibility; however, it has also been shown to reduce the accuracy of the classifier if the examples are not valid examples for that class. Realistic “on-manifold” examples have previously been generated from class manifolds in the latent space of an autoencoder. Our work explores these phenomena in a geometric and computational setting that is much closer to the raw, high-dimensional input space than can be provided by a VAE or other black-box dimensionality reductions.

We employ conformally invariant diffusion maps (CIDM) to approximate class manifolds in diffusion coordinates, and develop the Nyström projection to project novel points onto class manifolds in this setting. On top of the manifold approximation, we leverage the spectral exterior calculus (SEC) to determine geometric quantities such as tangent vectors of the manifold. We use these tools to obtain adversarial examples that reside on a class manifold yet fool a classifier. These misclassifications then become explainable in terms of human-understandable manipulations within the data, by expressing the on-manifold adversary in the semantic basis of the manifold.
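
A minimal sketch of the projected-gradient loop described above, with a toy analytic projection onto the unit circle standing in for the CIDM/Nyström projection onto a learned class manifold; the classifier, loss, and all names below are illustrative stand-ins, not the authors' implementation.

```python
# On-manifold projected gradient descent, toy version: ascend an
# adversarial loss, then snap each iterate back onto the manifold.
import numpy as np

def project(x):
    """Stand-in for the on-manifold projection: nearest point on the unit circle."""
    return x / np.linalg.norm(x)

def loss_grad(x, w):
    """Gradient of a toy linear classifier score f(x) = w . x (constant in x)."""
    return w

def on_manifold_pgd(x0, w, step=0.1, iters=20):
    x = project(x0)
    for _ in range(iters):
        x = x + step * loss_grad(x, w)  # gradient step on the adversarial loss
        x = project(x)                  # projection keeps the iterate on-manifold
    return x

x0 = np.array([0.6, 0.8])
w = np.array([1.0, -1.0])
print("on-manifold adversarial point:", on_manifold_pgd(x0, w).round(3))
```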