Foundational Research
Understanding latent representations
We explore the structure and semantics of internal representations learned by machine learning models.
Selected publications:
- Similarity and Matching of Neural Network Representations
Adrián Csiszárik, Péter Kőrösi-Szabó, Ákos K. Matszangosz, Gergely Papp, Dániel Varga
NeurIPS 2021
- Mode Combinability: Exploring Convex Combinations of Permutation Aligned Models
Adrián Csiszárik, Melinda F. Kiss, Péter Kőrösi-Szabó, Márton Muntag, Gergely Papp, Dániel Varga
Neural Networks 2024
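As an illustration of the kind of representation-similarity measure studied in this area, here is a minimal sketch of linear centered kernel alignment (CKA), a standard similarity index for comparing layer activations. This is an illustrative assumption for exposition, not necessarily the method used in the listed papers.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representation matrices X (n x d1) and Y (n x d2),
    whose rows are activations of two models (or layers) on the same n inputs."""
    # Center each feature dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of either representation, which makes it suitable for comparing layers of different widths and differently initialized models.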
Generative models, optimal transport and information theory
We employ tools from functional and convex analysis to study the mathematical foundations of generative models.
Selected publications:
- Moreau-Yosida f-divergences
Dávid Terjék
ICML 2021
- Optimal transport with f-divergence regularization and generalized Sinkhorn algorithm
Dávid Terjék, Diego González-Sánchez
AISTATS 2022
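For context, the classical Sinkhorn algorithm referenced above solves entropy-regularized optimal transport by alternately rescaling the rows and columns of a kernel matrix until the transport plan's marginals match. A minimal sketch of the standard (KL-regularized) variant, not the generalized f-divergence version from the paper:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=1000):
    """Entropy-regularized optimal transport between histograms a and b
    with cost matrix C and regularization strength eps.

    Returns the transport plan P with row sums ~ a and column sums ~ b.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # match column marginals
        u = a / (K @ v)           # match row marginals
    return u[:, None] * K * v[None, :]
```

Smaller `eps` approaches the unregularized optimal plan but slows convergence and can underflow the kernel; in practice log-domain updates are used for small `eps`.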
Neuro-symbolic AI
We investigate the integration of symbolic reasoning and neural learning. This includes using neural networks to guide reasoning, and leveraging symbolic knowledge to constrain neural models.
Selected publications:
- Towards Finding Longer Proofs
Zsolt Zombori, Adrián Csiszárik, Henryk Michalewski, Cezary Kaliszyk, Josef Urban
TABLEAUX 2021
- Towards Unbiased Exploration in Partial Label Learning
Zsolt Zombori, Agapi Rissaki, Kristóf Szabó, Wolfgang Gatterbauer, Michael Benedikt
JMLR 2024
Computer search in pure mathematics
We employ modern computer search techniques to tackle problems in geometry, combinatorics and number theory.
Selected publications:
- The density of planar sets avoiding unit distances
Gergely Ambrus, Adrián Csiszárik, Máté Matolcsi, Dániel Varga, Pál Zsámboki
Mathematical Programming 2023
- Diverse beam search to find densest-known planar unit distance graphs
Peter Engel, Owen Hammond-Lee, Yiheng Su, Dániel Varga, Pál Zsámboki
Experimental Mathematics 2025
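A central object in the unit-distance work above is the planar unit distance graph: points in the plane with an edge between every pair at distance exactly 1. A minimal sketch of the verification step such searches rely on, using a toy configuration (two unit equilateral triangles glued along an edge) rather than any graph from the papers:

```python
from itertools import combinations
from math import dist, isclose, sqrt

def unit_distance_edges(pts, tol=1e-9):
    """Return all pairs of points at (numerically) unit distance."""
    return [(p, q) for p, q in combinations(pts, 2)
            if isclose(dist(p, q), 1.0, abs_tol=tol)]

# Two unit equilateral triangles sharing an edge: 4 vertices, 5 unit edges.
rhombus = [(0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2), (1.5, sqrt(3) / 2)]
```

The search problem is then to find point sets maximizing the number of such edges relative to the number of vertices, subject to exact unit-distance constraints.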