Our work on how LLMs store relations selected as NeurIPS Spotlight paper

Our paper, The Structure of Relation Decoding Linear Operators in Large Language Models, by Miranda Anna Christ, Adrián Csiszárik, Gergely Becsó, and Dániel Varga, was accepted to the NeurIPS 2025 conference as a Spotlight paper (~3% of submissions).

Patient Pathway Mission

Leveraging Hungary’s healthcare data assets for prevention, prediction, and decision support.

Optimal transport with f-divergence regularization and generalized Sinkhorn algorithm

Publication
March 28, 2022
publications
Entropic regularization provides a generalization of the original optimal transport problem. It introduces a penalty term defined by the Kullback-Leibler divergence, making the problem more tractable via the celebrated Sinkhorn algorithm. Replacing the Kullback-Leibler divergence with a general $f$-divergence leads to a natural generalization. The case of divergences defined by superlinear functions was recently studied by Di Marino and Gerolin. Using convex analysis, we extend the theory developed so far to include all $f$-divergences defined by functions of Legendre type, and prove that under some mild conditions, strong duality holds, optima in both the primal and dual problems are attained, and the generalization of the $c$-transform is well-defined; we also give sufficient conditions for the generalized Sinkhorn algorithm to converge to an optimal solution.
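
For orientation, below is a minimal NumPy sketch of the classical Sinkhorn iteration for the KL-regularized problem that the paper generalizes; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=500):
    """Classical Sinkhorn iteration for KL (entropy)-regularized optimal transport.

    mu, nu : source and target probability vectors
    C      : cost matrix of shape (len(mu), len(nu))
    eps    : strength of the KL regularization
    Returns the regularized transport plan diag(u) K diag(v).
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)               # rescale to match column marginals
        u = mu / (K @ v)                 # rescale to match row marginals
    return u[:, None] * K * v[None, :]

# Toy example: transport between two uniform histograms on 5 points.
rng = np.random.default_rng(0)
mu = np.full(5, 1 / 5)
nu = np.full(5, 1 / 5)
C = rng.random((5, 5))
P = sinkhorn(mu, nu, C)
print(P.sum(axis=1), P.sum(axis=0))      # marginals approximately mu and nu
```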

On $\mathbb{F}_2^\omega$-affine-exchangeable probability measures

Publication
March 16, 2022
publications
For any standard Borel space $X$, let $\mathcal{P}(X)$ denote the space of Borel probability measures on $X$. In relation to a difficult problem of Aldous in exchangeability theory, and in connection with arithmetic combinatorics, Austin raised the question of describing the structure of affine-exchangeable probability measures on product spaces indexed by the vector space $\mathbb{F}_2^\omega$, i.e., the measures in $\mathcal{P}(X^{\mathbb{F}_2^\omega})$ that are invariant under the coordinate permutations of $X^{\mathbb{F}_2^\omega}$ induced by all affine automorphisms of $\mathbb{F}_2^\omega$.
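
To fix notation, the invariance condition described above can be written as follows; the symbols here merely restate the abstract and are not quoted from the paper.

```latex
% Each affine automorphism \phi of \mathbb{F}_2^\omega induces a coordinate
% permutation T_\phi on X^{\mathbb{F}_2^\omega}, namely (T_\phi x)_v = x_{\phi(v)}.
% A measure \mu \in \mathcal{P}(X^{\mathbb{F}_2^\omega}) is affine-exchangeable if
\[
  (T_\phi)_* \mu = \mu
  \qquad \text{for every affine automorphism } \phi \text{ of } \mathbb{F}_2^\omega .
\]
```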

Artificial Intelligence National Laboratory Program

Grant
March 1, 2022
grants
The National Laboratory for Artificial Intelligence (MILAB) aims to strengthen Hungary’s role in the field of AI, as the European Union makes significant efforts to catch up with the development capabilities of the USA and China. AI research in Hungary is currently characterized by fragmentation, competition, and isolation: there is no unified AI umbrella over the large-scale, application-specific programs. Several universities and research institutions have good relations with industry partners who apply their results, but societal innovation is also necessary for the acceptance of technological innovation.

Similarity and Matching of Neural Network Representations

Publication
December 6, 2021
publications
We employ a toolset — dubbed Dr. Frankenstein — to analyse the similarity of representations in deep neural networks. With this toolset we aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer. We demonstrate that the inner representations emerging in deep convolutional neural networks with the same architecture but different initialisations can be matched with a surprisingly high degree of accuracy even with a single, affine stitching layer.
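
As a rough illustration of the stitching idea (not the authors' Dr. Frankenstein toolset), the following PyTorch sketch joins the front of one trained convolutional network to the back of another through a 1x1 convolution, i.e., a channel-wise affine stitching layer; all class and argument names are made up for this example.

```python
import torch
import torch.nn as nn

class StitchedModel(nn.Module):
    """Joins the front of trained network A to the back of trained network B."""

    def __init__(self, front, back, channels):
        super().__init__()
        self.front = front   # layers of network A up to the matched layer
        self.back = back     # layers of network B after the matched layer
        for p in list(self.front.parameters()) + list(self.back.parameters()):
            p.requires_grad_(False)          # both trained networks stay frozen
        # A 1x1 convolution acts as an affine, channel-wise stitching layer.
        self.stitch = nn.Conv2d(channels, channels, kernel_size=1, bias=True)

    def forward(self, x):
        h = self.front(x)                    # activations of network A at the cut
        return self.back(self.stitch(h))     # transformed activations fed into B
```

Only the stitching layer’s parameters would be trained, e.g. on the original task loss, while both pretrained networks remain frozen.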
The Team
The AI group at the institute brings together experts with backgrounds in both industry and academia. We place equal emphasis on theoretical foundations, thorough experimentation, and practical applications. Our close collaboration ensures a continuous exchange of knowledge between scientific research and applied projects.
Balázs Szegedy
Mathematical Theory
Attila Börcs, PhD
NLP, Modeling, MLOps
Adrián Csiszárik
Representation Learning, Foundations
Győző Csóka
NLP, MLOps
Domonkos Czifra
NLP, Foundations
Botond Forrai
Modeling
Péter Kőrösi-Szabó
Modeling
Gábor Kovács
NLP, Modeling
Judit Laki, MD PhD
Healthcare
Márton Muntag
Time Series, NLP, Modeling
Dávid Terjék
Generalization, Mathematical Theory
Dániel Varga
Foundations, Computer-Aided Proofs
Pál Zsámboki
Reinforcement Learning, Geometric Deep Learning
Zsolt Zombori
Formal Reasoning
Péter Ágoston
Combinatorics, Geometry
Beatrix Mária Benkő
Representation Learning
Jakab Buda
NLP
Diego González Sánchez
Generalization, Mathematical Theory
Melinda F. Kiss
Representation Learning
Ákos Matszangosz
Topology, Foundations
Alex Olár
Foundations
Gergely Papp
Modeling
Open Positions
The Rényi AI group is actively recruiting both theorists and practitioners.
Announcement: December 1, 2023
Deadline: rolling
applications
Rényi Institute is seeking Machine Learning Engineers to join our AI Research & Development team.
Preferred Qualifications:
• MLOps experience (especially in cloud environments)
• Industry experience working on ML solutions
Announcement: December 1, 2023
Deadline: rolling
theory, applications
Rényi Institute is seeking Research Scientists to join our AI Research & Development team. You will have the privilege of working at a renowned academic institute and doing what you love: research and publishing in the field of machine learning / deep learning.
Rényi AI - Building bridges between mathematics and artificial intelligence.