Our work on how LLMs store relations selected as a NeurIPS Spotlight paper

Our paper "The Structure of Relation Decoding Linear Operators in Large Language Models", by Miranda Anna Christ, Adrián Csiszárik, Gergely Becsó, and Dániel Varga, was accepted at the NeurIPS 2025 conference as a Spotlight paper (top ~3% of submissions).

Healthcare

Prevention & Prediction

Patient Pathways

Patient Pathway Mission

Leveraging Hungary’s healthcare data assets for prevention, prediction, and decision support.

Felfedező, on the world of science: Gábor Domokos, discoverer of the Gömböc, has discovered a new class of shapes

Podcast
October 29, 2024
A new class of shapes has been discovered by Gábor Domokos, discoverer of the Gömböc, academician, mathematician, and professor at the Budapest University of Technology and Economics (BME), together with his doctoral student Krisztina Regős and Ákos G. Horváth, likewise a professor of mathematics at BME. They proved that the shape they named the soft cell builds up, for example, the shell of the nautilus. With Gábor Domokos and Krisztina Regős we discuss, among other things, why evolution came up with such a shape, why a liver surgeon is curious about their discovery, and why soft cells are so interesting to the architects who have sought them out.

A characterization of complex Hadamard matrices appearing in families of MUB triplets

Publication
October 3, 2024
It is shown that a normalized complex Hadamard matrix of order 6 having three distinct columns each containing at least one -1 entry, necessarily belongs to the transposed Fourier family, or to the family of 2-circulant complex Hadamard matrices. The proofs rely on solving polynomial systems of equations by Gröbner basis techniques, and make use of a structure theorem concerning regular Hadamard matrices. As a consequence, members of these two families can be easily recognized in practice.
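The objects in this abstract can be illustrated with a short numerical check. The sketch below assumes only the standard definition of a complex Hadamard matrix (unimodular entries, pairwise orthogonal rows) and verifies that the order-6 Fourier matrix satisfies it and, once normalized, contains -1 entries:

```python
import numpy as np

n = 6
omega = np.exp(2j * np.pi / n)
# Order-6 Fourier matrix F with entries omega^(j*k)
F = omega ** np.outer(np.arange(n), np.arange(n))

# Complex Hadamard conditions: every entry is unimodular,
# and the rows are pairwise orthogonal (F F* = n I).
assert np.allclose(np.abs(F), 1.0)
assert np.allclose(F @ F.conj().T, n * np.eye(n))

# F is normalized (first row and column are all 1) and contains
# -1 entries, e.g. F[1, 3] = omega**3 = -1.
assert np.isclose(F[1, 3], -1.0)
```

The paper's families (the transposed Fourier family and the 2-circulant family) are parametrized deformations of such matrices; the check above covers only the base point.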

Towards Unbiased Exploration in Partial Label Learning

Publication
September 6, 2024
We consider learning a probabilistic classifier from partially-labelled supervision (inputs denoted with multiple possibilities) using standard neural architectures with a softmax as the final layer. We identify a bias phenomenon that can arise from the softmax layer in even simple architectures that prevents proper exploration of alternative options, making the dynamics of gradient descent overly sensitive to initialization. We introduce a novel loss function that allows for unbiased exploration within the space of alternative outputs.
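The bias phenomenon can be demonstrated in a few lines. The sketch below uses the common candidate-sum ("naive") partial-label loss under a softmax, not the paper's proposed loss, and all variable names are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def naive_partial_label_loss(logits, candidates):
    """Candidate-sum loss L = -log(sum over y in S of p_y) and its gradient."""
    p = softmax(logits)
    q = p[candidates].sum()
    in_S = np.zeros_like(p)
    in_S[candidates] = 1.0
    grad = p - (p / q) * in_S   # dL/dz_k = p_k - (p_k / q) * 1[k in S]
    return -np.log(q), grad

# Three classes; the true label is known only to lie in {0, 1}.
logits = np.array([2.0, 1.0, 0.1])   # illustrative initialization
loss, grad = naive_partial_label_loss(logits, [0, 1])
# The candidate that starts with higher probability (class 0) receives the
# stronger negative gradient, so gradient descent amplifies the initial
# preference instead of exploring the alternative candidate.
```

This rich-get-richer dynamic is one way the initialization sensitivity described above can arise; the paper's loss is designed to remove it.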

Global Sinkhorn Autoencoder — Optimal Transport on the latent representation of the full dataset

Publication
June 26, 2024
We propose an Optimal Transport (OT)-based generative model from the Wasserstein Autoencoder (WAE) family of models, with the following innovative property: the optimization of the latent point positions takes place over the full training dataset rather than over a minibatch. Our contributions are the following: We define a new class of global Wasserstein Autoencoder models, and implement an Optimal Transport-based incarnation we call the Global Sinkhorn Autoencoder. We implement several metrics for evaluating such models, both in the unsupervised setting, and in a semi-supervised setting, which are the following: the global OT loss, which measures the OT loss on the full test dataset; the reconstruction error on the full test dataset; a so-called covered area which measures how well the latent points are matched; and two types of clustering measures.
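The Sinkhorn iteration the model is named after can be sketched generically. Assuming uniform point weights and a squared-Euclidean cost, this is the textbook entropic-OT computation, not the paper's implementation:

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.05, n_iters=200):
    """Entropic-OT transport cost between two uniform-weight point clouds."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
    K = np.exp(-C / eps)                                # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))                   # uniform source weights
    b = np.full(len(y), 1.0 / len(y))                   # uniform target weights
    u = np.ones_like(a)
    for _ in range(n_iters):                            # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                     # transport plan
    return (P * C).sum()

x = np.array([[0.0], [1.0], [2.0]])
# Transporting a cloud onto itself costs ~0; onto a unit shift costs ~1.
```

In the minibatch WAE setting this cost is computed between latent codes and prior samples per batch; the global variant described above instead optimizes latent positions against the full dataset.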
The Team
The AI group at the institute brings together experts with backgrounds in both industry and academia. We place equal emphasis on theoretical foundations, thorough experimentation, and practical applications. Our close collaboration ensures a continuous exchange of knowledge between scientific research and applied projects.
Balázs Szegedy
Mathematical Theory
Attila Börcs, PhD
NLP, Modeling, MLOps
Adrián Csiszárik
Representation Learning, Foundations
Győző Csóka
NLP, MLOps
Domonkos Czifra
NLP, Foundations
Botond Forrai
Modeling
Péter Kőrösi-Szabó
Modeling
Gábor Kovács
NLP, Modeling
Judit Laki, MD PhD
Healthcare
Márton Muntag
Time Series, NLP, Modeling
Dávid Terjék
Generalization, Mathematical Theory
Dániel Varga
Foundations, Computer-aided proofs
Pál Zsámboki
Reinforcement Learning, Geometric Deep Learning
Zsolt Zombori
Formal Reasoning
Péter Ágoston
Combinatorics, Geometry
Beatrix Mária Benkő
Representation Learning
Jakab Buda
NLP
Diego González Sánchez
Generalization, Mathematical Theory
Melinda F. Kiss
Representation Learning
Ákos Matszangosz
Topology, Foundations
Alex Olár
Foundations
Gergely Papp
Modeling
Open Positions
The Rényi AI group is actively recruiting both theorists and practitioners.
Announcement: December 1, 2023
Deadline: rolling
applications
Rényi Institute is seeking Machine Learning Engineers to join our AI Research & Development team.
Preferred Qualifications:
• MLOps experience (especially in cloud environments)
• Industry experience working on ML solutions
Announcement: December 1, 2023
Deadline: rolling
theory, applications
Rényi Institute is seeking Research Scientists to join our AI Research & Development team. You will have the privilege of working at a renowned academic institute and doing what you love: research and publishing in the field of machine learning / deep learning.
Rényi AI - Building bridges between mathematics and artificial intelligence.