Our work on how LLMs store relations selected as NeurIPS Spotlight paper

Our paper, The Structure of Relation Decoding Linear Operators in Large Language Models, by Miranda Anna Christ, Adrián Csiszárik, Gergely Becsó, and Dániel Varga, has been accepted to the NeurIPS 2025 conference as a Spotlight paper (roughly 3% of submissions).
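For readers unfamiliar with the setting, the sketch below shows one simple way a "relation decoding linear operator" can be fitted and used: an affine map from subject representations to object representations, estimated by least squares. This is only an illustrative sketch of the general idea, not the paper's code; all array names and shapes are hypothetical.

# Illustrative sketch (not the paper's code): approximating a relation
# ("capital of", "plays instrument", ...) by an affine map s -> W s + b
# from subject representations to object representations, fitted by
# least squares. Shapes and names are assumptions for illustration only.
import numpy as np

def fit_relation_operator(S, O):
    """Fit O ~= S @ W.T + b in the least-squares sense.

    S: (n, d) subject hidden states, O: (n, d) object representations.
    Returns (W, b) with W of shape (d, d) and b of shape (d,).
    """
    n, d = S.shape
    S_aug = np.hstack([S, np.ones((n, 1))])           # append a bias column
    coef, *_ = np.linalg.lstsq(S_aug, O, rcond=None)  # (d+1, d) solution
    W, b = coef[:d].T, coef[d]
    return W, b

def decode(W, b, s, candidate_objects):
    """Return the index of the highest-scoring candidate under W s + b."""
    pred = W @ s + b
    scores = candidate_objects @ pred                 # dot-product scoring
    return int(np.argmax(scores))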

Healthcare: Prevention & Prediction, Patient Pathways

Patient Pathway Mission

Leveraging Hungary’s healthcare data assets for prevention, prediction, and decision support.

Ordering Subgoals in a Backward Chaining Prover

Publication
September 5, 2021
publications
Many automated theorem provers are based on backward chaining: reasoning starts from a goal statement that we aim to prove, and each inference step reduces one of the goals to a (possibly empty) set of new subgoals. We thus maintain a set of open goals that need to be proven, and the proof is complete once the open goal set becomes empty. For each goal there can be several valid inferences, resulting in different successor goal sets, and selecting the right inference constitutes the core problem of such theorem provers, one that has been studied thoroughly over the past half century.
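To make the goal-reduction loop concrete, here is a minimal, hypothetical backward-chaining sketch over ground Horn clauses: it keeps a set of open goals, orders them with a pluggable heuristic, and reduces one goal per step. It only illustrates the mechanism described above and is not the prover studied in the paper; in particular, it does not backtrack over alternative inferences.

# Minimal, hypothetical backward-chaining sketch over ground Horn clauses.
# Knowledge base: head -> list of alternative bodies (an empty body is a fact).
RULES = {
    "grandparent(a,c)": [["parent(a,b)", "parent(b,c)"]],
    "parent(a,b)": [[]],
    "parent(b,c)": [[]],
}

def prove(goal, heuristic=len, max_steps=50):
    """Return True if `goal` is provable within `max_steps` reductions."""
    open_goals = [goal]
    for _ in range(max_steps):
        if not open_goals:
            return True                    # empty open-goal set: proof complete
        open_goals.sort(key=heuristic)     # subgoal ordering (toy heuristic: shortest goal first)
        g = open_goals.pop(0)
        bodies = RULES.get(g)
        if not bodies:
            return False                   # no inference applies to this goal
        open_goals.extend(bodies[0])       # inference selection: naively take the first body
    return False

print(prove("grandparent(a,c)"))           # expected output: True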

Ordering Subgoals in a Backward Chaining Prover

Publication
September 5, 2021
publications
World models represent the basic mechanisms of a system and can provide predictions about how transformations (actions) affect the state of the system. Such models have recently gained attention in Reinforcement Learning (RL), and in several domains model-based learning systems have performed similarly to, or better than, highly tuned model-free variants [1, 8, 12]. World models can increase sample efficiency, since trajectories can be generated without interacting with the environment, and they can aid exploration by yielding a semantically meaningful latent structure that helps identify promising directions.
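As an illustration of the rollout idea, the following sketch uses a stand-in latent transition model to generate an imagined trajectory without querying the environment. All components (encoder, dynamics, policy) are random placeholders chosen purely for illustration; they are not from the paper.

# Schematic world-model rollout: a learned transition model generates
# trajectories entirely in latent space, without environment interaction.
# The linear-latent dynamics and all names are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, ACTION_DIM = 8, 2

# Stand-ins for a learned encoder and transition model.
W_enc = rng.normal(size=(LATENT_DIM, 16))             # observation -> latent
W_z = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1
W_a = rng.normal(size=(LATENT_DIM, ACTION_DIM)) * 0.1

def encode(obs):
    return np.tanh(W_enc @ obs)

def predict_next(z, action):
    """Transition model: next latent state from current latent and action."""
    return np.tanh(W_z @ z + W_a @ action)

def imagine(obs, policy, horizon=5):
    """Roll out an imagined trajectory entirely inside the world model."""
    z = encode(obs)
    trajectory = [z]
    for _ in range(horizon):
        z = predict_next(z, policy(z))
        trajectory.append(z)
    return trajectory

traj = imagine(rng.normal(size=16), policy=lambda z: z[:ACTION_DIM])
print(len(traj))  # 6 latent states: the encoded start plus 5 imagined steps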

A Refinement of Cauchy-Schwarz Complexity, with Applications

Publication
August 24, 2021
publications
We introduce a notion of complexity for systems of linear forms, called sequential Cauchy-Schwarz complexity, which is parametrized by two positive integers $k,\ell$ and refines the notion of Cauchy-Schwarz complexity introduced by Green and Tao. We prove that if a system of linear forms has sequential Cauchy-Schwarz complexity at most $(k,\ell)$ then any average of 1-bounded functions over this system is controlled by the $2^{1-\ell}$-th power of the Gowers $U^{k+1}$-norms of the functions.
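In schematic form, the control statement from the abstract can be written as below. This display is only a paraphrase of the abstract; the precise hypotheses, quantifiers over the index $i$, and normalization are those given in the paper.

% Schematic paraphrase of the control statement (see the paper for the
% precise formulation): if the system \Phi = (\phi_1,\dots,\phi_t) has
% sequential Cauchy-Schwarz complexity at most (k,\ell), then for
% 1-bounded functions f_1,\dots,f_t and the relevant indices i,
\[
  \Bigl|\, \mathbb{E}_{x} \prod_{j=1}^{t} f_j\bigl(\phi_j(x)\bigr) \Bigr|
  \;\le\; \| f_i \|_{U^{k+1}}^{\,2^{1-\ell}} .
\]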

Towards solving the 7-in-a-row game

Publication
August 17, 2021
publications
Our paper explores the game-theoretic value of the 7-in-a-row game. We reduce the problem to solving a finite-board game, which we tackle using Proof Number Search. We present a number of heuristic improvements to Proof Number Search and examine their effect in the context of this particular game. Although our paper does not solve the 7-in-a-row game, our experiments indicate that we have made significant progress towards it.
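For context, Proof Number Search maintains, at every node of the game tree, a proof number (an estimate of how many leaves still need to be proven to prove the node) and a disproof number (the analogue for disproving it). The sketch below shows only this bookkeeping on a tiny prebuilt tree; a full search would instead repeatedly expand the most-proving leaf. The node interface is a hypothetical illustration, not our 7-in-a-row implementation.

# Minimal Proof Number Search bookkeeping on an abstract game tree:
# OR nodes combine children as (min of proof numbers, sum of disproof numbers),
# AND nodes as (sum, min). Terminal leaves carry a True/False value.
INF = float("inf")

class Node:
    def __init__(self, is_or_node, children=None, value=None):
        self.is_or = is_or_node        # OR node: the proving player to move
        self.kids = children or []
        self.value = value             # True/False at terminal leaves, else None
        self.pn, self.dn = (0, INF) if value is True else \
                           (INF, 0) if value is False else (1, 1)

    def update(self):
        """Recompute proof/disproof numbers bottom-up (OR: min/sum, AND: sum/min)."""
        if self.value is not None or not self.kids:
            return
        for k in self.kids:
            k.update()
        pns = [k.pn for k in self.kids]
        dns = [k.dn for k in self.kids]
        if self.is_or:
            self.pn, self.dn = min(pns), sum(dns)
        else:
            self.pn, self.dn = sum(pns), min(dns)

# Tiny example: an OR root with one refuted branch and one winning branch.
root = Node(True, [
    Node(False, value=False),                 # refuted continuation
    Node(False, [Node(True, value=True)]),    # continuation forcing a win
])
root.update()
print(root.pn == 0)  # True: the root position is proven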
The Team
The AI group at the institute brings together experts with backgrounds in both industry and academia. We place equal emphasis on theoretical foundations, thorough experimentation, and practical applications. Our close collaboration ensures a continuous exchange of knowledge between scientific research and applied projects.
Balázs Szegedy
Mathematical Theory
Attila Börcs, PhD
NLP, Modeling, MLOps
Adrián Csiszárik
Representation Learning, Foundations
Győző Csóka
NLP, MLOps
Domonkos Czifra
NLP, Foundations
Botond Forrai
Modeling
Péter Kőrösi-Szabó
Modeling
Gábor Kovács
NLP, Modeling
Judit Laki, MD PhD
Healthcare
Márton Muntag
Time Series, NLP, Modeling
Dávid Terjék
Generalization, Mathematical Theory
Dániel Varga
Foundations, Computer-aided Proofs
Pál Zsámboki
Reinforcement Learning, Geometric Deep Learning
Zsolt Zombori
Formal Reasoning
Péter Ágoston
Combinatorics, Geometry
Beatrix Mária Benkő
Representation Learning
Jakab Buda
NLP
Diego González Sánchez
Generalization, Mathematical Theory
Melinda F. Kiss
Representation Learning
Ákos Matszangosz
Topology, Foundations
Alex Olár
Foundations
Gergely Papp
Modeling
Open Positions
The Rényi AI group is actively recruiting both theorists and practitioners.
Announcement: December 1, 2023
Deadline: rolling
applications
Rényi Institute is seeking Machine Learning Engineers to join our AI Research & Development team.
Preferred Qualifications:
• MLOps experience (especially in cloud environments)
• Industry experience working on ML solutions
Announcement: December 1, 2023
Deadline: rolling
theory, applications
Rényi Institute is seeking Research Scientists to join our AI Research & Development team. You will have the privilege of working at a renowned academic institute and doing what you love: research and publishing in the field of machine learning / deep learning.
Rényi AI - Building bridges between mathematics and artificial intelligence.