Neurosymbolic AI

Symbolic Supervision

Partial Label Learning

Better Exploration for Symbolic Supervision

How can we train a neural network when the supervision is ambiguous and, instead of specifying the true target, only constrains the range of acceptable outputs? This can arise, for example, when supervision is missing but we have some background knowledge in the form of logical rules. It can also arise due to errors in the labelling process. In this blog we show that learning to satisfy such constraints can introduce unintended bias due to the learning dynamics, hindering the overall optimisation process. We also propose a new loss function, called Libra-loss, designed to circumvent the observed bias.
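A minimal sketch of the setting (this is a common baseline objective for set-valued supervision, not the Libra-loss the post introduces): when the annotation only tells us the true label lies in some candidate set, one natural objective maximises the total probability mass the model assigns to that set. The function and the 4-class example below are purely illustrative.

```python
import numpy as np

def partial_label_loss(logits, candidates):
    """Negative log of the total probability mass assigned to the
    candidate set -- a common baseline for partial-label learning.
    (Illustrative only; not the Libra-loss from the post.)"""
    z = logits - logits.max()            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax
    return -np.log(probs[candidates].sum())

# A 4-class example where the annotation only says "class 1 or 2".
logits = np.array([0.1, 2.0, 1.5, -0.3])
print(partial_label_loss(logits, [1, 2]))
```

The loss goes to zero as the model concentrates its mass anywhere inside the candidate set; the post's point is that *how* the mass gets distributed inside the set during training is where unintended bias can creep in.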

Gábor Kovács

Revolutionizing Archival Document Processing with AI: Enhancing Degraded Historical Document Images

Blog Post
November 22, 2024
posts
In recent years, the rapid advancements in Natural Language Processing (NLP) and the development of Large Language Models (LLMs) have opened new avenues for automating complex tasks across various industries. Archives, traditionally known for labor-intensive processes, are among the fields set to benefit significantly from these technologies. Historically, managing and interpreting archival documents has required manual sorting, reading, and interpreting—often under the added challenge of working with degraded or damaged materials.

Convergence and Generalization

Event
April 3, 2024
events
I will introduce some theoretical results related to the convergence and generalization capabilities of neural networks, in light of articles published at the NeurIPS 2023 conference in December. The first part of the presentation will summarize some important, earlier results, and then it will cover numerous articles based on these, which were presented at NeurIPS. The aim of the presentation is to provide a comprehensive overview of the current state of the field and the currently popular research directions.

Knot theory and AI

Event
March 20, 2024
events
I will overview some applications of supervised and reinforcement learning methods to knot theory that might be useful in other areas of mathematics.

Self-Supervised Representation Learning on Complex Data (in Hungarian)

Event
December 6, 2023
events
Representation learning, that is, the extraction of higher-level knowledge from data, is a central question of artificial intelligence research. Self-supervised representation learning is a new area of major scientific and societal importance: compared to supervised learning, it enables the training of widely applicable large models on databases of much greater scale. Self-supervised training has become a basic building block of training large language models, and several successful approaches have been published in computer vision as well. Less research, however, has addressed how networks pre-trained with visual self-supervision perform on complex computer vision tasks (e.g.

Mode Combinability: Exploring Convex Combinations of Permutation Aligned Models (in Hungarian)

Event
November 22, 2023
events
Adrián Csiszárik, Melinda F. Kiss, Péter Kőrösi-Szabó, Márton Muntag, Gergely Papp, Dániel Varga
As recently discovered (Ainsworth-Hayase-Srinivasa 2022 and others), two wide neural networks with identical network topology and trained on similar data can be permutation-aligned. That is, we can shuffle their neurons (channels) so that linearly interpolating between the two networks in parameter space becomes a meaningful operation (linear mode connectivity).
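The permutation symmetry behind this can be seen on a toy example: relabelling the hidden units of an MLP (permuting the rows of the first weight matrix together with the matching columns of the second) leaves the computed function unchanged, which is what makes interpolating between two aligned networks in parameter space meaningful. The tiny network below is a hypothetical illustration, not a model from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(W1, W2, x):
    """A tiny 2-layer MLP: x -> relu(W1 x) -> W2 h."""
    return W2 @ np.maximum(W1 @ x, 0.0)

# Random weights for a network with 3 inputs, 4 hidden units, 2 outputs.
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

# Permuting the hidden units (rows of W1, columns of W2) gives a
# different parameter vector that computes the exact same function.
perm = np.array([2, 0, 3, 1])
W1p, W2p = W1[perm], W2[:, perm]

x = rng.normal(size=3)
assert np.allclose(mlp(W1, W2, x), mlp(W1p, W2p, x))

# Linear interpolation in parameter space between two aligned models:
t = 0.5
W1_mid, W2_mid = (1 - t) * W1 + t * W1p, (1 - t) * W2 + t * W2p
```

Here the two "models" are trivially the same function in different parameterizations; the interesting case studied in the paper is aligning two independently trained networks and then taking such convex combinations.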

Targeted Adversarial Attacks on Generalizable Neural Radiance Fields

Event
November 15, 2023
events
Contemporary robotics relies heavily on addressing key challenges like odometry, localization, depth perception, semantic segmentation, the creation of new viewpoints, and navigation with precision and efficiency. Implicit neural representation techniques, notably Neural Radiance Fields (NeRFs) and Generalizable NeRFs (GeNeRFs), are increasingly employed to tackle these issues. This talk focuses on exposing certain critical, but subtle flaws inherent in GeNeRFs. Adversarial attacks, while not new to various machine learning frameworks, present a significant threat.

Dániel Varga

Solving a Conjecture of Erdős

Blog Post
October 6, 2023
posts
Sets of points with the property that no two elements of the set are one unit distance apart are called unit-distance avoiding sets. If a point is in the unit-distance avoiding set, then the unit circle drawn around it does not intersect the set, but there is no restriction regarding the interior and the exterior of this circle. When searching for unit-distance avoiding sets with high densities, the following construction naturally comes to mind: an open disc with a unit diameter is unit-distance avoiding, as all distances between its two points are less than 1.
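The claim that an open disc of unit diameter is unit-distance avoiding follows from the fact that every pairwise distance inside the disc is bounded by its diameter, and this is easy to check numerically. The small simulation below is a hypothetical illustration, not code from the post.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_in_disc(n, radius=0.5):
    """Uniform samples from an open disc of the given radius
    (unit diameter by default, centred at the origin)."""
    angles = rng.uniform(0, 2 * np.pi, n)
    radii = radius * np.sqrt(rng.uniform(0, 1, n))  # sqrt for uniform area density
    return np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

pts = sample_in_disc(1000)
# All pairwise distances are strictly below 1 (the diameter),
# so no two points of the disc are exactly one unit apart.
dists = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
assert dists.max() < 1.0
```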
The Team
The Rényi AI team has a careful balance of theorists and machine learning practitioners. The practitioners in the team are well-versed and up-to-date in the extremely fast-paced world of deep learning, but at the same time, they can contribute to foundational research. The theorists of the team are world-class experts in the highly abstract theoretical machinery, but at the same time, they do not shy away from running simulations. This balance creates an optimal environment for a free flow of ideas between theory and practice, and thus underpins the team's general goal of bridging the gap between mathematical theory and deep learning practice.
Balázs Szegedy
Mathematical Theory
Attila Borcs, PhD
NLP, Modeling, MLOps
Adrián Csiszárik
Representation Learning, Foundations
Győző Csóka
NLP, MLOps
Domonkos Czifra
NLP, Foundations
Diego González Sánchez
Generalization, Mathematical Theory
Botond Forrai
Modeling
Melinda F. Kiss
Representation Learning
Péter Kőrösi-Szabó
Modeling
Gábor Kovács
NLP, Modeling
Judit Laki, MD PhD
Healthcare
Ákos Matszangosz
Topology, Foundations
Márton Muntag
Time Series, NLP, Modeling
Dávid Terjék
Generalization, Mathematical Theory
Dániel Varga
Foundations, Computer-Aided Proofs
Pál Zsámboki
Reinforcement Learning, Geometric Deep Learning
Zsolt Zombori
Formal Reasoning
Open Positions
The Rényi AI group is actively recruiting both theorists and practitioners.
Announcement: December 1, 2023
Deadline: rolling
applications
Rényi Institute is seeking Machine Learning Engineers to join our AI Research & Development team.
Preferred Qualifications:
• MLOps experience (especially in cloud environments)
• Industry experience working on ML solutions
Announcement: December 1, 2023
Deadline: rolling
theory, applications
Rényi Institute is seeking Research Scientists to join our AI Research & Development team. You will have the privilege of working at a renowned academic institute and doing what you love: research and publishing in the field of machine learning / deep learning.
Rényi AI - Building bridges between mathematics and artificial intelligence.