Neurosymbolic AI
Symbolic Supervision
Partial Label Learning
Better Exploration for Symbolic Supervision
How can we train a neural network when the supervision is ambiguous and, instead of specifying the true target, only constrains the range of acceptable outputs? This can arise, for example, when supervision is missing but we have background knowledge in the form of logical rules. It can also arise from errors in the labelling process. In this blog post we show that learning to satisfy such constraints can introduce unintended bias through the learning dynamics, hindering the overall optimisation process. We also propose a new loss function, called Libra-loss, designed to circumvent this bias.
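To make the setting concrete, here is a minimal sketch of the standard baseline in partial label learning: maximise the probability that the prediction falls anywhere inside the set of acceptable labels. This is not the Libra-loss introduced later; the function name `candidate_set_loss` and the toy data are illustrative assumptions, written in a PyTorch style.

```python
import torch
import torch.nn.functional as F

def candidate_set_loss(logits, candidate_mask):
    """Negative log-likelihood of the candidate (acceptable) label set.

    logits:         (batch, num_classes) raw network outputs
    candidate_mask: (batch, num_classes) bool, True where a label is acceptable

    The loss pushes probability mass onto the acceptable labels as a group,
    without specifying which of them is the true target.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # log P(y in S) = logsumexp of log-probabilities restricted to the candidate set
    masked = log_probs.masked_fill(~candidate_mask, float("-inf"))
    return -torch.logsumexp(masked, dim=-1).mean()

# Toy example: 3 classes, the annotation only says "class 0 or class 2".
logits = torch.randn(4, 3, requires_grad=True)
mask = torch.tensor([[True, False, True]] * 4)
loss = candidate_set_loss(logits, mask)
loss.backward()
```

Because the loss only rewards total mass on the candidate set, gradient descent is free to concentrate that mass on whichever candidate is easiest to fit, which is exactly the kind of learning-dynamics bias this post examines.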