Reproducibility Study of "Label-Free Explainability for Unsupervised Models"

Authors: Gergely Papp • Julius Wagenbach • Laurens Jans de Vries • Niklas Mather
Published at: The ML Reproducibility Challenge 2022
Date: 31 July 2023

In this work, we present our reproducibility study of Label-Free Explainability for Unsupervised Models, a paper that introduces two post-hoc explanation techniques for neural networks: (1) label-free feature importance and (2) label-free example importance. Our study focuses on the reproducibility of the authors' most important claims: (i) perturbing the features with the highest importance scores causes a larger latent shift than perturbing random pixels; (ii) label-free example importance scores identify the training examples most related to a given test example; (iii) unsupervised models trained on different tasks show moderate correlation among their highest-scored features, but (iv) low correlation in example scores measured on a fixed set of data points; and (v) increasing disentanglement via β in a β-VAE does not imply that latent units focus on more distinct features.
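To make claim (i) concrete, the sketch below illustrates one variant of label-free feature importance on a toy setup. It is not the authors' implementation: the "encoder" is a hypothetical fixed linear map standing in for a trained unsupervised model, and importance is scored with a simple input-times-gradient rule on the latent norm. The test then compares the latent shift from masking the top-scored features against masking random ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": a fixed linear map f(x) = W x standing in for a trained
# unsupervised model's latent representation (hypothetical, for illustration).
W = rng.normal(size=(8, 64))

def encode(x):
    return W @ x

def label_free_importance(x):
    """Score each input feature's importance for the latent representation.

    For f(x) = W x, the gradient of g(x) = ||f(x)||^2 w.r.t. x is
    2 W^T W x, so |x * grad| gives an input-times-gradient score --
    one simple label-free feature-importance variant (an assumption
    here, not the paper's exact attribution method).
    """
    grad = 2 * W.T @ (W @ x)
    return np.abs(x * grad)

def latent_shift(x, mask_idx):
    """L2 shift in latent space when the given features are zeroed out."""
    x_pert = x.copy()
    x_pert[mask_idx] = 0.0
    return np.linalg.norm(encode(x) - encode(x_pert))

x = rng.normal(size=64)
scores = label_free_importance(x)
k = 10
top_idx = np.argsort(scores)[-k:]                  # k most important features
rand_idx = rng.choice(64, size=k, replace=False)   # k random features

# Claim (i): masking top-scored features should, on average, shift the
# latent representation more than masking random ones.
print("top-k shift:   ", latent_shift(x, top_idx))
print("random-k shift:", latent_shift(x, rand_idx))
```

On most draws the top-k shift dominates the random-k shift, which is the qualitative behaviour the original paper reports for its importance scores on real encoders.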