Outlier Reconstruction

Demo developed by Hyerim Jeon and Sangwoong Yoon.


Proposed by Rumelhart et al. in 1986, the autoencoder (AE) is a canonical unsupervised learning algorithm that captures the regularities in data by learning to reconstruct its inputs. This reconstruction ability makes autoencoders a natural fit for outlier detection, under the assumption that outliers incur large reconstruction errors. However, this assumption has rarely been challenged. In fact, our ICML 2021 paper shows that the assumption is false: an autoencoder can sometimes reconstruct outliers with surprisingly high quality, causing outlier detection to fail. We call this phenomenon Outlier Reconstruction.
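
As a concrete illustration, reconstruction-error-based detection can be sketched as below. This is a minimal PyTorch sketch under our own naming, not the demo's actual code; the trained autoencoder `ae` and the thresholding procedure are assumptions:

    import torch

    def reconstruction_score(ae, x):
        """Per-sample squared reconstruction error, used as the outlier score."""
        with torch.no_grad():
            x_hat = ae(x)
        # Sum the squared error over all non-batch dimensions.
        return ((x - x_hat) ** 2).flatten(start_dim=1).sum(dim=1)

    # A sample is flagged as an outlier when its score exceeds a threshold,
    # e.g. a high percentile of the scores observed on held-out inliers.
    # Outlier reconstruction is exactly the failure mode where an outlier
    # receives a *low* score because the AE reconstructs it well.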

In our paper, we propose the Normalized Autoencoder (NAE), which is particularly effective at suppressing outlier reconstruction. The suppression is achieved by treating the autoencoder as an energy-based model. As a result, NAE significantly improves outlier detection performance over existing autoencoder variants.
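
The sketch below illustrates this idea, assuming the reconstruction error serves as the energy E(x) and that negative samples are drawn from the model by MCMC, as in standard energy-based model training; `sample_mcmc` is a placeholder, not the paper's exact sampler:

    import torch

    def energy(ae, x):
        # NAE's energy: the squared reconstruction error of x.
        return ((x - ae(x)) ** 2).flatten(start_dim=1).sum(dim=1)

    def nae_loss(ae, x_pos, sample_mcmc):
        # Negative samples from the model; detached so gradients do not
        # flow through the sampling procedure itself.
        x_neg = sample_mcmc(ae).detach()
        # Maximum-likelihood gradient estimate: push energy down on data
        # and up on model samples, suppressing outlier reconstruction.
        return energy(ae, x_pos).mean() - energy(ae, x_neg).mean()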


Demo

An AE and an NAE are trained to reconstruct MNIST digits. The two autoencoders have identical architectures; only their objective functions differ.
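
For concreteness, such a shared setup might look like the following sketch. The fully-connected architecture here is hypothetical; the actual demo networks may differ:

    import torch.nn as nn

    def make_autoencoder(latent_dim=32):
        # A simple encoder/decoder pair for 28x28 MNIST digits.
        encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                                nn.Linear(256, latent_dim))
        decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                nn.Linear(256, 784), nn.Sigmoid(),
                                nn.Unflatten(1, (1, 28, 28)))
        return nn.Sequential(encoder, decoder)

    ae, nae = make_autoencoder(), make_autoencoder()  # identical architectures
    # `ae` is trained with the plain reconstruction loss; `nae` with the
    # NAE objective sketched above. Only the loss differs.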

Will they reconstruct outliers? Click the image icons below.


[Image panels: Input | AE Reconstruction | NAE Reconstruction]


About This Demo

To learn more about outlier reconstruction and NAE, please refer to our paper:

    @InProceedings{yoon21autoencoding,
      title     = {Autoencoding Under Normalization Constraints},
      author    = {Yoon, Sangwoong and Noh, Yung-Kyun and Park, Frank},
      booktitle = {Proceedings of the 38th International Conference on Machine Learning},
      pages     = {12087--12097},
      year      = {2021},
      editor    = {Meila, Marina and Zhang, Tong},
      volume    = {139},
      series    = {Proceedings of Machine Learning Research},
      month     = {18--24 Jul},
      publisher = {PMLR},
    }