Anomaly detection using a convolutional Winner-Take-All autoencoder
Hanh Tran and David Hogg
We propose a method that uses a convolutional autoencoder to learn motion representations on foreground optical flow patches. A Winner-Take-All (WTA) sparsity constraint is combined with the autoencoder to promote shift-invariant and generic flow features. These motion representations are then coupled with a one-class Support Vector Machine to model normal activity and detect anomalies in video.
We evaluate the approach on the UCSD and CUHK Avenue datasets.
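To make the pipeline concrete, the sketch below (a hypothetical illustration in PyTorch and scikit-learn, not the released code) shows the two ingredients described above: a convolutional autoencoder whose encoder activations are sparsified with a spatial Winner-Take-All step, and a one-class SVM fitted on the resulting codes. Layer sizes, patch size, and hyper-parameters are illustrative assumptions.

    # Hypothetical sketch of a convolutional WTA autoencoder + one-class SVM.
    import torch
    import torch.nn as nn
    from sklearn.svm import OneClassSVM

    def spatial_wta(feats):
        # Keep only the largest activation in each feature map; zero the rest.
        n, c, h, w = feats.shape
        flat = feats.view(n, c, -1)
        idx = flat.argmax(dim=2, keepdim=True)
        mask = torch.zeros_like(flat).scatter_(2, idx, 1.0)
        return (flat * mask).view(n, c, h, w)

    class ConvWTAAutoencoder(nn.Module):
        def __init__(self, in_channels=2, hidden=128):
            super().__init__()
            # Encoder over 2-channel optical-flow patches (assumed input).
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, hidden, 5, padding=2), nn.ReLU(),
                nn.Conv2d(hidden, hidden, 5, padding=2), nn.ReLU(),
            )
            # Decoder reconstructs the flow patch from the sparse code.
            self.decoder = nn.ConvTranspose2d(hidden, in_channels, 11, padding=5)

        def forward(self, x):
            z = self.encoder(x)
            z = spatial_wta(z)          # WTA sparsity constraint
            return self.decoder(z), z

    # Usage sketch: train the autoencoder on flow patches from normal video,
    # then fit a one-class SVM on the encoded features and score new patches.
    model = ConvWTAAutoencoder()
    patches = torch.randn(32, 2, 15, 15)           # placeholder flow patches
    recon, codes = model(patches)
    loss = nn.functional.mse_loss(recon, patches)  # reconstruction objective

    features = codes.flatten(1).detach().numpy()
    ocsvm = OneClassSVM(kernel="rbf", nu=0.1).fit(features)
    anomaly_scores = -ocsvm.decision_function(features)  # higher = more anomalous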
UCSD Ped1
UCSD Ped2
Peer-reviewed publication
Tran, H. T. M. and Hogg, D. "Anomaly Detection using a Convolutional Winner-Take-All Autoencoder." In Proceedings of the British Machine Vision Conference (BMVC), 2017.
Code
The code is available to download from GitHub.