The field of deep learning is commonly concerned with optimizing predictive models using large pre-acquired datasets of densely sampled datapoints or signals. In this work, we demonstrate that the deep learning paradigm can be extended to incorporate a subsampling scheme that is jointly optimized under a desired sampling rate. We present Deep Probabilistic Subsampling (DPS), a widely applicable framework for task-adaptive compressed sensing that enables end-to-end joint optimization of a task-optimal subset of signal samples and a subsequent model that performs the required task. We demonstrate strong performance on reconstruction and classification tasks on a toy dataset, MNIST, and CIFAR10 under stringent subsampling rates in both the pixel and the spatial frequency domain. Thanks to the data-driven nature of the framework, DPS is directly applicable to all real-world domains that benefit from sample rate reduction. The code used for this paper is made publicly available.
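The core idea of jointly optimizing which samples to acquire can be illustrated with a minimal sketch. The snippet below assumes a Gumbel-perturbation-based relaxation as one common way to make discrete subset selection trainable; the function names and the plain NumPy forward pass are illustrative, not the authors' implementation.

```python
import numpy as np

def gumbel_topk_mask(logits, k, rng=None):
    """Draw a binary subsampling mask selecting k of len(logits) samples.

    Perturbs learnable selection logits with Gumbel noise and keeps the
    top-k entries. During training, a softmax relaxation of the perturbed
    logits would typically replace the hard argsort to obtain gradients
    w.r.t. the logits (illustrative sketch, not the paper's exact scheme).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1)
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    perturbed = logits - np.log(-np.log(u))
    mask = np.zeros_like(logits)
    mask[np.argsort(perturbed)[-k:]] = 1.0  # hard top-k selection
    return mask

# Toy usage: subsample a length-32 signal down to 8 samples.
rng = np.random.default_rng(0)
signal = rng.standard_normal(32)
logits = np.zeros(32)          # learnable selection parameters
mask = gumbel_topk_mask(logits, k=8, rng=rng)
subsampled = mask * signal     # zero out the unacquired samples
```

In an end-to-end setup, the logits would be updated by backpropagating the downstream task loss (e.g. reconstruction error) through the relaxed mask, so that the sampling pattern itself adapts to the task.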
| Number of pages | 17 |
| Publication status | Published - 2020 |
| Event | 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia |
| Duration | 26 Apr 2020 → 1 May 2020 |