
Sparsity Invariant CNNs

2017

Conference Paper

In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
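The core idea of the sparse convolution layer described above is to weight each convolution window by a binary validity mask and normalize by the number of valid pixels, then propagate the mask via max-pooling. The following is a minimal NumPy sketch of that idea; the function name `sparse_conv2d`, the single-channel layout, and the `eps` stabilizer are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def sparse_conv2d(x, mask, w, eps=1e-8):
    """Sketch of a sparsity-invariant convolution (hypothetical helper).

    x    : 2D array of input values (invalid entries may hold anything)
    mask : 2D binary array, 1 where x is observed, 0 where it is missing
    w    : 2D convolution kernel
    Returns the normalized output and the propagated validity mask.
    """
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    xm = x * mask  # zero out missing inputs before convolving
    for i in range(oh):
        for j in range(ow):
            win_x = xm[i:i + kh, j:j + kw]
            win_m = mask[i:i + kh, j:j + kw]
            # Normalize by the count of valid pixels in the window,
            # so the response does not depend on the sparsity level.
            out[i, j] = (win_x * w).sum() / (win_m.sum() + eps)
            # Mask propagation: output is valid if any input pixel was.
            new_mask[i, j] = win_m.max()
    return out, new_mask
```

With a constant input and an all-ones kernel, the output stays constant regardless of how many mask entries are zeroed, illustrating the sparsity invariance the abstract claims for the layer.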

Author(s): Jonas Uhrig and Nick Schneider and Lukas Schneider and Uwe Franke and Thomas Brox and Andreas Geiger
Book Title: International Conference on 3D Vision (3DV) 2017
Year: 2017
Month: October

Department(s): Autonomous Vision
Research Project(s): 3D Datasets and Benchmarks
Bibtex Type: Conference Paper (conference)
Paper Type: Conference

Event Name: International Conference on 3D Vision (3DV) 2017
Event Place: Qingdao, China

Links: pdf, suppmat

BibTex

@conference{Uhrig2017THREEDV,
  title = {Sparsity Invariant CNNs},
  author = {Uhrig, Jonas and Schneider, Nick and Schneider, Lukas and Franke, Uwe and Brox, Thomas and Geiger, Andreas},
  booktitle = {International Conference on 3D Vision (3DV) 2017},
  month = oct,
  year = {2017},
  doi = {},
  month_numeric = {10}
}