Bringing background into the foreground: Making all classes equal in weakly-supervised video semantic segmentation



Saleh, Fatemeh Sadat; Aliakbarian, Mohammad Sadegh; Salzmann, Mathieu; Petersson, Lars; Alvarez Lopez, Jose


2017-10-01


Conference Material


The IEEE International Conference on Computer Vision (ICCV 2017), Venice, Italy, October 22-29, 2017


2125-2135


Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact on semantic segmentation. Recent years have seen great progress in weakly-supervised semantic segmentation, whether from a single image or from videos. However, most existing methods are designed to handle a single background class. In practical applications, such as autonomous navigation, it is often crucial to reason about multiple background classes. In this paper, we introduce an approach to doing so by making use of classifier heatmaps. We then develop a two-stream deep architecture that jointly leverages appearance and motion, and design a loss based on our heatmaps to train it. Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results.


IEEE


Computer vision, weak supervision, semantic segmentation


Input, Output and Data Devices


https://doi.org/10.1109/ICCV.2017.232


EP185752


Conference Paper - Refereed


English


Saleh, Fatemeh Sadat; Aliakbarian, Mohammad Sadegh; Salzmann, Mathieu; Petersson, Lars; Alvarez Lopez, Jose. Bringing background into the foreground: Making all classes equal in weakly-supervised video semantic segmentation. In: The IEEE International Conference on Computer Vision (ICCV 2017); October 22-29, 2017; Venice, Italy. IEEE; 2017. p. 2125-2135. https://doi.org/10.1109/ICCV.2017.232

