Robotics: Science and Systems 2017

Workshop on Spatial-Semantic Representations in Robotics


Much attention in the robotics community over the last several decades has focused on low-level geometric environment representations in the form of primitive feature-based maps, occupancy grids, and point clouds. As robots perform a wider variety of tasks in increasingly large and complex environments, the fidelity and richness of the environment representation become critical. Choosing the right representation is important: one that is overly rich may be too complex to evaluate, while one that is too simple may not be expressive enough for difficult tasks. Effective perception algorithms should be capable of learning the appropriate fidelity and complexity of an environment representation from multimodal observations. Recognizing this need, researchers have devoted greater attention to developing spatial-semantic models that jointly express the geometric, topological, and semantic properties of the robot's environment.

Invited Talks

Dieter Fox
University of Washington

John Leonard
Massachusetts Institute of Technology

Michael Milford
Queensland University of Technology


Location: The workshop will be held on the MIT campus in room 32-141 of the Stata Center.

The Workshop on Spatial-Semantic Representations in Robotics is a half-day workshop composed of three invited talks, thirteen contributed papers (five long talks and eight short talks with posters), and a panel discussion. The detailed program is listed below.

1:00pm–1:15pm Introduction/Welcome
1:15pm–1:45pm Invited Talk - Dieter Fox
1:45pm–2:00pm Grounding Spatio-Semantic Referring Expressions for Human-Robot Interaction
Mohit Shridhar (NUS) and David Hsu (NUS)
2:00pm–2:15pm Learning Deep Generative Spatial Models for Mobile Robots
Andrzej Pronobis (U. Washington) and Rajesh P. N. Rao (U. Washington)
2:15pm–2:30pm Gaussian Processes Semantic Map Representation
Maani Ghaffari Jadidi (U. Michigan), Lu Gan (U. Michigan), Steven Parkison (U. Michigan),
Jie Li (U. Michigan), and Ryan M. Eustice (U. Michigan)
2:30pm–3:00pm Short Talks
Semantic and Geometric Scene Understanding for Single-view Task-oriented Grasping of Novel Objects
Renaud Detry (NASA JPL), Jeremie Papon (NASA JPL), and Larry Matthies (NASA JPL)
Visual Grounding of Spatial Relationships for Failure Detection
Akanksha Saran (UT Austin) and Scott Niekum (UT Austin)
Towards a Hybrid Scene Representation in VSLAM
Georges Younes (U. Waterloo), Daniel Asmar (American University of Beirut), and John Zelek (U. Waterloo)
Cognitive Mapping and Planning for Visual Navigation
Saurabh Gupta (Berkeley), James Davidson (Google), Sergey Levine (Berkeley), Rahul Sukthankar (Google),
and Jitendra Malik (Berkeley)
A Semantic Layer to Improve AUV Autonomy
Francesco Maurelli (MIT), John Leonard (MIT), and David Lane (Heriot-Watt University)
An RGBD Segmentation Model for Robot Vision Learned from Synthetic Data
Jonathan C. Balloch (Georgia Tech) and Sonia Chernova (Georgia Tech)
Deep Spatial Affordance Hierarchy: Spatial Knowledge Representation for Planning in Large-scale Environments
Andrzej Pronobis (U. Washington), Francesco Riccio (Sapienza University of Rome), and Rajesh P. N. Rao (U. Washington)
Online Semantic Mapping for Autonomous Navigation and Scouting
Daniel Maturana (CMU), Sankalp Arora (CMU), Po-Wei Chou (CMU), Dong-Ki Kim (CMU),
Masashi Uenoyama (Yamaha), and Sebastian Scherer (CMU)
3:00pm–4:00pm Poster Session / Coffee Break
4:00pm–4:30pm Invited Talk - Michael Milford
4:30pm–4:45pm Estimation of Surface Geometries in Point Clouds for the Manipulation of Novel Household Objects
Siddarth Jain (Northwestern) and Brenna Argall (Northwestern)
4:45pm–5:00pm Identifying Negative Exemplars in Grounded Language Data Sets
Nisha Pillai (UMBC) and Cynthia Matuszek (UMBC)
5:00pm–5:30pm Invited Talk - John Leonard
5:30pm–6:00pm Panel Discussion / Parting Thoughts


Organizers

Thomas M. Howard
University of Rochester

Matthew R. Walter
Toyota Technological Institute at Chicago