Much attention in the robotics community over the last several decades has focused on low-level geometric environment representations in the form of primitive feature-based maps, occupancy grids, and point clouds. As robots perform a wider variety of tasks in increasingly large and complex environments, the fidelity and richness of the representation become critical. Choosing the right representation is important: overly rich representations may be too complex to evaluate, while overly simplified ones may not be expressive enough for difficult tasks. Effective perception algorithms should be capable of learning the appropriate fidelity and complexity of an environment representation from multimodal observations. Recognizing this need, researchers have devoted greater attention to developing spatial-semantic models that jointly express the geometric, topological, and semantic properties of the robot's environment.
The workshop is intended for a broad audience working on problems related to representations of environments across a variety of domains. We anticipate participation by researchers whose interests draw from robotics, machine learning, perception, mapping, motion planning, and human-robot interaction, among others.
Topics of interest include, but are not limited to:
We invite participants to submit extended abstracts or full papers that describe recent or ongoing research. We encourage authors to accompany their submissions with a video that describes or demonstrates their work. Authors of accepted abstracts/papers will have the opportunity to disseminate their work through an oral presentation and an interactive poster session.
Papers (max eight pages, excluding references) and abstracts (max two pages, excluding references) should be in PDF format and adhere to the RSS paper format. Note that reviews will not be double-blind, and submissions should include author names and affiliations.
Papers, abstracts, and supplementary materials can be submitted by logging in to the conference management website located at https://cmt3.research.microsoft.com/SSRR2017.
Organizers:
University of Washington
Massachusetts Institute of Technology
Queensland University of Technology
Important Dates:
Abstract/Paper Submission: June 9, 2017
Abstract/Paper Notification: June 16, 2017
Workshop Date: July 16, 2017
Location: The workshop will be held on the MIT campus in room 32-141 of the Stata Center.
The Workshop on Spatial-Semantic Representations in Robotics is a half-day workshop comprising three invited talks, thirteen contributed papers (five presented as long talks and eight as short talks with posters), and a panel discussion. The detailed program is listed below.
1:15pm–1:45pm  Invited Talk: Dieter Fox
1:45pm–2:00pm  Grounding Spatio-Semantic Referring Expressions for Human-Robot Interaction
  Mohit Shridhar (NUS) and David Hsu (NUS)
2:00pm–2:15pm  Learning Deep Generative Spatial Models for Mobile Robots
  Andrzej Pronobis (U. Washington) and Rajesh P. N. Rao (U. Washington)
2:15pm–2:30pm  Gaussian Processes Semantic Map Representation
  Maani Ghaffari Jadidi (U. Michigan), Lu Gan (U. Michigan), Steven Parkison (U. Michigan), Jie Li (U. Michigan), and Ryan M. Eustice (U. Michigan)
Semantic and Geometric Scene Understanding for Single-view Task-oriented Grasping of Novel Objects
  Renaud Detry (NASA JPL), Jeremie Papon (NASA JPL), and Larry Matthies (NASA JPL)
Visual Grounding of Spatial Relationships for Failure Detection
  Akanksha Saran (UT Austin) and Scott Niekum (UT Austin)
Towards a Hybrid Scene Representation in VSLAM
  Georges Younes (U. Waterloo), Daniel Asmar (American University of Beirut), and John Zelek (U. Waterloo)
Cognitive Mapping and Planning for Visual Navigation
  Saurabh Gupta (Berkeley), James Davidson (Google), Sergey Levine (Berkeley), Rahul Sukthankar (Google), and Jitendra Malik (Berkeley)