Robotics: Science and Systems 2015

Workshop on Model Learning for Human-Robot Communication


A long-standing goal is the realization of robots that can easily join and effectively work alongside people in our homes, manufacturing centers, and healthcare facilities. Achieving this vision requires robots that people can command, control, and communicate with in ways that are intuitive, expressive, and flexible. Recognizing this need, considerable recent attention has focused on natural language speech as an effective medium for humans and robots to communicate. A primary challenge to language understanding is relating free-form language to a robot's world model --- its understanding of our unstructured environments and of the ways in which it can act within them. This problem dates back to the earliest days of artificial intelligence and has witnessed renewed interest with advances in machine learning and probabilistic inference.

This workshop will bring together a multidisciplinary group of researchers working at the intersection of robotics, machine perception, natural language processing, and machine learning. The forum will provide an opportunity to showcase recent efforts to develop models and algorithms capable of efficiently understanding natural, unstructured methods of communication in the context of complex, unstructured environments. The program will combine invited and contributed talks with interactive discussions to foster discourse on progress toward, and the challenges that inhibit, bidirectional human-robot communication.

Call for Papers

We welcome contributions from a broad range of areas related to the development of models and algorithms that enable natural communication between humans and robots. We particularly encourage recent and ongoing research at the intersection of robotics and fields that include natural language processing, machine learning, and computer vision.

Topics of interest include, but are not limited to:

  • Grounded language acquisition/learning
  • Spatial language modeling and interpretation
  • Gesture recognition
  • Sketch recognition
  • Semantic knowledge representations for space and actions
  • Spatial-semantic mapping
  • Semantic perception
  • Activity recognition
  • Language synthesis
  • Human-robot dialog
  • Common datasets for training and benchmarking

We invite participants to submit extended abstracts or full papers that describe recent or ongoing research. We encourage authors to accompany their submissions with a video that describes or demonstrates their work. Authors of accepted abstracts/papers will have the opportunity to disseminate their work through an oral presentation and an interactive poster session.

Papers (maximum six pages, excluding references) and abstracts (maximum two pages, excluding references) should be in PDF format and adhere to the RSS paper format. Note that reviews will not be double-blind; submissions should include author names and affiliations.

Papers, abstracts, and supplementary materials can be submitted through the conference management website.

Invited Talks

Dieter Fox
University of Washington

Benjamin Kuipers
University of Michigan

Nicholas Roy
Massachusetts Institute of Technology

Ashutosh Saxena
Cornell University

Julie Shah
Massachusetts Institute of Technology


Program

The Workshop on Model Learning for Human-Robot Communication is a one-day workshop composed of invited talks, contributed papers, poster presentations, and a panel discussion. The detailed program is listed below.

09:00am–09:30am Introduction/Welcome
09:30am–10:00am Invited Talk - Nicholas Roy
10:00am–10:10am Autonomous Indoor Robot Navigation Using Sketched Maps and Routes
Federico Boniardi (U. Freiburg), Abhinav Valada (U. Freiburg), Gian Tipaldi (U. Freiburg), and Wolfram Burgard (U. Freiburg)
10:10am–10:20am Constructing Abstract Maps from Spatial Descriptions for Goal-directed Exploration
Ruth Schulz (QUT), Ben Talbot (QUT), Ben Upcroft (QUT), and Gordon Wyeth (QUT)
10:20am–10:30am Softmax Modeling of Piecewise Semantics in Arbitrary State Spaces for 'Plug and Play' Human-Robot Sensor Fusion
Nisar Ahmed (U. Colorado at Boulder) and Nicholas Sweet (U. Colorado at Boulder)
10:30am–11:00am Coffee Break
11:00am–11:30am Invited Talk - Ashutosh Saxena
11:30am–11:40am Modeling and Solving Human-Robot Collaborative Tasks Using POMDPs
Nakul Gopalan (Brown U.) and Stefanie Tellex (Brown U.)
11:40am–11:50am Incremental Grounded Language Learning
Michael Spranger (Sony CSL Tokyo)
11:50am–12:00pm Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences
Hongyuan Mei (TTI-Chicago), Mohit Bansal (TTI-Chicago), and Matthew Walter (TTI-Chicago)
12:00pm–12:30pm Invited Talk - Dieter Fox
12:30pm–02:30pm Lunch
02:30pm–02:40pm Information Extraction under Communication Constraints within Assistive Robot Domains
Brenna Argall (Northwestern U.)
02:40pm–02:50pm Intent Communication between Autonomous Vehicles and Pedestrians
Milecia Matthews (Oklahoma State U.)
02:50pm–03:00pm Learning Efficient Models for Natural Language Understanding of Quantifiable Spatial Relationships
Jacob Arkin (U. Rochester) and Thomas Howard (U. Rochester)
03:00pm–04:00pm Poster Session
04:00pm–04:30pm Coffee Break
04:30pm–05:00pm Invited Talk - Benjamin Kuipers
05:00pm–05:30pm Invited Talk - Julie Shah
05:30pm–06:00pm Panel Discussion/Closing


Organizers

Thomas M. Howard
University of Rochester

Matthew R. Walter
Toyota Technological Institute at Chicago