Robotics: Science and Systems 2016

2nd Workshop on Model Learning for Human-Robot Communication


A long-standing goal is the realization of robots that can easily join and effectively work alongside people within our homes, manufacturing centers, and healthcare facilities. To achieve this vision, we need to develop robots that people can command, control, and communicate with in ways that are intuitive, expressive, and flexible. Recognizing this need, significant attention has recently been paid to natural language speech as an effective medium for humans and robots to communicate. A primary challenge to language understanding is to relate free-form language to a robot's world model --- its understanding of our unstructured environments and the ways in which it can act in these environments. This problem dates back to the earliest days of artificial intelligence and has witnessed renewed interest with advances in machine learning and probabilistic inference.

This workshop will bring together a multidisciplinary group of researchers working at the intersection of robotics, machine perception, natural language processing, and machine learning. The forum will provide an opportunity for people to showcase recent efforts to develop models and algorithms capable of efficiently understanding natural, unstructured methods of communication in the context of complex, unstructured environments. The program will combine invited and contributed talks with interactive discussions to provide an atmosphere for discourse on progress toward, and the challenges that inhibit, bidirectional human-robot communication.

Call for Papers

We welcome contributions from a broad range of areas related to the development of models and algorithms that enable natural communication between humans and robots. We particularly encourage recent and ongoing research at the intersection of robotics and fields that include natural language processing, machine learning, and computer vision.

Topics of interest include, but are not limited to:

  • Grounded language acquisition/learning
  • Spatial language modeling and interpretation
  • Gesture recognition
  • Knowledge representations for space and actions
  • Spatial-semantic mapping
  • Semantic perception
  • Activity recognition
  • Language synthesis
  • Human-robot dialogue
  • Common datasets for training and benchmarking

We invite participants to submit extended abstracts or full papers that describe recent or ongoing research. We encourage authors to accompany their submissions with a video that describes or demonstrates their work. Authors of accepted abstracts/papers will have the opportunity to disseminate their work through an oral presentation and an interactive poster session.

Papers (max six pages, excluding references) and abstracts (max two pages, excluding references) should be in PDF format and adhere to the RSS paper format. Note that reviews will not be double-blind, so submissions should include the author names and affiliations.

Papers, abstracts, and supplementary materials can be submitted by logging in to the conference management website.

Invited Talks

Hadas Kress-Gazit
Cornell University

Chad Jenkins
University of Michigan

Stefanie Tellex
Brown University

Andrea Thomaz
University of Texas at Austin


Location: The workshop will be held in North Quad 1255 on the University of Michigan campus.

The Workshop on Model Learning for Human-Robot Communication is a one-day workshop composed of invited talks, contributed papers, poster presentations, and a panel discussion. The detailed program is listed below.

09:00am–09:30am Introduction/Welcome
09:30am–10:00am Invited Talk - Stefanie Tellex (Brown University)
Title: Learning Models of Language, Action and Perception for Human-Robot Collaboration
10:00am–10:15am Perspective in Natural Language Instructions for Collaborative Manipulation
Shen Li (Carnegie Mellon University), Rosario Scalise (Carnegie Mellon University), Henny Admoni (Carnegie Mellon University),
Stephanie Rosenthal (Carnegie Mellon University), Siddhartha S. Srinivasa (Carnegie Mellon University)
10:15am–10:30am Towards Real-Time Natural Language Corrections for Assistive Robots
Alexander Broad (Northwestern University), Jacob Arkin (University of Rochester), Nathan Ratliff (Lula Robotics Inc.)
Thomas M Howard (University of Rochester), Brenna Argall (Northwestern University)
10:30am–11:00am Coffee Break
11:00am–11:30am Invited Talk - Hadas Kress-Gazit (Cornell University)
Title: Symbols, Logic and Synthesis for Language Interaction
11:30am–11:45am Improving Grounded Language Acquisition Efficiency Using Interactive Labeling
Nisha Pillai (University of Maryland, Baltimore County), Karan K. Budhraja (University of Maryland, Baltimore County),
Cynthia Matuszek (University of Maryland, Baltimore County)
11:45am–12:00pm Natural Language Generation in the Context of Providing Indoor Route Instructions
Andrea F Daniele (TTI-Chicago), Mohit Bansal (TTI-Chicago), Matthew Walter (TTI-Chicago)
12:00pm–12:30pm Invited Talk - Chad Jenkins (University of Michigan)
Title: Goal-Directed Manipulation through Axiomatic Scene Estimation
12:30pm–02:30pm Lunch
02:30pm–02:45pm Toward Natural Language Semantic Sensing in Dynamic State Spaces
Nicholas Sweet (University of Colorado, Boulder), Nisar R Ahmed (University of Colorado, Boulder)
02:45pm–03:00pm Natural Spatial Language Generation for Indoor Robot
Zhiyu Huo (University of Missouri-Columbia), Marjorie Skubic (University of Missouri-Columbia)
03:00pm–03:15pm Active Comparison Based Learning Incorporating User Uncertainty and Noise
Rachel M Holladay (Carnegie Mellon University), Shervin Javdani (Carnegie Mellon University),
Anca Dragan (University of California, Berkeley), Siddhartha Srinivasa (Carnegie Mellon University)
03:15pm–04:00pm Poster Session
04:00pm–04:30pm Coffee Break
04:30pm–04:45pm Spiking Neural Network for Human Hand Gesture Recognition: A Real-Time Approach
Banafsheh Rekabdar (University of Nevada, Reno), Luke Fraser (University of Nevada, Reno),
Monica Nicolescu (University of Nevada, Reno), Mircea Nicolescu (University of Nevada, Reno)
04:45pm–05:00pm Recognizing Unfamiliar Gestures for Human-Robot Interaction through Zero-Shot Learning
Wil Thomason (Cornell University), Ross Knepper (Cornell University)
05:00pm–05:30pm Invited Talk - Andrea Thomaz (University of Texas at Austin)
Title: Generating Multimodal Dialog for Collaborative Manipulation
05:30pm–06:00pm Panel Discussion/Closing

In addition to posters on the contributed papers, the following posters will be presented.

Incrementally Identifying Objects from Referring Expressions using Spatial Object Models
Gaurav M Manek (Brown University), Stefanie Tellex (Brown University)
Social Feedback For Robotic Collaboration
Emily S Wu (Brown University), Nakul Gopalan (Brown University), James MacGlashan (Brown University),
Stefanie Tellex (Brown University), Lawson L.S. Wong (Brown University)


Organizers

Thomas M. Howard
University of Rochester

Matthew R. Walter
Toyota Technological Institute at Chicago