Robotics: Science and Systems 2018

Workshop on Models and Representations for Natural Human-Robot Communication

Overview


A long-standing goal is the realization of robots that can easily join and effectively work alongside people within our homes, manufacturing centers, and healthcare facilities. In order to achieve this vision, we need to develop robots that people are able to command, control, and communicate with in ways that are intuitive, expressive, and flexible. Recognizing this need, considerable attention has recently been paid to natural language speech as an effective medium for humans and robots to communicate. A primary challenge to language understanding is to relate free-form language to a robot's world model --- its understanding of our unstructured environments and the ways in which it can act in these environments. This problem dates back to the earliest days of artificial intelligence and has witnessed renewed interest with advances in machine learning and probabilistic inference.

Call for Papers


We welcome contributions from a broad range of areas related to the development of models and algorithms that enable natural communication between humans and robots. We particularly encourage submissions describing recent and ongoing research at the intersection of robotics and fields that include natural language processing, machine learning, and computer vision.

The workshop is intended for a broad audience working on problems related to representations of environments across a variety of domains. We anticipate participation by researchers whose interests draw from robotics, machine learning, perception, mapping, motion planning, and human-robot interaction, among others.

Topics of interest include, but are not limited to:

  • Robot knowledge representations
  • Spatial-semantic mapping
  • Semantic perception
  • Spatial-semantic representations for planning
  • Spatial language modeling and interpretation
  • Common datasets for training and benchmarking
  • Grounded language acquisition/learning
  • Interactive and active learning
  • Learning from demonstration
  • Natural language dialog

We invite participants to submit extended abstracts or full papers that describe recent or ongoing research. We encourage authors to accompany their submissions with a video that describes or demonstrates their work. Authors of accepted abstracts/papers will have the opportunity to disseminate their work through an oral presentation and/or interactive poster session.

Papers (max eight pages, excluding references) and abstracts (max two pages, excluding references) should be in PDF format and adhere to the RSS paper format. Note that reviewing will not be double-blind; submissions should include the author names and affiliations.

Papers, abstracts, and supplementary materials can be submitted by logging in to the conference management website located at https://cmt3.research.microsoft.com/MRHRC2018.

Invited Talks


Mohit Bansal
University of North Carolina, Chapel Hill

Dhruv Batra
Georgia Institute of Technology

Maya Cakmak
University of Washington

Joyce Chai
Michigan State University

Anca Dragan
University of California, Berkeley

Ross Knepper
Cornell University

Cynthia Matuszek
University of Maryland, Baltimore County

Zhou Yu
University of California, Davis

Program


Location: The workshop will be held over two days during RSS 2018 at Carnegie Mellon University in GHC 6115.

The Workshop on Models and Representations for Natural Human-Robot Communication is a two-day workshop composed of invited talks, contributed papers, and poster presentations. The detailed program is listed below.

(This program is subject to change! Last updated: June 29.)

If you'd like to suggest questions for the discussion forum, please use this Google Form (link).

Friday, June 29
09:00am–09:15am Introduction/Welcome
09:15am–09:45am A-STAR: Agents that See, Talk, Act, and Reason (Invited Talk)
Dhruv Batra (Georgia Tech/FAIR)
09:45am–10:00am Towards Learning User Preferences for Remote Robot Navigation
Cory Hayes (ARL), Matthew Marge (ARL), and Ethan Stump (ARL)
10:00am–10:30am Coffee Break
10:30am–10:45am Establishing Common Ground for Learning Robots
Preeti Ramaraj (UMich) and John E Laird (UMich)
10:45am–11:00am Simultaneous Intention Estimation and Knowledge Augmentation via Human-Robot Dialog
Sujay Bajracharya (Cleveland U), Saeid Amiri (Cleveland U), Jesse Thomason (UW), and Shiqi Zhang (SUNY Binghamton)
11:00am–11:30am Communication as Belief Influence (Invited Talk)
Anca Dragan (UC Berkeley)
11:30am–11:45am Specifying and Achieving Goals in Open Uncertain Robot-Manipulation Domains
Leslie Kaelbling (MIT), Alex LaGrassa (MIT), and Tomas Lozano-Perez (MIT)
11:45am–12:00pm Optimal Semantic Distance for Negative Example Selection in Grounded Language Acquisition
Nisha Pillai (UMBC), Frank Ferraro (UMBC), and Cynthia Matuszek (UMBC)
12:00pm–01:45pm Lunch
01:45pm–02:30pm Poster Session
02:30pm–03:00pm Coffee Break
03:00pm–03:30pm Poster Session (cont.)
03:30pm–04:00pm Communicative Actions in Human-Robot Teams (Invited Talk)
Ross Knepper (Cornell)
04:00pm–04:30pm Simple Models and Representations for Effective (but Perhaps Unnatural) Human-Robot Communication (Invited Talk)
Maya Cakmak (UW)
04:30pm–05:30pm Discussion (submit questions here)
Saturday, June 30
09:00am–09:15am Introduction/Welcome
09:15am–09:45am Invited Talk
Joyce Chai (Michigan State)
09:45am–10:00am Towards Givenness and Relevance-Theoretic Open World Reference Resolution
Thomas Williams (Mines), Evan Krause (Tufts), Bradley Oosterveld (Tufts), and Matthias Scheutz (Tufts)
10:00am–10:30am Coffee Break
10:30am–10:45am Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog
Jesse Thomason (UW), Aishwarya Padmakumar (UT Austin), Jivko Sinapov (Tufts), Nick Walker (UW), Yuqian Jiang (UT Austin), Harel Yedidsion (UT Austin), Justin Hart (UT Austin), Peter Stone (UT Austin), and Raymond Mooney (UT Austin)
10:45am–11:00am Designing Questioning Strategies for an Active Learning Agent employing Diverse Query Types
Kalesha Bullard (GT), Sonia Chernova (GT), and Andrea Thomaz (UT Austin)
11:00am–11:30am Invited Talk
Cynthia Matuszek (UMBC)
11:30am–11:45am A Formal Model for Human Robot Collaboration using Hybrid Conditional Planning
Momina Rizwan (Sabanci U), Volkan Patoglu (Sabanci U), and Esra Erdem (Sabanci U)
11:45am–12:00pm Learning Group Communication from Demonstration
Navyata Sanghvi (CMU), Ryo Yonetani (U Tokyo), and Kris Kitani (CMU)
12:00pm–01:45pm Lunch
01:45pm–02:30pm Poster Session
02:30pm–03:00pm Coffee Break
03:00pm–03:30pm Poster Session (cont.)
03:30pm–04:00pm Spatially-Grounded, Personable, and Sensible Human-Robot Dialog (Invited Talk)
Mohit Bansal (UNC Chapel Hill)
04:00pm–04:30pm Grounding Reinforcement Learning with Real-world Dialog Tasks (Invited Talk)
Zhou Yu (UC Davis)
04:30pm–05:30pm Discussion/Closing (submit questions here)

Organizers


Jacob Arkin
University of Rochester
j.arkin@rochester.edu

Andrea F. Daniele
Toyota Technological Institute at Chicago
afdaniele@ttic.edu

Nakul Gopalan
Brown University
ngopalan@cs.brown.edu

Thomas M. Howard
University of Rochester
thomas.howard@rochester.edu

Jesse Thomason
University of Washington
jdtho@cs.washington.edu

Matthew R. Walter
Toyota Technological Institute at Chicago
mwalter@ttic.edu

Lawson L.S. Wong
Brown University
lsw@brown.edu