Our research spans several domains of robotics and artificial intelligence. A common thread across these investigations is the development of computationally efficient algorithms and models that enable robots to make more informed decisions by exploiting sophisticated predictive models. Below is a brief sample of our research.
The ability of a robot to communicate efficiently using natural language, grounded in its representation of the world, remains a significant challenge in robotics. Contemporary approaches estimate correspondences between language instructions and possible groundings such as objects, regions, and goals for actions that robots must execute. We are interested in developing computationally efficient models for natural language symbol grounding that adapt the structure of the graphical model based on linguistic information, expressed symbols, and environment observations and dynamics. We have developed and applied variations of Distributed Correspondence Graphs for monologic natural language interaction and verifiable natural language understanding on robotic platforms including robot torsos, robotic manipulators, and unmanned ground vehicles.
R. Paul, J. Arkin, D. Aksaray, N. Roy, and T.M. Howard, "Efficient Grounding of Abstract Spatial Concepts for Natural Language Interaction with Robot Platforms," International Journal of Robotics Research. Jun. 2018 [abstract] [paper] [bibtex]
J. Arkin, M. Walter, A. Boteanu, M. Napoli, H. Biggie, H. Kress-Gazit, and T.M. Howard, "Contextual Awareness: Understanding Monologic Natural Language Instructions for Autonomous Robots," In IEEE International Symposium on Robot and Human Interactive Communication. Aug. 2017 [abstract] [paper] [bibtex]
D. Yi, T.M. Howard, K. Seppi, and M. Goodrich, "Expressing Homotopic Requirements for Mobile Robot Navigation through Natural Language Instructions," In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, Oct. 2016, pp. 1462-1468 [abstract] [paper] [bibtex]
A. Boteanu, J. Arkin, T.M. Howard, and H. Kress-Gazit, "A Model for Verifiable Grounding and Execution of Complex Language Instructions," In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, Oct. 2016, pp. 2649-2654 [abstract] [paper] [bibtex]
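The correspondence-based grounding idea above can be sketched in a few lines. This is an illustrative toy, not the trained log-linear Distributed Correspondence Graph model from our publications: the feature function, candidate representation, and scoring are hypothetical stand-ins that simply count word/property overlaps.

```python
def score(phrase, grounding):
    """Toy log-linear factor: count overlaps between instruction words
    and properties of a candidate grounding. A trained model would
    weight many such features learned from annotated corpora."""
    return sum(1 for w in phrase.split() if w in grounding["properties"])

def ground(phrases, candidates):
    """Assign each phrase its highest-scoring candidate grounding."""
    return {p: max(candidates, key=lambda g: score(p, g))["name"]
            for p in phrases}

# Hypothetical world model with two candidate groundings.
candidates = [
    {"name": "red_block", "properties": {"red", "block"}},
    {"name": "blue_ball", "properties": {"blue", "ball"}},
]
print(ground(["pick up the red block"], candidates))
```

In the full model, correspondence variables are inferred jointly over a graph whose structure follows the parse of the instruction, which is what makes structural adaptation (pruning factors using context) effective.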
Robots need to understand the consequences of their own actions in order to make informed decisions about the safety and optimality of candidate motions. Our work in this area primarily focuses on the development and application of techniques that invert arbitrarily complex models of motion and environment and effectively sample the continuum of possible actions for computationally efficient search. Our recent work has studied the problem of refining the representation of recombinant motion planning search spaces based on statistics and learned models of expected improvement to find optimal trajectories in complex, cluttered environments for nonholonomic mobile robots. We are also investigating models for adapting the interpretation of sensor observations for scalable approaches to hybrid metric-semantic mapping.
S. Patki and T.M. Howard, "Language-guided Adaptive Perception for Efficient Grounded Communication with Robotic Manipulators in Cluttered Environments," In 19th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Jul. 2018 [abstract] [paper] [bibtex]
M. Napoli, H. Biggie, and T.M. Howard, "Learning Models for Predictive Adaptation in State Lattices," In Field and Service Robotics: Results of the 11th International Conference. Springer Proceedings in Advanced Robotics. Springer, Cham, 2018, vol. 5, pp. 285-300 [abstract] [paper] [bibtex]
M. Napoli, H. Biggie, and T.M. Howard, "On the Performance of Selective Adaptation in State Lattices for Mobile Robot Motion Planning," In IEEE/RSJ International Conference on Intelligent Robots and Systems. Sep. 2017 [abstract] [paper] [bibtex]
D. Yi, M. Goodrich, T.M. Howard, and K. Seppi, "Topology-Aware RRT* for Parallel Optimal Sampling in Topologies," In 2017 IEEE International Conference on Systems, Man, and Cybernetics. IEEE, Oct. 2017 [abstract] [paper] [bibtex]
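A state lattice, as used in the work above, discretizes the state space and connects states with a precomputed set of motion primitives, so planning reduces to graph search. The sketch below uses axis-aligned moves on a small grid for brevity; real lattices encode heading and curvature, and primitives satisfy the robot's nonholonomic constraints. The grid size, primitive set, and costs here are illustrative assumptions.

```python
import heapq

# Toy motion primitives: (dx, dy, cost). Real primitives are short
# dynamically feasible trajectory segments generated offline.
PRIMITIVES = [(1, 0, 1.0), (0, 1, 1.0), (-1, 0, 1.0), (0, -1, 1.0)]

def plan(start, goal, obstacles, size=10):
    """Dijkstra search over the lattice; returns cost to goal or None."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            return d
        if d > dist.get((x, y), float("inf")):
            continue  # stale queue entry
        for dx, dy, c in PRIMITIVES:
            nxt = (x + dx, y + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            if nxt in obstacles:
                continue
            if d + c < dist.get(nxt, float("inf")):
                dist[nxt] = d + c
                heapq.heappush(pq, (d + c, nxt))
    return None

print(plan((0, 0), (3, 0), obstacles={(1, 0), (1, 1)}))  # detours around the wall
```

Selective adaptation, the subject of the papers above, decides where in such a search space to substitute denser or higher-fidelity primitive sets based on learned models of expected improvement.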
Human-robot teams in assistive, rehabilitative, and medical domains must consider real-time, complex, and dynamic interactions. We are pursuing applications of our research in natural language understanding, representation adaptation, and motion planning and control to develop systems that improve the performance of activities of daily living and medical procedures. Specifically, we have explored natural language corrections for assistive robotic manipulators to update planning constraints from user feedback, adaptation of vision-based classification models for prosthetic devices from multi-modal interactions, and hybrid force-velocity controlled cooperative human-robot systems for improving the predictive value of strain elastography in medical ultrasound.
M. Esponda and T.M. Howard, "Adaptive Grasp Control through Multi-Modal Interactions for Assistive Prosthetic Devices," In 5th AAAI Fall Symposium Series on Artificial Intelligence for Human-Robot Interaction. Oct. 2018, forthcoming [bibtex]
M. Napoli, C. Freitas, S. Goswami, S. McAleavey, M. Doyley, and T.M. Howard, "Hybrid Force/Velocity Control with Compliance Estimation via Strain Elastography for Robot Assisted Ultrasound Screening," In 7th IEEE International Conference on Biomedical Robotics and Biomechatronics. IEEE, Aug. 2018, forthcoming [bibtex]
A. Broad, J. Arkin, N. Ratliff, T.M. Howard, and B. Argall, "Real-Time Natural Language Corrections for Assistive Robotic Manipulators," International Journal of Robotics Research. vol. 36, pp. 684-698, May 2017 [abstract] [paper] [bibtex]
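The hybrid force-velocity control concept in the ultrasound work above can be illustrated with a simple 1-D loop: the probe is commanded at a fixed scan velocity tangential to the tissue, while a proportional law regulates the contact force along the surface normal. The gains, the linear-spring contact model, and the simulation parameters below are hypothetical placeholders chosen for a stable example, not the tuned controller from the publication.

```python
def hybrid_step(f_measured, f_desired, v_scan, kp=0.002):
    """One control cycle: fixed tangential velocity, proportional
    normal-velocity correction driving contact force to f_desired."""
    v_normal = kp * (f_desired - f_measured)  # push in if force is too low
    return v_scan, v_normal

def simulate(f_desired=5.0, stiffness=1000.0, steps=200, dt=0.01):
    """Toy 1-D contact: force = stiffness * penetration depth."""
    depth = 0.0  # probe penetration into tissue (m)
    for _ in range(steps):
        force = stiffness * depth
        _, v_n = hybrid_step(force, f_desired, v_scan=0.02)
        depth += v_n * dt
    return stiffness * depth  # steady-state contact force (N)

print(simulate())  # converges toward the desired contact force
```

Regulating contact force consistently across a scan is what makes the resulting strain-elastography measurements comparable between frames.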