Research

Machine Learning for Control/Learning from Demonstration

This research thrust develops novel methods for imitation learning, using model learning to generate robot paths and motions. The work creates learning algorithms that produce reusable models, mostly in the form of differential equations, with embedded dynamical-system properties such as stability, safety, and robustness. The learned models can then be used to design safe paths for autonomous systems.

Sample publications:

  • I. Salehi, G. Yao, A. P. Dani, “Active Sampling based Safe Identification of Dynamical System using Extreme Learning Machines and Barrier Certificates”, IEEE International Conference on Robotics and Automation, 2019.
  • H. Ravichandar, A. P. Dani, “Learning Position and Orientation Dynamics from Demonstrations via Contraction Analysis”, Autonomous Robots, May 2018, DOI: 10.1007/s10514-018-9758-x
  • P. K. Thota, H. Ravichandar, A. P. Dani, “Learning and Synchronization of Movement Primitives for Bimanual Manipulation Tasks”, IEEE Conference on Decision and Control, 2016.
  • H. Ravichandar, A. P. Dani, “Learning Contracting Nonlinear Dynamics from Human Demonstrations for Robot Motion Planning”, ASME Dynamic Systems and Control Conference, 2015. Best Robotics Student Paper Award.
  • H. Ravichandar, P. K. Thota, A. P. Dani, “Learning Periodic Motions from Human Demonstrations using Transverse Contraction Analysis”, American Control Conference, 2016.
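The core idea of this thrust — fitting a dynamical-system model to demonstrations while guaranteeing stability by construction — can be illustrated with a minimal toy sketch. The sketch below assumes linear dynamics ẋ = Ax and enforces stability by projecting the symmetric part of the fitted A to be negative definite; this is only an illustration of the stability-constrained-learning idea, not the contraction-analysis methods of the publications above.

```python
import numpy as np

def learn_stable_linear_ds(X, Xdot, eps=1e-2):
    """Fit x_dot = A x from demonstration data (columns of X, Xdot), then
    project A so its symmetric part is negative definite, which guarantees
    global asymptotic stability of the origin (V = x^T x always decreases)."""
    # Least-squares fit of the dynamics matrix
    A = Xdot @ np.linalg.pinv(X)
    # Split into symmetric and skew parts: only S affects V_dot = 2 x^T S x
    S = 0.5 * (A + A.T)
    K = 0.5 * (A - A.T)
    # Clip the eigenvalues of S below -eps to enforce stability
    w, V = np.linalg.eigh(S)
    w = np.minimum(w, -eps)
    return V @ np.diag(w) @ V.T + K

# Synthetic "demonstrations" from a stable spiral (made-up numbers)
A_true = np.array([[-1.0, 2.0], [-2.0, -1.0]])
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 200))
Xdot = A_true @ X + 0.05 * rng.standard_normal((2, 200))

A_hat = learn_stable_linear_ds(X, Xdot)
# The symmetric part of A_hat has all eigenvalues <= -eps by construction
print(np.linalg.eigvalsh(0.5 * (A_hat + A_hat.T)))
```

Paths generated by integrating ẋ = Â x are then guaranteed to converge to the target regardless of noise in the demonstrations, which is the sense in which an embedded stability property makes the learned model safe to deploy.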

Human-Robot Collaboration

This research thrust develops novel methods for human intention estimation and human action-trajectory forecasting by fusing information from different sensors and by modeling and predicting the motion. The forecasted human trajectories can be used for human-robot collaboration in manufacturing applications.

Sample Publications:

  • H. Ravichandar, A. Kumar, A. P. Dani, K. R. Pattipati, “Learning and Predicting Sequential Tasks using Recurrent Neural Networks and Multiple Model Filtering”, AAAI Symposium on Shared Autonomy in Research and Practice, 2016, pp: 331-337.
  • H. Ravichandar, A. Kumar, A. P. Dani, “Gaze and Motion Information Fusion for Human Intention Inference”, International Journal of Intelligent Robotics and Applications, vol. 2, no. 2, pp. 136-148, June 2018.
  • H. Ravichandar, A. P. Dani, “Human Intention Inference using E-M Algorithm with Online Learning”, IEEE Transactions on Automation Science and Engineering, 2016, DOI: 10.1109/TASE.2016.2624279.
  • H. Ravichandar, A. Kumar, A. P. Dani, “Bayesian Human Intention Inference Through Multiple Model Filtering with Gaze-based Priors”, IEEE International Conference on Information Fusion, 2016.
  • H. Ravichandar, A. P. Dani, “Human Intention Inference using Interacting Multiple Model Filtering”, IEEE International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, 2015.
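The flavor of model-based intention inference can be conveyed with a small toy sketch: each hypothesized goal defines a motion model, and a Bayesian posterior over goals is updated from how well each model predicts the observed motion. The goal positions, step fraction, and noise level below are all made up for illustration; the publications above use EM and interacting-multiple-model estimators rather than this simple recursion.

```python
import numpy as np

def infer_goal(traj, goals, sigma=0.3):
    """Recursive Bayesian inference over candidate goals. Each goal g
    defines a motion model that steps a fraction of the way toward g;
    the posterior weights each model by its one-step prediction accuracy
    under a Gaussian likelihood."""
    log_post = np.zeros(len(goals))              # uniform prior
    for x, x_next in zip(traj[:-1], traj[1:]):
        for i, g in enumerate(goals):
            pred = x + 0.2 * (g - x)             # model i's prediction
            err = np.sum((x_next - pred) ** 2)
            log_post[i] += -err / (2 * sigma ** 2)
    post = np.exp(log_post - log_post.max())     # normalize stably
    return post / post.sum()

# Two hypothetical reach targets; simulate a hand moving toward goal 0
goals = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]
rng = np.random.default_rng(1)
x = np.array([0.0, 0.0])
traj = [x]
for _ in range(15):
    x = x + 0.2 * (goals[0] - x) + 0.05 * rng.standard_normal(2)
    traj.append(x)

p = infer_goal(np.array(traj), goals)
print(p)   # posterior mass concentrates on goal 0
```

In a collaboration setting the robot would use this posterior partway through the human's motion, before the reach is complete, to prepare the matching assistive action.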

Perception for Autonomy and Robotics

This research thrust develops novel estimation algorithms for simultaneous localization and mapping (SLAM), 3D range/depth estimation, and object shape estimation using RGB and RGB-D cameras. The methods incorporate machine learning tools into depth estimation and object shape estimation (also known as extended object tracking).

Sample publications:

  • G. Rotithor, D. Trombetta, R. Kamalapurkar, A. P. Dani, “Reduced Order Observer for Structure from Motion using Concurrent Learning”, IEEE Proc. of Conference on Decision and Control, 2019.
  • G. Rotithor, R. Saltus, R. Kamalapurkar, A. P. Dani, “Observer Design for Structure from Motion using Concurrent Learning”, American Control Conference, 2019.
  • G. Yao, R. Saltus, A. P. Dani, “Image Moment-based Extended Object Tracking for Complex Motions”, IEEE Sensors Journal, conditionally accepted, Feb 2020.
  • G. Yao, A. P. Dani, “Visual Tracking using Sparse Coding and Earth Mover’s Distance”, Frontiers in Robotics and AI, 2018, DOI: 10.3389/frobt.2018.00095.
  • G. Yao, M. Williams, A. P. Dani, “Gyro-aided Visual Tracking Using Iterative Earth Mover’s Distance”, IEEE International Conference on Information Fusion, 2016. Best Student Paper Award – 2nd runner up.
  • D. Chwa, A.P. Dani, and W. E. Dixon, “Range and Motion Estimation of Moving Objects using a Monocular Camera”, IEEE Transactions on Control Systems Technology, DOI: 10.1109/TCST.2015.2508001, 2015.
  • J. Yang, A. P. Dani, S.-J. Chung, S. Hutchinson, “Vision-based Localization and Robot-centric Mapping in Riverine Environments”, Journal of Field Robotics, DOI: 10.1002/rob.21606, 2015.
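The structure-from-motion problem — recovering the unobservable depth of a tracked feature from known camera motion — admits a small illustrative sketch. A feature at (X, Z) in the camera frame projects to u = X/Z, and for known camera velocity (vx, vz) the image dynamics are u̇ = (−vx + u·vz)·θ with unknown inverse depth θ = 1/Z. The gradient-type adaptive update below is only a toy, not the concurrent-learning observers of the publications above, and all numbers are made up.

```python
import numpy as np

# Toy depth-from-motion sketch: estimate theta = 1/Z from the measured
# image-coordinate rate using the known regressor phi = -vx + u * vz.
dt, gamma = 0.01, 5.0
vx, vz = 0.4, 0.0             # known camera velocity (assumed)
X, Z = 1.0, 4.0               # true feature position; Z is to be recovered
u = X / Z                     # normalized image coordinate
theta_hat = 1.0               # initial guess corresponds to Z = 1 m

for _ in range(2000):
    # True feature motion in the camera frame (point moves opposite to camera)
    X -= vx * dt
    Z -= vz * dt
    u_next = X / Z
    u_dot_meas = (u_next - u) / dt         # measured image-coordinate rate
    phi = -vx + u * vz                     # known regressor
    # Gradient update driven by the prediction error of the image dynamics
    theta_hat += gamma * phi * (u_dot_meas - phi * theta_hat) * dt
    u = u_next

Z_hat = 1.0 / theta_hat
print(round(Z_hat, 3))        # → 4.0, recovering the true depth
```

The convergence here relies on the regressor staying nonzero (the camera must actually translate); the concurrent-learning observers cited above relax exactly this kind of persistent-excitation requirement by reusing recorded data.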

Sponsors

We gratefully acknowledge support from the following funding sources, which sponsored this research.