Machine Learning for Control
This research thrust develops novel methods for learning robot paths and motions from demonstration via model learning. The research creates learning algorithms that produce re-usable models (mostly in the form of differential equations) and controllers that preserve properties observed in the data, such as stability, safety, and robustness.
- I. Salehi, G. Rotithor, G. Yao, A. P. Dani, “Dynamical System Learning using Extreme Learning Machines with Safety and Stability Guarantees”, International Journal of Adaptive Control and Signal Processing, vol. 35, no. 6, pp. 894-914, 2021.
- I. Salehi, G. Yao, A. P. Dani, “Active Sampling based Safe Identification of Dynamical Systems using Extreme Learning Machines and Barrier Certificates”, IEEE International Conference on Robotics and Automation, 2019, pp. 22-28.
- H. Ravichandar, A. P. Dani, “Learning Position and Orientation Dynamics from Demonstrations via Contraction Analysis”, Autonomous Robots, vol. 43, no. 4, pp. 897-912, 2019.
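The idea of learning a dynamical-system model from data while enforcing a stability property can be illustrated with a minimal sketch: fit linear dynamics by least squares, then shift the matrix so a quadratic Lyapunov function certifies stability. This is a toy illustration only, not the extreme-learning-machine or contraction-based methods of the papers above; the stability margin and data here are illustrative.

```python
import numpy as np

def fit_stable_linear_dynamics(X, X_dot):
    """Fit x_dot = A x by least squares, then shift A so that its
    symmetric part is negative definite, which guarantees global
    exponential stability of the origin (V(x) = x^T x is a
    Lyapunov function)."""
    # Least squares solves X W = X_dot for W = A^T.
    W, *_ = np.linalg.lstsq(X, X_dot, rcond=None)
    A = W.T
    # Stability projection: push eigenvalues of (A + A^T)/2 below -margin.
    S = 0.5 * (A + A.T)
    lam_max = np.max(np.linalg.eigvalsh(S))
    margin = 1e-3
    if lam_max > -margin:
        A = A - (lam_max + margin) * np.eye(A.shape[0])
    return A

# Demonstrations sampled from a stable spiral; the fit recovers it
# and the projection leaves it unchanged, since it is already stable.
rng = np.random.default_rng(0)
A_true = np.array([[-1.0, 2.0], [-2.0, -1.0]])
X = rng.normal(size=(200, 2))        # sampled states
X_dot = X @ A_true.T                 # corresponding velocities
A_hat = fit_stable_linear_dynamics(X, X_dot)
```

The same projection step is what distinguishes "learning with a guarantee" from plain regression: an unconstrained fit of noisy demonstrations can yield unstable dynamics even when the demonstrated motion converges.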
Human Intention Estimation and Trajectory Forecasting
This research thrust develops novel methods for human intention estimation and human action trajectory forecasting using information fusion from different sensors. The forecasted human trajectories can be used in the context of human-robot collaboration in manufacturing or space robotics.
- D. Trombetta, G. Rotithor, I. Salehi, A. P. Dani, “Variable Structure Human Intention Estimator with Mobility and Vision Constraints as Model Selection Criteria”, IFAC Mechatronics, vol. 76, Art. no. 102570, 2021.
- A. P. Dani, I. Salehi, G. Rotithor, D. Trombetta, H. Ravichandar, “Human-in-the-loop Robot Control for Human-Robot Collaboration”, IEEE Control Systems, vol. 40, no. 6, pp. 29-56, Dec. 2020.
- H. Ravichandar, A. P. Dani, “Human Intention Inference using E-M Algorithm with Online Learning”, IEEE Transactions on Automation Science and Engineering, vol. 14, no. 2, pp. 855-868, 2016.
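A minimal sketch of trajectory forecasting: filter observed 2-D hand positions with a constant-velocity Kalman filter, then roll the motion model forward open-loop to predict where the human will move next. This is an illustrative baseline, not the variable-structure or E-M-based estimators of the papers above; the noise covariances and data are assumed values.

```python
import numpy as np

def forecast_positions(observed, dt, horizon):
    """Filter noisy 2-D positions with a constant-velocity Kalman
    filter, then propagate the model `horizon` steps without
    measurements to forecast the trajectory."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)  # state: [x, y, vx, vy]
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # only position is observed
    Q = 0.01 * np.eye(4)   # process noise covariance (assumed)
    R = 0.05 * np.eye(2)   # measurement noise covariance (assumed)
    x = np.array([observed[0][0], observed[0][1], 0.0, 0.0])
    P = np.eye(4)
    for z in observed[1:]:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
    preds = []
    for _ in range(horizon):               # open-loop forecast
        x = F @ x
        preds.append(x[:2].copy())
    return np.array(preds)

# A hand moving in a straight line; the forecast extrapolates it.
track = [(0.1 * i, 0.2 * i) for i in range(20)]
preds = forecast_positions(track, dt=1.0, horizon=5)
```

Intention estimators of the kind cited above go further by maintaining several candidate goal/model hypotheses and switching or mixing between them; the single-model filter here is the building block such estimators run per hypothesis.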
Visual Perception for Autonomy and Robotics
This research thrust develops novel estimation algorithms for 3D range/depth estimation and deformable object shape estimation using RGB and RGB-D cameras. The methods incorporate machine learning tools into depth estimation and object shape estimation (or extended object tracking).
- G. Yao, R. Saltus, A. P. Dani, “Shape Estimation for Elongated Deformable Object using B-spline Chained Multiple Random Matrices Model”, International Journal of Intelligent Robotics and Applications, vol. 4, no. 4, 2020, pp. 429-440.
- G. Yao, R. Saltus, A. P. Dani, “Image Moment-based Extended Object Tracking for Complex Motions”, IEEE Sensors Journal, vol. 20, no. 12, pp. 6560-6572, 2020.
- G. Rotithor, D. Trombetta, R. Kamalapurkar, A. P. Dani, “Full and Reduced Order Observers for Image-based Depth Estimation using Concurrent Learning”, IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2647-2653, 2021.
- D. Chwa, A. P. Dani, W. E. Dixon, “Range and Motion Estimation of a Monocular Camera using Static and Moving Objects”, IEEE Transactions on Control Systems Technology, vol. 24, no. 4, pp. 1174-1183, 2015.
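The core of image-based depth estimation can be sketched with a toy observer: for a calibrated pinhole camera undergoing known translation, the optical flow of a feature is linear in its inverse depth, so a gradient update driven by the flow-prediction error recovers the depth. This is a simplified illustration under idealized assumptions (unit focal length, pure translation, noiseless tracking), not the concurrent-learning observers of the papers above; the gain and initial guess are arbitrary.

```python
import numpy as np

def estimate_depth(u, v, vel, dt, gain=5.0):
    """Gradient-type observer for the inverse depth rho = 1/Z of a
    tracked image feature, assuming a calibrated pinhole camera
    (unit focal length) with known linear velocity (vx, vy, vz).
    Image-plane dynamics under pure translation:
        u_dot = (u*vz - vx) * rho,   v_dot = (v*vz - vy) * rho."""
    vx, vy, vz = vel
    rho_hat = 0.5                          # initial inverse-depth guess
    for k in range(1, len(u)):
        # Image velocities from finite differences of the track
        u_dot = (u[k] - u[k-1]) / dt
        v_dot = (v[k] - v[k-1]) / dt
        # Regressor relating the unknown rho to the measured flow
        phi = np.array([u[k-1] * vz - vx, v[k-1] * vz - vy])
        err = np.array([u_dot, v_dot]) - phi * rho_hat
        rho_hat += gain * dt * (phi @ err)  # gradient correction
    return 1.0 / rho_hat

# Feature on a static point at depth Z = 4; the camera slides
# sideways with known velocity, and the observer recovers Z.
Z_true, vel, dt = 4.0, (0.5, 0.2, 0.0), 0.01
t = np.arange(501) * dt
u = (0.2 - vel[0] * t) / Z_true        # projected feature track
v = (0.1 - vel[1] * t) / Z_true
Z_hat = estimate_depth(u, v, vel, dt)
```

The estimate converges only while the camera motion keeps the regressor non-zero, which is the persistence-of-excitation requirement that concurrent-learning observers are designed to relax by reusing recorded data.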