Skeleton-Based Human Motion Prediction


Scene 1 (0s)

[Audio] Skeleton-Based Human Motion Prediction. Name: Vikas Nayikoti, ID: 989467381.

Scene 2 (14s)

[Audio] INTRODUCTION The field of human motion prediction has received considerable interest in recent years due to its applications in numerous domains, including robotics, computer animation, virtual reality, and healthcare. Human motion prediction involves forecasting the future movements of a person based on their past movements, which is critical for improving human-robot interaction, entertainment, and clinical rehabilitation. This proposal seeks to address the problem of skeleton-based human motion prediction, focusing on the development of improved predictive models and methodologies. The central purpose of the proposed project is to advance the state of the art in human motion prediction using skeleton data. Specifically, we aim to investigate and develop novel techniques that can accurately forecast human movements in various scenarios. We will examine the challenges associated with this area and propose solutions that contribute to the existing body of knowledge.
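For concreteness, the prediction task can be read as: given the observed 3D joint positions over the past T frames, output the joint positions for the next K frames. Below is a minimal, illustrative Python (NumPy) sketch of that input/output contract, using a constant-velocity extrapolation as a stand-in baseline; the joint count, window lengths, and the baseline itself are assumptions for illustration, not part of the proposal.

import numpy as np

# Hypothetical shapes: 25 joints in 3D, 50 observed frames, 25 future frames.
J, T_PAST, T_FUTURE = 25, 50, 25

def constant_velocity_baseline(past):
    """past: (T_PAST, J, 3) array of joint positions.
    Returns a (T_FUTURE, J, 3) forecast by extrapolating the last-frame velocity."""
    velocity = past[-1] - past[-2]                     # (J, 3) per-joint velocity
    steps = np.arange(1, T_FUTURE + 1)[:, None, None]  # (T_FUTURE, 1, 1)
    return past[-1] + steps * velocity                 # broadcasts to (T_FUTURE, J, 3)

past_motion = np.random.randn(T_PAST, J, 3)            # stand-in for real skeleton data
future_guess = constant_velocity_baseline(past_motion)
print(future_guess.shape)                              # (25, 25, 3)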

Scene 3 (1m 18s)

[Audio] BACKGROUND Human motion prediction has been extensively explored by researchers from diverse disciplines, such as computer science, biomechanics, and robotics. Recent advances in deep learning and machine learning have led to substantial progress in this area. Existing studies cover a wide range of applications, including character animation, human-robot interaction, and medical rehabilitation. Our proposed project builds upon this rich literature and aims to contribute by developing more accurate and versatile skeleton-based human motion prediction models. We will leverage insights from previous work and combine state-of-the-art techniques in machine learning, neural networks, and biomechanical modeling to advance the field. A comprehensive review of the literature has identified key developments and gaps in the current body of knowledge. We have reviewed over 20 peer-reviewed conference and journal articles that address the challenges and opportunities in human motion prediction, including works on data-driven methods, motion synthesis, and real-time applications. Our project intends to address limitations in existing methodologies and further improve the accuracy and applicability of motion prediction models.

Scene 4 (2m 41s)

[Audio] IMPLEMENTATION The implementation of our project will involve: Data collection: gathering high-quality skeleton data from diverse sources, including motion capture systems, depth sensors, and simulated environments. Model development: designing and training deep learning models, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), and hybrid models, to predict human motions from skeleton data. Evaluation: assessing the performance of our models through metrics such as mean squared error (MSE) and mean absolute error (MAE), together with qualitative analysis by experts in the field. Integration: integrating the developed models into real-world applications, including virtual reality, robotics, and clinical rehabilitation systems.
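As an illustration of the model-development step, the following is a minimal PyTorch sketch of one possible recurrent predictor: a GRU encodes the observed skeleton sequence and then rolls out future frames autoregressively. The architecture, joint count, and hidden size are assumptions for illustration only; the proposal leaves the concrete RNN/CNN/hybrid designs open.

import torch
import torch.nn as nn

class SkeletonGRUPredictor(nn.Module):
    """Minimal GRU forecaster: encodes observed frames, then rolls out future
    frames autoregressively. Joint count and hidden size are illustrative."""
    def __init__(self, num_joints=25, hidden=256):
        super().__init__()
        self.input_dim = num_joints * 3
        self.gru = nn.GRU(self.input_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, self.input_dim)

    def forward(self, past, future_len):
        # past: (batch, T_past, num_joints*3) flattened joint coordinates
        _, h = self.gru(past)                  # encode the observed sequence
        frame = past[:, -1:, :]                # start roll-out from the last observed frame
        preds = []
        for _ in range(future_len):
            out, h = self.gru(frame, h)
            frame = frame + self.out(out)      # predict a per-frame residual (delta)
            preds.append(frame)
        return torch.cat(preds, dim=1)         # (batch, future_len, num_joints*3)

model = SkeletonGRUPredictor()
past = torch.randn(8, 50, 25 * 3)              # dummy batch of observed motion
future = model(past, future_len=25)
print(future.shape)                            # torch.Size([8, 25, 75])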

Scene 5 (3m 41s)

[Audio] METHOD Our research plan includes the following steps: Research questions: How can we improve the accuracy and generalizability of skeleton-based human motion prediction models? What role does biomechanical knowledge play in improving the prediction of complex human movements? Data collection: collect skeleton data from diverse sources, including motion capture data, depth sensors, and publicly available datasets. Model development: train and fine-tune deep learning models, including recurrent neural networks (RNNs) and convolutional neural networks (CNNs), using the collected data. Evaluation: assess the performance of our models using quantitative metrics such as MSE and MAE, and seek expert reviews to judge the quality and naturalness of the predicted motions. Application: integrate the developed models into applications related to virtual reality, robotics, and healthcare.
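For the quantitative evaluation step, a minimal NumPy sketch of computing MSE and MAE between predicted and ground-truth joint trajectories is shown below; the array shapes and the random stand-in data are illustrative assumptions.

import numpy as np

def evaluate_prediction(pred, gt):
    """pred, gt: (T, J, 3) arrays of predicted and ground-truth joint positions.
    Returns MSE and MAE averaged over frames, joints, and coordinates."""
    err = pred - gt
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    return mse, mae

# Illustrative usage with random stand-in data
pred = np.random.randn(25, 25, 3)
gt = np.random.randn(25, 25, 3)
mse, mae = evaluate_prediction(pred, gt)
print(f"MSE={mse:.4f}  MAE={mae:.4f}")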

Scene 6 (4m 53s)

[Audio] CONCLUSION This proposal outlines a comprehensive plan to advance the field of skeleton-based human motion prediction. By leveraging recent advancements in machine learning, biomechanics, and data collection, our project aims to improve the accuracy and applicability of motion prediction models. The results of this research could have a significant impact on fields such as character animation, human-robot interaction, and medical rehabilitation.

Scene 7 (5m 25s)

[Audio] REFERENCES
[1] Qinghua Li, Zhao Zhang, Yue You, Yaqi Mu, and Chao Feng, "Data-Driven Models for Human Motion Prediction in Human-Robot Collaboration," IEEE Access, vol. 8, pp. 227690-227702, December 2020. DOI: 10.1109/ACCESS.2020.3045994.
[2] Wansong Liu, Xiao Liang, and Minghui Zheng, "Dynamic Model Informed Human Motion Prediction Based on Unscented Kalman Filter," IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 5287-5295, June 2022. DOI: 10.1109/TMECH.2022.3173167.
[3] Wansong Liu, Xiao Liang, and Minghui Zheng, "Task-Constrained Motion Planning Considering Uncertainty-Informed Human Motion Prediction for Human-Robot Collaborative Disassembly," IEEE/ASME Transactions on Mechatronics, vol. 28, no. 4, pp. 2056-2063, August 2023. DOI: 10.1109/TMECH.2023.3275316.
[4] M. Dong and C. Xu, "Skeleton-Based Human Motion Prediction With Privileged Supervision," IEEE Transactions on Cybernetics, vol. 52, no. 4, pp. 2150-2162, April 2022. DOI: 10.1109/TCYB.2021.2194240.
[5] P. Kratzer, S. Bihlmaier, N. B. Midlagajni, R. Prakash, M. Toussaint, and J. Mainprice, "MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 367-373, April 2021. DOI: 10.1109/LRA.2020.3043167.
[6] X. Zhang, X. Mu, H. Xu, A. B. Alhassan, and H. K. Kadry, "Vibration Characteristics Analysis of Human-Robot Coupled System for Walking Posture of Elderly-Assistant Robot," IEEE Access, vol. 9, pp. 44217-44235, 2021. DOI: 10.1109/ACCESS.2021.3066397.
[7] A. Stefek, T. V. Pham, V. Krivanek, and K. L. Pham, "Energy Comparison of Controllers Used for a Differential Drive Wheeled Mobile Robot," IEEE Access, vol. 8, pp. 170915-170927, 2020. DOI: 10.1109/ACCESS.2020.3023345.
[8] Sun, K., Shang, J., Wang, C., & Zhang, X. (2018). Integral human pose regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5609-5618. DOI: 10.1109/CVPR.2018.00585.
[9] Rhodin, H., Robertini, N., Richardt, C., Seidel, H. P., & Theobalt, C. (2016). A versatile scene model with differentiable visibility applied to generative pose estimation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 765-773. DOI: 10.1109/ICCV.2015.92.
[10] Wang, Y., & Popović, J. (2009). Real-time hand-tracking with a color glove. ACM Transactions on Graphics, 28(3), 63. DOI: 10.1145/1531326.1531386.

Scene 8 (10m 25s)

[Audio] REFERENCES
[11] Holden, D., & Saito, J. (2017). Deep Volumetric Video from Very Sparse Multi-view Performance Capture. ACM Transactions on Graphics, 36(4), 80. DOI: 10.1145/3072959.3073630.
[12] Mehta, D., Sotnychenko, O., Mueller, F., & Theobalt, C. (2017). VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera. ACM Transactions on Graphics, 36(4), 44. DOI: 10.1145/3072959.3073595.
[13] Martinez, J., Hossain, R., Romero, J., & Little, J. (2017). A simple yet effective baseline for 3d human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2659-2668. DOI: 10.1109/ICCV.2017.292.
[14] Xue, J., Wu, J., Sun, X., & Zhang, H. (2019). SPIN: Spatial Pose-Integrity Node for Human Pose Estimation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 7210-7219. DOI: 10.1109/ICCV.2019.00730.
[15] Elhayek, A., de Aguiar, E., Jain, A., Tompson, J., & Bregler, C. (2015). Efficient ConvNet-based marker-less motion capture in general scenes with a low number of cameras. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 3810-3818. DOI: 10.1109/ICCV.2015.434.
[16] Bogo, F., Romero, J., & Black, M. J. (2016). Dynamic FAUST: Registering human bodies in motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3802-3810. DOI: 10.1109/CVPR.2016.416.
[17] Rogez, G., & Schmid, C. (2016). MO2: Real-time pose tracking with a color marker. In Proceedings of the European Conference on Computer Vision (ECCV), 349-365. DOI: 10.1007/978-3-319-46454-1_21.
[18] Toshev, A., & Szegedy, C. (2014). DeepPose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1653-1660. DOI: 10.1109/CVPR.2014.214.
[19] Xiong, X., & De la Torre, F. (2013). Supervised descent method and its applications to face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 532-539. DOI: 10.1109/CVPR.2013.73.
[20] Wei, S. E., Ramakrishna, V., Kanade, T., & Sheikh, Y. (2016). Convolutional pose machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4724-4732. DOI: 10.1109/CVPR.2016.514.