Integrating a hierarchical structure of situated human motion in Multi-task learning for professional gesture recognition
Abstract
Human-machine interaction plays an increasingly important role in professional environments, and especially where manual work is involved, the accurate recognition of human actions has become essential, advancing considerably thanks to the wide adoption of Machine Learning (ML) algorithms. Standard ML methods widely used for action recognition, however, still face challenges in generalizing and adapting to new, unseen data. This study addresses the problem of accurately interpreting human actions in work environments, which is essential to the advancement of ML applications in manual jobs and the improvement of human-machine interaction. The proposed work deploys, and plans to further experiment with, Meta-Learning and Multi-Task Learning (MTL) to address the complex dynamics of human movement, together with a hierarchical model that decomposes human movement into simpler representations. These models aim to tackle the intricacies of the data and enhance generalization across many professional contexts, and the proposed methodology promises to improve efficiency, safety, and productivity across various industries. This work not only improves the understanding of human motion but also showcases practical applications, marking a step forward in the integration of AI in professional environments, towards enhanced human-machine synergy and vocational training.
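As a purely illustrative sketch (not the architecture used in this work), the multi-task setup described above can be pictured as a shared motion encoder feeding separate heads, one per level of a hypothetical movement hierarchy (e.g., full gestures and simpler motion primitives). All module names, dimensions, and task definitions below are assumptions for illustration.

```python
# Illustrative sketch only: a multi-task model with a shared sequence encoder
# and one classification head per hierarchy level. Names, dimensions, and task
# definitions are assumptions, not the implementation described in this work.
import torch
import torch.nn as nn

class SharedMotionEncoder(nn.Module):
    """Encodes a pose/skeleton sequence into a fixed-size representation."""
    def __init__(self, input_dim=75, hidden_dim=128):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):            # x: (batch, time, input_dim)
        _, h = self.gru(x)           # h: (1, batch, hidden_dim)
        return h.squeeze(0)          # (batch, hidden_dim)

class HierarchicalMTLModel(nn.Module):
    """Shared encoder plus one head per task (level of the movement hierarchy)."""
    def __init__(self, num_gestures=10, num_primitives=5):
        super().__init__()
        self.encoder = SharedMotionEncoder()
        self.gesture_head = nn.Linear(128, num_gestures)      # full professional gestures
        self.primitive_head = nn.Linear(128, num_primitives)  # simpler motion components

    def forward(self, x):
        z = self.encoder(x)
        return self.gesture_head(z), self.primitive_head(z)

# Joint training: the total loss is a weighted sum of per-task losses, so the
# shared encoder learns features useful at both levels of the hierarchy.
model = HierarchicalMTLModel()
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 60, 75)                     # 8 sequences, 60 frames, 75 pose features
y_gesture = torch.randint(0, 10, (8,))
y_primitive = torch.randint(0, 5, (8,))
logits_g, logits_p = model(x)
loss = criterion(logits_g, y_gesture) + 0.5 * criterion(logits_p, y_primitive)
loss.backward()
```

In this kind of setup, the relative weighting of the per-task losses (here an assumed 0.5 on the primitive task) controls how strongly the shared representation is shaped by each level of the hierarchy.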