Human Motion Recognition Using Artificial Intelligence Techniques
ENACHI, Andrei, TURCU, Cornel, CULEA, George, TURCU, Cornel, ANDRIOAIA, Dragos-Alexandru, PETRU, Puiu-Gabriel, POPA, Sorin-Eugen. Human Motion Recognition Using Artificial Intelligence Techniques. In: Electronics, Communications and Computing, Ed. 12, 20-21 October 2022, Chişinău. Chișinău: Tehnica-UTM, 2023, Edition 12, pp. 200-202. DOI: https://doi.org/10.52326/ic-ecco.2022/CS.11
Electronics, Communications and Computing
Edition 12, 2023
The conference "Electronics, Communications and Computing"
12th edition, Chişinău, Moldova, 20-21 October 2022


DOI: https://doi.org/10.52326/ic-ecco.2022/CS.11

Pages 200-202

Enachi Andrei1, Turcu Cornel2, Culea George1, Turcu Cornel1, Andrioaia Dragos-Alexandru1, Petru Puiu-Gabriel1, Popa Sorin-Eugen1
 
1 "Vasile Alecsandri" University of Bacau
2 "Ștefan cel Mare" University, Suceava
 
 
Available in IBN: 3 April 2023


Abstract

The goal of this paper's research is to develop learning methods that support the automatic analysis and interpretation of human body and mime-gestural movement from various perspectives and from various data sources, such as images, video, depth, mocap data, audio, and inertial sensors. Deep neural models are used for supervised classification and semi-supervised feature learning, as well as for modeling temporal dependencies, and their effectiveness is demonstrated on a set of fundamental tasks such as detection, classification, parameter estimation, and user verification. A method is proposed for identifying and classifying human actions and gestures using multi-dimensional and multi-modal deep learning from visual signals (for example, live stream, depth, and motion-based data). The training strategy first carefully initializes each individual modality and then fuses them gradually (a scheme called ModDrop) to learn cross-modality correlations while preserving the uniqueness of each modality-specific representation. In addition, the proposed ModDrop training approach keeps the classifier robust to weak or missing inputs on one or more channels, enabling valid predictions from any subset of the available modalities. Data collected by inertial sensors (such as accelerometers and gyroscopes) embedded in mobile devices are also used in this paper.
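The core idea behind ModDrop-style training is to randomly drop whole modality inputs during training so the fused classifier cannot rely on any single channel. The abstract does not give implementation details, so the following is only a minimal NumPy sketch of that idea; the function name `moddrop` and the zero-masking convention for "missing" modalities are assumptions for illustration, not the authors' actual code.

```python
import numpy as np

def moddrop(modalities, p_drop=0.3, rng=None):
    """ModDrop-style modality dropout (illustrative sketch).

    modalities: list of per-modality feature arrays
                (e.g. video, depth, and inertial-sensor features).
    p_drop:     probability of dropping each modality independently.

    A dropped modality is replaced by zeros, simulating a weak or
    missing channel so the fusion layers learn to predict from any
    subset of the available modalities.
    """
    rng = rng or np.random.default_rng()
    out = []
    for x in modalities:
        if rng.random() < p_drop:
            out.append(np.zeros_like(x))  # modality "missing" this step
        else:
            out.append(x)                 # modality passed through intact
    return out

# Example: fuse video, depth, and IMU features after modality dropout.
video, depth, imu = np.ones(8), np.ones(4), np.ones(6)
fused = np.concatenate(moddrop([video, depth, imu], p_drop=0.3))
```

During the gradual-fusion phase described in the abstract, such a mask would be applied per training sample, after each modality-specific branch has been initialized on its own.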

Keywords
learning methods, ModDrop, neural models, sensors, mime-gesture