Article

Generation an Annotated Dataset of Human Poses for Deep Learning Networks Based on Motion Tracking System

I. Artamonov, Y. Artamonova, A. Efitorov, V. Shirokii, O. Vasilyev,
2021

In this paper, we propose an original method for the relatively fast generation of an annotated dataset of human poses for training deep neural networks, based on a 3D motion capture system. Compared with default pose-detection DNNs trained on commonly used open datasets, the method makes it possible to recognize specific poses and actions more accurately and reduces the need for additional image-processing operations aimed at correcting the various detection errors inherent to these DNNs. We used a pre-installed IR motion capture system with passive reflective markers not to capture movement as such but to extract human keypoints in 3D space, while recording video at the corresponding timestamps. The obtained 3D trajectories were synchronized in time and space with the streams from several cameras using mutual camera calibration and photogrammetry. This allowed us to accurately project keypoints from 3D space onto the 2D video frame plane, generate human pose annotations for the recorded video, and train a deep neural network on this dataset.
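The central technical step the abstract describes, projecting synchronized 3D keypoints onto a 2D video frame through a calibrated camera model, can be sketched with a standard pinhole projection. This is a minimal illustration, not the authors' pipeline: the intrinsic matrix `K`, the camera pose `R`, `t`, and the sample points below are all hypothetical values, and lens distortion and the mutual-calibration step are omitted.

```python
import numpy as np

def project_keypoints(points_3d, K, R, t):
    """Project Nx3 world-space keypoints onto the image plane
    of a calibrated pinhole camera (lens distortion ignored)."""
    cam = points_3d @ R.T + t       # world -> camera coordinates
    uv = cam @ K.T                  # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide -> Nx2 pixel coords

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                 # camera aligned with the world frame
t = np.zeros(3)               # camera at the world origin

pts = np.array([[0.0,  0.0, 2.0],   # a keypoint 2 m in front of the camera
                [0.1, -0.2, 2.0]])
print(project_keypoints(pts, K, R, t))  # pixel coords (320, 240) and (360, 160)
```

In the multi-camera setting described in the abstract, each camera would carry its own `K`, `R`, `t` recovered from mutual calibration, and the projected pixels at matching timestamps would become the per-frame pose annotations.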

Versions

  • 1. Version of Record, 2021-01-01

Metadata

About the authors
  • I. Artamonov
    Neurocorpus Ltd.
  • Y. Artamonova
    Neurocorpus Ltd.
  • A. Efitorov
    Lomonosov Moscow State University
  • V. Shirokii
    Lomonosov Moscow State University
  • O. Vasilyev
    Russian State University of Physical Education, Sport, Youth and Tourism
Journal title
  • Studies in Computational Intelligence
Volume
  • 925 SCI
Pages
  • 198-204
Funding organization
  • Foundation for Assistance to Small Innovative Enterprises in Science and Technology
Grant number
  • 1GS1NTI5/43222 06.09.2018
Document type
  • journal article
Creative Commons license type
  • CC BY
Legal status of the document
  • Open license
Source
  • scopus