Overview of computer vision supervised learning techniques for low-data training
Citation (SM ISO690:2012):
BURLACU, Alexandru. Overview of computer vision supervised learning techniques for low-data training. In: Electronics, Communications and Computing, Ed. 10, 23-26 October 2019, Chişinău. Chișinău, Republica Moldova: 2019, Edition 10, p. 44. ISBN 978-9975-108-84-3.

Burlacu Alexandru
Universitatea Tehnică a Moldovei
Available in IBN: 7 November 2019


Abstract

This work is an overview of techniques of varying complexity and novelty for supervised, or rather weakly supervised, learning in computer vision. With the advent of deep learning, the number of organizations and practitioners who believe they can solve their problems with it grows as well. Deep learning algorithms normally require vast amounts of labeled data, but depending on the domain it is not always possible to obtain a large, well-annotated dataset; consider healthcare, for example. This paper starts by giving some background on supervised, weakly-supervised, and self-supervised learning in general, and in computer vision specifically. It then describes various methods to ease the need for a big labeled dataset. The paper discusses the importance of these methods in fields such as medical imaging, autonomous driving, and even autonomous drone navigation. Starting with simple methods like knowledge transfer, it also describes a number of knowledge distillation techniques and ends with the latest self- and semi-supervised methods, such as Unsupervised Data Augmentation (UDA), MixMatch, Snorkel, and adding synthetic tasks to the learning model, thus touching on the multi-task learning problem. Finally, topics and papers not yet reviewed are mentioned with brief commentary, and the paper closes with a discussion section. This paper does not cover few-shot/one-shot learning, because that is another huge sub-domain, with a scope somewhat different from that of weakly-supervised and self-supervised learning.
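As a concrete illustration of one family of techniques the abstract surveys, knowledge distillation trains a small student model to match the temperature-softened output distribution of a larger teacher. A minimal sketch of the soft-target loss is below; the function names, logit values, and temperature are illustrative assumptions, not taken from the paper:

```python
import math

def softmax(logits, temperature=1.0):
    # Soften the logits with a temperature, then normalize to a distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution (soft targets)
    # and the student's softened distribution; minimized when they coincide.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Hypothetical per-class logits for one example.
teacher = [3.0, 1.0, 0.2]
student = [2.5, 1.2, 0.3]
loss = distillation_loss(teacher, student)
print(f"distillation loss: {loss:.4f}")
```

In practice this soft-target term is usually combined with the ordinary cross-entropy on the hard labels, and a higher temperature exposes more of the teacher's "dark knowledge" about the relative similarity of the wrong classes.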

Keywords
knowledge distillation, knowledge transfer, self-supervised learning, semi-supervised learning, weakly-supervised learning