Single-Layer Perceptron: the basic principles of construction and functioning
Citation (SM ISO690:2012): LUPU, Mihail. Single-Layer Perceptron: the basic principles of construction and functioning. In: The 12th international conference on intrinsic Josephson effect and horizons of superconducting spintronics, 22-25 October 2021, Chişinău. Chişinău: 2021, p. 75. ISBN 978-9975-47-215-9.
The 12th international conference on intrinsic Josephson effect and horizons of superconducting spintronics
Chişinău, Moldova, 22-25 October 2021



Page 75

Lupu Mihail
Institute of Electronic Engineering and Nanotechnologies "D. Ghitu"

Available in IBN: 21 March 2022


Abstract

In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function that can decide whether an input, represented by a vector of numbers, belongs to some specific class. The perceptron is a type of linear classifier, i.e., a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.

The perceptron algorithm was invented in 1958 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research. The perceptron was intended to be a machine rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the "Mark 1". This machine was designed for image recognition: it had an array of 400 photocells randomly connected to the "neurons". Weights were encoded in potentiometers, and weight updates during learning were performed by electric motors.

Single-layer perceptrons are only capable of learning linearly separable patterns. For a classification task with a step activation function, a single node will have a single line dividing the data points forming the patterns. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. A second layer of perceptrons, or even linear nodes, is sufficient to solve many otherwise non-separable problems.

The simplest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated in each node, and if the value is above some threshold (typically 0), the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1). Neurons with this kind of activation function are also called artificial neurons or linear threshold units. In the literature, the term perceptron often refers to networks consisting of just one of these units. A similar neuron was described by Warren McCulloch and Walter Pitts in the 1940s.

Having considered the basic principles of the structure and functioning of a single-layer perceptron, one can draw conclusions about the possibility of applying it in various fields and improving its performance, in order to create models that perform the assigned tasks as accurately as possible and learn as quickly as possible.
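
The abstract describes the node's computation directly: a weighted sum of the inputs is compared against a threshold (typically 0), and the unit outputs the activated value +1 when it fires and the deactivated value -1 otherwise. Below is a minimal Python sketch of such a linear threshold unit; the function and variable names are illustrative choices, not from the source.

```python
# Minimal sketch of a linear threshold unit as described in the abstract:
# weighted sum of inputs vs. a threshold, output +1 (fires) or -1.
# Names (threshold_unit, weights, bias) are illustrative assumptions.

def threshold_unit(x, weights, bias=0.0, threshold=0.0):
    """Return +1 if the weighted sum exceeds the threshold, else -1."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > threshold else -1

# Example: one unit computing logical AND over inputs in {-1, +1}.
w = [1.0, 1.0]
b = -1.0
for x in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print(x, threshold_unit(x, w, b))   # fires (+1) only for (1, 1)
```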
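The abstract calls the perceptron "an algorithm for supervised learning" but does not spell out the update rule. The sketch below uses the classic Rosenblatt rule, in which a misclassified example nudges the weights toward its label; this is the standard formulation rather than anything stated in the source, and the code structure and names are assumptions.

```python
# Sketch of the classic Rosenblatt learning rule for a single threshold
# unit. The abstract mentions supervised learning but not the rule itself;
# this is the standard formulation, with illustrative names.

def train_perceptron(samples, n_features, lr=1.0, epochs=100):
    """samples: list of (x, y) pairs with y in {-1, +1}.
    Returns learned (weights, bias)."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in samples:
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if s > 0 else -1
            if pred != y:   # misclassified: move the boundary toward y
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:     # every sample classified correctly: stop
            break
    return w, b

# Logical AND over {-1, +1} is linearly separable, so training converges.
and_data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b = train_perceptron(and_data, n_features=2)
print(w, b)
```

The rule is guaranteed to converge only on linearly separable data, which connects directly to the limitation the abstract highlights.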
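The abstract notes that a single node draws a single dividing line, so a pattern like XOR, which no line can separate, is out of reach for one unit, while a second layer of threshold units suffices. The sketch below hand-wires one such two-layer solution over inputs in {-1, +1}; the specific weights are an illustrative choice, not from the source.

```python
# XOR is not linearly separable, so a single threshold unit cannot
# represent it; two hidden threshold units plus an output unit can.
# The weights below are one working choice among many.

def step(s):
    return 1 if s > 0 else -1

def xor_two_layer(x1, x2):
    h1 = step(x1 - x2 - 1)    # fires only for (+1, -1): x1 AND NOT x2
    h2 = step(x2 - x1 - 1)    # fires only for (-1, +1): x2 AND NOT x1
    return step(h1 + h2 + 1)  # OR of the two hidden units

for x in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print(x, xor_two_layer(*x))   # fires (+1) exactly when inputs differ
```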