SM ISO690:2012 DULSCHI, Olivia. Ethical concerns of utilising artificial intelligence in surveillance systems at government level. In: Contribuția tinerilor cercetători la dezvoltarea administrației publice, 26 februarie 2021, Chişinău. Chişinău: "Print-Caro" SRL, 2021, Ediția 7, pp. 65-67. ISBN 978-9975-3492-3-9. |
Conference "Contribuția tinerilor cercetători la dezvoltarea administrației publice", Chişinău, Moldova, 26 February 2021
CZU: [004.8+351.78]:17
Pages: 65-67
Abstract
The purpose of this paper is to study the ethical concerns of embedding Artificial Intelligence (AI) into systems such as surveillance. A specific objective is to explain the ethical concerns that may arise from mismanaged technology. The research question is: how do we create a safe society without putting smaller social groups at risk? The initial hypothesis is that institutions charged with ensuring public safety should use ethically appropriate systems, such as algorithms that are not predefined to automatically target someone based on their race, ethnicity, religion, or other characteristics. The research method is descriptive, a qualitative analysis grounded in the theory behind AI systems as well as in real-life case studies. The main conclusion is that, at present, even the technological giants encounter difficulties in developing software that is not prone to misjudgments. Developing such software requires great responsibility and social commitment from software engineers and governments alike.
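The hypothesis above concerns algorithms that do not disproportionately target particular groups. As a minimal illustration (not from the paper; all names and data are hypothetical), one of the simplest audits for such a system is a demographic-parity check comparing how often its decisions flag members of each group:

```python
# Minimal sketch of a demographic-parity audit for a surveillance
# classifier's "flag" decisions. All data here is hypothetical.

def flag_rates(decisions, groups):
    """Return the fraction of flagged individuals per group."""
    counts = {}
    for flagged, group in zip(decisions, groups):
        total, hits = counts.get(group, (0, 0))
        counts[group] = (total + 1, hits + (1 if flagged else 0))
    return {g: hits / total for g, (total, hits) in counts.items()}

def parity_gap(rates):
    """Largest difference in flag rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit sample: 1 = flagged by the system, 0 = not flagged.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = flag_rates(decisions, groups)   # per-group flag rates
gap = parity_gap(rates)                 # disparity between groups
```

A large gap suggests the system's decisions correlate with group membership and warrant review; real audits use richer fairness metrics and statistical testing, but the underlying idea is the one sketched here.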
Keywords: Artificial Intelligence, surveillance, government, security
DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns='http://datacite.org/schema/kernel-3' xsi:schemaLocation='http://datacite.org/schema/kernel-3 http://schema.datacite.org/meta/kernel-3/metadata.xsd'>
  <creators>
    <creator>
      <creatorName>Dulschi, O.</creatorName>
      <affiliation>University of Hertfordshire, Marea Britanie</affiliation>
    </creator>
  </creators>
  <titles>
    <title xml:lang='en'>Ethical concerns of utilising artificial intelligence in surveillance systems at government level</title>
  </titles>
  <publisher>Instrumentul Bibliometric National</publisher>
  <publicationYear>2021</publicationYear>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType='ISBN' relationType='IsPartOf'>978-9975-3492-3-9</relatedIdentifier>
  </relatedIdentifiers>
  <subjects>
    <subject>Artificial Intelligence</subject>
    <subject>surveillance</subject>
    <subject>government</subject>
    <subject>Security</subject>
    <subject schemeURI='http://udcdata.info/' subjectScheme='UDC'>[004.8+351.78]:17</subject>
  </subjects>
  <dates>
    <date dateType='Issued'>2021</date>
  </dates>
  <resourceType resourceTypeGeneral='Text'>Conference Paper</resourceType>
  <descriptions>
    <description xml:lang='en' descriptionType='Abstract'><p>The purpose of this paper is to study the ethical concerns of embedding Artificial Intelligence (AI) into systems such as surveillance. A specific objective is to explain the ethical concerns that may arise from mismanaged technology. The research question is: how do we create a safe society without putting smaller social groups at risk? The initial hypothesis is that institutions charged with ensuring public safety should use ethically appropriate systems, such as algorithms that are not predefined to automatically target someone based on their race, ethnicity, religion, or other characteristics. The research method is descriptive, a qualitative analysis grounded in the theory behind AI systems as well as in real-life case studies. The main conclusion is that, at present, even the technological giants encounter difficulties in developing software that is not prone to misjudgments. Developing such software requires great responsibility and social commitment from software engineers and governments alike.</p></description>
  </descriptions>
  <formats>
    <format>application/pdf</format>
  </formats>
</resource>