Explainable Artificial Intelligence
AVERKIN, Aleksey. Explainable Artificial Intelligence. In: Workshop on Intelligent Information Systems, Ed. 2022, 6-8 octombrie 2022, Chisinau. Chişinău: Valnex, 2022, pp. 4-6. ISBN 978-9975-68-461-3.
Workshop on Intelligent Information Systems 2022
Chisinau, Moldova, 6-8 October 2022



Pp. 4-6

Averkin Aleksey
 
Federal Research Center for Information and Computational Technologies RAS
 
 
Available in IBN: 20 October 2022


Abstract

Every decade, technology makes revolutionary shifts that become the new platforms on which application technology is built. Artificial intelligence (AI) is no different. AI has moved from 1st Generation shallow learning and handcrafted features to 2nd Generation deep learning, which has been effective at learning patterns. We have now entered the 3rd Generation of AI, driven by machine reasoning, in which the machine can interpret decision-making algorithms even when they have a black-box nature. Explainable artificial intelligence and augmented intelligence are the main parts of the 3rd Generation of AI. But in the 2030s we will see their role in 4th Generation AI, with machines that learn to learn and dynamically accumulate new knowledge and skills. By the 2040s, 5th Generation AI will see imagination machines that are no longer reliant on humans to learn.

Explainable artificial intelligence now represents a key area of research in artificial intelligence, and an unusually promising one in which many fuzzy logics could become crucial. Research in explainable artificial intelligence can be divided into three stages, which correlate with the three generations of AI: in the first stage (starting from 1970), expert systems were developed; in the second stage (the mid-1980s), the transition was made from expert systems to knowledge-based systems; and in the third stage (since 2010), deep architectures of artificial neural networks have been studied, which has required new global research on the construction of explainable systems. According to the DARPA approach, the first wave is the era of handcrafted, declarative knowledge; the second wave is statistical learning; and the third wave is the future. DARPA termed the third wave Contextual Adaptation, but we refer to it here through DARPA's initiative, AI Next. In addition to the three waves, we have identified six major phases of DARPA's AI investment: AI Beginnings, Strategic Computing, Knowledge/Planning, Cognitive Systems, Data Analytics, and AI Next. AI Next is related to third-generation explanatory systems and to the DARPA program that began in 2018.

The DARPA explainable AI (XAI) program seeks to create AI systems whose learning models and solutions can be understood and properly validated by end users. Achieving this goal requires methods for constructing more explicable models, developing effective explanation interfaces, and understanding the psychological requirements for an effective explanation. Explainable AI is needed for users to understand, properly trust, and effectively manage their smart partners. DARPA sees XAI systems as AI systems that can explain their decisions to human users, characterize their own strengths and weaknesses, and convey how they will behave in the future. DARPA's goal is to create more human-readable AI systems through effective explanations. XAI development teams address the first two challenges by creating explainable machine learning (ML) technologies and by developing principles, strategies, and methods of human-computer interaction that generate effective explanations. Another XAI team tackles the third challenge by combining, extending, and applying psychological theories of explanation, which the development teams use to test their systems. The development teams evaluate how clear explanations from XAI systems improve user experience, trust, and productivity.
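To make the notion of an "explicable model" concrete, here is a minimal sketch of one widely used post-hoc technique: fitting an interpretable global surrogate (a shallow decision tree) to a black-box model. The dataset, the model choices, and the scikit-learn calls are illustrative assumptions added for this page; they are not the DARPA program's method or anything described in the paper.

```python
# Illustrative sketch (not from the paper): a global surrogate explanation.
# A shallow decision tree is trained to mimic a black-box model's
# predictions; the tree's rules then serve as a human-readable explanation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box": an ensemble whose internal logic is hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns the black box's *predictions*, not the raw labels,
# so its rules approximate the black box's decision-making.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the readable surrogate tracks the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.1%}")

# The printed tree is the explanation a user can read and validate.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The fidelity score and the printed rules correspond to the two concerns the abstract names: constructing a more explicable model and letting end users properly validate its solutions.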
Fuzzy systems can help address various aspects of Explainable Artificial Intelligence (XAI). The pioneering works of L.A. Zadeh offer valuable tools for the current XAI challenges. They are not limited to the treatment of natural language; they also help users understand the meaning of the decisions made by artificial intelligence-based systems and provide explanations of how those decisions are made. Nowadays, XAI is a prominent and fruitful research field in which many of Zadeh's contributions can become crucial if they are carefully considered and thoroughly developed. It is worth noting that about 30% of XAI-related publications in Scopus dated 2017 or earlier came from authors well recognized in the Fuzzy Logic field. This is mainly due to the commitment of the fuzzy community to producing interpretable fuzzy systems, since interpretability is deeply rooted in the fundamentals of Fuzzy Logic.
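As a similarly hedged illustration of why interpretability is said to be rooted in the fundamentals of Fuzzy Logic, the sketch below implements a tiny zero-order Takagi-Sugeno fuzzy system built from Zadeh-style linguistic terms; the variable, the membership functions, and the rule outputs are invented for illustration and do not come from the paper.

```python
# Illustrative sketch (not from the paper): an interpretable fuzzy rule base.
# Each rule is a readable linguistic statement, so the system's reasoning
# is its own explanation.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for the input variable "temperature" (degrees Celsius).
temp_terms = {
    "cold": lambda t: triangular(t, -10, 5, 20),
    "warm": lambda t: triangular(t, 10, 22, 32),
    "hot":  lambda t: triangular(t, 25, 40, 55),
}

# Zero-order Takagi-Sugeno rules: each linguistic condition maps to a
# crisp fan-speed consequent in [0, 1].
rules = [("cold", 0.0), ("warm", 0.5), ("hot", 1.0)]

def infer(t):
    """Weighted-average inference over all rules that fire for input t."""
    weights = [(temp_terms[term](t), out) for term, out in rules]
    total = sum(w for w, _ in weights)
    return sum(w * out for w, out in weights) / total if total else None

t = 28.0
for term, out in rules:
    print(f"IF temperature IS {term} (degree {temp_terms[term](t):.2f}) "
          f"THEN fan_speed = {out}")
print(f"fan_speed({t}) = {infer(t):.2f}")
```

Each fired rule reads as a natural-language statement ("IF temperature IS warm THEN ..."), which is the kind of built-in interpretability the fuzzy community points to when it connects Zadeh's linguistic variables to today's XAI agenda.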