Visualization of 1D CNN Lithology Identification Model from Rotary Percussion Drilling Vibration Signals Using Explainable Artificial Intelligence Grad-CAM
Authors
Abstract
In recent years, deep learning has gained popularity because of its ability to handle complex tasks, and it has been used in many industries to optimize operations and support decision-making. Deep neural networks are often referred to as 'black boxes': they take inputs and produce outputs with high accuracy without giving insight into how they work. It is therefore important to demystify these models and verify that they are looking at the correct patterns. This paper proposes the use of Gradient-Weighted Class Activation Mapping (Grad-CAM) to visualize the behavior of lithology identification models that take drill vibration signals as input to a one-dimensional convolutional neural network (1D CNN). The two models, a time-domain acceleration model and a frequency-domain model, achieved 99.8% and 99.0% classification accuracy, respectively, and could distinguish between granite and marble based on their vibration signatures. With Grad-CAM, it was possible to make the 1D CNN transparent by visualizing the regions of the input that were important for its predictions. The Grad-CAM results indicated that the models successfully learned the significant frequencies contained in each rock's signal.
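As a rough illustration of the approach described in the abstract, the sketch below shows how a Grad-CAM importance curve can be computed for a 1D CNN in PyTorch. The Simple1DCNN architecture, the 1024-sample window, and the class index are hypothetical placeholders chosen only for illustration; the paper's actual models and vibration data are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Simple1DCNN(nn.Module):
    # Hypothetical 1D CNN: two convolutional blocks, global average pooling, linear classifier.
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)            # (batch, channels, reduced_length)
        logits = self.classifier(fmap.mean(dim=2))
        return logits, fmap

def grad_cam_1d(model, signal, target_class):
    # Returns a [0, 1] importance curve over the input length for the target class.
    model.eval()
    logits, fmap = model(signal)
    fmap.retain_grad()                     # keep gradients of the last conv feature map
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=2, keepdim=True)          # channel weights from pooled gradients
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True)) # weighted sum of channels, then ReLU
    cam = F.interpolate(cam, size=signal.shape[-1], mode="linear", align_corners=False)
    cam = cam.squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage: highlight which samples of a vibration window drive one class logit.
signal = torch.randn(1, 1, 1024)           # placeholder vibration segment, not real drilling data
cam = grad_cam_1d(Simple1DCNN(), signal, target_class=0)

The same procedure applies to a frequency-domain model by feeding a spectrum instead of a raw acceleration window; the resulting curve then indicates which frequency bins contributed most to the predicted class.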
Similar resources
Building Explainable Artificial Intelligence Systems
As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has been difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but designers have not heeded the lessons learned from work in explaining expert system behavior. These new explanation systems a...
Explainable Artificial Intelligence for Training and Tutoring
This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.
Explainable Artificial Intelligence via Bayesian Teaching
Modern machine learning methods are increasingly powerful and opaque. This opaqueness is a concern across a variety of domains in which algorithms are making important decisions that should be scrutable. The explainability of machine learning systems is therefore of increasing interest. We propose an explanation-by-examples approach that builds on our recent research in Bayesian teaching in which...
Automated Reasoning for Explainable Artificial Intelligence
Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This paper discusses the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligenc...
An Explainable Artificial Intelligence System for Small-unit Tactical Behavior
As the artificial intelligence (AI) systems in military simulations and computer games become more complex, their actions become increasingly difficult for users to understand. Expert systems for medical diagnosis have addressed this challenge through the addition of explanation generation systems that explain a system's internal processes. This paper describes the AI architecture and associated...
Journal
Journal title: International Journal of the Society of Materials Engineering for Resources
Year: 2022
ISSN: 1347-9725, 1884-6629
DOI: https://doi.org/10.5188/ijsmer.25.224