Explainable AI can revolutionise event detection systems

How can combining Explainable AI with semantic technologies improve the human-centricity and trustworthiness of event detection systems?

An event is a major incident that occurs at a specific place and time. Event detection is a computational approach to identifying such events from the information people share on social media platforms such as X (Twitter), Facebook, and Instagram. With artificial intelligence (AI), social media event detection can now identify real-time events such as breaking news, online bullying, online harassment, fake news, emergencies, crime alerts, and terrorist activities, and even predict future events with accuracy.
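To give a flavour of what detecting an event from social media can mean in practice, here is a minimal, deliberately simplified sketch that flags a burst of related keywords in a handful of posts. The posts, stopword list, and threshold are invented for illustration; the systems covered in the survey rely on far richer machine learning models.

```python
# Minimal keyword-burst sketch of social media event detection.
# All posts, stopwords, and thresholds below are illustrative assumptions.
from collections import Counter
from datetime import datetime

posts = [
    (datetime(2023, 5, 1, 9, 0), "huge fire near the central market"),
    (datetime(2023, 5, 1, 9, 2), "smoke everywhere, fire at central market"),
    (datetime(2023, 5, 1, 9, 5), "firefighters heading to the market fire"),
]

STOPWORDS = {"the", "a", "at", "to", "near"}

def burst_keywords(window, min_count=2):
    """Count content words in a time window and return those that spike."""
    counts = Counter(
        word
        for _, text in window
        for word in text.lower().replace(",", " ").split()
        if word not in STOPWORDS
    )
    return [word for word, n in counts.items() if n >= min_count]

# A burst of words like 'fire' and 'market' suggests a candidate event
# in this time window.
print(burst_keywords(posts))
```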

However, “no matter how good and efficient an AI model is, users or practitioners find it difficult to trust it if they cannot understand its behaviours”. An event detection system must therefore incorporate explainability to earn human trust. An explainable event is humanly understandable because it covers the six dimensions of what, who, where, when, why, and how (5W1H). Currently, the AI systems used for event detection seldom explain the why and the how.
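As a rough illustration (not the survey's own formalism), an explainable event could be represented as a record with one explicit, human-readable field per 5W1H dimension. The field names and sample values below are assumptions made purely for the example.

```python
# Sketch of an "explainable event" record: one field per 5W1H dimension.
# Field names and sample values are illustrative, not taken from the paper.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ExplainableEvent:
    what: str                  # nature of the incident
    who: Optional[str] = None  # actors or people affected
    where: Optional[str] = None
    when: Optional[str] = None
    why: Optional[str] = None  # inferred cause, with supporting evidence
    how: Optional[str] = None  # how the event unfolded or was detected

event = ExplainableEvent(
    what="building fire",
    who="market traders",
    where="central market",
    when="2023-05-01 09:00",
    why="suspected electrical fault (mentioned in several posts)",
    how="burst of eyewitness posts clustered by the detector",
)
print(asdict(event))
```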

Thus, the question is, “Can AI systems used for event detection be made more believable?” In their research, the authors argue that describing detected events along the 5W1H dimensions provides the level of explainability and human-centricity needed to make event detection more trustworthy. They suggest combining Explainable AI (XAI), a set of methods for explaining the results of AI systems, with semantic technologies (such as ontologies, knowledge graphs, and open data) to make the vision of explainable event detection a reality.

Providing explanations with humans in the loop

AI is a powerful tool, but understanding how it reaches its conclusions can be challenging. Explainable AI (XAI) helps us trust AI, improve how it is used, and check that it behaves fairly and ethically. However, AI is often evaluated only in the lab, which can cause problems when it is deployed in real-world situations. To be more human-centred, AI should explain itself in terms we can understand, allowing it to be used more effectively and safely. Figure 1 presents various ways of providing AI explanations.

Figure 1. Overview of explainable AI approaches
Credit. Artificial Intelligence Review
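As one concrete illustration of the post-hoc family of approaches in Figure 1, the sketch below attributes a toy classifier's "event" score to individual words by removing each word and measuring the drop in the score. The classifier, word weights, and example post are all invented for the illustration; they are not the methods surveyed in the paper.

```python
# Occlusion-style, post-hoc attribution for a toy keyword "classifier".
# The model, weights, and post are invented purely for illustration.

EVENT_WORDS = {"fire": 0.6, "explosion": 0.8, "crash": 0.7}

def event_score(text: str) -> float:
    """Toy 'model': sum the weights of event-related words it recognises."""
    return round(sum(w for word, w in EVENT_WORDS.items()
                     if word in text.lower().split()), 2)

def explain(text: str) -> dict:
    """Attribute the score to each word by deleting it and measuring the drop."""
    words = text.lower().split()
    base = event_score(text)
    return {
        word: round(base - event_score(" ".join(words[:i] + words[i + 1:])), 2)
        for i, word in enumerate(words)
    }

post = "huge fire and explosion downtown"
print(event_score(post))  # 1.4
print(explain(post))      # 'fire' and 'explosion' carry all the weight
```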

How can semantic technologies be used to attain human-centric explainability in event detection systems? To achieve human-centric explainability, explanations must be grounded in relevant factual and contextual knowledge, such as ontologies, knowledge graphs, or knowledge-based hierarchies, for them to make sense.

Taiwo Kolajo

Explainable event detection

Social media is a big part of how people talk to each other nowadays. Sometimes, we want to know what is happening in the world through these online conversations. That's where AI comes in: AI is a way for computers to learn from data and make decisions. However, many machine learning (ML) methods are complex and hard to understand. They are like black boxes that give us answers without telling us how they arrived at them. This is a problem because we want to trust and learn from those answers. That is why we need explainable AI (XAI).

XAI is a way of making these ML black boxes more transparent and understandable. In event detection, to know what is happening through social media, one must ask the 5W1H questions: who, what, when, where, why, and how. These questions help us understand events better. However, social media is not always reliable or clear: messages can be too short or abbreviated, contain spelling errors, mix languages, or simply be vague and confusing.

This makes it hard to get the full picture of ongoing events. So, we need to use additional information from other sources, like the web or databases, to fill in the gaps. This is called domain knowledge, and it helps us make sense of social media data.
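To make this concrete, here is a minimal, rule-based sketch that fills 5W1H slots from a short, noisy post and then uses a tiny "domain knowledge" table (standing in for the web, a database, or a knowledge graph) to fill a gap the post itself leaves open. The gazetteers, rules, and example post are assumptions made for the illustration; real systems would use trained named-entity and temporal taggers.

```python
# Minimal sketch: fill 5W1H slots from a noisy post, then enrich them with
# a tiny "domain knowledge" table. All data here is invented for illustration.
import re

# Assumed domain knowledge, e.g. harvested from a gazetteer or knowledge graph.
KNOWN_PLACES = {"central market": {"city": "the state capital", "type": "marketplace"}}
KNOWN_ACTORS = {"firefighters", "police", "traders"}

def extract_5w1h(post: str) -> dict:
    text = post.lower()
    slots = dict.fromkeys(["what", "who", "where", "when", "why", "how"])
    if "fire" in text.split():
        slots["what"] = "fire"
    slots["who"] = next((a for a in KNOWN_ACTORS if a in text), None)
    slots["where"] = next((p for p in KNOWN_PLACES if p in text), None)
    time_hit = re.search(r"\b\d{1,2}(:\d{2})?\s?(am|pm)\b", text)
    slots["when"] = time_hit.group(0) if time_hit else None
    # Enrich with domain knowledge when the post itself is too terse.
    if slots["where"] in KNOWN_PLACES:
        slots["where"] = f"{slots['where']} ({KNOWN_PLACES[slots['where']]['city']})"
    return slots

print(extract_5w1h("Firefighters @ central market, big fire since 9am!!"))
```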

Improving explainability with semantic technology

We cannot achieve explainable event detection using XAI alone. Explanations must be grounded in relevant factual and contextual knowledge to make sense. This is where semantically rich information (like that found in ontologies, knowledge graphs, or other knowledge-based hierarchies, e.g., Wikipedia) becomes crucial. These graph structures represent knowledge as concepts and relations in a way that humans can understand.

For example, an ontology can define what a cat is and how it relates to other animals. Using an ontology, we can map the AI’s output to the concepts we already know. We can use logical reasoning to justify the AI’s output based on the ontology and the observations. This way, we can see how the AI uses contextual knowledge to make more reliable decisions (Figure 2).

Figure 2. The Role of Knowledge Graphs in Explainable AI
Credit. Lecue
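A minimal sketch of the idea just described, using a tiny hand-built "is-a" hierarchy in place of a real ontology (a production system would use OWL/RDF tooling or a public knowledge graph; the concepts and wording below are illustrative assumptions):

```python
# Grounding a model's output in a tiny hand-built ontology.
# The is-a hierarchy below is an assumption made purely for illustration.

IS_A = {
    "cat": "mammal",
    "mammal": "animal",
    "warehouse fire": "fire",
    "fire": "hazardous event",
    "hazardous event": "event",
}

def ancestors(concept: str) -> list:
    """Walk the is-a hierarchy to collect every broader concept."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def justify(detected: str) -> str:
    """Turn the subsumption chain into a human-readable justification."""
    chain = ancestors(detected)
    return f"'{detected}' was flagged because it is a kind of " + ", a kind of ".join(chain)

# Explains the detection via the chain: fire -> hazardous event -> event.
print(justify("warehouse fire"))
```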

Challenges of explainable event detection and the way forward

Although the 5W1H dimensions can enhance AI's human-centricity by answering the who, what, when, where, why, and how questions about detected events, some challenges remain. Collaboration with experts will be required to make AI transparent, trustworthy, and understandable.

To make better decisions and learn from mistakes, we need to understand what is happening in the world and explain it clearly and simply. Researchers have suggested how to do this: explain the whole pipeline, from how data are collected and used, to how models are built and compared, to how machines communicate and cooperate with people. They also stress weighing the risks and benefits of explanations, such as their effects on privacy, security, trust, and users' goals.

Implications of human-centric and semantic-based explainable event detection

Events, such as a fire or a robbery, happen in the world all the time. Using knowledge graphs and ontologies can help AI give more human-friendly explanations of the events it detects. Incorporating human-centric explainability into event detection systems is key to building decision-making processes that are more trustworthy and sustainable. Human-centric, semantics-based explainable event detection will enhance trustworthiness, explainability, and reliability. For instance, fake-story detection or breaking-news systems would be more factual and believable if they incorporated explanations that answer the 5W1H questions.


Journal reference

Kolajo, T., & Daramola, O. (2023). Human-centric and semantics-based explainable event detection: a survey. Artificial Intelligence Review56(Suppl 1), 119-158. https://doi.org/10.1007/s10462-023-10525-0

Taiwo Kolajo is a Senior Lecturer at the Federal University Lokoja in the Department of Computer Science and is currently a Postdoctoral Fellow at the University of Pretoria, South Africa. She has published over 30 peer-reviewed conference papers, journal articles, and book chapters. Her research interests include natural language processing, big data analytics, artificial intelligence, machine learning, and data mining. She has served as a volunteer and reviewer for conferences such as Women in Machine Learning, Black in AI, and the Chapter of the Association for Computational Linguistics – International Joint Conference on Natural Language Processing, among others. Dr. Kolajo is a member of the Nigeria Computer Society and the Teacher Registration Council of Nigeria.

Olawande Daramola is a professional member of IEEE and currently holds the position of research professor in the Department of Informatics at the University of Pretoria, South Africa. He is the author of over 100 journal articles, book chapters, and peer-reviewed conference papers in Artificial Intelligence (AI) and Software Engineering. His research interests encompass machine learning, knowledge-based systems, ontologies, big data analytics, and requirements engineering. He serves on the programme committee of several international conferences in computing and acts as a reviewer for various top computing journals published by IEEE, Springer, and Elsevier.