Several factors need to be considered to arrive at a nuanced perspective on artificial intelligence. The IMPACT framework is introduced to offer guidance in this regard.

What do you think and know about artificial intelligence?

Unlocking the true potential of AI demands a nuanced approach. The IMPACT framework offers guidance toward a more comprehensive perspective.

Global discussions about the impact of AI have surged since the introduction of ChatGPT. Views on AI vary widely, from the pessimistic predictions of so-called “AI doomers”, who fear it could harm humanity, to overly optimistic assessments of its capabilities. A balanced understanding, however, requires moving beyond these extremes and weighing several factors. A model known as IMPACT, developed by Montag and colleagues, outlines key factors shaping societal responses to AI. IMPACT stands for the Interplay of Modality, Person, Area, Country/Culture, and Transparency, and highlights the multifaceted nature of AI’s influence. Early empirical data support the relevance of these categories in shaping attitudes towards AI.

Is AI good or bad? AI is a blurry concept. To arrive at a much-needed nuanced view of AI, we have to consider many variables, as outlined in the IMPACT framework.

Christian Montag

What is the IMPACT framework exactly?

The IMPACT framework provides a comprehensive picture of the factors influencing attitudes towards AI. The Modality aspect examines, among other things, the level of autonomy of an AI system: in the case of a car, for example, whether individuals drive themselves, are supported by an AI assistance system, or hand driving over entirely to the AI. Other modal features include the reliability or interactivity of an AI system. Modality is therefore crucial in shaping perceptions of AI.

In the Person category, sociodemographic factors like age, gender, and education, as well as psychological traits, influence attitudes toward AI. Personality traits, for example, play a significant role in shaping perceptions of AI.

The Area category considers the diverse domains in which AI operates. For instance, attitudes towards AI differ when applied in medical or military settings, depending on the specific purposes within those fields.

The Country/Culture category highlights the impact of political regulations and cultural values on AI attitudes. Variations in regulations and cultural norms, such as collectivism versus individualism, might contribute to diverse perspectives on AI.

Finally, Transparency is a crucial category in the IMPACT framework. It emphasizes the importance of explainable AI (XAI), in which AI systems provide insight into their decision-making processes, aiming to overcome the ‘black box’ perception of AI.


The Transparency category of the IMPACT framework extends beyond XAI alone and can also encompass the openness of AI code, allowing it to be scrutinized, studied, and developed further.
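To make the transparency idea a little more concrete, here is a minimal sketch of what an inspectable model can look like in practice. It assumes a Python environment with scikit-learn installed; the dataset, model, and library are illustrative choices for this example only, not the tools used in the cited research.

```python
# Minimal sketch: a transparent model versus a "black box" interaction.
# (Illustrative only; assumes scikit-learn is available.)

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy medical-style dataset bundled with scikit-learn (purely illustrative).
data = load_breast_cancer()
X, y = data.data, data.target

# A shallow decision tree stays small enough for a human to read end to end.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# A "black box" interaction would stop here: the user sees only a label.
print("Prediction for the first case:", data.target_names[model.predict(X[:1])[0]])

# A transparent alternative: expose the decision rules in human-readable form,
# so a user can see *why* a particular prediction was made.
print(export_text(model, feature_names=list(data.feature_names)))
```

The point of the sketch is not the specific model but the contrast: in the first print statement the user only receives an answer, while the second exposes the reasoning behind it, which is what the Transparency category is about.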

The categories within the IMPACT framework are not isolated but interact in complex ways. For instance, consider interacting with an AI system that offers medical diagnoses based on a photo of a skin condition uploaded from your smartphone. Would you like to understand how the AI reaches its conclusion? While XAI generally fosters trust, individuals vary in how much they want AI systems to explain themselves, influenced by factors such as the significance of the diagnosis. Another example: if the AI recommends a pizza place rather than delivering a medical diagnosis, perceptions of the system will likely differ, because far less is at stake.

The IMPACT framework, though simplified, underscores the need to consider multiple variables in shaping attitudes toward AI. It highlights that nuanced views of AI require understanding the characteristics of AI systems and the individuals interacting with them.

Why is it at all important to understand AI attitudes?

Studying AI attitudes is important, particularly for facilitating the transition from pre-AI to AI-driven societies. As widespread AI adoption approaches, uncertainties loom regarding job displacement and societal safety. Policymakers therefore need to grasp public perceptions of AI. This understanding should be complemented by insights into people’s actual knowledge of AI, assessed through objective literacy tests. Such efforts make it possible to identify where additional education is needed to prepare for the impending AI revolution.

What needs to be done now?

  • Anchoring AI attitude and AI literacy survey measures in nationally representative samples is crucial for gaining insights into a pressing global issue: understanding what people think and know about AI.
  • Most current AI attitude survey tools assess individual differences in AI attitudes only in a very general manner. While these general measures are a good starting point, more nuanced instruments are needed to understand societal perspectives on this critical technology.

You can support Dr. Montag’s research by completing an online survey about AI attitudes. It takes about ten minutes, and as a thank-you, participants receive feedback on their personality scores: https://ai-singapure.jimdosite.com


Journal reference

Montag, C., Nakov, P., & Ali, R. (2024). Considering the IMPACT framework to understand the AI-well-being-complex from an interdisciplinary perspective. Telematics and Informatics Reports, 13, 100112. https://doi.org/10.1016/j.teler.2023.100112

Dr. Christian Montag operates at the crossroads of psychology, computer science, neuroscience, and behavioural economics.