Artificial intelligence with human values

How can we start designing artificial intelligence (AI) that aligns with our values? The answer lies in the processes that scientists follow.

AI’s expansion across various industries globally has drawn increased attention to the behaviors of different AI systems and their consequences. This scrutiny often circles back to the design and construction of these systems, processes primarily governed by AI creators. A strand of research delves into crafting technology in ways more attuned to human values, such as privacy and security.

Bringing these two areas together, this work analyses how different design processes for AI systems either help or hinder AI creators in crafting technology that more effectively resonates with the values of individuals and society.

What matters in AI design processes?

Seven AI design approaches are analysed against three criteria, all linked to how well they support AI creators in developing technology that reflects human values.

The first criterion, comprehensiveness, evaluates whether a process covers all essential steps in building an AI system, including planning, construction, testing, and release.

The second criterion assesses the level of guidance provided by the process, considering its clarity, ease of implementation, and provision of supportive tools and explanations.

The final criterion, value-sensitivity support, focuses on guiding AI creators in developing technology attuned to users’ values. This entails understanding users’ values, aligning AI with those values, and ensuring their presence throughout the system’s development stages.

Which AI design processes are under review?

Seven AI design processes are reviewed, as summarised in Figure 1 below. IEEE 7000, devised by the Institute of Electrical and Electronics Engineers (IEEE), adapts established engineering procedures to prioritize values, a framework here extended to AI. Z-Inspection, VSAD, RSMLC, and VSD2AI, developed by scholars, integrate value sensitivity into existing AI construction approaches. Finally, Microsoft’s HAX process and PwC’s Responsible AI process, originating from corporations, aim to help industry practitioners create more responsible and ethical AI systems.

Figure 1. The design processes reviewed in this study, including their names, sources, and summary descriptions.
Credit: AI and Ethics

How do AI design processes help build value-sensitive AI?

Figure 2 below presents the outcomes of the analysis. Several processes, particularly those originating from researchers, performed strongly across the factors outlined above. This aligns with the fact that the remaining processes were not originally designed with a specific focus on human values and value sensitivity. Additionally, while some processes partially address various factors, their coverage may be incomplete, unintended, indirect, or insufficient.

Figure 2. The results of the review, showing how the different design processes addressed the various factors examined.
Credit: AI and Ethics

In general, certain processes prioritize early-stage development, catering to creators at the onset of AI system construction (such as IEEE 7000 and Z-Inspection). Others adopt a business-centric approach, targeting professionals in the field (like Microsoft’s HAX process and PwC’s Responsible AI process). Conversely, some processes place a strong emphasis on values and value sensitivity (e.g., VSAD, RSMLC, VSD2AI).

While some processes rely on pre-existing lists of values (e.g., Microsoft’s HAX process, PwC’s Responsible AI process), others incorporate steps to help AI creators gather relevant values from users and stakeholders (e.g., VSAD, RSMLC). Some processes combine both approaches (e.g., VSD2AI, IEEE 7000, Z-Inspection).

Furthermore, only two processes offer supplementary materials to guide AI creators and enhance practicality (IEEE 7000, Microsoft’s HAX process), while only two processes assist creators in embedding values directly into the AI being developed (VSAD, VSD2AI).

These recommendations form part of a still-developing movement towards increasing value-sensitivity in AI-based systems. Their aim is to act as a source of inspiration for embarking on a long journey, rather than a short-cut to a final destination.

Malak Sadek

Three steps to help AI creators build more value-sensitive AI

With AI usage becoming ever more prevalent, it is increasingly crucial that AI systems recognize and respect human values, thereby becoming more value-sensitive. To achieve this objective, experts crafting design processes for AI systems must consider the following:

  1. Comprehensive Coverage: Ensure that the design process covers every necessary step in building AI, thereby preventing ambiguity for AI creators and integrating understanding and respect for values into each stage.
  2. Practical Support: Provide additional resources such as supplementary sheets, tools, explanations, or other materials to enhance practicality and facilitate ease of implementation for AI creators following the process.
  3. Value Integration: Assist AI creators in constructing more value-sensitive AI by incorporating steps to examine existing lists and sources of values, gather relevant values from users and stakeholders, and embed these values into the AI being developed.


Journal reference

Sadek, M., Calvo, R. A., & Mougenot, C. (2023). Designing value-sensitive AI: a critical review and recommendations for socio-technical design processes. AI and Ethics, 1-19.

Malak Sadek is a doctoral candidate at the Dyson School of Design Engineering at Imperial College London and an Enrichment Student at The Alan Turing Institute. She also holds fellowships at The Leverhulme Centre for the Future of Technology and the Centre for Human-Inspired Artificial Intelligence at Cambridge University. Malak earned a Bachelor's degree in Computer Engineering from the American University in Cairo, Egypt, and a Master's degree in Human-Computer Interaction from the University of St Andrews, Scotland. Her current work revolves around facilitating alignment between conversational artificial intelligence and human values through collaborative design practices.

Dr. Céline Mougenot is a Senior Lecturer (Associate Professor) in Collaborative Design at the Dyson School of Design Engineering. She is Director of the Collective Innovation Lab, which focuses on developing methodologies and tools that can support the collective production of human-centred solutions to complex challenges by diverse groups of stakeholders. The vision of the group is that research in collaborative ideation and design methods stands as a cornerstone for building a future where innovation is accessible, fair, and truly beneficial to all.

Rafael A. Calvo is a Professor at Imperial College London, focusing on the design of systems that support wellbeing in areas of mental health, medicine, and education, as well as on the ethical challenges raised by new technologies. He currently serves as the Director for Research at the Dyson School of Design Engineering, Co-lead at the Leverhulme Centre for the Future of Intelligence, Chief Investigator at the Australian Research Hub on Digital Enhanced Living, Associate Investigator at the Australian Centre of Excellence in Autonomous Decision Making, Associate Investigator at the NHMRC Centre of Research Excellence to Optimise Sleep in Brain Ageing, and Co-Editor of IEEE Transactions on Technology and Society. He also serves on the Ethics Advisory Board at Digital Catapult.