Corporate digital responsibility in the age of artificial intelligence

How effectively are tech companies acknowledging their corporate digital responsibility (CDR)? Leading firms now address some aspects of CDR, yet matters such as governance, environmental responsibility and the imbalance of power between developers and users continue to be neglected.


The Future of Life Institute recently called for a six-month precautionary pause on artificial intelligence (AI) development, noting that the signatories, including Elon Musk and other senior business leaders, worry that AI labs are ‘locked in an out-of-control race’ to develop and deploy increasingly powerful systems that no one – including their creators – can understand, predict or control. The issue was highlighted again this month when the CEO of OpenAI (the developer of the AI product ChatGPT) called for greater regulation to guard against AI risks. What are the implications for companies using AI and other digital technologies? And what is the attitude of the large tech companies that develop, market and sell these products?

What is corporate digital responsibility?

Corporate Digital Responsibility (CDR) can be defined as “a set of practices and behaviours that help an organisation use data and digital technologies in ways that are perceived as socially, economically, and environmentally responsible.” CDR is increasingly seen as a subset of Corporate Social Responsibility (CSR). As companies have expanded their deployment of digital technologies in recent years, a new set of responsibilities has emerged. These responsibilities are encapsulated within the concept and operation of CDR, as shown in Figure 1. They are particularly relevant to the deployment of AI, which has been described as “the most disruptive technology innovation of our lifetime”.

Figure 1. The main dimensions of Corporate Digital Responsibility (Wynn and Jones, 2023).

Examples from industry

Recent research has examined how major technology organisations are approaching CDR. Deutsche Telekom, for example, argued that its approach to digital responsibility was focused on “human-centred technology” and built on a series of foundations – laws and regulations, human rights, and culture and values – and two principles: 1) data privacy and security, and 2) transparency and dialogue. Regarding AI, Google acknowledged that the future development of

AI is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

Google

The company added that it recognised that AI technologies

raise important challenges that we need to address clearly, thoughtfully, and affirmatively.

Google

Microsoft offered details of its “Responsible AI Standard” – the company’s internal playbook for responsible AI – which “shapes the way in which we create AI systems, by guiding how we design, build and test them”, and of its “Responsible AI Impact Assessment Template”.

Accenture made a similar point about trust and accountability:

AI is moving at a blistering pace and, as with any powerful technology, organisations need to build trust with the public and be accountable to their customers and employees.

Accenture

More generally, regarding digital technology deployment, IBM argued that

customers, employees, and even shareholders are more frequently demanding that organizations not only take a principled stance on current concerns, but also follow through with meaningful actions that lead to clear outcomes.

IBM

All the companies studied claimed to be publicly addressing their responsibilities for digital technologies, and each emphasised a number of principles that guided their approach, especially towards AI. These include data privacy and security; fairness and inclusion; interpretability; accountability; safety; avoiding unfair bias; explainability; reliability; trust; and high standards of scientific excellence and control. However, although the companies adopted a positive approach to digital technology deployment, focusing on its benefits at both a corporate and an individual level, they could be seen as downplaying its potential negative impacts. This may be read as part of a major corporate marketing and public relations exercise, or even as “ethics washing”: feigning ethical consideration in order to improve how stakeholders perceive the company.

Social and environmental impacts

These companies’ approaches to digital responsibility focus primarily on social and technical issues. While they outline social responsibilities such as fairness, they pay little attention to environmental issues, particularly climate change. The United Nations has described climate change as “the defining issue of our time”, and its social impacts may be fundamental, including the wholesale destruction of homes and communities, the loss of livelihoods, population migration and forced displacement, and the loss of cultural identity. Paradoxically, digital technologies can be seen both as a major opportunity to mitigate climate change and as a cause of it.

The United Nations Environment Programme, for example, stated that “more climate data is available than ever before”, that “how that data is accessed, interpreted and acted on is crucial to managing these crises”, and that “one technology that is central to this is AI”. AI is seen to have a vital role in helping to measure and reduce greenhouse gas emissions and in improving hazard forecasting for both long-term events, such as rising sea levels, and short-term extreme events, such as hurricanes. But the United Nations Environment Programme also warned that

there is an environmental cost to processing this data, [not least that] the ICT sector generates about 3–4% of emissions, and data centres use large volumes of water for cooling.

United Nations Environment Programme

Another aspect here is what the Council of Europe has termed the “power asymmetry between those who develop and employ AI technologies, and those who interact with and are subject to them”. For example, digital service providers can acquire very detailed data about their users, which they can mine to generate accurate predictions about user traits, tastes and preferences. Users, however, typically do not understand the complexities of these digital technologies. This asymmetry increases the likelihood of exploitation and may also create new challenges for society, as Professor Raja Chatila – a member of the working group of the French national digital ethics pilot committee – observed

everything that is currently happening in AI is taking place with no real ethical or legal controls. Companies are deploying tools on the web that have harmful effects.

Raja Chatila

Emerging responsibilities for tech companies

While digital technologies bring a wide range of new business benefits and opportunities, the companies that develop, sell and deploy these technologies now face, and will have to address, several new responsibilities. These responsibilities are increasingly captured in the concept of CDR. Some leading tech companies now acknowledge the social and technological responsibilities associated with CDR, but environmental responsibilities and the power asymmetry between developers and users receive scant attention. The state’s role in governance procedures has also received little examination, even though it is becoming increasingly relevant, especially as the UK Government presents its white paper “to guide the use of artificial intelligence in the UK” and “maintain public trust in this revolutionary technology”. As digital technologies continue to shape the operation of businesses and the management of broader economies and societies, the demand for such regulation is likely to intensify.

Journal reference

Jones, P., & Wynn, M. G. (2023). Artificial Intelligence and Corporate Digital Responsibility. Journal of Artificial Intelligence, Machine Learning and Data Science, 1(2), 50-58. https://eprints.glos.ac.uk/12650/

Martin Wynn is Associate Professor in Information Technology in the School of Computing and Engineering at the University of Gloucestershire and holds a PhD from Nottingham Trent University. He was appointed Research Fellow at East London University, and he spent 20 years in industry at Glaxo Pharmaceuticals and HP Bulmer Drinks. His research interests include digitalisation, information systems, sustainability, project management, and urban planning. His latest book, Handbook of Research on Digital Transformation, Industry Use Cases, and the Impact of Disruptive Technologies, was published in 2022.

Peter Jones is an Emeritus Professor in the School of Business at the University of Gloucestershire. He previously served as Dean of the Business School at the University of Plymouth, and as Head of the Department of Retailing and Marketing at Manchester Metropolitan University. He has worked as a retail and academic consultant in a range of countries and has active research interests in modern slavery in the service sector, sustainable development in retailing, and corporate social responsibility. He has published his research work in a range of book chapters, academic journals and professional publications.