We’re familiar with the disruptive force of ChatGPT and generative AI, both falling under the umbrella of artificial intelligence (AI). But have you ever wondered how the shortcomings of AI might push workers to be creative and innovative?
AI is an umbrella term for the capacity of computer systems to learn from experience and perform tasks that require human-like complexity, such as logical decision-making. AI holds the power to bring about significant changes in the workplace: redefining innovation management, sparking creativity during idea generation, augmenting human work, and autonomously tackling various tasks. The Australian Department of Human Services (DHS) implemented Roxy, a virtual assistant, to answer inquiries about the regulations and policies of its programs. Roxy efficiently handles 78% of routine regulatory queries, leaving the more complex questions, which require human creativity and expertise, to human workers.
However, AI comes with limitations. It relies on skilled employees, or employees willing to acquire such skills, to correct the mistakes made by robots and computer systems. In doing so, these workers effectively address the shortcomings of AI.
Advances in AI and minimizing AI limitations are less likely to reduce the demand for innovative work behaviour. While intelligent robots may be less likely to make mistakes in the future, innovative behaviour will still be required, for example, to understand AI system outputs.
Araz Zirar
What is the interesting question?
A thought-provoking question arises: Can the limitations of AI prompt workers to become more involved in tasks demanding advanced skills like critical thinking, evaluation, synthesis, and creativity? It’s quite reasonable to propose that AI’s limitations might motivate employees to identify issues, conceive innovative solutions, and put these ideas into action. This process entails refining the essential skills and behaviours required to generate, initiate, and implement ideas that enhance both individual and organisational performance. These developments can manifest in various ways, as discussed in the following sections.
1. Intelligent systems make mistakes
Organisations depend on intelligent systems to enhance efficiency and achieve various objectives. These intelligent systems must be capable of adapting to unpredictable instructions and creating a sense of familiarity for employees, even in unstructured interactions. However, this flexibility can give rise to numerous issues, including errors, failures to meet employee expectations, violations of workplace ethical codes, and the potential for physical or emotional harm to employees.
For instance, the engineer Hristo Georgiev discovered that Google’s search algorithm had mistakenly associated him with a serial killer known as ‘The Sadist’. Similarly, Australia’s Robodebt scheme erroneously issued 470,000 debts. The failure of this automated debt assessment and recovery system prompted an official apology from the Australian prime minister in parliament, and the Australian government agreed to settle an A$1.2 billion class-action lawsuit before it reached court.
When intelligent systems frequently fall short in delivering the expected level of service, employees may be compelled to devise unconventional solutions to rectify these deficiencies. In this context, the presence of intelligent systems in the workplace acts as a catalyst, augmenting employees’ capacity for creative thinking and innovation as they seek to overcome these challenges.
2. Job security concerns
Job security concerns loom large with the integration of AI into workplaces. AI holds the power to reshape job tasks and even the essence of work itself, potentially resulting in job losses or the removal of specific job components. Furthermore, AI is frequently introduced into organisations without adequate consideration for the employees who will collaborate with it. To work effectively with AI systems, employees should prioritise developing skills that AI cannot yet replicate, including critical thinking, problem-solving, communication, and teamwork.
3. Human supervision
AI often requires human supervision to ensure that biases are not propagated, and that oversight in turn requires reskilling and upskilling workers. Supervising AI calls for both practical and technical expertise, along with the skills that foster creativity.
4. Interface design
An interface determines how users interact with a technology. For example, when interfaces were user-friendly, individuals in the medical field transitioned into roles as trainers and medical coders for AI. Designing such interfaces for AI in the workplace requires workers’ perspectives and understanding.
5. Algorithmic bias
Algorithmic bias requires targeted human intervention. For example, Amazon’s AI recruitment system, trained largely on CVs submitted by men, systematically disadvantaged female candidates. Facebook’s ad algorithm allowed advertisers to target users by gender, race, and religion, all protected characteristics. The UK Home Office’s visa decision-making algorithm, which weighted an applicant’s nationality, was labelled “racist”. These instances of biased datasets and opaque algorithms highlight the need for workers to exercise judgement and override AI systems when necessary.
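The kind of judgement call described above often starts with a simple audit of an algorithm’s decisions by group. A minimal sketch, using the well-known four-fifths (80%) rule from US EEOC selection guidelines as the flagging criterion; the decision data below is invented for illustration, not drawn from any of the systems named above.

```python
# Minimal sketch of an adverse-impact check a human reviewer might run on an
# automated screening system's decisions. The four-fifths rule flags any group
# whose selection rate falls below 80% of the highest group's rate.
# The decision data here is invented for illustration.

from collections import defaultdict


def selection_rates(decisions):
    """decisions: list of (group, selected: bool) pairs -> {group: rate}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}


def adverse_impact(decisions, threshold=0.8):
    """Return groups whose selection rate is below threshold * best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)


# Toy data: group A is selected 60% of the time, group B only 30%.
decisions = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 30 + [("B", False)] * 70
)
# Group B's rate (0.30) is half of group A's (0.60), below the 80% bar,
# so group B is flagged for human review of the underlying algorithm.
```

A flag from a check like this does not prove bias on its own; it tells a worker where to look, which is precisely the override-and-judge role the article assigns to humans.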
6. AI as a general-purpose tool
AI can work as a tool for coming up with new ideas. It can give employees the space and time they need to be creative, and it can offer designs and products that people can interact with. For example, the availability of digital technologies such as 3D printing allows the creation of novel patterns and products.
3D-printed organs, titanium implants, continuous carbon-fibre composites, 3D-printed homes, rain screens, evaporative cooling bricks, and 3D-printed UAV airframes are all examples of on-demand production of parts and equipment. Here, 3D printing supplies products and patterns that prompt innovative behaviour in the workers who interact with them. In this way, intelligent systems expose ‘blind spots’ that invite workers to question and experiment, whether to fix those blind spots or to exploit them.
What can an organisation do?
The limitations of intelligent systems can inspire employees to tap into their knowledge, leading to fresh ideas. Bringing intelligent systems into a workplace can help employees find new solutions and improve the areas where these systems fall short. However, it is important that this does not create a climate of fear. Instead, companies should cultivate a space where employees and AI work together. Current intelligent systems cannot function without human input, and if employees are fearful, they may hold these systems back from reaching their full potential.
So, how can a company reduce this fear? One approach is to clearly explain why they’re bringing intelligent systems into the workplace. Another is to thoughtfully consider how humans and AI can coexist. Companies could also explore training programs that help employees work with AI systems. By retraining and upskilling employees, these programs can help them understand and accept the limitations of intelligent systems. A meaningful coexistence that includes retraining and upskilling allows AI and humans to complement each other’s strengths.
Zirar, A. (2023). Can artificial intelligence’s limitations drive innovative work behaviour? Review of Managerial Science, 1–30. https://doi.org/10.1007/s11846-023-00621-4