Advancing AI with green practices and adaptable solutions for the future

Despite AI's achievements, how can its limitations be addressed to reduce computational costs, enhance transparency, and pioneer eco-friendly practices?

Current AI techniques can achieve exceptional performance in some cases but continue to suffer from high computational costs, a lack of explainability, and a limited capacity for incremental adaptive learning—issues that can result in serious setbacks, especially in dynamic real-world settings like autonomous vehicles.

Given the rising integration of AI across industries, prioritizing eco-friendly practices, known as Green AI, has become imperative. Ensuring efficiency and adherence to Green AI principles is crucial for developing safe and adaptable intelligent systems that deliver explainable outcomes.

A comparison of computational costs among prevalent learning algorithms, depicted in Figure 1, highlights the resource-intensive nature of neural networks (NNs), the default choice for most data science problems. The figure also shows viable alternatives with lower computational footprints. However, because NNs typically yield superior results, methods such as Self-Organizing Maps (SOMs) often fall short on raw performance and are therefore treated as a less preferred option.

Figure 1. Comparison of computational costs for various learning algorithms.
Credit. Author
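
As a rough, back-of-the-envelope illustration of why such cost differences arise, the sketch below counts approximate per-sample operations for a small fully connected NN versus a SOM trained on the same inputs. The layer sizes, map dimensions and neighbourhood size are illustrative assumptions, not the configurations behind Figure 1.

```python
# Rough per-sample operation counts for a small fully connected NN versus a
# self-organizing map (SOM). All sizes below are illustrative assumptions.

def mlp_ops(layer_sizes):
    """Approximate multiply-accumulates for one forward + backward pass (~3x forward)."""
    forward = sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))
    return 3 * forward

def som_ops(map_rows, map_cols, input_dim):
    """Distance computation over all units plus a local weight update per sample."""
    distance = map_rows * map_cols * input_dim   # find the best-matching unit
    update = 9 * input_dim                       # update a 3x3 neighbourhood (assumption)
    return distance + update

if __name__ == "__main__":
    nn_cost = mlp_ops([64, 128, 128, 4])   # hypothetical network for a small control task
    som_cost = som_ops(10, 10, 64)         # hypothetical 10x10 map on the same inputs
    print(f"NN  ~{nn_cost:,} ops/sample")
    print(f"SOM ~{som_cost:,} ops/sample")
```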

While performance remains a paramount concern in AI methodologies, computational cost is becoming ever more significant, particularly amid global efforts to curtail energy consumption in the face of climate change and rising energy prices.

Moreover, the quest for explainability in AI models is gaining prominence, underscored by emerging legal frameworks worldwide mandating transparency in algorithmic decision-making processes. Yet, explainability lacks a universally accepted definition and can vary in interpretation among researchers. In this context, explainability pertains to an algorithm’s capacity to elucidate its internal decision-making mechanisms, encompassing explanations of what changes occurred, how they transpired, and the resulting decisions.
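
To make this working definition concrete, the sketch below shows one way such an explanation could be structured as data: a record of what changed, how it changed, and the resulting decision. The class and field names are illustrative assumptions, not part of the published work.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """Minimal structured explanation following the working definition above."""
    what_changed: str      # what in the environment changed
    how_it_changed: str    # how the internal model was affected
    decision: str          # the resulting decision

report = Explanation(
    what_changed="new wall between cells (2, 3) and (2, 4)",
    how_it_changed="transition (2, 3) -> (2, 4) removed from the internal map",
    decision="new path planned via cell (3, 3)",
)
print(report)
```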

The imperative for explainability is further underscored in applications involving robotics, where the real-time articulation of decision-making processes by robots to humans is vital for decision validation and comprehension of the rationale behind their actions.

Dynamic path planning

This research examines the use of AI techniques to navigate a simple maze (Figure 2) and locate a goal, here represented as food, specifically a burger. While numerous techniques exist to address this challenge, the focus lies on altering the maze by introducing new barriers where none previously existed. This simulates a dynamic environment akin to what a robot might encounter during real-world missions, where it must devise new routes to reach its objective. Although this scenario is simple, it exposes the constraints of existing methodologies in such dynamic settings.
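
As a concrete illustration of this setup, the sketch below models such a maze as a grid with a goal cell and a set of walls that can be added while the agent is running. It is a minimal stand-in for the environment in Figure 2, not the authors' simulator.

```python
# Minimal grid-maze sketch: a goal ("burger") cell and blocked transitions that
# can grow at run time. Illustrative stand-in only, not the authors' environment.

class Maze:
    def __init__(self, width, height, goal):
        self.width, self.height = width, height
        self.goal = goal
        self.blocked = set()   # pairs of adjacent cells separated by a wall

    def add_wall(self, cell_a, cell_b):
        """Introduce a new barrier at run time, as in the dynamic scenario above."""
        self.blocked.add(frozenset((cell_a, cell_b)))

    def neighbours(self, cell):
        """Adjacent cells inside the grid that are not separated by a wall."""
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < self.width and 0 <= nxt[1] < self.height \
                    and frozenset((cell, nxt)) not in self.blocked:
                yield nxt

maze = Maze(5, 5, goal=(4, 4))           # the goal cell stands in for the burger
maze.add_wall((2, 2), (2, 3))            # a barrier that did not exist before
print(list(maze.neighbours((2, 2))))     # (2, 3) is no longer directly reachable
```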

The study evaluates various techniques, including well-known reinforcement learning methods. Among these, only the Temporospatial Merge Grow When Required (TMGWR) algorithm demonstrates the capability to adapt to alterations in both maze walls and the burger’s position. Notably, it provides users with feedback regarding environmental changes and subsequently devises a new path to navigate the maze towards the goal. Prior research by the authors has established the computational efficiency of the TMGWR algorithm compared to alternative approaches.

Figure 2. In a maze environment, the agent is shown as the mouse, and the goal is the burger.
Credit. Author

How does the method work?

The TMGWR method draws inspiration from developmental AI and unsupervised learning, which have already been shown to be computationally more efficient than competing techniques. Unsupervised learning, which resembles biological learning processes, enables the algorithm to build an internal representation of the maze environment and determine the actions needed to navigate towards the goal, in this case, finding the path to the burger.
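
For readers who want a feel for the underlying mechanism, the sketch below shows a much-simplified Grow-When-Required-style growth rule of the kind TMGWR builds on: when no existing node represents the current observation well enough, a new node is inserted. The threshold and learning rate are illustrative assumptions, and the sketch omits the temporospatial aspects of the published algorithm.

```python
import numpy as np

ACTIVITY_THRESHOLD = 0.8   # assumed insertion threshold
LEARNING_RATE = 0.1        # assumed adaptation rate for the winning node

def update(nodes, observation):
    """Grow-When-Required-style update: insert a node if the best match is poor."""
    if not nodes:
        nodes.append(observation.copy())
        return nodes
    distances = [np.linalg.norm(observation - w) for w in nodes]
    best = int(np.argmin(distances))
    activity = np.exp(-distances[best])          # closer match -> activity nearer 1
    if activity < ACTIVITY_THRESHOLD:
        # Observation is poorly represented: grow the map with a new node.
        nodes.append((nodes[best] + observation) / 2.0)
    else:
        # Observation is already represented: nudge the winner towards it.
        nodes[best] += LEARNING_RATE * (observation - nodes[best])
    return nodes

nodes = []
for obs in np.random.default_rng(0).uniform(0, 4, size=(200, 2)):
    nodes = update(nodes, obs)
print(f"learned {len(nodes)} nodes")
```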

Initially, the method focuses on learning the maze layout on its own, without any knowledge of the goal location. It explores the maze to discern the positions of the walls. Only once it has built up this understanding of the maze structure does it chart a path to the burger.
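
A minimal sketch of this two-stage idea, under the assumption that the internal map is a simple reachability graph: the agent first explores to record which transitions are possible, and only then searches that map for a route to the goal. Breadth-first search stands in for the planner purely for illustration.

```python
from collections import deque
import random

def grid_neighbours(cell, size=4):
    """True neighbours in a small open grid (stands in for the sensed maze)."""
    x, y = cell
    return [(nx, ny) for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if 0 <= nx < size and 0 <= ny < size]

def explore(neighbours_fn, start, steps=500, seed=0):
    """Stage 1: random-walk the maze and record which transitions are possible."""
    rng = random.Random(seed)
    internal_map, cell = {}, start
    for _ in range(steps):
        internal_map.setdefault(cell, set()).update(neighbours_fn(cell))
        cell = rng.choice(sorted(internal_map[cell]))
    return internal_map

def plan(internal_map, start, goal):
    """Stage 2: breadth-first search over the learned map (illustrative planner)."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for nxt in internal_map.get(cell, ()):
            if nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None   # goal not reachable in the learned map

internal_map = explore(grid_neighbours, start=(0, 0))
print(plan(internal_map, start=(0, 0), goal=(3, 3)))
```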

Notably, this approach enables the algorithm to detect changes in the maze layout, recognising new walls and understanding how they alter the environment. It can therefore articulate these changes to users, explaining the modifications to the maze while adjusting its internal map accordingly. This adaptability lets it dynamically devise a new path to the burger, and the revised map can itself be explained to end users. An illustrative video of this process can be seen here.
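
One way such feedback might be produced is sketched below, under illustrative assumptions about the map format and message wording: the expected neighbours of the current cell are compared with what is actually observed, the difference is described to the user, and the internal map is corrected before replanning (e.g. with the search sketched earlier).

```python
def reconcile(internal_map, cell, observed_neighbours):
    """Compare expectation with observation, report the difference, update the map."""
    expected = internal_map.get(cell, set())
    observed = set(observed_neighbours)
    for lost in expected - observed:
        print(f"Change detected: transition {cell} -> {lost} is now blocked (new wall).")
    for gained in observed - expected:
        print(f"Change detected: transition {cell} -> {gained} is now open.")
    internal_map[cell] = observed      # adjust the internal map accordingly
    return expected != observed        # True if a replan is needed

internal_map = {(2, 2): {(2, 3), (3, 2), (2, 1), (1, 2)}}
needs_replan = reconcile(internal_map, (2, 2), [(3, 2), (2, 1), (1, 2)])  # wall added
print("replan needed:", needs_replan)
```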

The same principle extends to relocating the burger within the maze. Upon the burger’s movement, the algorithm promptly identifies the change and recalculates a new path to the updated location, leveraging its internal maze representation. This process enables swift adaptation and ensures that users are promptly informed of the new route. An illustrative demonstration of this functionality is available on YouTube.

Although current AI algorithms can achieve impressive performance in many tasks, they are computationally inefficient and unexplainable. This research demonstrates the potential of alternative approaches that are computationally efficient and explainable while delivering best-in-class performance.

Andrew Starkey

Limitations of other methods

Alternative methods exhibit limitations in adapting to dynamic changes in the maze environment. Some cannot accommodate shifts in the burger's position, persistently returning to its previous location. Others struggle with alterations to the maze layout, becoming blocked by new walls and failing to remap an alternative route to the burger. Consequently, these algorithms struggle to adjust to novel scenarios, echoing the issues that arise in more complex applications, such as self-driving cars confronted with traffic configurations not seen during training.

Is the maze representative of real-life problems?

The maze problem above is simplistic and does not reflect a real-life problem. However, even this simple experiment demonstrates the limitations of current AI approaches. Developing AI techniques that tackle such experimental setups is crucial for assessing their problem-solving capabilities, elucidating their operational mechanisms, and facilitating comparisons of their computational efficiency. This foundational work lays the groundwork for advancing AI solutions that can effectively address more complex real-world challenges.

3 steps for developing explainable green AI approaches

Explainable AI is increasingly a requirement in today's world. Black-box approaches that cannot be explained will fall foul of government legislation, limiting the adoption of AI technologies. Computationally expensive approaches should likewise be avoided, given the imperative to reduce energy usage. This work has shown the potential for further development of AI techniques inspired by biological approaches:

  1. AI studies should include computational cost and performance in their comparison studies, and should also assess how explainable each AI method is (a minimal benchmarking sketch follows this list).
  2. Further development of these approaches is required to meet the requirements of real-world applications fully.
  3. The experiments shown here should be replicated with physical robots in real-world situations that extend beyond simple mazes, and the performance of different types of sensors should be investigated.
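
As a minimal illustration of step 1, the sketch below reports wall-clock training time and peak memory alongside accuracy for two placeholder models on a toy dataset; the models and dataset are assumptions chosen only to show the bookkeeping, and explainability still has to be assessed separately.

```python
import time, tracemalloc
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Report computational cost (time, peak memory) alongside accuracy for each
# candidate model, as recommended in step 1.
X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=2000)),
                    ("small neural network", MLPClassifier(hidden_layer_sizes=(64,),
                                                           max_iter=500, random_state=0))]:
    tracemalloc.start()
    start = time.perf_counter()
    model.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{name}: accuracy={model.score(X_test, y_test):.3f}, "
          f"train time={elapsed:.2f}s, peak memory={peak / 1e6:.1f} MB")
```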


Journal reference

Starkey, A., & Ezenkwu, C. P. (2023, June). Towards Autonomous Developmental Artificial Intelligence: Case Study for Explainable AI. In IFIP International Conference on Artificial Intelligence Applications and Innovations (pp. 94-105). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-34107-6_8

Andrew Starkey is currently a Reader at the University of Aberdeen. He also heads Blueflow Ltd., a spin-out company from the University, offering solutions for various data analysis domains. His research focuses on Explainable AI, Automated AI, Green AI, and Autonomous learning. He holds an Enterprise Fellowship from the Royal Society of Edinburgh and Scottish Enterprise.

Chinedu Pascal Ezenkwu is a Lecturer in Business Analytics at Robert Gordon University's School of Creative and Cultural Business. He also leads the university's online distance courses: the Graduate Certificate in Energy Data Management with Business Analytics, Document Control Foundation, and Managing Subsurface Data. Pascal earned his Ph.D. from the University of Aberdeen, specialising in artificial intelligence. In 2022, he was recognised as a UK Global Talent in AI by the Royal Academy of Engineering. Over the years, he has gained practical experience through several interdisciplinary and industrial research projects, successfully applying AI in businesses.