The problem of induction concerns which prediction methods are reliable and how their reliability can be justified. We can observe only a segment of the past and the present of the world we live in; we cannot observe the future. How, then, can we ever acquire knowledge about the future, or at least highly probable beliefs about it?

The standard answer to this question is: by *induction*. Induction means looking for patterns that can be observed in the past and projecting these patterns onto the future. Various methods of induction exist, depending on the patterns sought. However, they all share the assumption that the world exhibits uniformity across space and time: the regularities observed in the past are expected to hold in the future, at least with high probability. This implies the existence of certain “laws of nature” that form the basis of inductive predictions.

## Hume’s skepticism

The problem of induction consists in the question of how we can rationally justify the method of induction and the assumption that our world is inductively uniform. As the famous philosopher David Hume observed, the method of induction can be justified neither by *observation*, since induction draws conclusions about the unobserved, nor by *logic*, because nothing can logically guarantee that the future course of the world will resemble its past. Most importantly, induction cannot be justified by its past success, as this would be circular reasoning: justifying induction by induction.

Hume draws the skeptical conclusion that inductive reasoning is incapable of any rational justification. However, as Bertrand Russell once noted, if Hume were right, then there would be “no intellectual difference between reason and madness”.

Finding a solution to Hume’s problem of induction has therefore been considered crucial, yet for a long time no satisfactory solution was in sight.

## Wolpert’s no-free-lunch theorem

Hume’s problem of induction is profound, yet Wolpert’s no-free-lunch theorem strengthens it further. The basis of Wolpert’s theorem is the assumption that “a priori”, that is, without any knowledge about the future and its relation to the past, every possible state of the future is equally probable. From this assumption it follows that all prediction methods have the same expected success rate.

In essence, under Wolpert’s framework, the success of an inductive prediction method is no greater than that of a method based on arbitrary factors, such as the constellation of stars, the random arrangement of thrown bones, the croaking of a frog, or blind guessing. This illustrates the inherent limitations of inductive reasoning in making reliable predictions.
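The equal-expected-success claim can be checked directly in a tiny toy setting. The sketch below (an illustration of the idea, not Wolpert’s formal framework; all function names are my own) enumerates every possible binary future of a fixed length and compares a crude pattern-seeking predictor with blind guessing:

```python
import itertools

def success_rate(predict, world):
    """Fraction of rounds a predictor gets right on one possible future."""
    history, correct = [], 0
    for outcome in world:
        if predict(history) == outcome:
            correct += 1
        history.append(outcome)
    return correct / len(world)

def inductive(history):
    """Project the majority pattern seen so far (a crude induction)."""
    return int(sum(history) * 2 > len(history))

def blind(history):
    """Ignore the past entirely and always predict 0."""
    return 0

def average_success(predict, n=8):
    """Average success over all 2**n equally probable binary futures."""
    worlds = list(itertools.product([0, 1], repeat=n))
    return sum(success_rate(predict, w) for w in worlds) / len(worlds)
```

Both `average_success(inductive)` and `average_success(blind)` come out to exactly 0.5: averaged over all equally weighted futures, consulting the past buys nothing.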

## Meta-induction: A proposed solution

A proposed solution to Hume’s problem of induction and Wolpert’s no-free-lunch theorem is the method of *meta-induction*. This strategy applies the principle of induction to various competing prediction methods and predicts an optimal combination based on the success of these methods.

Using results from mathematical learning theory, a meta-inductive strategy called attractivity-weighting has been demonstrated to be *universally access-optimal*. This means it is at least as successful as the best available prediction methods, even when success rates and the best method change due to dynamic environmental shifts.
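As a concrete and much-simplified illustration of how such a strategy operates, the sketch below implements one plausible reading of attractivity weighting: each candidate method is weighted by how much its mean success so far exceeds the meta-inductivist’s own, and the meta-inductive prediction is the attractivity-weighted average. The scoring rule (a prediction p in [0, 1] earns success 1 − |p − e| on outcome e) and all names are illustrative assumptions, not the exact algorithm analysed by Schurz and Thorn.

```python
def attractivity_weighted(preds_seq, outcomes):
    """Attractivity-weighted meta-induction (a sketch, not the paper's algorithm).

    preds_seq[t] lists each method's prediction in [0, 1] for round t.
    Returns (MI's mean success, list of each method's mean success)."""
    k = len(preds_seq[0])
    suc = [0.0] * k   # cumulative success per method
    suc_mi = 0.0      # cumulative success of the meta-inductivist (MI)
    for t, (preds, e) in enumerate(zip(preds_seq, outcomes)):
        if t == 0:
            att = [1.0] * k  # no track record yet: weight all methods equally
        else:
            # attractivity = how much a method's mean success exceeds MI's
            att = [max((s - suc_mi) / t, 0.0) for s in suc]
            if sum(att) == 0:
                att = [1.0] * k  # MI already matches or beats every method
        p_mi = sum(a * p for a, p in zip(att, preds)) / sum(att)
        suc_mi += 1 - abs(p_mi - e)
        suc = [s + 1 - abs(p - e) for s, p in zip(suc, preds)]
    n = len(outcomes)
    return suc_mi / n, [s / n for s in suc]

# Demo: an alternating world with a pattern-tracker, a hedger, and a contrarian.
outcomes = [t % 2 for t in range(200)]
preds_seq = [[e, 0.5, 1 - e] for e in outcomes]
mi_success, method_successes = attractivity_weighted(preds_seq, outcomes)
```

In the demo the meta-inductivist quickly shifts all its weight to the pattern-tracker, and its mean success converges to the tracker’s, illustrating access-optimality in this one environment.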

The optimality of meta-induction rests on its cognitive universality: when faced with a seemingly better strategy, the meta-inductivist incorporates it into its weighted combination. This demonstrates the superiority of meta-induction over ordinary induction in predicting events. In turn, the justification of meta-induction provides an indirect justification for ordinary induction, offering a solution to Hume’s problem.

Experience shows that inductive prediction methods have been more successful than non-inductive methods. Therefore, it is meta-inductively justified to favour induction in the future. This argument avoids circularity because the justification of meta-induction itself is established independently, on a priori mathematical grounds.

## Resolving Wolpert’s theorem

The solution to Wolpert’s no-free-lunch theorem relies on a key mathematical result: meta-induction *dominates* all non-meta-inductive prediction methods, performing at least as well as each of them in every environment and strictly better in at least one. Hence Wolpert’s no-free-lunch result does not apply to meta-induction. This does not contradict Wolpert’s theorem, because the state-uniform distribution Wolpert assumes assigns a probability of *zero* to the worlds in which meta-induction outperforms non-inductive methods. Consequently, these worlds do not influence the probabilistic expectation value of success rates.

However, there are many, even uncountably many, worlds where induction outperforms non-inductive methods. Intelligent prediction methods have a chance of success in these induction-friendly worlds. It is essential not to exclude these worlds by assigning them a zero probability. Thus, Wolpert’s no-free-lunch theorem is resolved without conflicting with his mathematical findings.
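A back-of-the-envelope enumeration makes this measure-theoretic point vivid. Under a state-uniform distribution every length-n binary world counts equally, and the worlds exhibiting a simple regularity form a fraction that collapses toward zero as n grows. The notion of “regular” used below (constant or strictly alternating sequences) is my own toy choice for illustration:

```python
import itertools

def is_regular(world):
    """Toy regularity: the sequence is constant or strictly alternating."""
    pairs = list(zip(world, world[1:]))
    return all(a == b for a, b in pairs) or all(a != b for a, b in pairs)

def regular_fraction(n):
    """Fraction of all 2**n binary worlds satisfying the toy regularity."""
    worlds = itertools.product([0, 1], repeat=n)
    return sum(is_regular(w) for w in worlds) / 2 ** n
```

For n = 4, 8, 16 the fraction drops from 0.25 through about 0.016 to about 0.00006: a state-uniform prior drowns out precisely the induction-friendly worlds in which intelligent prediction pays off.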


**Journal reference**

Schurz, G., & Thorn, P. (2024). Escaping the no free lunch theorem: a priori advantages of regret-based meta-induction. *Journal of Experimental & Theoretical Artificial Intelligence*, *36*(1), 87-119. https://doi.org/10.1080/0952813X.2022.2080278