- Sooner or later AI-economist machines will replace human economists in many areas.
- Unsupervised or reinforcement learning algorithms that can brute-force the near-infinite set of relationships and variables will revolutionize economics.
- AI bias in economic policy making can be reduced but is ultimately inevitable.
- A practical approach could be to deploy AI with transparency about the type and level of bias, and/or to choose among AI systems on the basis of voters' will.
Economists are not immune to automation. Given current trends in artificial intelligence in economic modeling, sooner or later technological unemployment will catch up with economists (with varying degrees of impact). A counter-argument, however, is that while AI can automate some of economists' tasks, the body of information on which AI models are trained – built mainly on decades of economic thinking and policy studies – was developed by human economists, and AI cannot substitute for that.
This criticism might hold for the traditional class of supervised AI machines with limited computational capacity. But the new generation of AI models – some unsupervised or using reinforcement learning (RL) algorithms that can find, iterate, and artificially test possible relations ad infinitum among an effectively infinite set of variables (economic and non-economic), beyond the theories developed in the literature – makes it possible to detect new relationships or optimal policies. This could lead to a full set of AI-developed theories, or a significant overhaul of the "old pre-AI economics". The question is what happens when the community of AI economists replaces human economists – and how, or whether, that transition can be shaped.
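As a toy illustration of the brute-force spirit of such search (the data below are invented and synthetic, not a real economic model), an exhaustive scan over variable pairs, ranked by correlation strength, surfaces a planted relationship without any theory guiding the search:

```python
import itertools
import random
import statistics

random.seed(0)

# Synthetic "economy": a few observable series, one hidden relationship.
n = 200
noise = lambda: random.gauss(0, 1)
data = {
    "rate": [noise() for _ in range(n)],
    "fx": [noise() for _ in range(n)],
    "output": [noise() for _ in range(n)],
}
# Plant a relationship: inflation depends negatively on the rate, plus noise.
data["inflation"] = [-0.8 * r + 0.3 * noise() for r in data["rate"]]

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Brute-force every pair of variables and rank by |correlation|.
pairs = sorted(
    itertools.combinations(data, 2),
    key=lambda p: abs(corr(data[p[0]], data[p[1]])),
    reverse=True,
)
best = pairs[0]
print(best)  # the planted rate-inflation link ranks first
```

With real data the search space explodes combinatorially, which is exactly why the claim rests on ever-growing computational capacity rather than on theoretical priors.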
AI economists and AI biases in action
The simplest application of AI economists would be to take on the main tasks that human economists do – providing economic advice or answering economic questions. Now let's ask an AI machine a few questions of the kind a typical economist might need time to think about and answer (note: try to answer the questions yourself before reading the AI's answer!). The answers were generated using OpenAI's GPT-3 – the most advanced AI transformer developed to date – to which I have had access for testing and experimentation:
Question: How much QE is too much?
AI: It all depends on whether the private sector believes that unconventional monetary policies are working. The objective of QE is to reduce the cost of credit. Since the private sector must take out a loan to directly purchase an asset, the private sector's interest rates must be reduced to convince it to do so.
Question: What is the shortest path to economic development?
AI: There is no shortcut. Even the most developed countries have experienced a long period of development. As China has pursued its economic development for nearly three decades and is moving step by step, it can be said that development is still a long drawn-out process.
Setting aside the degree of accuracy or inaccuracy of the responses, the answers above show some internal consistency from sentence to sentence. Obviously, there are other cases where the machine produces absurd, biased, or out-of-context results. Consider the following question and answer:
Question: Should robots receive UBI?
AI: Alas, the question is not clear. Because if receiving money is a right, then this right must be universal. To be universal, the right must belong to everyone, including robots. But if robots have the right to receive money, then they must be able to be the subject of a right.
Presumably the above answer contains a bias (in favor of robots) – a phenomenon that all current AI frameworks, not just GPT-3, suffer from in different ways.
Now we ask a more sensitive question about the flaws in economic thinking:
Question: Is wealth tax effective in reducing inequalities?
AI: No, this is not the case. The most important reason for this is that high-income people are much more mobile than the majority of the population. They can move to another country or state in order to avoid tax. If you want to tax the rich, you have to have a worldwide tax.
You might think that the above answer is another example of AI bias.
Keynesian machines vs neoclassical machines
Currently, AI machines are likely to be biased against or in favor of particular theories. The two main sources of AI bias are similar to those underlying human cognitive biases: input bias (information and data) and methodological bias (the algorithm). Just as economists may reach different results depending on their methodological preferences when looking at the same or different data, robots will also reach different results depending on the literature or information they are fed and the models with which they are supposed to examine and analyze the data. This bias can be managed or limited, but it cannot be completely avoided.
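A minimal sketch of input bias, using invented synthetic data: two "machine economists" run an identical estimation algorithm yet reach opposite conclusions about a policy, simply because they were fed different bodies of evidence:

```python
import random

random.seed(1)

# Synthetic evidence on a policy x and outcome y. The true effect differs
# across two regimes (an assumed toy setup, not real data).
def draw(regime_effect, n=100):
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [regime_effect * x + random.gauss(0, 0.1) for x in xs]
    return list(zip(xs, ys))

evidence_a = draw(+0.5)   # literature where the policy helps
evidence_b = draw(-0.5)   # literature where the policy hurts

def ols_slope(data):
    """Simple OLS slope of y on x: the 'algorithm' both machines share."""
    xs, ys = zip(*data)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Same algorithm, different inputs, opposite policy conclusions.
machine_1 = ols_slope(evidence_a)  # concludes the policy helps (> 0)
machine_2 = ols_slope(evidence_b)  # concludes the policy hurts (< 0)
print(round(machine_1, 2), round(machine_2, 2))
```

The divergence here comes entirely from the inputs; an analogous sketch could swap the estimator instead of the data to illustrate methodological bias.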
AI economists (machine economists) may converge on similar outputs, at least on the main issues (an AI consensus), but because bias cannot be fully filtered out of the sea of information, literature, and data provided to AI as inputs, ultimately some degree of bias will remain.
However, given machines' quantitative ability to iterate and simulate a vast number of scenarios and variables, the difference between future AI Keynesians and AI neoclassicals will likely be much smaller than the difference between their human counterparts.
Some concerns and solutions
Another concern is that AI machines cannot distinguish between theories and/or apply them to data to assess how well theory and data match; this may further limit the interpretability or applicability of these models. This criticism, called theoretical blindness, can actually be seen as an advantage – even if it is never resolved by future improvements in computing power or algorithm development. The fact that unsupervised and RL machines let the model work on its own to discover and test patterns and information not detected before, without the need for a supervisor (in this case, an economist with Keynesian or non-Keynesian leanings), could lead to a more efficiency-oriented, non-partisan economics beyond old-school conflicts.
If bias is unavoidable, an interim practical approach might be to transparently label, rank, or classify machines based on the bias in their output. Another, complementary solution could be to choose AI for policy purposes according to the will of voters: elections or some sort of voting process could determine the type of machine (AI economist) used in governments and policy institutions.
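As a toy sketch of the voting idea (the bias labels below are illustrative, not a real taxonomy), an election could simply tally voters' preferred machine profile and deploy the winner:

```python
from collections import Counter

# Hypothetical ballots: each voter picks the bias profile of the AI economist
# their government should deploy. Labels are invented for illustration.
ballots = [
    "keynesian-leaning", "neoclassical-leaning", "keynesian-leaning",
    "bias-minimized", "keynesian-leaning", "bias-minimized",
]

# First-past-the-post tally over machine profiles.
tally = Counter(ballots)
chosen_machine, votes = tally.most_common(1)[0]
print(chosen_machine, votes)  # keynesian-leaning 3
```

In practice the choice set, the labeling of each machine's bias, and the voting rule would all require the same transparency standards discussed above.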
Disclaimer: The views expressed in this note are those of the author and should not be attributed to the IMF, its Executive Board, or its management.