- Our new model incorporates several innovative features: instead of using representative households, we use a demographically accurate synthetic population of millions of households (matched for age, education, race, and spending habits); instead of using representative companies, we model the behavior of tens of thousands of the largest companies that have a one-to-one match with real companies.
- J. Doyne Farmer, Making Sense of Chaos: A Better Economics for a Better World, p. 258
Mainstream economics tries to model the economy in terms of “representative individuals.” One virtual consumer represents every household. One virtual firm represents every firm. Many different types of workers are aggregated as “labor.” Many different types of machines and other productivity enhancers (such as firm reputation and process knowledge) are aggregated as “capital.” I have long questioned this way of doing economics, which I call the “GDP factory” method of analysis.
For decades, J. Doyne Farmer and a relatively small group of like-minded researchers have advocated and implemented a different approach: taking inspiration from the field of ecology, they want to build models that incorporate agents that adopt different strategies within the overall system.
The representative-individual approach, in which economists carefully select a set of assumptions about human behavior, express them as equations, and solve for a single equilibrium, predates the age of computers.
Farmer’s approach is explained in his new book, Making Sense of Chaos. He argues that we need an entirely different modeling approach, called “agent-based modeling,” which starts by observing how different individuals choose strategies to earn, consume, and invest. The goal is to see how these strategies interact over time. This requires computer simulations.
For example, consider the stock market. The “representative individuals” approach assumes a single investor with perfect information and a single strategy that maximizes return over risk. Farmer’s approach, on the other hand, starts by examining the types of strategies different investors actually use. Some investors focus on fundamentals, while others try to spot trends. Everyone has different information and uses different heuristics.
Representative-individual models of the stock market tend to have uninteresting and unrealistic dynamic properties: they predict minimal price movements and far fewer transactions than we observe, and they miss the patterns of spikes and crashes that seem to characterize actual markets. Models with heterogeneous investors are able to reproduce the patterns actually observed in the stock market.
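To make the contrast concrete, here is a minimal sketch in Python of a toy market with the two trader types described above: fundamentalists and trend followers. The weights, noise level, fair value, and price-impact rule are my own illustrative assumptions, not a calibration from the book; the point is only to show how heterogeneous strategies interact period by period.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stylized trader types (illustrative parameters, not Farmer's calibration):
# fundamentalists push the price toward a perceived fair value,
# trend followers extrapolate the most recent price change.
n_steps = 2000
fair_value = 100.0
w_fund, w_trend, noise = 0.02, 0.05, 0.5
prices = [fair_value, fair_value]

for _ in range(n_steps):
    p, p_prev = prices[-1], prices[-2]
    fund_demand = w_fund * (fair_value - p)    # buy below fair value, sell above
    trend_demand = w_trend * (p - p_prev)      # chase the latest move
    shock = noise * rng.standard_normal()      # idiosyncratic order flow
    prices.append(p + fund_demand + trend_demand + shock)  # net demand moves the price

returns = np.diff(np.log(prices))
print(f"std of log returns: {returns.std():.4f}")
print(f"largest single-period move: {np.abs(returns).max():.4f}")
```

Varying the relative weight of the trend followers changes the character of the simulated price series, which is exactly the kind of strategy-composition effect discussed next.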
One of the most interesting findings from heterogeneous-agent models is that the dynamics of financial markets change as the influence of players using one strategy increases: a strategy that temporarily reduces volatility can suddenly cause instability.
For example, Farmer notes that in the late 1990s, large investment banks adopted “value at risk” (VaR) as a strategy to control market exposure. VaR measures the loss from an adverse price movement of, say, two standard deviations. Using such a metric, risk managers would determine that they could increase their risk exposure when market volatility fell, and that they needed to reduce it when volatility rose. In good times, a self-reinforcing feedback loop would take hold in which asset prices rose as banks expanded their portfolios. But then a modest adverse move would prompt everyone using VaR to sell at once, creating a severe self-reinforcing loop on the downside. Farmer says this is what happened in financial markets before and during the 2008 financial crisis.
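The VaR mechanism can be expressed as a simple position-sizing rule. The sketch below illustrates the logic described above, scaling exposure so that a roughly two-standard-deviation loss stays within a fixed risk budget; the function name, the 2% budget, and the volatility figures are assumptions chosen for the example, not figures from the book or from any bank's actual model.

```python
import numpy as np

def var_limited_exposure(recent_returns, capital, var_budget=0.02, z=2.0):
    """Illustrative VaR-style position sizing: cap exposure so that a
    z-standard-deviation adverse move costs at most var_budget * capital."""
    sigma = np.std(recent_returns)   # recent volatility estimate
    loss_per_dollar = z * sigma      # approximate VaR per dollar of exposure
    return (var_budget * capital) / loss_per_dollar

rng = np.random.default_rng(1)
calm = rng.normal(0.0, 0.005, 250)       # quiet market: ~0.5% daily moves
stressed = rng.normal(0.0, 0.02, 250)    # turbulent market: ~2% daily moves

# Lower volatility mechanically permits a larger book; a volatility spike
# forces every VaR-constrained trader to shrink positions at the same time.
print(f"allowed exposure, calm market:     {var_limited_exposure(calm, 1e9):,.0f}")
print(f"allowed exposure, stressed market: {var_limited_exposure(stressed, 1e9):,.0f}")
```

When many large institutions apply the same rule to the same volatility signal, the simultaneous cuts in exposure themselves move prices, which is the downside loop described above.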
Farmer and his colleagues use computer simulations of heterogeneous agent strategies to analyze energy markets, with a particular focus on assessing the feasibility of an “energy transition” to halt climate change. Their analysis shows that the main added cost of moving to renewable energy sources is upgrading the power grid, but because renewable sources produce energy more cheaply, a faster energy transition would be good for the economy overall.
- For example, the annual expenditure on the global electricity grid in 2050 is estimated to be about $670 billion with the fast transition compared to $530 billion per year without the transition. However, the total system cost in 2050 is expected to be about $5.9 trillion with the fast transition compared to $6.3 trillion per year without the transition. Thus, while the additional grid costs of $140 billion may seem high, they are significantly less than the savings from cheaper energy. p. 253
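The arithmetic behind that comparison is easy to restate as a quick check; the snippet below simply recomputes the differences from the quoted figures (amounts in billions of dollars per year for 2050).

```python
# Figures quoted above, in billions of dollars per year (2050).
grid_fast, grid_slow = 670, 530        # global electricity-grid spending
total_fast, total_slow = 5_900, 6_300  # total system cost

extra_grid_cost = grid_fast - grid_slow   # 140: grid premium of the fast transition
net_savings = total_slow - total_fast     # 400: reduction in total system cost
print(f"extra grid spending: ${extra_grid_cost}B/yr, total savings: ${net_savings}B/yr")
```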
Mainstream approaches to implementing economic theory have always had the advantage that they are easier to communicate and replicate: once someone shows you results from a mainstream model, you can solve the equations yourself and understand what causes those results.
When it comes to empirical studies, replicability is less reliable: Farmer reports that when he was at a company interested in exploiting stock market inefficiencies, his team looked at published papers on market anomalies.
- For about half of the papers, we were unable to reproduce the results, even though we used the same data to test for the assumed deviations from efficiency. p. 146
For those of us not on the team that built the models, simulations are more opaque: we cannot reproduce the results ourselves. If economists are to adopt agent-based modeling, they will need to develop ways to clearly express, explain, and justify the choices they make when building their models.
I think of economic models as being like maps. With an old-fashioned printed road map, if it told you to get from where you live to Boston over the George Washington Bridge, that was the route you took. With maps on your phone, you can consider alternatives and even adjust in real time to traffic conditions.
Huge amounts of data are becoming available to economists, and computer power has improved by orders of magnitude. This is why we see a trend toward agent-based modeling and away from representative-agent modeling.
However, as a map for policymakers, agent-based models are not yet reliable. I would caution against assuming that agent-based models make centralized decision-making a good way to run an economy. You shouldn’t bet all your money on Farmer.