In graduate school, I remember a professor suggesting that the rational expectations revolution would eventually lead to much better macroeconomic models. I was skeptical, and in my view that never happened.
This is not because there is anything wrong with the rational expectations approach to macro; I strongly support it. Rather, I believe that the advances made possible by this theoretical innovation occurred very rapidly. By the time I heard this argument (around 1979), people like John Taylor and Stanley Fischer had already grafted rational expectations onto sticky wage and price models, which contributed to the New Keynesian revolution. Since then, macro seems to have been stuck in a rut, apart from a few innovations such as those of the Princeton school (related to the zero lower bound problem).
In my view, the most useful applications of new conceptual approaches tend to be realized quickly in highly competitive fields such as economics, science, and the arts.
Over the past few years, I’ve had many interesting conversations with young people working in the field of artificial intelligence. These people know far more about AI than I do, so I advise readers to take the following with a grain of salt. In these discussions, I sometimes expressed skepticism about the pace of future improvement in large language models like ChatGPT. My argument was that there are fairly serious diminishing returns to exposing LLMs to additional data.
Consider someone who has read and understood ten carefully selected books on economics (say, principles of macro and micro, plus intermediate and advanced textbooks). Someone who fully understood this material would actually know quite a bit of economics. Now have them read another 100 carefully selected textbooks. How much more economics would they know? Probably not ten times as much; indeed, I doubt they would know even twice as much. I suspect the same is true of other fields, such as biochemistry and accounting.
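For what it’s worth, this intuition roughly matches the empirical “scaling law” findings for LLMs, under which loss falls only as a power law in dataset size. Here is a minimal Python sketch of that logic; the functional form is a Chinchilla-style fit, loss(D) = E + B/D^β, but the constants E, B, and β below are illustrative assumptions, not estimates from any actual model.

```python
# Toy illustration of diminishing returns to training data, using a
# Chinchilla-style scaling law: loss(D) = E + B / D**beta.
# All three constants are illustrative assumptions, not fitted values.

E = 1.7      # assumed irreducible loss
B = 410.0    # assumed data-scaling coefficient
beta = 0.28  # assumed data-scaling exponent

def loss(tokens: float) -> float:
    """Predicted training loss after seeing `tokens` tokens of data."""
    return E + B / tokens**beta

prev = None
for tokens in (1e9, 1e10, 1e11, 1e12):
    current = loss(tokens)
    gain = "" if prev is None else f"  (gain vs. 10x less data: {prev - current:.3f})"
    print(f"{tokens:.0e} tokens -> loss {current:.3f}{gain}")
    prev = current

# Each 10x increase in data cuts the reducible loss (B / D**beta) by the
# same *proportional* factor, 10**-beta (about 0.52 here), so the absolute
# improvement shrinks every time -- the "10 books vs. 100 books" intuition.
```

The exact constants don’t matter; any power law with 0 < β < 1 produces the same qualitative pattern, in which each additional order of magnitude of data buys less improvement than the one before.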
This Bloomberg article caught my eye:
OpenAI was on the cusp of a milestone. The startup finished an initial round of training in September for a massive new artificial intelligence model that it hoped would significantly surpass prior versions of the technology behind ChatGPT and move closer to its goal of AI more powerful than humans. But the model, known internally as Orion, fell short of the performance the company wanted. For instance, Orion tried and failed to answer coding questions that it hadn’t been trained on. And OpenAI isn’t alone in hitting stumbling blocks recently. After years of launching increasingly sophisticated AI products, the three largest AI companies are now seeing diminishing returns from their hugely expensive efforts to build new models.
Please don’t take this to mean I’m an AI skeptic. I believe that the recent advances in LLMs are very impressive and that AI will ultimately transform the economy in some fundamental ways. Rather, my point is that progress toward some sort of super general intelligence may occur more slowly than some of its proponents expect.
How could I be wrong? I’ve heard that artificial intelligence can be enhanced in ways other than exposing models to ever-larger datasets, so the “data wall” may be overcome by other methods of increasing intelligence. If Bloomberg is correct, however, LLM development has recently stalled a bit due to diminishing returns from additional data.
Is this good news or bad news? It depends on how much weight you place on the risks of developing artificial superintelligence (ASI).