Sunday, December 23, 2012

Inflation Targeting Pressure in Japan

I recently wrote an article about inflation targeting for the forthcoming Global Economic History Encyclopedia. I only submitted the article for editing a few days ago, but it looks like it may be due for updates before the encyclopedia even comes out! The reason is news that Shinzo Abe, Japan's incoming prime minister, is pressuring the Bank of Japan (BOJ) to adopt a 2% inflation target. Apparently, if the bank refuses, Abe will try to change the law guaranteeing BOJ independence. Currently, the BOJ has a 1% inflation target. This is pretty big news, but I haven't seen much analysis from economists in the press yet, so I'll try to walk through some of the basic implications.

The Japanese economy experienced deflation and economic stagnation in the 1990s, recovered somewhat from 2003 to 2007, then fell back into recession. Consumer prices have fallen 6.8% since 1997. Richard Koo titled his book on the extended Japanese recession "The Holy Grail of Macroeconomics" because of the light the episode can shed on macroeconomic theory. Paul Krugman has also repeatedly looked to Japan to study the macroeconomy under conditions of recession, deflation, and the liquidity trap.

To analyze the news about the possible 2% target, something to keep in mind is the relationship between interest rates and inflation. The nominal interest rate is the real interest rate plus expected inflation. There is a zero lower bound (ZLB) on nominal interest rates, since potential lenders would rather just hold cash than lend at negative nominal interest rates. Real interest rates affect decisions about spending and investing now versus later, so they affect aggregate demand. Typically in a recession, monetary authorities want to lower interest rates to boost aggregate demand. In theory, some sufficiently low real interest rate would restore full employment. But, especially when inflation is low or negative, the ZLB on nominal interest rates can prevent the real interest rate from going sufficiently low. This situation is called a liquidity trap, and it renders conventional monetary policy ineffective.
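To make the constraint concrete, here is the textbook arithmetic (nothing Japan-specific in the notation):

```latex
% Fisher relation: nominal rate = real rate + expected inflation
i = r + \pi^e \quad\Longrightarrow\quad r = i - \pi^e
% The zero lower bound i \ge 0 puts a floor under the real rate:
r \ge -\pi^e
% Example: with expected deflation of 1% (\pi^e = -0.01), the real rate
% cannot fall below +1%, no matter what the central bank does to i.
% Raising expected inflation to 2% lowers the floor on r to -2%.
```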

Since the real interest rate is the nominal rate minus expected inflation, raising expected inflation can help lower the real interest rate. But raising expectations about future inflation is not a straightforward task. If a central bank starts printing a lot of money, the public still might not believe that it would continue to do so in the future, when the bank's incentives would be different. One purported solution is for the bank to publicly and explicitly commit to an inflation target. The public commitment is intended to add credibility, so that expectations will actually change.

The other relevant relationship is between exchange rates and inflation. If inflation in Japan is expected to be higher, then the yen will have less purchasing power in the future, so the yen will depreciate. Depreciation of the yen makes Japanese exports relatively cheaper and imports relatively more expensive. So a higher inflation target could reduce Japan's trade deficit, which has been large this year.
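A rough way to see the exchange rate link is relative purchasing power parity, which is only an approximation but captures the direction of the effect:

```latex
% Relative PPP (approximate): expected depreciation of the yen equals
% the expected inflation differential with the trading partner:
\frac{\Delta e}{e} \approx \pi^e_{JP} - \pi^e_{foreign}
% (e = yen per unit of foreign currency). Moving the target from 1% to
% 2%, holding foreign inflation fixed, raises expected yen depreciation
% by about one percentage point.
```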

My final comment is that central bank independence is not a goal in and of itself. It may be a means of arriving at the goal of effective monetary policy, or it may not. Typically, central bank independence does facilitate effective monetary policy by adding credibility. But at the moment, the BOJ is not credible enough to achieve its 1% inflation target. Maybe if the 2% target were instituted by another part of the government considered more credible, it would actually be more effective. In conclusion, I am somewhat hopeful that Japan is headed in a good direction. Either the BOJ will adopt a 2% target, which would be seen as more credible due to government support, or the government will itself impose the target, which would be drastic enough to make the markets take notice, bringing about some aggregate demand stimulus.

_______________________________________________________
P.S. I haven't written in a while for two exciting reasons. First, I have been working on articles for the Global Economic History Encyclopedia. Along with inflation targeting, I have written about the euro area, the Basel Accords, the National Monetary Commission, the gold standard, the Glass-Steagall Act, Wildcat Banking, the Credit Anstalt Crisis, and a few others.

Second, I haven't been writing because I was too busy getting married and going on a honeymoon! I married Joe Binder on December 1 at Christ the King Catholic Church in Dallas, TX. Then we drove across the Southwest for our honeymoon, stopping at a lot of national parks and monuments including Carlsbad Caverns, Guadalupe Mountains, White Sands, Tucson, Sedona, the Gila Cliff Dwellings, and the Grand Canyon. We feel so happy and blessed and are excited about our first Christmas together!




Thursday, September 13, 2012

International Lending with Moral Hazard and Risk of Repudiation

Yesterday I presented the paper "International Lending with Moral Hazard and Risk of Repudiation" by Andrew Atkeson (1991 Econometrica) at the Berkeley Macroeconomics Lunch. My slides are here.

The paper presents a model to explain why countries sometimes face capital outflows when they suffer an adverse macroeconomic shock. With complete markets and perfect insurance, we wouldn't see such a situation. But in the model, the markets aren't complete. The borrower is a sovereign nation, and hence can repudiate the loan. Moreover, the lender cannot observe whether the borrower invests the loaned funds productively or uses them for consumption. The borrower can be incentivized to invest some of the funds in equilibrium only if insurance is incomplete, so that the borrower's future utility depends in part on the investment choice. Thus, moral hazard creates an insurance-incentives tradeoff.
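Here is a stylized way to state the mechanism (my own paraphrase, not Atkeson's notation): the borrower splits resources between consumption c and investment i, where investment only shifts the distribution of next-period output y, and the contract promises continuation utility V(y).

```latex
% Incentive compatibility: the equilibrium choice (c, i) must satisfy
u(c) + \mathbb{E}\big[V(y) \mid i\big] \ \ge\ u(c') + \mathbb{E}\big[V(y) \mid i'\big]
\quad \text{for all } (c', i') \text{ with } c' + i' = c + i
% Under full insurance, V(y) is constant in y, so the right-hand side
% is maximized by setting i' = 0 and consuming everything. Positive
% investment is incentive-compatible only if V(y) varies with y, i.e.,
% only if insurance is incomplete.
```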

While preparing for the presentation, I came across an interesting paper by Drelichman and Voth (2011) called "Lending to the Borrower from Hell: Debt and Default in the Age of Philip II." Here's the abstract:

What sustained borrowing without third-party enforcement in the early days of sovereign lending? Philip II of Spain accumulated towering debts while stopping all payments to his lenders four times. How could the sovereign borrow much and default often? We argue that bankers’ ability to cut off Philip II’s access to smoothing services was key. A form of syndicated lending created cohesion among his Genoese bankers. As a result, lending moratoria were sustained through a ‘cheat-the-cheater’ mechanism. Our article thus lends empirical support to a recent literature that emphasises the role of bankers’ incentives for continued sovereign borrowing.

Monday, September 10, 2012

Notes on the Princeton Initiative on Macro, Money, and Finance


The Princeton Initiative on Macro, Money, and Finance was a fantastic event. I flew from Oakland through Houston and into Newark on Thursday night and arrived at the Nassau Inn just after midnight. It is a nice and conveniently located hotel. On Friday morning, September 7, I left the Inn around 7 a.m. with my roommate, a finance student at the Haas School at Berkeley. For us Californians, it felt like 4 a.m., but luckily plenty of coffee was provided at breakfast in the gorgeous chemistry building on campus.

The day began with a lecture by Markus Brunnermeier called "A Brief History of Macroeconomics." Interestingly, macro and finance developed along separate paths, often expressing similar notions in different languages. Now, it seems, they are converging. This lesson on history seemed equally a lesson on the future-- a future in which finance and macro are studied in a more unified framework and financial frictions are taken more seriously. This conference urged us future economists to consider going in that direction.

Professor Brunnermeier continued with a lecture on "Liquidity concepts: amplification, persistence, and asymmetry." He gave a convincing argument that liquidity risk is a more fundamental concern than maturity risk. There are three types of liquidity. On the asset side, there is technological liquidity, referring to the reversibility of investment in physical capital, and market liquidity, referring to the specificity of the capital (if the second-best use of the capital is still nearly as good as the best use, market liquidity is high). On the liability side, funding liquidity is tied to the maturity structure of debt. We saw many times in the lecture how different classic and newer papers have incorporated one or more types of illiquidity. For example, technological illiquidity is represented by capital adjustment costs, or in the extreme case by a fixed capital stock.

Another interesting concept he discussed was the "volatility paradox." An interesting implication of some of his models is that systemic risk can build up in times of low volatility, in the form of bubbles or imbalances. Then, once a crisis hits, this risk that has stealthily built up in the background results in direct and indirect spillover effects. Direct spillovers come from the direct contractual interconnectedness of the financial sector; they have been studied a lot and found to be important, but not enough to account for the severity of crises. Indirect spillovers, like fire-sale externalities, credit crunches, and liquidity spirals, are as yet less studied, but seem much more dramatic. The thing about these effects is that they are a "general equilibrium phenomenon": you cannot simply detect them in the data unless you have a model (and it must be a dynamic model), because there is not a simple story of cause and effect. All agents are optimizing across time and taking other agents' optimizations into account.

It was extremely helpful to get an overview of the broad similarities and key differences between the important papers in the financial frictions literature: Townsend 1979, Bernanke and Gertler 1989, Carlstrom and Fuerst 1997, Kiyotaki and Moore 1997, and a number of others. I had seen most of these papers in previous classes, but seeing how they fit together and how the field has progressed in a logical order was enlightening.

Professor Yuliy Sannikov gave the pre- and post-lunch lectures on "Heterogeneous agent models with financial frictions: a continuous time approach." The reason to study heterogeneous agent models is that with incomplete markets, the distribution of wealth matters (because agents cannot fully insure themselves). Non-economists may find it strange to learn that economists usually assume that the economy can be modeled as if there were a single representative consumer. With no financial frictions, this is perfectly reasonable. Moreover, it comes from one of the most powerful results in macroeconomics--if agents have access to complete markets, they will trade securities in a way that smooths consumption across possible future states of the world. This then makes the math of "aggregating" agents very nice; you can just act as if they are a single agent. I can't emphasize enough how crucially most macroeconomic models rely on this result. This is precisely why, if financial frictions matter, they will matter in a big way.
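For non-economists (and as a reminder for the rest of us), here is the aggregation result in one line, in a standard formulation:

```latex
% With complete markets, the first-order conditions of agents i and j
% trading state-contingent claims imply, for Pareto weights \lambda:
\frac{u'(c_i(s))}{u'(c_j(s))} = \frac{\lambda_j}{\lambda_i}
\quad \text{for every state } s
% The ratio of marginal utilities is constant across states, so each
% agent's consumption is a monotone function of aggregate consumption
% alone. Individual wealth drops out, and the economy behaves as if
% there were a single representative agent.
```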

To present his recent work with Professor Brunnermeier (they call it the BruSan model), Professor Sannikov began by going over a more basic version by Basak and Cuoco (1998). A critical step in a model with heterogeneous agents is defining a state variable that characterizes the wealth distribution in the economy, and then finding the law of motion for that variable. In this model, there were two types of agents, and the state variable was just the fraction of wealth held by one of the agents. I'm not sure, with more than two types of agents, whether you could represent the wealth distribution with a single summary statistic or if you would need to use multiple state variables (maybe the number of agents minus 1?) to have high enough information content. Since I have not taken any continuous time asset pricing classes, I was grateful to get the basic model first, to see the notation and basic tools for continuous time stochastic processes. A capital dividend stream is often assumed to follow a Brownian motion process, and to characterize the laws of motion for functions of continuous variables, you can use Ito's lemma, which is kind of like using the chain rule and product rule in "regular" calculus. The "history" lecture in the morning mentioned that finance began in continuous time and macro in discrete time. Since I have mostly studied macro and barely studied finance, continuous time models are not very familiar to me. I will certainly find a textbook on stochastic calculus to browse and then take another read of BruSan.
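For reference, here is Ito's lemma in the simple one-dimensional case (standard, but I am writing it from memory, so check a textbook):

```latex
% If dX_t = \mu\,dt + \sigma\,dW_t and f(x, t) is twice differentiable,
df(X_t, t) = \Big( f_t + \mu f_x + \tfrac{1}{2}\sigma^2 f_{xx} \Big)\,dt
           + \sigma f_x\,dW_t
% The \tfrac{1}{2}\sigma^2 f_{xx} term is what the ordinary chain rule
% misses: heuristically, (dW_t)^2 = dt, so second-order terms in the
% Taylor expansion survive at first order in dt.
```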

The final lecture of the day, by Professor David Sraer, was "Financial frictions: empirical facts." Neoclassical models with complete markets and no financial frictions make some basic predictions, such as how investment should react to cash flow (in short, it shouldn't). Taking the neoclassical model as the "null hypothesis," can we find any empirical evidence that would cleanly reject the null? There have been a large number of suggestive studies but no "smoking gun." It is very hard to conclusively show that there are financial frictions, much less what type of financial frictions. That doesn't mean that they don't exist-- it seems, anecdotally and theoretically and even intuitively, that they exist and matter hugely; they are just very tricky to identify. As I mentioned, we are trying to study general equilibrium effects. A major impediment is endogeneity. Some of the papers that Professor Sraer discussed used difference-in-differences or even triple differences specifications to try to alleviate the identification problem. I wonder if using such specifications can ever be more than just suggestive. What would it take to be convincing?
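For readers who haven't seen one, a generic difference-in-differences specification looks like this (illustrative only, not the exact regressions from the lecture):

```latex
y_{it} = \alpha_i + \gamma_t + \delta\,(\mathrm{Treat}_i \times \mathrm{Post}_t) + \varepsilon_{it}
% \alpha_i and \gamma_t are unit and time fixed effects; \delta is the
% DiD estimate of the treatment effect. A triple difference adds a
% further interaction (say, with an exposure indicator) to net out
% group-specific trends. Identification still rests on an untestable
% parallel-trends assumption, hence my question about suggestiveness.
```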

After the Friday lectures we had a barbecue at the Bendheim Center for Finance. At dinner, like at the other meals, it was great fun to meet the other students from economics departments and business schools across the country (and even a few from schools in Europe). I am always curious about what the graduate student experience is like in other places, and it is also neat to hear stories about what certain professors (whom I will leave unnamed) are like in person. We also got to visit a bar in downtown Princeton. The downtown Princeton street bordering campus is populated by J. Crew, Banana Republic, Ugg Boots, and Ann Taylor shops, in lieu of the tie-dye t-shirt stands on Telegraph by the Berkeley campus. The bar in downtown Princeton curiously--almost eerily--played no music. But that did facilitate more (relatively non-econ-related) conversation.

On Saturday we heard the first presentation of the latest BruSan paper on the redistributive impact of monetary policy. It sounds like a simple idea but is really quite profound. In New Keynesian models, the reason monetary policy has an effect is price stickiness. Monetary policy works through its ability to alter price setting or minimize price distortions. In BruSan's so-called "I theory," monetary policy has an effect through its ability to change the distribution of wealth. The I stands for intermediaries or for inside money. To me it seems so obvious that monetary policy affects different people differently, and that this matters, that I could hardly believe this was something new. I think if you asked the average person on the street what they thought about monetary policy, they would complain about it being unfair in some way or another, and redistribution always seems unfair to a lot of people. But in the representative agent paradigm, this effect does not exist, because there are not different agents!

I am still pretty surprised that awareness of the redistributive impact of monetary policy was not high enough to convince people to study heterogeneous agent models more intensively much earlier. I guess it took the crisis and associated nonconventional monetary policies to drive it home. And adding agent heterogeneity, which we must do if we take financial frictions seriously, is hard! The analytical tools are only in the early stages of development. Again, to give non-economists an idea, at this point a "heterogeneous agent" model may well mean you have two agents instead of one. But that makes it way more than twice as hard. And there are so many possible ways of introducing heterogeneous agents that it can be daunting. To make your analysis tractable, you have to make a lot of simplifying assumptions, but you need to make them carefully so that the analysis still has some hope of being meaningful. Surely, the techniques that we today consider pretty easy and straightforward were also considered hard and daunting in their early stages. I am confident that the toolkit for heterogeneous agent models will develop significantly and probably rapidly (both because it is so highly demanded and because--from what I saw at Princeton--so many smart people are eager to work on it). As the toolkit progresses, the I Theory of Monetary Policy seems one of the most natural and important applications. Moreover, it will allow us to analyze the interactions between financial stability and price stability.

The next lecturer, Professor Ben Moll, did give us some helpful suggestions for making life easier with heterogeneous agent models. First and most obvious is to give agents log utility, so that consumption is a constant fraction of wealth. Professor Sannikov also showed us earlier why log utility implies that the Sharpe ratio is equal to the volatility of wealth. Professor Moll also recommended using continuous time stochastic processes, particularly when your model has persistent shocks, and using constant returns to scale production functions. Finally, he told us a useful equivalence result. You can either assume that firms own and accumulate capital, issue debt, and face collateral constraints, or you can assume that firms rent capital and face a rental limit. The results are equivalent, but the rental formulation may make the model more tractable.
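To record the log-utility results he referenced (standard results, sketched from memory):

```latex
% With log utility and subjective discount rate \rho, optimal
% consumption is a constant fraction of wealth, regardless of the
% investment opportunity set:
c_t = \rho\, W_t
% More generally, with CRRA coefficient \gamma the Sharpe ratio of the
% agent's portfolio equals \gamma times the volatility of wealth; log
% utility is \gamma = 1, giving Sharpe ratio = volatility of wealth.
```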

Professor Moll discussed "Productivity losses from financial frictions." He made a point worth repeating: differences in income between countries are much larger than differences in income across the business cycle in a given country. So arguably it is more important to study why there are cross-country income differences than to study business cycle fluctuations. A lot of studies have concluded that differences in capital between countries are not nearly enough to explain differences in income. The main explanation is differences in total factor productivity (TFP), which basically means differences in how effectively the economy turns inputs into outputs. Measured TFP is the so-called Solow residual, which is a euphemism for "what we (economists) don't know is going on." (Maybe that's not 100% fair. Someone should correct me if not.) Residuals, by construction, are the unexplained. To explain the unexplained, one entry point is financial frictions. This is the path Professor Moll takes. Financial frictions can lead to capital misallocation, i.e. putting capital to less than its best uses, which can lead to TFP losses.
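Concretely, with a Cobb-Douglas technology, measured TFP is literally a residual:

```latex
% Production function:
Y = A\,K^{\alpha}L^{1-\alpha}
% Back out (the log of) TFP from measured output, capital, and labor:
\log A = \log Y - \alpha \log K - (1-\alpha)\log L
% Anything that affects output but is not measured capital or labor,
% including capital sitting in the wrong hands, ends up in A.
```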

How do financial frictions lead to capital misallocation? In the model he presents, agents are entrepreneurs who are heterogeneous in their productivity and wealth. The financial friction is a collateral constraint. They can borrow up to a certain multiple of their wealth; that multiple represents the quality of financial markets. Depending on their productivity and how much they can borrow, it may or may not be profitable for them to undertake their project. It turns out that there is a productivity cutoff for being an active entrepreneur. Measured TFP for the whole economy depends on the cutoff, which depends on the quality of financial markets. My housemates are development economists. I want to ask them how they think about TFP differences between countries and the role of finance.
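To see the mechanism numerically, here is a toy sketch in Python (my own illustration, not Professor Moll's model: linear technology, exogenous wealth, and capital allocated greedily to the most productive entrepreneurs subject to the collateral constraint k ≤ λw):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy economy: entrepreneurs differ in productivity z and wealth w,
# drawn independently for simplicity.
n = 100_000
z = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # productivity
w = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # wealth

K = w.sum()  # aggregate capital stock to allocate

def measured_tfp(lam):
    """Allocate K to the most productive entrepreneurs first, subject
    to the collateral constraint k_i <= lam * w_i; return Y / K."""
    order = np.argsort(z)[::-1]               # most productive first
    cap = lam * w[order]                       # borrowing capacity
    already = np.concatenate(([0.0], np.cumsum(cap)[:-1]))
    k = np.minimum(cap, np.maximum(K - already, 0.0))
    return (z[order] * k).sum() / K            # y_i = z_i * k_i

for lam in [1.0, 2.0, 5.0, 50.0]:
    print(f"lambda = {lam:5.1f}  ->  measured TFP = {measured_tfp(lam):.3f}")
```

With λ = 1 everyone self-finances, so capital is spread across low- and high-productivity entrepreneurs; as λ grows, capital concentrates with the most productive and measured TFP rises, even though technology and the capital stock are unchanged.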

The next Saturday lecture was "Bubbles and crashes," by Professor Brunnermeier. I learned the Abreu and Brunnermeier model of rational bubbles in a first year class, but what didn't stand out to me until this lecture is the following. Normally, backwards induction arguments rule out the possibility of a rational bubble. But such backwards induction requires common knowledge. The fact that agents are sequentially made aware of mispricing means that they eventually have mutual knowledge of first order, then of second order, and so on, but common knowledge is mutual knowledge of infinite order, which doesn't happen in finite time. Game theory and information theory are taught as segments of our first year microeconomics sequence, but they are important in macroeconomic models with financial frictions.

The final Saturday lecture was "A welfare criterion for models with distorted beliefs," by Professor Wei Xiong. He opened with a funny anecdote. Supposedly, economists Stiglitz and Wilson made a bet of $100 about whether a pillow was stuffed with natural down or synthetic fiber. One believed the probability of down was 10%, the other thought 90%. To find out who won the bet, they had to cut open (and destroy) the pillow, and they agreed to split the $50 replacement cost. Both economists had an expected value of $55 for the bet (see the arithmetic below), so they both happily agreed to it. But the bet had a negative net value! Whatever happened would result in a transfer payment of $100 from one economist to the other, and the destruction of a pillow worth $50! Bets between economists have high pedagogical and entertainment value. Maybe other people find less destructive ways to entertain themselves. But there are many more common trades that result from heterogeneous beliefs and have negative net value. A social planner should be able to make people "better off," but the question is, what beliefs should the planner use to evaluate welfare? Xiong introduces a belief-neutral welfare criterion. The set of reasonable beliefs is the set of convex combinations of agents' beliefs (in the pillow example, any probability between 10% and 90% that the pillow is down is a reasonable belief). A choice is called (in)efficient if it is Pareto (in)efficient evaluated using any reasonable belief. Note that a choice may be neither efficient nor inefficient under this definition. I really enjoyed this lecture, and wonder if I should attempt to do research that is more decision-theoretic. I have realized that I tend to be drawn toward topics with a common thread of uncertainty, belief formation, information processing, and ambiguity.
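For the record, the expected-value arithmetic (using the numbers as told in the lecture):

```latex
% The economist who believes P(down) = 0.9 bets $100 on down and owes
% $25 as his half of the $50 pillow:
0.9 \times (+\$100) + 0.1 \times (-\$100) - \$25 = \$80 - \$25 = \$55
% The 10% believer, betting on synthetic, computes the same $55 by
% symmetry. But the realized outcome is always a $100 transfer from one
% to the other plus a destroyed $50 pillow: the bet is negative-sum.
```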

On Sunday morning I woke up early enough to go running on campus and on a nice crushed-gravel waterside path. The morning lecture was given by Professor Chris Sims on the "Fiscal Theory of the Price Level." More aptly, the lecture was what he called a "metafiscal" theory of the price level, talking about the model from above and outside of it more than really going into it. This is what I love most, to be honest. I would be a metaeconomist if that were an option, since I spend much more time thinking about economics (in the meta sense) than thinking about the economy, and with more passion. I am the kind of person who likes the Introduction chapter of books best of all, especially if it goes way into why and how the author decided to write the book. Not too productive at this stage in my career, I know. As a macro student, literally hundreds of times I have taken this little step where you have a one-period budget constraint (for the government, say) and you sum it up over all periods (yes, back in discrete time!) and appeal to a transversality condition to derive an intertemporal budget constraint. The transversality condition is a restriction on asset value as time goes to infinity(!) and all professors have a slightly different way of explaining it to students. I understood the gist of why, economically and theoretically, it is needed, but sometimes its implications did seem slightly unsettling. I think Professor Sims' explanation got to the heart of why it sometimes seemed odd. The single-period constraint is an accounting identity, but the TVC is NOT. It is an equilibrium condition. Professor Sims also argued that conventional ways of thinking about the independence of monetary policymakers and fiscal policymakers should be reconsidered.
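To spell out the step I mean, with a constant interest rate r for simplicity (a sketch of the standard textbook derivation, not Professor Sims' notation):

```latex
% One-period government budget constraint: debt grows with interest and
% the primary deficit:
B_{t+1} = (1+r)B_t + G_t - T_t
% Solve for B_t and iterate forward T periods:
B_t = \sum_{s=t}^{T} \frac{T_s - G_s}{(1+r)^{s-t+1}} + \frac{B_{T+1}}{(1+r)^{T-t+1}}
% Imposing the transversality condition
\lim_{T\to\infty} \frac{B_{T+1}}{(1+r)^{T-t+1}} = 0
% kills the last term and yields the intertemporal budget constraint:
% outstanding debt equals the present value of future primary surpluses.
% Each one-period constraint is an accounting identity; the TVC is not.
```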

Professors Nobuhiro Kiyotaki and Atif Mian gave the final lectures of the conference. Anyone who is familiar with the Kiyotaki and Moore model of credit cycles, now part of the canon for first year grad students, may be interested in Professor Kiyotaki's newer model of banking, liquidity and bank runs. I took a course on Empirical Macro Finance with Professor Mian last semester that greatly sharpened my interest in the intersection of macroeconomics and finance and taught me a lot about empirical identification strategies.

One thing I didn't realize before the Initiative is that studying economic history is not so common at non-UC schools. When I mentioned that economic history is one of my fields, people often thought I meant history of economic thought. And though the Initiative did not have any explicit focus on economic history, nearly every lecture brought to mind historical episodes I have studied. An absolute must-read, in my opinion, is "Finance Capitalism and Germany's Rise to Industrial Power" by Caroline Fohlin. To study financial frictions, it seems quite useful to study earlier stages of financial development. It is not immediately apparent whether less developed financial systems would have more or less severe financial frictions. There have likely been complicated interactions between technological and financial innovation and economic development over the centuries, with some innovations alleviating certain frictions and exacerbating others. My colleague Glenda Oskar, on the job market this year, is researching 19th century capital markets. She looks in particular at a California statute passed in 1861 that granted mining companies the right to levy assessments (like negative dividends) on existing shareholders. She has, impressively, collected a dataset that allows her to study when and under what types of conditions different mining companies exercised this privilege. How did financial frictions in that era compare to today, and what can that teach us about their impact?

Closely related to economic history is the economics of institutions, also much emphasized at Berkeley. Financial frictions depend on the "rules of the game," which realistically are neither exogenous nor static. For now, most models with financial frictions implicitly take the institutional arrangements as given. However, both positive and normative analysis will benefit if we eventually bring institutions into the model. Many policy changes in response to crises are actually institutional changes.

The Princeton Initiative gave me a lot of interesting ideas to think about. I am grateful to everyone who made it possible. 

Introduction to the Quantitative Ease Blog

Welcome to the Quantitative Ease blog. I am a graduate student in economics at UC Berkeley and study macroeconomics and economic history. I have been blogging about a variety of topics for the past few years and now want to start a new blog with an economics focus. The following posts from my old blog may be of interest: