
Interesting papers at the NBER Summer Institute 2016

Check out the program or this non-representative pick of papers that caught my eye:

  • In the Corporate Finance session we get a piece by Atif Mian, Amir Sufi and Emil Verner:

    An increase in the household debt to GDP ratio in the medium run predicts lower subsequent GDP growth, higher unemployment, and negative growth forecasting errors in a panel of 30 countries from 1960 to 2012.

  • Also this (pdf) paper by Sydney Ludvigson, Sai Ma and Serena Ng in the Forecasting & Empirical Methods session looks good:

    […] we find that sharply higher uncertainty about real economic activity in recessions is fully an endogenous response to other shocks that cause business cycle fluctuations, while uncertainty about financial markets is a likely source of the fluctuations.

  • This piece by Andreas Fagereng, Luigi Guiso, Davide Malacrino and Luigi Pistaferri in the Aggregate Implications of Micro Consumption Behavior session is interesting:

    Third, returns are positively correlated with wealth. Fourth, returns have an individual permanent component that explains almost 20% of the variation.

  • Matthew Gentzkow, Jesse Shapiro and Matt Taddy present a paper in the Political Economy session:

    Partisanship [in US politics] was low and roughly constant from 1873 to the early 1990s, then increased dramatically in subsequent years.

Claudia Sahm offers good comments:

Recessions are almost by definition a time of instability, and it is hard to trace down the roots of instability in models that largely assume it away. I am a big fan of belief shocks, I don’t think we can fully understand recession/recovery without appealing to shifts in expectations. And yet, I have no idea how you cleanly, credibly separate beliefs from credit supply.

Simple example of how to write unit tests in Matlab

A unit test is a little program that checks if some part (or unit) of your code works as expected. What arguments are there for bothering to write such tests?

  • You find bugs more quickly
  • It’s reassuring to run your tests first when you return to code you haven’t touched in a while
  • You can check whether anything breaks when you change your code
  • Writing tests also nudges you to keep functions small, as it’s more difficult to test functions that have many input arguments.

Matlab has shipped with a unit testing framework since version R2013a. (See here if you’re using an earlier version.)

I didn’t find the existing examples of how to use it easy to follow, so here’s an explanation of how to test one individual function.

You can find all the code here.

Say we have a function add_one.m we want to test:

function y = add_one(x)
    y = x + 1;
end

For our unit test, we then write an additional file, which we have to name so that it either starts or ends with test. So here’s the new file test_add_one.m:

function tests = test_add_one
    tests = functiontests(localfunctions);
end

function test_normal1(testCase)
    x = 1;
    actSol = add_one(x);
    expSol = 2;
    verifyEqual(testCase, actSol, expSol)
end

The first three lines are always required and we only need to change the function name to match the name of the file.

The following function test_normal1 is our first test case. We will pass in the value x = 1 and check that the result is indeed 2.

So now go to the Matlab command line and run:

>> run(test_add_one);

Which returns:

Running test_add_one
.
Done test_add_one

There’ll be a dot for every test case in this file. In this case everything worked fine, but we would get a detailed error message if a test had failed.
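
Alternatively, you can pass the file name to runtests, which collects and runs the same tests and returns an array of results (a minimal sketch):

results = runtests('test_add_one');
disp(results)   % shows how many tests passed or failed and the testing time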

So let’s add some more tests:

function test_normal_pi(testCase)
    x = pi;
    actSol = add_one(x);
    expSol = pi + 1;
    verifyEqual(testCase, actSol, expSol)
end

function test_complexNr(testCase)
    x = 3 + 1i;
    actSol = add_one(x);
    expSol = 4 + 1i;
    verifyEqual(testCase, actSol, expSol)
end

function test_negative(testCase)
    x = -5;
    actSol = add_one(x);
    expSol = -4;
    verifyEqual(testCase, actSol, expSol)
end

function test_matrix(testCase)
    x = [1, 2; 3, 4];
    actSol = add_one(x);
    expSol = [2, 3; 4, 5];
    verifyEqual(testCase, actSol, expSol)
end

It’s a good idea to give the test functions meaningful names, so that when a test fails we know where things went wrong. Don’t worry if the names get really long; they only live in this file anyway.

The tricky thing is to think of the irregular ways the function might be used. For example, the following tests check that we get the right output even if we pass in an empty matrix or a NaN value:

function test_empty(testCase)
    x = [];
    y = add_one(x);
    actSol = +isempty(y); % plus converts logical to double
    expSol = 1;
    verifyEqual(testCase, actSol, expSol)
end

function test_nan(testCase)
    x = nan;
    y = add_one(x);
    actSol = +isnan(y); % plus converts logical to double
    expSol = 1;
    verifyEqual(testCase, actSol, expSol)
end

Now let’s give the function something where we would expect an error. If we pass the function a string 'Hello world', it returns a numerical vector. That’s not what we want, so let’s add

assert(isnumeric(x), 'Input must be numeric.')

at the top of our add_one.m function. Now it fails if the input is not numeric.
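
The function then looks like this:

function y = add_one(x)
    assert(isnumeric(x), 'Input must be numeric.')
    y = x + 1;
end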

The following test case then checks that an error is indeed thrown:

function test_stringError(testCase)
    x = 'Hello world';
    
    try
        y = add_one(x);
        actSol = 0;
    catch
        actSol = 1;
    end        
    
    expSol = 1;
    verifyEqual(testCase, actSol, expSol)
end

I use try-catch here to check if the function throws an error. There might be better ways to do this, but this works for me.
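
One alternative might be the framework’s verifyError function. Here’s a minimal sketch, assuming we change the assert in add_one.m to assert(isnumeric(x), 'add_one:inputNotNumeric', 'Input must be numeric.') so that the error gets an identifier (the identifier and the test name are just placeholders I picked):

function test_stringError2(testCase)
    % verifyError takes a function handle and the expected error identifier
    verifyError(testCase, @() add_one('Hello world'), 'add_one:inputNotNumeric')
end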

But we don’t always have to check that results are exactly equal; sometimes we just want to make sure that the difference is below some numerical threshold. In this case, calculate the absolute or relative error as actDiff and check that it’s less than some acceptable error like this:

verifyLessThan(testCase, actDiff, 1e-10)
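
As a sketch, such a test might look as follows (the numbers are made up for illustration; verifyEqual also has 'AbsTol' and 'RelTol' options for the same purpose):

function test_tolerance(testCase)
    actSol = add_one(0.1) + add_one(0.2);
    expSol = 2.3;
    actDiff = abs(actSol - expSol);   % absolute error
    verifyLessThan(testCase, actDiff, 1e-10)
end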

One thing I lack so far is a way to test local functions, that is, functions that you define within some other function and which only that function can use.

So that’s it. If somebody has ideas for improvements, please let me know!


Eugene Fama and Richard Thaler debate efficient markets

Text and video.

Fama: [The efficient-market hypothesis is] a model, so it’s not completely true. […] The question is: “For what purposes are they good approximations?” As far as I’m concerned, they’re good approximations for almost every purpose. I don’t know any investors who shouldn’t act as if markets are efficient. […]

Thaler: For the first part—can you beat the market—we are in virtually complete agreement.

Fama also cites Daniel Kahneman as recommending that people invest in ETFs.

Also, Fama doesn’t think governments or central banks should step in to deflate asset market bubbles:

Fama: We disagree about whether policy makers are likely to get it right, though. On balance, I think they are likely to cause more harm than good.

Thaler argues that the rational model is how people should behave, but it’s not how they do behave. And if you want to predict how people act, you have to take that into account:

Thaler: I believe the rational model, and I think that a lot of people screw it up, and that we can build richer models with a better predictive power if we include the way people actually behave as opposed to [the behavior of] fictional “Econs” that are super smart and have no self-control problems.


Collected links

  1. Interesting thoughts by Chris Blattman:

    But as I read the story, I couldn’t help but think that it’s that smugness that makes half the country hate the Times audience and want to vote for a man like Trump.

    The so-called liberals of New York (like me) who push for equal rights with one hand while pushing their kids to private schools with the other. Or support more open borders on principle, failing to mention that it lowers the cost of their house help without threatening their own jobs.

  2. Philip Tetlock, the author of “Superforecasting”, has lunch with the FT [source: MR]:

    Tetlock sees a division of intellectual labour, where Martin Ford and his ilk shape interesting hypotheses and more cautious and statistically minded people break them into smaller, testable pieces.

    His politically mild background is important, as it turns out. His work has taught him that everyone takes a heavy ideological endowment from their environment.

    But Tetlock’s belief in the possibility of a more rational world seems, happily, to be the only one that is not open to revision in the face of contrary evidence.

  3. Nathan Lane points to this history by Richard Becker of the S programming language, on which R is based. See 28:12 for the history of why R uses the assignment operator <- rather than the = of other languages. Spoiler: their keyboards had a key with an arrow.

  4. Sci-hub is back up:

    The ethics are quite clear…

  5. Andrew Gelman and David Rothschild on why political prediction markets are performing worse than expected:

    But more recently, prediction markets have developed an odd sort of problem. There seems to be a feedback mechanism now whereby the betting-market odds reify themselves.

Do the programming languages we use influence how we think?

This article argues that the programming languages people or organizations use might influence their thinking and culture.

I’m reminded of the discussion in linguistics on the idea that language structure might shape how we think. This idea is controversial and Steven Pinker, who’s skeptical of it, writes:

And supposedly there is a scientific basis for these assumptions: the famous Sapir-Whorf hypothesis of linguistic determinism, stating that people’s thoughts are determined by the categories made available by their language, and its weaker version, linguistic relativity, stating that differences among languages cause differences in the thoughts of their speakers. […]

But it is wrong, all wrong. The idea that thought is the same thing as language is an example of what can be called a conventional absurdity: a statement that goes against all common sense but that everyone believes because they dimly recall having heard it somewhere and because it is so pregnant with implications. (p57, “The Language Instinct”)

This idea has made inroads into economic research. Keith Chen argues, for example, that how languages encode references to the future has implications for how people think about the future. A German speaker who can say “Ich gehe morgen in die Kirche” (literally: “I go tomorrow to church”) without adding any explicit grammatical marker for the future tense would thus be more patient and save more for the future.

Whether or not the Sapir-Whorf hypothesis makes sense for natural languages, might there be something to it for programming languages? In the article we read:

If you want to know why Facebook looks and works the way it does and what kinds of things it can do for and to us next, you need to know something about PHP, the programming language Mark Zuckerberg built it with.

Among programmers, PHP is perhaps the least respected of all programming languages.

                                    […]

You wouldn’t have built Google in PHP, because Google, to become Google, needed to do exactly one thing very well – it needed search to be spare and fast and meticulously well engineered. It was made with more refined and powerful languages, such as Java and C++. Facebook, by contrast, is a bazaar of small experiments, a smorgasbord of buttons, feeds, and gizmos trying to capture your attention. PHP is made for cooking up features quickly.

However, people and organizations get to choose which programming language to use, unlike natural languages, over which you have little say.

But people don’t switch immediately when there’s a better language available. There’s a lot of sluggishness in how organizations change their systems. And if everybody else in your area of work uses some language, then you probably stick to it as well. And the structure and capabilities of that language might then shape how people think about problems.

This idea is actually mentioned in the good Wikipedia article on this topic, which also refers to this Paul Graham essay in which he describes Lisp as his secret weapon.

But there’s also another viewpoint on this. The original article goes on to introduce OCaml, an exotic functional programming language used by the hedge fund Jane Street. Such a language demands more of the people who use it, but it makes it easier to ensure the correctness of programs.

The culture of competitive intelligence and the use of a fancy programming language seem to go hand in hand.

So what if it’s mostly about signaling? Like the tough questions in consulting interviews or the intelligence tests in investment bank applications, which might hold little predictive value for how good somebody will be at their job, but instead serve as a marketing tool to convince new hires that one must be really clever to work at this firm.

Maybe the reason some people are so much more productive is not the programming languages they use, and they’d be similarly productive in other environments.


Do as I did

When I was finishing school, I went to two career events in my town. In both, a group of stately men sat at tables scattered around the room and we went from table to table and asked them about their professions.

One of them was a former manager who had just started an executive search firm. One was an economist and senior member of the Bundesbank. One was a lawyer working as a lobbyist at the European institutions in Brussels. One was a computer scientist working as a management consultant. Several others were business executives.

And they all said variations of the same: “Do as I did.”

  • “Computer science is the best way to learn how to think in a procedural way.”

  • “If you want a good career with a 100,000 euro starting salary, you have to study law.”

  • “Only studying economics can teach you where phenomena like inflation come from.”

  • “I was the president of the student organization in Berkeley and that was very important in my career. These extra-curricular activities are very valuable.”

The only types that didn’t say this were the people who had studied business and management. Instead they said:

  • “You could also study something like aeronautic engineering.”

  • “You could backpack around Asia.”

A lot of the advice is good, but the most important thing I learned was this: There’s a limit on the breadth of career advice somebody is able to give, as most people can only really pass judgment on the decisions they themselves made. They post-rationalize their choices and try to get you to follow the same path.


Keeping records

I was thinking that few of us actually keep records of our written conversations. But then I remembered Stephen Wolfram’s “The Personal Analytics of My Life”:

I actually assumed lots of other people were [collecting personal data] too, but apparently they were not. And so now I have what is probably one of the world’s largest collections of personal data.

I have a complete archive of all my email going back to 1989.

Check out the figures.

Collected links

  1. Nature on the IPython notebook.

  2. FRED Adds 1,993 Banking and Monetary Statistics Series. From 1914 to 1941, that is.

  3. Good post by Ricardo Hausmann on group identity.

  4. Ben Bernanke: “How do people really feel about the economy?”:

    In summary, the University of Michigan’s survey of consumer attitudes has shown a normal cyclical pattern of improvement in recent years, both in how people feel about their own economic prospects and in their expectations for the economy as a whole. In contrast, measures of the national “mood,” like Gallup’s “way things are going” question or questions about the “direction of the country,” show a high level of dissatisfaction.

    To an increasing extent, Americans are self-selecting into non-overlapping communities (real and virtual) of differing demographics and ideologies, served by a fragmented and partisan media.

  5. Corpus-based judicial opinions.

  6. Paul Krugman reviews the book by the former Bank of England Governor Mervyn King [source: The Browser]:

    In fact, King not-so-subtly mocks the authors of such books, which “share the same invisible subtitle: ‘how I saved the world.’”

    […] it is mainly an extended meditation on monetary theory and the methodology of economics.

    The more or less standard account of the 2008 crisis, which King shares, is that the combination of stability-fostered complacency and deregulation led to an accumulation of financial vulnerabilities. Private debt was on a steady upward trend before the crisis, […].

    People cope with this uncertainty by settling on “narratives” that are conventionally accepted at any given moment, but can suddenly change.

"What Is Code?", by Paul Ford

One of my favorite long-reads last year was “What Is Code?” by Paul Ford (emphasis added):

Your diligent decentralized team frequently writes new code that runs on the servers. So here’s a problem: What’s the best way to get that code onto those 50 computers? Click and drag with your mouse? God, no. What are you, an animal?

And that’s why everyone gets excited about GitHub. You should go to GitHub, you really should.

How Do You Pick a Programming Language? […] These are different problems. What do we need to do, how many times do we need to do it, and what existing code can we use to help us do it that many times? Ask those questions.

This is why the choice [of a programming language] is so hard. Everything can do everything, and people will tell you that you should use everything to do everything.

