
Cluster computing in economics

As economic models become more complex and capture more of the heterogeneity among households and firms in the economies we try to explain, the computational burden of solving these models grows exponentially: each additional state variable multiplies the number of points at which the model must be solved. In the profession, this is called the curse of dimensionality.
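To see the curse of dimensionality concretely, here is a small sketch (the function name is ours, purely for illustration) of how the node count of a tensor-product grid explodes as state variables are added:

```python
# Illustrative only: size of a tensor-product grid as the number of
# state variables (dimensions) grows, holding nodes-per-dimension fixed.

def grid_points(n_per_dim, n_dims):
    """Total nodes in a tensor-product grid with n_per_dim nodes per dimension."""
    return n_per_dim ** n_dims

# With 100 nodes per state variable, each added dimension multiplies the work:
for d in range(1, 5):
    print(d, grid_points(100, d))  # 100, 10000, 1000000, 100000000
```

A model with one asset state is cheap; add idiosyncratic productivity, firm capital, and an aggregate state, and the same per-dimension resolution requires a hundred million grid points.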

In response to the growing computation times these models demand, cluster computing and parallel processing are becoming important skills for economists. A new resource for economists in this area has just been made available to the public, thanks to a National Science Foundation-funded joint project of Russell Cooper (Univ. of Texas at Austin), Kim Ruhl (NYU Stern), and the Federal Reserve Bank of Kansas City. The website is www.clustereconomics.org.
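Much of the parallelism in economics is "embarrassingly parallel": the problems of many households or firms can be solved independently and farmed out to separate processes. A minimal sketch of that pattern, using Python's standard `multiprocessing` module (the `solve_household` function here is a hypothetical stand-in for a real solver):

```python
# Sketch of the embarrassingly parallel pattern common in heterogeneous-agent
# models: solve many independent household problems across worker processes.
from multiprocessing import Pool

def solve_household(wealth):
    # Placeholder for a real optimization over the household's problem;
    # here, a toy "value" of holding this level of wealth.
    return wealth ** 0.5

if __name__ == "__main__":
    states = [1.0, 4.0, 9.0, 16.0]  # household wealth levels
    with Pool(processes=4) as pool:
        values = pool.map(solve_household, states)  # distributed across cores
    print(values)  # same answer as a serial map, computed in parallel
```

On a cluster, the same logic scales up with MPI or a job scheduler, but the structure (identical independent subproblems, results gathered at the end) is unchanged.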
For economists who use numerical methods to compute solutions to complex problems, this one is a treat: a post by Peter Norvig, currently the Director of Research at Google, Inc., and formerly a chief computational scientist at NASA Ames Research Center, Sun Microsystems Labs, UC Berkeley, and USC. In this short article, Norvig proposes several algorithms for quickly getting to your song of choice on the iPod Shuffle, which has no display. His methods include value function iteration, policy function iteration, and a randomization algorithm, each complete with programming code and simulations testing its effectiveness.
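To give a flavor of the first of those methods (this is our own toy sketch, not Norvig's actual code), here is value function iteration on a shuffle-like problem: a circular playlist with a single "next" button, where V(s) is the number of presses needed to reach the target song from position s.

```python
# Toy value function iteration: circular playlist of n songs, one action
# ("next", cost 1 per press). V[s] converges to the number of presses
# needed to reach the target song from position s.
def presses_to_target(n, target, tol=1e-9, max_iter=10_000):
    V = [0.0] * n  # initial guess for the value function
    for _ in range(max_iter):
        # Bellman update: zero at the target, else 1 press plus the
        # value of the next position around the circle.
        V_new = [0.0 if s == target else 1.0 + V[(s + 1) % n]
                 for s in range(n)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new  # converged
        V = V_new
    return V

print(presses_to_target(5, 0))  # [0.0, 4.0, 3.0, 2.0, 1.0]
```

The same fixed-point logic, with expectations over random transitions and a maximization over richer action sets, is the workhorse behind dynamic programming solutions throughout macroeconomics.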

(Thanks to Jason for pointing me to this.)

Authors

  • Richard W. Evans is an Assistant Professor of Economics at Brigham Young University

  • Jason DeBacker is an Assistant Professor of Economics at Middle Tennessee State University

  • Kerk L. Phillips is an Associate Professor of Economics at Brigham Young University