Monday, October 21, 2019

Review: Understanding Complexity

Understanding Complexity by Scott E. Page
My rating: 5 of 5 stars

I picked this one up because Scott Page is an old college friend, and hearing his voice – not so much as a reader as a lecturer – was part of the deep pleasure of listening to it. I remember him back in his student government days with the same mix of seriousness and good humor – putting energy into the work but never taking himself too seriously.

So, biased as I am, I declare his presence, his voice, worth the price of admission.

But there’s much more here as well. I guess I could have provided a workable definition of what it means for something to be complex, but I’d never have been able to weave a basically simple concept into the playful depths that Scott manages.

The fundamental observation here is that “complexity” occupies a space between easily mappable scenarios and what seems like pure randomness. We can measure an economic exchange that involves a couple of producers and a couple of consumers. We have no hope of measuring something as unpredictable as quantum motion. In between lies something like the macroeconomic conditions we know, an area we cannot accurately predict or control but that we do have the power to influence.

Understanding that in-between, “understanding complexity,” is a dramatic new frontier in data science. It’s the child of game theory and chaos theory, suggesting that simple concepts, twined together, can produce a complex view of what we mean by complexity itself. It feels like a wonderful logic game, and it also feels like it might be the key to making our world substantially better.

I’m not doing justice either to the substance or color of Scott’s argument, but it’s stimulating throughout. He has wonderful metaphors like “dancing landscapes” and “Mt. Fujis,” and he has a knack for setting up the concepts early that he will need later.

My favorite part here, I think, is the way he demonstrates the power of agent-based model simulations. Throughout the book he shows that, when we account for the “bottom-up” phenomena of organized systems – any system where potentially countless individual actors make their own determinations that together produce a potentially predictable reaction in an interdependent whole – bizarre and wonderful things can happen. He gives the example of computer simulations in which cells light up as black or white when they meet certain conditions (such as whether their neighbors are black or white) and then produce seemingly top-down results – such as when a set of black/white binary cells produces what looks like a stick-figure creature taking steps forward.
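Scott doesn’t spell his example out in code, and I’m not claiming this is the one he uses, but a classic simulation in that spirit – Conway’s Game of Life – shows how purely local, bottom-up rules can produce a “glider” that seems to walk across the grid. A minimal Python sketch, assuming NumPy:

    import numpy as np

    def step(grid):
        # Count each cell's eight neighbors by summing shifted copies of the grid
        # (np.roll wraps around the edges, so the grid behaves like a torus).
        neighbors = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Purely local rule: a dead cell with exactly 3 live neighbors turns on;
        # a live cell with 2 or 3 live neighbors stays on; everything else turns off.
        return np.where((neighbors == 3) | ((grid == 1) & (neighbors == 2)), 1, 0)

    # Seed a 10x10 grid with a "glider" – five live cells that, generation after
    # generation, appear to take steps diagonally across the grid.
    grid = np.zeros((10, 10), dtype=int)
    for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[y, x] = 1

    for _ in range(8):
        print(grid, "\n")
        grid = step(grid)

No cell “knows” it is part of a glider; each one only checks its neighbors. Yet the pattern as a whole looks like a creature walking forward – the kind of top-down appearance emerging from bottom-up rules that Scott is describing.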

I’m rushing to get down ideas that Scott made clear in sustained fashion but that came to me in spasms of understanding. I’m not deleting this one because I am tempted already to listen to it again.

In the meantime, I feel smarter for having listened to it.


