Thomas Friedman’s 5 Pieces of Advice for His Daughters

From http://deloitte.wsj.com/cmo/2017/12/07/radically-open-tom-friedman-on-the-future-of-work:

I have five pieces of advice for my daughters. My first rule is: Always think like an immigrant, because we’re all new immigrants to the age of accelerations.

Second, always think like an artisan. Always do your job in a way that you bring so much empathy to it, so much unique, personal value-add that it cannot be automated, digitized, or outsourced, and you want to carve your initials into it at the end of the day.

Third, always be in beta. Always think of yourself as if you need to be re-engineered, retooled, relearned, and retaught constantly. Never think of yourself as finished—otherwise, you really will be finished.

Fourth, always remember that PQ (passion quotient) plus CQ (curiosity quotient) is greater than IQ (intelligence quotient). Give me a young person with a high PQ and a high CQ, and I will take that person over a kid with a high IQ seven days a week.

And last, whatever you do, whether you’re in the public sector or the private sector, whether you’re on the front lines or a manager, always think entrepreneurially. Always think, “Where can I fork off and start a new company over here, a new business over there?” Because a huge manufacturing company is not coming to your town with a 25,000-person factory. That factory is now 2,500 robots and 500 people. So we need three people starting jobs for six; six people starting jobs for 12; 12 people starting jobs for 20. That’s how we’re going to get all those jobs. We need everyone thinking entrepreneurially.

Warnings: Return of The Long Emergency


James Kunstler’s 2005 book “The Long Emergency” made a huge impression on me when I read it in 2006. In fact, it was one of the reasons I found myself pursuing a career in cloud computing in 2007. Partly thanks to this book and a former boss from British Telecom, my business partner and I were convinced that peak oil and climate change would create a huge demand for energy-efficient, carbon-neutral compute resources, and that cloud computing was the future.

The Long Emergency was primarily concerned with America’s oil addiction and ill-preparedness for what looked at the time to be the coming energy (oil) shock, but it also examined other threats to civilization:

  • Climate Change
  • Infectious diseases (microbial resistance)
  • Water scarcity
  • Habitat destruction
  • Economic instability
  • Political extremism
  • War

Every one of those is still an enormous threat.

A new book by national security veteran Richard Clarke and R.P. Eddy called “Warnings: Finding Cassandras to Stop Catastrophes” updates The Long Emergency with some new features of the threat landscape.

The book starts off by asking how we can reliably spot Cassandras – people who correctly predicted disasters but were not heeded – so that we can prevent future disasters.

They examine recent disasters – 9/11, the Challenger space shuttle disaster, Hurricane Katrina – then profile the people who predicted those events, looking for patterns. They arrive at a set of stable characteristics that let us score people on their Cassandra Quotient.

The second part of the book looks at current threats, and their doomsayers, to see if any have a high Cassandra Quotient and thus should be heeded.

The threats are:

  • Artificial Intelligence
  • Pandemic Disease
  • Sea-Level Rise
  • Nuclear Ice Age
  • The Internet of Everything
  • Meteor Strike
  • Gene Editing (CRISPR)

The bad news is that they all have high Cassandra Quotients and the scenarios in the book are plausible, science-backed and terrifying.

Artificial Intelligence as a threat has been on my radar for a year or so, thanks to Elon Musk, Bill Gates, Stephen Hawking, and Sam Harris warning of the risks of intelligent machines that can design and build ever more intelligent machines.

Pandemic Disease has worried me since reading The Long Emergency, but I thought global awareness had improved, especially since the world took the 2011 flu scare, Ebola, and Zika seriously. Unfortunately, we are – as a planet – woefully ill-prepared for a global pandemic. A high-fatality airborne flu could kill billions.

Sea-Level Rise genuinely surprised me, especially since the Cassandra in question – James Hansen – predicted the current melting and ice shelf break-offs we see in the Arctic today…30 years ago. I even googled how high my home is above sea level after being convinced we could see 7m rises within my lifetime.

As a child of the ’70s and ’80s, I have nuclear horror deeply embedded in my psyche. But I thought a Nuclear Ice Age was a pretty low risk. It turns out you do not need a large-scale nuclear exchange between the US and Russia to cause global climate chaos; a limited exchange between India and Pakistan could be sufficient to kill billions through global starvation. I was also surprised to learn that Pakistan moves its nuclear arsenal around to thwart attacks by Indian commandos in the event of a war. This raises the risk of terrorists intercepting one of these weapons on the move and using it for nuclear terrorism.

The book does a good job of examining the incredible fragility of our interconnected IT systems in the chapter on The Internet of Everything. As an IT professional I know how fragile these systems really are, and we are right to fear the dire consequences of a serious cyber war.

I do not really think about Meteor Strikes, as there is little we can do about them and they are now part of popular culture.

The final worry in the book is Gene Editing, especially CRISPR. CRISPR has absolutely marvelous potential, but it also has many people worried. Daniel Suarez even has a new book on the topic called “Change Agent”. CRISPR could be the mother of all second-order effects. Take “off-target events,” for example:

Another serious concern arises from what are known as off-target events. After its discovery, researchers found that the CRISPR/Cas9 complex sometimes bonds to and cuts the target DNA at unintended locations. Particularly when dealing with human cells, they found that sometimes as many as five nucleotides were mismatched between the guide and target DNA. What might the consequences be if a DNA segment is improperly cut and put back together? What sorts of effects could this cause, both immediately and further down the road for heritable traits? Experimenting with plants or mouse bacteria in a controlled laboratory environment is one thing, but what is the acceptable level of error if and when researchers begin experimenting with a tool that cuts up a person’s DNA? If an error is in fact made, is there any potential way to fix the mistake?

So we have planet-scale problems in need of ingenious solutions. Instead of feeling paralysis or resignation, we should accept Peter Thiel’s challenge to find the big breakthroughs – 0 to 1, intensive progress:

Progress comes in two flavors: horizontal/extensive and vertical/intensive. Horizontal or extensive progress basically means copying things that work. In one word, it means simply “globalization.” Consider what China will be like in 50 years. The safe bet is it will be a lot like the United States is now. Cities will be copied, cars will be copied, and rail systems will be copied. Maybe some steps will be skipped. But it’s copying all the same.

Vertical or intensive progress, by contrast, means doing new things. The single word for this is “technology.” Intensive progress involves going from 0 to 1 (not simply the 1 to n of globalization). We see much of our vertical progress come from places like California, and specifically Silicon Valley. But there is every reason to question whether we have enough of it. Indeed, most people seem to focus almost entirely on globalization instead of technology; speaking of “developed” versus “developing nations” is implicitly bearish about technology because it implies some convergence to the “developed” status quo. As a society, we seem to believe in a sort of technological end of history, almost by default.

It’s worth noting that globalization and technology do have some interplay; we shouldn’t falsely dichotomize them. Consider resource constraints as a 1 to n subproblem. Maybe not everyone can have a car because that would be environmentally catastrophic. If 1 to n is so blocked, only 0 to 1 solutions can help. Technological development is thus crucially important, even if all we really care about is globalization.

…Maybe we focus so much on going from 1 to n because that’s easier to do. There’s little doubt that going from 0 to 1 is qualitatively different, and almost always harder, than copying something n times. And even trying to achieve vertical, 0 to 1 progress presents the challenge of exceptionalism; any founder or inventor doing something new must wonder: am I sane? Or am I crazy?

From Blake Masters’ notes.

Effective Theory

I was reminded of the concept of Effective Theory by an article on economics by Arnold Kling. Here it is explained by Harvard physicist Lisa Randall:

Effective theory is a valuable concept when we ask how scientific theories advance, and what we mean when we say something is right or wrong. Newton’s laws work extremely well. They are sufficient to devise the path by which we can send a satellite to the far reaches of the Solar System and to construct a bridge that won’t collapse. Yet we know quantum mechanics and relativity are the deeper underlying theories. Newton’s laws are approximations that work at relatively low speeds and for large macroscopic objects. What’s more is that an effective theory tells us precisely its limitations — the conditions and values of parameters for which the theory breaks down. The laws of the effective theory succeed until we reach its limitations when these assumptions are no longer true or our measurements or requirements become increasingly precise.
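Randall’s point – that an effective theory tells us precisely where it breaks down – can be made concrete with a quick back-of-the-envelope calculation (my own illustration, not from the book). The relativistic correction factor γ = 1/√(1 − v²/c²) quantifies exactly how far Newtonian mechanics drifts from relativity at a given speed:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact SI value)

def lorentz_factor(v: float) -> float:
    """Relativistic correction factor gamma = 1 / sqrt(1 - v^2/c^2).

    Newtonian mechanics corresponds to gamma ~= 1; the size of
    (gamma - 1) measures how wrong Newton's approximation is.
    """
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A fast spacecraft travels at roughly 17 km/s. The correction to
# Newton is about one part in a billion, which is why Newton's laws
# are "effective" for plotting a satellite's path.
print(lorentz_factor(17_000.0))

# At half the speed of light the correction is roughly 15%: the
# theory has reached the limits it told us about in advance.
print(lorentz_factor(0.5 * C))
```

The numbers illustrate the provisional nature of effective theories: the same formula that certifies Newton’s laws for spacecraft navigation also flags exactly where they stop working.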

Kling writes:

Whereas the term “science” often is used to connote absolute truth in an almost religious sense, effective theory is provisional. When we are certain that in a particular context a theory will work, then and only then is the theory effective.

Effective theory consists of verifiable knowledge. To be verifiable, a finding must be arrived at by methods that are generally viewed as robust. Any researcher who tries to replicate a finding using appropriate methods should be able to confirm it. The strongest confirmation of the effectiveness of a theory comes from prediction and control. Lisa Randall’s example of sending a spacecraft to the far reaches of the solar system illustrates such confirmation.