Paul Saffo is a forecaster and essayist with over two decades' experience exploring long-term technological change and its practical impact on business and society. He teaches at Stanford University and is a Visiting Scholar in the Stanford Media X research network. He was the founding chairman of the Samsung Science Board and serves on a variety of other boards, including the Long Now Foundation. At the Convergence unconference in November, he delivered a keynote presentation on the differences between forecasting and advocating for potential future outcomes.
Mapping a Cone of Uncertainty
Welcome back from the break-out sessions. I certainly enjoyed the ones I was at. I had originally planned to offer my perspectives on what I thought was happening in this space, as I have been a friendly bystander to the whole converging area of nano and bio for a long time. I was just remembering, when I ran into Chris this afternoon, that the first time we talked about this stuff was when we spent a long weekend with a bunch of nano-heads up in the Sierras, way back in 1983. It has fascinated me ever since.
I realize however that it is sort of an abuse of privilege on this stage to offer you my opinions when there are so many wonderful opinions in the group. I am going to step back and instead touch on some of the things that are happening in the short-term, but use them to talk about how I think about forecasting and offer some advice about how you all may want to consider thinking about this.
Let me begin with the question: Are you a futurist or a forecaster? I draw the distinction thusly. A futurist is generally someone who is an advocate for the future; they want to see a particular outcome. In my work, I am generally a forecaster, consciously a bystander, stepping back and asking, “What is the full range of possibilities?”
Now, in my personal life I most assuredly also have the instincts of a futurist, because like anyone else I have some pretty strong opinions about how I would like things to turn out. However, I can tell you from long experience that as a forecaster you have to be explicit about when you are thinking like a forecaster, trying to understand the full range of possibilities, and when you are having those moments where you say, “Okay, I am going to be an advocate here.” The road to ruin in forecasting is to allow your opinion of what you wish would happen to interfere with your judgment of what you think will happen.
Especially in this field, which is so exciting, it is really easy to slide into that advocacy role when it is probably a good idea to be an analyst. Instead of being a participant, it is a good idea to be a bystander. Keep that in mind as you engage in your conversations later today and tomorrow. When is it appropriate to take your understanding as a forecaster and say, “This is the direction I think we all should go”? Of course, at the end of the day, people who are not advocates for the future live the old parable: if we don’t change direction soon, we are most assuredly going to end up where we are headed.
The other aspect of being an enthusiast about the future is there is an odd tendency, call it Hitchcock’s law of futurism, to make the future more dramatic than it might be. I invoke Alfred Hitchcock here, who said, “Movies are reality with the dull parts edited out.” I can assure you after nearly thirty years in this business, the future will have long stretches of dullness in it. The exciting parts will be exciting because they are brief moments, and then things will get dull again.
A danger that I see, and a lot of people do this, is that when they look into the future they compress all the exciting stuff together, forgetting that in fact much of it will be dull. Remember back in the early ’90s when the web arrived and digital technology was taking off? The word William Gibson wrote about in his 1984 novel Neuromancer was cyberspace. This was invented by a novelist typing on a manual typewriter, of all people.
We were just giddy at the prospects of “cyberspace.” It was going to enlighten and ennoble mankind. We have a long history of this. Around the turn of the century, in 1900, the phrase “air minded” was very popular among aviation enthusiasts. Somehow flying in airplanes was going to enlighten and ennoble mankind. It would be the end of war, bring world peace and… well, you get the idea and you know what happened.
Cyberspace is still around, but let’s face it, what we really got was not the sweeping exhilaration of cyberspace changing all our lives. I must confess parenthetically, at Stanford I am so jealous of my students, because those little twerps have Wikipedia and the web. They are so cunning at hiding the fact that they have not done their homework. I am from the generation where we would plead with the librarian to leave the library open. Anyway, but I digress.
We thought this vast new cyberspace would be wonderful, while some people said it would be the end of civilization as we know it. What did we get? We got cyberbia. Cyberspace kind of looks like the real world, all the way down to crab grass. Some things are worse and some things are a little better. The future is probably going to look a lot like today, except we have a little bit better gizmos. Remember Alfred Hitchcock and his rule: there will be dull parts in the future.
Now as a forecaster, I think about the future as mapping a cone of uncertainty. As I am standing in the present moment, witnessing some event that has just happened, I say, “How much uncertainty is there and how broad is that cone? What lies inside it and what could change?” Apropos of that, I encourage you to challenge all of your assumptions, every single last one of them.
For example… convergence? Are we really, really sure it’s convergence? New industries emerge from the intersection of old industries. Maybe it is the cross-impacts of industries working together, or maybe it is the convergence of technologies leading toward a singularity, but what happens afterwards? Maybe things are in fact diverging, and in this convergence there is also a lot of divergence. Remember the last time people talked a lot about convergence, in terms of media convergence in the 1990s? They talked about how the whole media industry was going to converge into one super-industry.
What happened? Convergence at a narrow technical level led to divergence in terms of markets and products, which is why newspapers are going out of business instead of controlling the world. If it had been media convergence, the New York Times would own everything and record executives would not be in trouble. In fact, things diverged in terms of the specific industries. Keep in mind that it may not exactly turn out the way that we think. A big reason is this other rule of thumb from my forecasting experience: change is never linear.
Nothing interesting is ever linear. We all know S-curves. The mother of all S-curves is Moore’s law. Here we are in Silicon Valley, and what better place to talk about it than here? The thing I see again and again, especially here in Silicon Valley, is that people who live and die by the S-curve really don’t get it. They do not understand what it is like to be on an S-curve, or how not to be surprised by an inflection point. The problem is that we are linear thinkers. We tend to look into the future by taking a ruler to the past, turning it around, and drawing a straight line.
Inevitably, what happens is there are two kinds of people. There are those who are surprised when the inflection point hits. Like “Wow, where did the world wide web come from?” Then there are technologists who pay the price of being wrong not once but twice, because they stand at the flat spot of that curve, long before the inflection point, and they say, “The future is so obvious. It’s just around the corner!” Then they stand around waiting twenty years for it to happen. Just before it arrives they go, “It will never happen at all!” Then they walk away just as the fortunes are being made.
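The linear-extrapolation trap described here can be made concrete with a toy model. This is a minimal sketch, with invented numbers (the ceiling, midpoint, and steepness are illustrative assumptions, not data from the talk): fit a straight line to the flat early years of a logistic S-curve and watch how badly it misses once the inflection point hits.

```python
import math

def s_curve(t, ceiling=100.0, midpoint=20.0, steepness=0.5):
    """Logistic S-curve: a long flat start, a sharp inflection at `midpoint`,
    then saturation at `ceiling`. All parameters are illustrative."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# A linear thinker takes a ruler to the flat early years (t = 0..10)...
early = [(t, s_curve(t)) for t in range(11)]
slope = (early[-1][1] - early[0][1]) / 10.0

def linear_forecast(t):
    """Straight-line extrapolation from the early, pre-inflection data."""
    return early[0][1] + slope * t

# ...and is off by orders of magnitude once the inflection arrives.
for t in (10, 20, 30):
    print(t, round(linear_forecast(t), 1), round(s_curve(t), 1))
```

Running this, the linear forecast stays in the low single digits while the logistic curve leaps to near saturation: nothing seems to happen for years, and then everything happens at once, which is exactly the double surprise of standing on the flat part of the curve.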
These days, for the most part, the job is making people understand just what inflection points feel like. However, there are parts of this audience who have been waiting for Godot. Back in the ’80s, nanotech was just around the corner. We have been on the edge of an AI revolution for fifty years.
The important thing to pay attention to is that flat spot. If you want to look for a short-term success, look for something that has been failing for twenty years. My rule here is “cherish failure,” especially when it’s someone else’s. When you look at that S-curve, it implies more than technology. Where did Columbus fall on this S-curve? He was at the inflection point. To put it more precisely, Columbus was not the first European to make it to the New World, he was the first European to make it back. That flat spot is paved by one interesting failure after another, and it turns out there were countless expeditions. They never got a holiday named after them because they did not make it back.
Just as an example of cherishing failure and showing you that some things really do look like the curve of Moore’s law, some folks have talked about robotics today. Robotics is one of those things where you can hear the Doppler whistle of the inflection point.
Here is a picture that I shot in May of 2004 at the first DARPA Grand Challenge. Some of you who were there will remember it was a 150-mile course and there were 21 teams. I was an optimist, part of a medical team sitting next to a helicopter halfway out on the race. We felt kind of stupid because the thing looked like a Monty Python sketch. This robot in particular was very sweet. It came out of the barrier, got to the exit point where the race started, looked left and right, thought for a second, hung a hard left and drove into a coyote bush where it died.
Here is the robot that did much better. This is Sandstorm. The answer to the question “Why did the robot cross the road?” is “To drive into a fence, of course.” This is what Sandstorm did shortly after that. The robot that got farthest in the May 2004 race got exactly seven kilometers into the race before it slid off the road and died. Everybody shrugged their shoulders and felt stupid.
18 months later we had the second Grand Challenge. What happened in that one? A very different result. Not coincidentally, it was one doubling period of Moore’s law. In October 2005, twenty-two teams got farther than Sandstorm got in the first race and five finished. I’m also pleased to say Stanford won… though the truth is that Carnegie Mellon lost.
The real difference was philosophy. Red Whittaker, who is the head of the Carnegie Mellon team, is a former Marine. Let me correct that: there is no such thing as a former Marine. He likes hardware. If you have a dollar, you are better off spending it on hardware. The leader of the team at Stanford, Sebastian Thrun, whom we stole from Carnegie Mellon, says superior software will always compensate for inferior hardware.
The Sandstorm team was crushing the Stanford team until a bolt came loose and the LIDAR unit on top of Sandstorm started bobbing, which caused Sandstorm to slow to a crawl. Stanley, the Stanford bot, blew past because, let’s face it, software rules.
The third Grand Challenge is especially interesting. That was in November of 2007. It was 96 miles, 11 teams and five finishers. This was the urban Grand Challenge, and it was a success. We demonstrated that while primitive, robots understand the vehicle code in California better than most Californians.
To put this in context, and to apply another rule that I follow, look back. A lot of people will tell you “so-and-so” is looking into the future through a rear-view mirror. I’m here to tell you rear-view mirrors are good forecasting tools, as long as you use them the right way. The wrong way would be like the gentleman who, back at the start of the Iraq war, came to the Secretary of Defense and said, “Sir, I have all the files and lessons learned from Vietnam,” and the Secretary said, “I’m not interested. We lost that war.” That’s the wrong way to use a rear-view mirror. That is confusing being a forecaster and a futurist.
The right way is to look for the general patterns, not the specifics. Mark Twain allegedly said history does not repeat itself, but it rhymes. What you want to do is look for the rhyming, the patterns that are similar. My rule is always look back at least twice as far as you are looking forward in order to pick up that pattern. If we do look back twice as far and we look at fundamentals, there is an interesting pattern that reinforces why robots could be taking off.
About every decade we have an enabling technology that arrives that sets the competitive landscape for entrepreneurs here in Silicon Valley. That technology in the late ’70s was cheap microprocessors. The revolution it triggered was a processing revolution, and the poster child was the personal computer. You know it was a big deal because the pimply faced geeks on the cover of Business Week and Time were people like Steve Wozniak and Bill Gates, though I don’t think Steve Jobs ever had acne in his life. You get the idea.
That decade was completely and utterly preoccupied with processing. The next decade, the ’80s, was shaped by a fundamentally different technology: the communications laser, which arrived and gave us fiber-optic bandwidth, the CD-ROM, and all that stuff. It triggered the access-centric decade of the ’90s, and the poster child was of course the world wide web. Remember back in the ’80s when people said, “What are we going to do with all that bandwidth?” We could have done the web much earlier, except that we did not have the bandwidth.
We had this shift from the processing decade to the access decade. Of course, PCs did not become irrelevant. They just changed in function from being defined by what they processed for us to being defined as connectors. What is the big technology of this decade? It ain’t software, and it ain’t Web 2.0. It’s cheap sensors.
This revolution has been sneaking past us on little cats’ feet, and we have been so transfixed by other things that we have overlooked the fact that sensors are coming into our lives everywhere. Whether it is cameras, more of which are now built into cellphones than are sold separately, or RFID transponders proliferating everywhere, this is a decade being shaped by cheap sensors.
The sequence is this: in the ’80s we invented our computers, in the ’90s we networked them together, and in this decade we are hanging eyes, ears and sensory organs on them. We are asking them to observe and manipulate the physical world on our behalf. All you need is to give them a way to roll around—give them wheels, darnit, they don’t even need feet—and they’re robots. The poster child of this decade is going to be robots.
The short answer to the big question of where the big fortunes will be made in the short run: forget about saving the world by reinventing humanity. The next Steve Jobs is laboring away anonymously in some garage, and we are this close to somebody figuring out how to turn these technologies into a compelling robotic product. Maybe it will take two years, maybe three, but that is what is going to take off in consumers’ lives.
We have evidence of this. Let me give you another rule of thumb: look for indicators, for things that don’t fit. Good forecasting is the opposite of good research. In good research, you hold off on your opinion, look at the data, carefully develop your theory, and then doggedly pursue evidence that supports it. Good forecasters do the opposite. You come to a conclusion as quickly as you can, then set out systematically to demonstrate that you are wrong, and then look for weird little things that just do not seem to fit but might be important.
For instance, there was this thing back in 2003. I remember when the Roomba arrived, I had a whole bunch of geek friends here in Silicon Valley, engineers, who were totally stoked about having this robotic vacuum cleaner, and I thought this was really odd. These are engineers. I don’t recall them ever having an interest in having a vacuum cleaner at all. Then when I started asking questions, I noticed they were giving their robotic vacuum cleaners names, and I thought this was very strange. When was the last time anybody gave their vacuum cleaner a name?
Then I talked to the folks at iRobot and they said “Yeah, you know it’s really weird, we’ve discovered that two-thirds of our Roomba owners give their Roombas names, and one-third confess to having taken their Roombas on vacation with them, or to their friend’s house to show off.” As a forecaster I went: A. that’s important. B. it has nothing to do with cleaning floors. This is scratching some deep, emotional human itch. After I took one apart I realized these aren’t really even robots. They’re a pile of transistors posing as robots. If you take one apart, it’s a fraud. Still, it is an indicator that you can hear that Doppler whistle of the inflection point coming.
Another indicator, the first human was killed by a robot in 2002 in Yemen. We have something like 20,000 robots flying around Iraq today. They are primitive and ugly, and there are people who will say, “Those aren’t really robots.” Still, they are autonomous enough to count. Here is another indicator that is a little farther afield.
What could an automobile wreck possibly have to do with a robot revolution? This particular wreck took place 250 miles north of the airbase where the third Grand Challenge was held, thirty minutes before the urban robotic race began. A thick fog appeared over Highway 99, and people were driving their cars with the instinct of salmon going upstream. 118 cars smashed into each other. The paramedic working at the front said he could hear them smashing into the back. 118 cars and something like 18 big rigs: this is proof that people shouldn’t drive, as Brad Templeton has written.
You should not be afraid of the robot drivers. You should be afraid of the humans. When I look at this picture, my only thought is why do we have to wait that long? What are we going to do with all those parking lots next to buildings when we have robots? After all, a robot could drop you off and it could go four miles away, have a cigarette with its robotic friends, then come back and pick you up.
Having talked a little bit about robots, one other thing I would mention is that maybe, like that Roomba that is not really a robot, the robot revolution may arrive without robots. Maybe they are stupid. How many people here have read Daemon by Daniel Suarez? He originally wrote it under the pseudonym Leinad Zeraus; that’s “Daniel Suarez” backwards, you get the idea. If you have a copy of his self-published book, don’t sell it. They are already going for $100 a copy on eBay.
He depicts a dystopian future of software bots that are dumber than your average Congressmember and completely autonomous. They create massive havoc and chaos. It is one of those wonderful science fiction novels with a concealed forecasting message: while we’re all waiting for the Robert E. Lee of big, clanky robots to arrive, maybe they are sneaking in underneath us and we’re in for a surprise. The robots are already here, and they’re really, really dumb.
I am almost at the end of my time, so I will offer one last suggestion: those who think the farthest win. I should not have to say that to this group. For you all, “a thousand years” spins off the tongue. How many of you are proud of the fact that you think long-term? Well, guess what? You are not the best long-term thinkers. Now, I sit on the board of the Long Now Foundation, which is building a ten-thousand-year clock in the second-tallest mountain in Nevada. We like to think that we are pretty good at it, too. However, we are not the best people.
I would suggest to you that the best long-term thinkers, in terms of both conceiving things long-term and making things happen long-term, are religious fundamentalists. Actually, I think what is going on today is a sort of race. You know how there are two types of fools, one who says “this is old and therefore good,” and the other who says “this is new and therefore better.” People frame it as a race between those who love technology and those who hate technology. I think the race today for civilization is a race between the people who think the farthest.
You all should keep that in mind, because so far, you are not the ones who think the farthest. Let me just mention one more benefit to thinking farther. We assume that when technologies come they revolutionize our lives, they change things forever, and they enable new possibilities. As a technology forecaster, I am a historian of technology who happens to spend most of his time looking at technologies that do not exist yet. I am here to tell you that for the most part we use new technologies to ossify old habits.
Think about DOS. DOS was just a simulacrum of what we were doing with time-sharing, except there was no remote access. We have this example again and again. My favorite one was the first thermoset resin invented by Leo Baekeland in 1907. It was called “Bakelite,” the first plastic. What did people do with Bakelite, the first plastic? Why, of course, they spent their whole time trying to make it look like wood and tortoise shell. It took people about twenty years before they realized that it made for really cheesy wood and tortoise shell, and then let plastic be plastic. Then things got interesting.
Well, the same thing is going on today, whether it is DOS imitating time-sharing or email imitating what was done with the postal service. Then there are some things that are more profound. How many people here think our social security system is a great idea and works just fine? Don’t all raise your hands at once. Well, it’s your fault and your forebears’. We have the social security system because UNIVAC arrived just in time to save the system from collapsing. The arrival of computers ossified an arguably obsolete system.
The last piece of advice I would leave with you is: the person who thinks the longest wins, not only in the long-term future, but also in terms of the short-term opportunities that really will change tomorrow. Thank you for listening.