by Reid Hoffman
The Utopian Vision of AI
In the late 1800s, in what's known as the Second Industrial Revolution, multiple major new technologies initiated an era marked by even more rapid transformation than the century that preceded it — especially in the United States. Electrification, railroads, telegraphs, and eventually the automobile all helped unleash unprecedented productivity gains, and thus unprecedented prosperity.
But these and other new technologies of that era didn't just raise standards of living. They also radically transformed how people lived.
Electrification meant people could suddenly do far more at night than they'd ever done before. There was a massive shift from rural living to cities. Railroads and telegraphs enabled new networks of communication and distribution. Skyscrapers concentrated human capital and allowed corporations to take on increasingly sophisticated endeavors.
Overall, life got faster and more connected, and much, much richer with possibilities. These new technologies, in other words, didn't just change how people, goods, or information got from one place to another. They changed how people dreamed about the future. They created new social relations and life patterns. They expanded conceptions about what one might aspire to, what defined a "good life," and how one should achieve meaning or purpose. They redefined what it means to be human.
Today, we're in the early stages of a similarly massive transformation. Multiple new technologies that get broadly categorized under the label artificial intelligence, or AI, are the animating force in this transition. Over the next several decades, AI will have the kind of economic and cultural impact that electrification, railroads, and the other technologies driving the Second Industrial Revolution had on the world. We'll achieve massive increases in productivity and prosperity. We'll also see massive shifts in how people live and organize their lives.
In the face of such change and uncertainty, it's easy to slip into a dystopian mindset. To see more challenges than opportunities. To lean toward fear rather than hope. With AI, uncharted territories definitely lie ahead. And the effects of this transformation are likely to take place over a much shorter time period than previous technological revolutions. But the best way to confront this fact, I believe, and the risks it implies, is not simply to ignore or resist AI's evolution. Instead, we should rigorously steer toward the best possible outcomes in a thoughtful and deliberate way.
And that's why I'm delighted to be a part of the launch today of Stanford University's new Institute for Human-Centered Artificial Intelligence.
The purpose of Stanford HAI is to convene researchers, builders, leaders, and users from across a broad range of disciplines — including philosophy, neuroscience, government, computer science, robotics, and many others — to promote and develop human-centered AI technologies and applications that enhance human productivity and quality of life.
One reason some people tend to view AI through a dystopian lens, I think, is because of how corporations are at the forefront of this realm. Granted, there are a handful of nations, including China, France, Canada, and the U.S., which have undertaken government-funded initiatives to accelerate AI development. But corporations have been at the leading edge of such efforts.
And that leads to concerns that AI's development trajectory will favor approaches, applications, products, and services that prioritize the profitable over whatever works best for humanity.
As someone who views entrepreneurship as a powerful lever to create positive impact at scale, I have a different view. While I believe strong government oversight is a key component to productive, prosperous, and well-managed societies, I also believe that corporations, including those operating on a global scale, can create massive social progress along with the profits that sustain their efforts.
And history bears this out: The gains we've made since the First Industrial Revolution began in England in the mid-1700s are so astounding it's easy to lose sight of them. For example, we live more than twice as long as most people did then, and we can make a 500-mile trip in roughly 1/100th of the time it would have taken in 1750.
Compared to the humans of just a few hundred years ago, in other words, humans today can draw upon an expanding array of technological superpowers to make life more productive and more meaningful. And while governments played a key role in making this era of rapid transformation and human progress possible, by providing social stability and key physical infrastructure, breakthrough innovations like the steam engine, or standardized parts, or the factory assembly line didn't arise out of official government programs or initiatives. They were the work of inventors, entrepreneurs, and corporations in search of profits.
Of course, as much as humanity has achieved over the last few centuries, we've hardly created a utopia here on earth. Whatever prosperity our new productivity has created, we still have the same kinds of inequity, conflict, and injustice that plagued us in eras of lesser abundance.
Our new technologies have also created new challenges like pollution, resource depletion, and climate change. And to achieve the net positive outcomes we now benefit from, government often had to step in to temper capitalism's shortcomings and excesses — via child labor laws and other forms of workplace regulation, consumer protection laws, and more.
So as much as I believe that today's corporations are already leading us toward a new era of rapid productivity gains — and the subsequent increase in quality of life that will arise out of that — I also believe that successfully navigating the shift that is now underway must involve many different stakeholders with the widest possible range of perspectives.
Stanford HAI will be a place for such conversations. While the institute will draw upon Stanford's deep history with Silicon Valley and its most innovative technologists and companies, it will also incorporate the multi-disciplinary viewpoints of policy-makers, legal experts, ethicists, philosophers, economists, and scientists, and ensure that participation is diverse and inclusive in terms of race, gender, and culture. In doing so, it will seek to understand best practices for developing AI technologies that serve humanity in ways that broadly benefit us all.
As chair of Stanford HAI's advisory board, I look forward to the robust discourse it initiates in pursuit of this mission.
And as it officially begins its work today, I want to share one question I will continue to ask myself as AI evolves: What could possibly go right?
This question, of course, stands in counterpoint to the one that has driven much AI discourse in recent years: What could possibly go wrong?
With a technology as powerful and unprecedented as AI, "What could possibly go wrong?" is obviously a crucial question to ask, and I know Stanford HAI will keep asking it. I hope many others do too, especially those who are directly developing this technology.
Whether it's the prospects of mass unemployment or algorithmic bias baked into hiring processes, lending decisions, and more, we need to have the rigorous and even skeptical inquiry from diverse viewpoints that will help us see potential problems, and then either steer away from them before they happen or correct them when they do.
At the same time, I also believe that we mustn't simply view our efforts to navigate to the best possible future through defensive or preventative lenses.
And that's why I ask "What could possibly go right?"
What, in our most wildly optimistic vision, does the world look like when AI is woven into contemporary life as seamlessly as electricity is?
How might we use it to create new jobs that are both economically rewarding and personally fulfilling?
Can it drive down the costs of crucial goods and services so much that governments will be able to provide much better safety nets than they ever have before?
How can we best tap AI to create education tools and services that make it possible for every person on the planet to maximize their potential?
Can new AI-driven healthcare devices lead to massive decreases in illness?
Can myriad new forms of emotionally attuned robot companions abolish or at least greatly diminish human loneliness?
My point is not that AI is the magic answer to all humanity's problems — but that it will only be as magic as we dream it can be. Today, some of that dreaming starts at Stanford HAI. I can't wait to see where we go from here.