
Can The Current AI Boom Scale To AGI?
Paige S.
Jun. 4 - 7 min read

by Greylock Partners

From the 2019 Startup Grind Conference, Greylock’s Reid Hoffman and OpenAI Co-Founder and CTO Greg Brockman on the prospects of building beneficial AGI (Artificial General Intelligence).

AI systems have achieved impressive capabilities that may one day reach human levels of intelligence. Autonomous systems with this level of intelligence and capability are referred to as artificial general intelligence (AGI). The future impact human-level AI will have on society is challenging to comprehend. Yet it is broadly understood that prioritizing the construction of beneficial and ethical autonomous systems today is vital for positive human impact.

In this episode of Greymatter, OpenAI co-founder and CTO Greg Brockman and Greylock partner and OpenAI investor Reid Hoffman discuss the implications of today's AI progress. Greg and Reid take a deep dive into the transformative potential of AGI for organizations of all kinds, the policy changes required of governments around the world, and ways to build and scale ethical AI and AGI systems.

Founded in 2015, OpenAI is an AI research company discovering and enacting the path to safe artificial general intelligence. Prior to co-founding OpenAI, Greg was CTO at Stripe, which he helped scale from 4 to 250 employees. An accomplished entrepreneur and executive, Reid has played an integral role in building many of today's leading consumer technology businesses, including LinkedIn and PayPal. As a Greylock investor, he currently serves on the boards of Airbnb, Apollo Fusion, Aurora, Coda, Convoy, Entrepreneur First, Gixo, Microsoft, Nauto, Xapo and a few early-stage companies still in stealth. Reid is the co-author of Blitzscaling and two New York Times best-selling books: The Start-up of You and The Alliance.

Below is an edited transcript of Greg and Reid’s discussion. This is the first of two Greymatter episodes featuring OpenAI co-founders in conversation with Reid, on the impact of AGI systems. On our next episode, OpenAI co-founder and CEO Sam Altman discusses the specific technology and open source AI projects the company is building to further positive human impact.

Listen to the full podcast here.

Founding OpenAI

“The idea that the machine itself could figure out how to solve a problem that I cannot even describe how to begin to approach, that for me was just the most mind-blowing thing ever. It’s always been clear that the most important problem that I could hope to contribute to is artificial intelligence. I know that machines will be built that are able to do materially useful things in the world, do things that humans cannot, and help us achieve new heights. It’s just a question of when and exactly how fast that happens.” — Greg Brockman

Building Blocks Of AI

“For the past seven years, deep learning has really come onto the scene. It’s been around for 60 years, but it’s only now that the technology has really started to work. At the core, we’re starting to have massive computational power paired with the algorithm of the deep neural network, which is massively scalable, right? One pattern that we’ve found holds across domain after domain: if you scale up a neural network with appropriate data and you tune the architecture the right way, it will work better. Also, we have much greater availability of data. These building blocks of AI are starting to change in a very interesting way.

For example, there’s something significant that humans do. Babies are able to learn from their experiences, extract meaning from them, and apply their experiences to a new problem. This does not describe the AI systems we have now. But we are starting to see signs of life with AI technology doing this. We’re starting to see a shift in terms of the AI story. For example, people used to say deep learning is just pattern recognition. But we’re beginning to see that this story is not quite true. AI is starting to look at the world and extract structure and information.” — Greg Brockman
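As a rough editorial illustration of the scaling pattern Greg describes, the sketch below (a hypothetical toy example in NumPy, not anything used by OpenAI) trains the same one-hidden-layer network at two different widths on the same synthetic task and compares held-out error. The task, sizes, and hyperparameters are all assumptions chosen for illustration; in this toy setting the wider model typically ends up with lower error, echoing the “scale it up and it works better” pattern, though a toy like this says nothing definitive about frontier models.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Fixed "ground truth" function the models must learn (purely synthetic).
true_W = rng.normal(size=(d, 8))
true_b = rng.normal(size=8)

def make_data(n):
    # Draw inputs and label them with a nonlinear function plus a little noise.
    X = rng.normal(size=(n, d))
    y = np.tanh(X @ true_W) @ true_b + 0.1 * rng.normal(size=n)
    return X, y

def train_mlp(X, y, hidden, steps=2000, lr=0.05):
    # One-hidden-layer tanh network trained with full-batch gradient descent.
    W1 = rng.normal(scale=0.1, size=(d, hidden))
    W2 = rng.normal(scale=0.1, size=hidden)
    n = len(y)
    for _ in range(steps):
        h = np.tanh(X @ W1)
        err = h @ W2 - y
        grad_W2 = h.T @ err / n
        grad_W1 = X.T @ (np.outer(err, W2) * (1.0 - h ** 2)) / n
        W1 -= lr * grad_W1
        W2 -= lr * grad_W2
    return W1, W2

def mse(X, y, W1, W2):
    pred = np.tanh(X @ W1) @ W2
    return float(np.mean((pred - y) ** 2))

X_train, y_train = make_data(2000)
X_test, y_test = make_data(500)

for hidden in (4, 64):  # the only change between runs: a wider network
    W1, W2 = train_mlp(X_train, y_train, hidden)
    print(f"hidden units: {hidden:>3}  held-out MSE: {mse(X_test, y_test, W1, W2):.4f}")
```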

Government and AI

“At the end of the day, nations around the world are going to be deploying these technologies. So ensuring that AI and AGI systems are built safely is where I believe everyone is aligned. It’s clear though, that there will be competition across different countries and even different companies, but I believe there’s this core discussion around safety that we can all cooperate on. It’s hard to object to working together to make sure that these systems are built in a way that will not be destructive to the world. If you build systems with the right ethos and with the right safety-first mindset, you can achieve all of the technological power along with the associated responsibility or accountability.

One thing that OpenAI states in our charter is that we’re very concerned about late-stage AGI development turning into an arms race. If you are moving as fast as you can towards a transformative technology, the first thing that’s going to be removed is safety. And a race where the winner is whoever can build the most unsafe system doesn’t sound like a recipe for success.

So it’s really important that, right now, before we are all in that world where everyone can feel the stakes, we are extending the olive branch globally to build those relationships and build that trust. That said, it’s important to be a realist on these topics, and the way we extend that olive branch should feel honest but also be prepared for all possibilities. But I think the core of it really is that as long as we’re in this world where we can keep saying safety, safety, safety, there is real hope for coordination.” — Greg Brockman

Building Safe AI and AGI Systems

“Right now we have a lot of optionality as to how these technologies will play out. By building these systems early on, you can see the implications before they’ve actually hit the world. Then we really get to choose, not just as one company but, I think, as a world, how these technologies are going to affect our lives. We can’t hope to be in a world where AI progress stops, and I don’t think we should want that. There are so many benefits that we can get, but at the same time I don’t think that we should just go and deploy and see what happens later.” — Greg Brockman


This podcast was recorded at the 2019 Startup Grind conference. Startup Grind is the largest independent startup community actively educating, inspiring, and connecting more than 1.5M entrepreneurs in over 500 chapters. Watch the video discussion here.
