Thoughts on Superintelligence

Noah Adelstein
2 min read · Dec 3, 2017


I just finished Superintelligence by Nick Bostrom.

The book talked about a world where computers develop superhuman intelligence and reach the point where they can begin to think for themselves.

I’m not a heavily technical person, but I found the book totally understandable — Bostrom does a great job of simplifying complex ideas. He covers the history of AI and machine learning, walks through what superintelligence (smarter-than-human intelligence) would look like and the different paths by which we might get there, and spends a lot of time on what it would mean and the scenarios where things could go wrong. Ultimately, the book sheds great light on the current thinking around superintelligence and machine learning.

Reading the book gave me a sense of why Elon Musk is so afraid of AI. One example I found both funny and scary was the idea of a paper-clip AI.

Bostrom’s point is that in a world with superintelligence, we could give a machine a goal and it would be able to use all of its power to carry that goal out. In this example, the goal he gave was to manufacture the maximum number of paper clips. If that were the machine’s only goal, here is essentially what could happen:

It could use its power to run machines that would take apart the entire earth in pursuit of resources to produce paper clips.

There are lots of ideas about how superintelligence could have a great tangible impact, but after reading the book I got the impression that if we are able to build what many people think we will, then a scenario like the paper-clip AI destroying the world could actually happen.

There would obviously have to be a better goal than just “make the maximum number of paper clips.” It becomes quite challenging, though, to come up with a goal that would not be detrimental, for a variety of reasons I won’t get into.

I have this thought that superintelligence, if we reach it, is going to totally change the way our lives work. That thought is not unique. What I’ve personally been feeling, though, is that life as it is already complex and challenging, and there is a lot to think about. Throwing this stuff into the mix adds an entirely new dimension. Some people are excited about that, and in some ways I am too. But on the other hand, part of me just wants to live my life and worry about the things that currently exist instead of adding this huge, world-changing dimension into the mix.

Regardless, I found this to be a super important read. I think everyone should read it at some point to understand the landscape. Pretty wild.

Thoughts on this review/the book in general? Comment or send me a note :)

Full reading list here



Written by Noah Adelstein

Denver Native | WUSTL ’18 Econ | SF
