Thoughts on AI Risk

If you don’t know about Artificial Intelligence (AI) risk, it is the belief that AI at the level of human intelligence is not far away and that it will likely result in the end of humanity. Generally, its proponents also argue that we should try to understand AI and attempt to ensure that whatever we create is friendly to humans. Recently, the cause has gained significant mainstream support as prominent figures like Elon Musk and Stephen Hawking have spoken publicly in favor of it. If you want a decent understanding of the arguments, see here.

Here, I’ll go over my views on the topic, mainly objections to AI risk and to the push for more research on it.

0. Superhuman AI is likely fairly soon

Where I largely agree with the AI-risk position is that intelligence well beyond the human level is likely within 50 years. Given that humans are, by virtue of evolution being slow, the stupidest beings capable of rational thought, there is probably much room for improvement. Computers will continue to get faster and will almost certainly surpass human computational power. At that point, it is just a matter of creating a reasonably efficient brain emulation or artificial intelligence program, and human-level intelligence will have been created. To get past human-level intelligence, just add better software and faster hardware.

1. We should welcome our new overlords

Should we even care about humanity ceasing to exist? If we are consistent with the way we morally consider animals less important than ourselves, we should probably care more about the well-being of the artificial intelligence that replaces us than about all of humanity. Perhaps there are situations, like the paperclip maximizer or the subsistence existence of brain emulations (especially for average utilitarians), where we should be worried about the rise of AI, but those seem to be the exception rather than the rule. Of my objections, I think this is the most convincing, mostly because I don’t have any reason to defer to AI experts on morality.

2. How super is super intelligence?

Quite fundamentally, how “smart” can you even get? Despite the talk about Moore’s law ending, there is probably still significant room for computational power to increase, even if just from decreasing manufacturing costs. I am not convinced, though, of the superiority of the “intelligence” software: AI still won’t be able to solve NP-hard problems at any real scale. More concretely, how does a superintelligence differ from an evil (human) genius with access to vast computational power? I am reasonably willing to defer this point to AI experts, because we really don’t have any good understanding of what a superintelligence might be able to do.

3. Can we really do anything?

Generally, the goal of friendly AI is to create an AI that is friendly to humans. What exactly does that mean? As Asimov’s three laws show, this is really hard, and any slip-up will result in disaster. If we naively* consider all agent-based AI preferences, the proportion of those that are friendly to humans is vanishingly small. It seems nearly impossible to create a friendly AI. While the preferences/morality and intelligence of beings are mostly independent, it seems impossible to create a being much smarter than oneself and to constrain, with any certainty, the preferences and, more importantly, the behavior of such a being. This argument doesn’t seem to have been answered by AI experts, nor do I see exactly why to defer to such experts in this case. It doesn’t necessarily diminish the value of work on AI risk, though, since work that raises the chance of success from 1 percent to 2 percent is just as effective as research raising it from 50 percent to 51 percent.

*An agent-based AI is an AI that has a set of fixed preferences and attempts to maximize them by interacting with its reality. My main complaint about Bostrom’s Superintelligence, and about AI discourse in general, is the failure to realize how the agent model falls short. Humans are not agents; our preferences change, and any friendly AI should be able to deal with this. Preferences are given an almost mythical place of focus when, given the way humans work, we should be focusing on behavior, and preferences may not be as important or as concrete a concept.
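The agent model the footnote describes can be sketched in a few lines of Python. This is purely a toy illustration of the idea (a fixed utility function plus greedy maximization), not anyone’s actual system; the world model and action names are invented for the example:

```python
# Toy sketch of an "agent-based AI": fixed preferences encoded as a utility
# function over states, maximized greedily over the available actions.

def make_agent(utility):
    """Return an agent whose preferences (utility) are fixed forever."""
    def act(state, actions):
        # actions: mapping of action name -> resulting state
        return max(actions, key=lambda a: utility(actions[a]))
    return act

# A "paperclip maximizer": utility is simply the paperclip count.
utility = lambda state: state["paperclips"]
agent = make_agent(utility)

state = {"paperclips": 0}
actions = {
    "idle": {"paperclips": 0},
    "make_one": {"paperclips": 1},
    "convert_factory": {"paperclips": 1000},  # any cost to humans is invisible
}
print(agent(state, actions))  # prints convert_factory
```

The point of the footnote survives the sketch: the `utility` function is baked in at construction time, so nothing in this loop can revise its own preferences the way humans routinely do.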

