Complex State and Asymmetry in Games

What makes a game interesting to watch? What defines excitement? I tend to agree with the folks at FiveThirtyEight that one thing that makes a game interesting to watch is a large amount of variance in the predicted outcome throughout the game. This, though, misses another element that can make games very interesting to watch: a win probability that is difficult to determine. In most sports, most of the time, it is pretty clear who is winning and who is losing, and whether a given play improved or worsened a team’s chances. In my opinion, some of the most interesting situations are those where it is unclear who is better off or has the better chance of winning.
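
As a toy illustration of the difference between these two notions, here is a minimal Python sketch. This is not FiveThirtyEight’s actual formula; the two metrics and the example probability series are made up for illustration. The point is that a blowout and a deadlocked game can score similarly on total win-probability movement while differing wildly in how uncertain the outcome stays.

```python
# Toy "excitement" metrics over a game's win-probability time series.
# Illustrative only -- not FiveThirtyEight's actual excitement index.

def excitement(win_probs):
    """Total win-probability movement: sum of absolute swings between updates."""
    return sum(abs(b - a) for a, b in zip(win_probs, win_probs[1:]))

def uncertainty(win_probs):
    """How close the game stayed to a coin flip (1 = always 50/50, 0 = never)."""
    return 1 - sum(abs(p - 0.5) for p in win_probs) / (0.5 * len(win_probs))

blowout  = [0.5, 0.7, 0.85, 0.95, 0.99, 1.0]  # one side pulls away early
seesaw   = [0.5, 0.2, 0.8, 0.3, 0.9, 1.0]     # lead changes hands repeatedly
deadlock = [0.5, 0.5, 0.5, 0.5, 0.5, 1.0]     # unclear who is ahead until the end

for name, probs in [("blowout", blowout), ("seesaw", seesaw), ("deadlock", deadlock)]:
    print(f"{name}: excitement={excitement(probs):.2f}, uncertainty={uncertainty(probs):.2f}")
```

The blowout and the deadlock have the same total movement (0.50), but the deadlock stays near a coin flip almost the whole way: that lingering uncertainty is the element the variance metric misses.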

I follow quite a bit of racing, and racing can produce situations of this nature. Two cars racing many seconds apart may not seem terribly exciting to most people compared to two cars closely following one another, but by the metric of win-probability uncertainty it can often be more exciting. One thing I find uninteresting that many find quite interesting is long oval races. These races involve countless cars racing very closely together, so they should be quite exciting. Ultimately, though, the results of the first 80 percent of the race are usually unimportant, because positions are so easily gained and lost that they are nearly meaningless. In racing, when watching two cars close together executing the same strategy, it is fairly easy to tell who is ahead. Formula 1 is often especially uninteresting in this respect, as it is fairly rare to see passing between two equally matched cars on the same strategy. But when two cars are running completely separate strategies, it can be very interesting and stimulating to try to figure out which car is ahead.

This element of refining a model of who is winning is interesting to me. Esports and games of strategy have this element much more than traditional sports, mostly due to the inherent asymmetry in most esports. In StarCraft, because the races differ, it can be difficult to directly compare how two players are doing. In DOTA, because one team may be better equipped for the late game, it may be unclear whether the other team is far enough ahead to finish the game. Or the two teams may be trying to win the game in entirely different ways, and it is difficult to figure out how those plans will interact: will one team be able to spread out and poke the other to death, or will the other be able to group up and brute-force a win?

The most interesting games are those in which either an entirely new strategy is unveiled, resulting in an update to how the game can be played, or the game goes so far outside the realm of the normal that it is extremely hard to know who is ahead. In chess, the most beautiful games are often those where our usual heuristics for who is ahead are wrong: games in which many pieces are sacrificed to make way for a final checkmate.

I’d like to add a little bit about stories here. I am generally in favor of hard sci-fi, but sometimes it is too limiting. What I really want is a clearly defined world where I can speculate about what comes next with certainty that the conundrum won’t be solved by some plot device. My ideal example of this is something like Twitch Plays Pokemon: the world is clearly defined, we know how everything works, and it still exhibits interesting behavior and strategy relative to reality.

So this post is kinda tied into board game design. It really just describes the difficulty of getting the “snowballiness” of a game right. Making it too easy to convert an early advantage into a win is frustrating for those who are behind. Making it too easy to come back makes the early game meaningless. The appropriate approach seems to be to make the final outcome inherently complicated to determine, so it is always unclear exactly who is winning. This, though, will violate some other principles described here (under construction).

Thoughts on AI Risk

If you don’t know about Artificial Intelligence (AI) risk, it is the belief that AI at the level of human intelligence is not far away and that it will likely result in the end of humanity. Generally, it also argues that we should try to understand AI and attempt to ensure that whatever we create is friendly to humans. Recently, the cause has gained significant mainstream support as prominent figures like Elon Musk and Stephen Hawking have spoken publicly in favor of it. If you want a decent understanding of the arguments, see here.

Here, I’ll be going over my views on the topic, mainly objections to AI risk and the push for more research on it.

0. Superhuman AI is likely fairly soon

Where I largely agree with the AI risk position is that intelligence well beyond the human level is likely to arrive within 50 years. Given that humans are, by virtue of evolution being slow, the stupidest beings capable of rational thought, there is probably much room for improvement. Computers will continue to get faster and will almost certainly surpass human computational power. At that point, it is just a matter of creating a reasonably efficient brain emulation or artificial intelligence program, and human-level intelligence will be achieved. To get past human-level intelligence, just add better software and faster hardware.

1. We should welcome our new overlords

Should we even care about humanity ceasing to exist? If we are consistent with the way we morally consider animals less important than ourselves, we probably should care more about the well-being of the artificial intelligence that replaces us than about all of humanity. Perhaps there are situations, like the paperclip maximizer or the subsistence existence of brain emulations (especially for average utilitarians), where we should worry about the rise of AI, but those seem to be the exception rather than the rule. Of my objections, I think this is the most convincing, mostly because I don’t have any reason to defer to AI experts on morality.

2. How super is super intelligence?

Quite fundamentally, how “smart” can you even get? Despite the talk about Moore’s law ending, there is probably still significant room for computational power to increase, even if just from decreasing manufacturing costs. I am less convinced of the superiority of the “intelligence” software: AI still won’t be able to solve NP-hard problems at any sort of scale. More concretely, how does a superintelligence differ from an evil (human) genius with access to vast computational power? I am reasonably willing to defer this point to AI experts, because we really don’t have any good understanding of what a superintelligence might be able to do.
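
To make the NP-hard point concrete, here is a minimal sketch, using brute-force subset sum with made-up numbers, of why raw compute alone hits a wall: each added input doubles the worst-case search space, so even enormous hardware gains buy only a handful of extra problem sizes with this kind of approach.

```python
# Brute-force subset sum: the worst case checks all 2^n subsets.
# Illustrative sketch only -- heuristics can do better on many instances,
# but no known method escapes exponential worst-case scaling.

from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Search every subset of nums for one summing to target."""
    checked = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            checked += 1
            if sum(combo) == target:
                return combo, checked
    return None, checked

# At n = 60 items the worst case is ~10^18 subsets -- far beyond what any
# plausible hardware speedup recovers for a brute-force search.
print(subset_sum_bruteforce([3, 9, 8, 4, 5, 7], 15))  # ((8, 7), 19)
```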

3. Can we really do anything?

Generally, the goal of friendly AI is to create an AI that is friendly to humans. What exactly does that mean? As Asimov’s three laws show, this is really hard, and any slip-up will result in disaster. If we naively* consider all agent-based AI preferences, the proportion of those that are friendly to humans is vanishingly small. It seems nearly impossible to create a friendly AI. While the preferences/morality and intelligence of beings are mostly independent, it seems impossible to create a being much smarter than oneself while constraining, with any certainty, the preferences and, more importantly, the behavior of such a being. This argument doesn’t seem to have been answered by AI experts, nor do I see exactly why to defer to such experts in this case. It doesn’t necessarily diminish the value of work on AI risk, though, since work that raises the chance of success from 1 percent to 2 percent is just as effective as research that raises it from 50 percent to 51 percent.
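
A quick way to see that last point, assuming for illustration that a good outcome is worth some value V and a bad outcome is worth 0: expected value is linear in the success probability, so equal percentage-point gains are equally valuable at any baseline.

```latex
% With a good outcome worth V and a bad one worth 0,
% expected value is linear in the success probability p:
\[
  E(p) = pV
  \qquad\Longrightarrow\qquad
  E(p + \delta) - E(p) = \delta V \quad \text{for any baseline } p.
\]
% So moving from 1% to 2% and from 50% to 51% each add 0.01 V in expectation.
```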

*An agent-based AI is an AI that has a set of fixed preferences and attempts to maximize them by interacting with its reality. My main complaint about Bostrom’s Superintelligence, and about AI discourse in general, is the failure to recognize how the agent model falls short. Humans are not agents: our preferences change, and any friendly AI should be able to deal with this. Preferences are given an almost mythical place of focus when, given the way humans work, we should be focusing on behavior; preferences may not even be an important or concrete concept.
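
Here is a minimal sketch of the agent model the footnote describes. The Agent class and its methods are illustrative inventions, not from any real framework: the point is that the utility function is fixed at construction, which is exactly the assumption that fails to match humans.

```python
# A minimal sketch of the "agent model": fixed preferences, chosen once,
# then maximized forever. Illustrative only -- not any real AI framework.

from typing import Callable, Iterable

class Agent:
    def __init__(self, utility: Callable[[str], float]):
        self.utility = utility  # fixed preferences, baked in at construction

    def act(self, actions: Iterable[str]) -> str:
        # Always pick whichever available action the fixed utility scores highest.
        return max(actions, key=self.utility)

# Under the agent model, preferences never change:
paperclipper = Agent(utility=lambda a: a.count("paperclip"))
print(paperclipper.act(["make paperclips", "help humans"]))  # -> "make paperclips"

# Humans don't fit this model: the utility function itself drifts over time,
# so an AI aligned to a snapshot of our preferences can go stale.
```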

Introduction

The name, halting thoughts, is meant to imply that the thoughts here will hopefully make you halt and think in some manner. In addition, the thoughts are hopefully coherent and halt in the Turing-machine sense.

Expect to see posts about philosophy, computing, math, life, and high-level politics.