Saving 15-20 Percent More From Charitable Donations

EDIT: tax straddle rules prevent this (I think).

EDIT: this appears to be strategy 5 here.


(I’m not a tax attorney/accountant and have no imminent plans to try this. Consider this speculative)

TLDR: You can save almost an additional capital gains rate (15-20%) on your charitable donations if you donate appreciated assets or do some silly-looking financial transacting.

One of the side effects of charitable donations, at least in the US, is the tax savings. Donations are deductible from your taxable income, which lowers your tax bill by the amount you donated times your marginal rate (assuming you itemize rather than take the standard deduction). So if you are in, say, the 25% tax bracket, you get 25 cents back for every dollar donated: 25 percent efficiency. This is great, but what if you could do better?
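As a quick sketch of the arithmetic (the 25% bracket is just the illustrative assumption from above):

```python
def cash_donation_savings(donation, marginal_rate=0.25):
    """Tax savings from donating cash, assuming the donor itemizes
    and an illustrative 25% marginal income tax rate."""
    return donation * marginal_rate

savings = cash_donation_savings(100)
print(savings)          # 25.0
print(savings / 100)    # 0.25, i.e. 25 percent efficiency
```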

One way to do better is to donate highly appreciated capital assets. Donating an asset is deductible in the same way donating cash is: you deduct the fair market value of the asset as if it were cash. But it also avoids any capital gains tax on the asset. So if you bought a stock at 1 dollar and donated it when it was worth 10 dollars, you would not only save (25% income tax rate) * $10 = $2.50 but also (15% capital gains rate) * $9 = $1.35 in capital gains tax, netting a savings of 38.5%.
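The appreciated-asset arithmetic above can be worked out the same way (the 25% income and 15% capital gains rates are the illustrative figures from the text, not advice):

```python
def asset_donation_savings(basis, fair_value, income_rate=0.25, cap_gains_rate=0.15):
    """Savings from donating an appreciated asset: the full fair market value
    is deductible against income, and the unrealized gain escapes
    capital gains tax entirely."""
    deduction_savings = fair_value * income_rate
    avoided_cap_gains = (fair_value - basis) * cap_gains_rate
    return deduction_savings + avoided_cap_gains

total = asset_donation_savings(basis=1.0, fair_value=10.0)
print(round(total, 4))          # 3.85  ($2.50 + $1.35)
print(round(total / 10.0, 4))   # 0.385, i.e. 38.5% of the donated value
```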

Not bad, eh? But what if you don’t have any highly appreciated capital assets? Is there a way you could manufacture them?

If prediction market contracts were considered capital assets (I believe they are not in the US), you could simply find an event where every outcome has a low probability and bet on every outcome. For example, there might be 9 candidates for the Democratic Presidential nomination plus one “other” option, with the most expensive contract priced at 2 dollars for a 10 dollar payout. By buying every contract for a total of $10.50, you guarantee that when the market resolves you’ll end up with one contract worth 10 dollars and the rest worth nothing.

Great, you’ve just guaranteed a loss of 50 cents, but you’ve also turned that $10.50 into an appreciated asset that you can donate to charity, avoiding the capital gains tax, plus a bunch of accrued capital losses that you can deduct. So for $10.50 you are able to donate 10 dollars to charity, deducting (25% income tax rate) * $10 = $2.50 as well as at least (15% capital gains tax rate) * ($10.50 - $2.00) = $1.275, meaning it effectively cost you only $6.725 to donate 10 dollars, a savings of 32.75 percent, which beats the 25 percent savings from donating cash. In the limit, with no trading losses and arbitrarily cheap contracts, you can deduct the full income tax rate plus the capital gains rate (40% in our example).
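Putting the scheme’s numbers together (same illustrative rates; the contract prices are the hypothetical ones above, and, per the edits at the top, tax straddle rules likely defeat this in practice):

```python
def manufactured_asset_cost(total_spent, winner_cost, payout,
                            income_rate=0.25, cap_gains_rate=0.15):
    """Effective cost of the donation: spend total_spent on all contracts,
    donate the winning contract (deductible at its payout value), and
    deduct the capital losses on the losing contracts."""
    deduction_savings = payout * income_rate                        # donate the $10 contract
    loss_deduction = (total_spent - winner_cost) * cap_gains_rate   # losses on the rest
    return total_spent - deduction_savings - loss_deduction

cost = manufactured_asset_cost(total_spent=10.50, winner_cost=2.00, payout=10.00)
print(round(cost, 4))               # 6.725
print(round(1 - cost / 10.0, 4))    # 0.3275, i.e. 32.75% savings
```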

Are there financial assets in the real world that might work for this? I’m not so sure. Binary range or tunnel options sound like what you’d want, but they don’t seem to be structured the right way: each option’s payoff range includes the others’ instead of excluding them, making this type of strategy impossible. Alternatively, if one were risk neutral with respect to one’s donations, one could manufacture a highly appreciated asset by simply investing in a single highly risky asset and get the same effect.

Why have I not heard of this before? It is a pretty unusual situation and requires some odd financial transacting (it looks like it allows for arbitrage). Maybe charitable donors don’t usually concoct unusual tax schemes.

EDIT: This appears to be essentially identical to strategy 5 here. A few addenda: it looks like you will need to hold your options for at least one year for them to become long-term capital gains per here. “Property is capital gain property if you would have recognized long-term capital gain had you sold it at fair market value on the date of the contribution.”

EDIT 2: Making highly correlated bets to accumulate losses like this is exactly what tax straddle rules are designed to handle, so it is hard to do this in a risk-free manner.

Talebian Empiricism Is a Cautionary, Not a Constructive, Tale

There is a dichotomy worth pointing out between predictions based on complex models (most science) and those based on simple probability (represented by N. Taleb). For events with lots of data, the two views can easily be reconciled and there is little conflict. For very rare “black swan” events, the models cannot be calibrated, because the event has either never happened or happened very infrequently.

The Talebian view that we should be skeptical of models because they can’t predict very rare events is correct but not very helpful. What do we do? Give up trying to predict the world because our models might be wrong? No: we refine our models to better represent the world through counterfactuals and new data, always understanding that our models could be catastrophically wrong, and we look for ways to mitigate the impact if they are.

Should we be a bit skeptical that our metrics/models are covering what we care about and that the correlations we’ve observed will hold in extreme scenarios? Absolutely, and we should point out all the ways they can be wrong or too aggressive and all the trends we might be missing. But we certainly shouldn’t stick our heads in the sand and say that these trends are unknowable. Does Taleb really want to live in a world where certifying a plane requires running tens or hundreds of planes around the clock for a decade just to be sure one plane is as safe as another?

Complex State and Asymmetry in Games

What makes a game interesting to watch? What defines excitement? I tend to agree with the folks at FiveThirtyEight that one thing that makes a game interesting to watch is a huge amount of variance in the predicted outcome throughout the game. This, though, misses an element that can make games very interesting to watch: a win probability that is difficult to determine. In most sports, most of the time, it is pretty clear who is winning and who is losing, and whether a play improved or worsened a team’s chances. IMO some of the most interesting situations are when it is unclear who is better off or has a better chance of winning.

I follow quite a bit of racing, and racing can have situations of this nature. While two cars racing many seconds apart may not seem terribly exciting to most compared to two cars closely following one another, by the metric of win probability uncertainty it can oftentimes be more exciting to me. One thing I find uninteresting that many find quite interesting is long oval races. These races involve countless cars racing very closely together, so they should be quite exciting. Ultimately, though, the results of the first 80 percent of the race are usually unimportant, because positions are so easily gained and lost that they are nearly meaningless. When watching two cars close together executing the same strategy, it is fairly easy to tell who is ahead. In Formula 1 this is oftentimes especially uninteresting, as it is fairly rare to see passing between two equally matched cars on the same strategy. When two cars are running completely separate strategies, though, it can be very interesting and stimulating to try to figure out which car is ahead.

This element of refining a model of who is winning is interesting to me. Esports and games of strategy have this element much more than traditional sports, mostly due to the inherent asymmetry in most esports. In StarCraft, because the races differ, it can be difficult to directly compare how two players are doing. In Dota, because one team may be better equipped for the late game, it may be unclear whether the other team is far enough ahead to finish the game. Or the two teams may be trying to win in entirely different ways, and it is difficult to figure out how these will interact: will one team be able to spread out and poke the other to death, or will the other be able to group up and brute-force a win?

The most interesting games are those where either an entirely new strategy is unveiled, resulting in an update in how the game can be played, or the game goes so far outside the realm of normal that it is extremely hard to know who is ahead. In chess, the most beautiful games are oftentimes those where our normal heuristics for who is ahead are wrong: games where many pieces are sacrificed to make way for a final checkmate.

I’d like to add a little bit about stories here. I am generally in favor of hard sci-fi, but sometimes it is too limiting. What I really want is a clearly defined world where I can speculate about what comes next, certain that the conundrum won’t be solved by some plot device. My ideal example is something like Twitch Plays Pokemon: the world is clearly defined, we know how everything works, and it still has interesting behavior and strategy relative to reality.

This post is also somewhat tied to board game design; it really just describes the difficulty of getting the “snowballiness” of a game right. Making it too easy to convert an early advantage into a win is frustrating for those who are behind. Making it too easy to come back makes the early game meaningless. The appropriate approach seems to be to make the final outcome inherently complicated to determine, so it is always unclear exactly who is winning. This, though, will violate some other principles described here (under construction).

Thoughts on AI Risk

If you don’t know about Artificial Intelligence (AI) risk, it’s the belief that AI at the level of human intelligence is not far away and that it will likely result in the end of humanity. Generally, it also argues that we should try to understand AI and attempt to ensure that whatever we create is friendly to humans. Recently, the cause has gained significant support in the mainstream as prominent figures like Elon Musk and Stephen Hawking have spoken publicly in support of it. If you want a decent understanding of the arguments, see here.

Here, I’ll be going over my views on the topic, mainly objections to AI risk and the push for more research on it.

0. Superhuman AI is likely fairly soon

Where I largely agree with the AI risk position is that intelligence well beyond the human level is likely within 50 years. Given that humans are, by virtue of evolution being slow, the stupidest beings capable of rational thought, there is probably much room for improvement. Computers will continue to get faster and will almost certainly surpass human computational power. At that point, it is just a matter of creating a reasonably efficient brain emulation or artificial intelligence program, and human-level intelligence will be created. To get past human-level intelligence, just add better software and faster hardware.

1. We should welcome our new overlords

Should we even care about humanity ceasing to exist? If we are consistent with the way we morally consider animals less important than ourselves, we probably should care more about the well-being of the Artificial Intelligence that replaces us than about all of humanity. Perhaps there are situations, like the paperclip maximizer or the subsistence existence of brain emulations (especially for average utilitarians), where we should worry about the rise of AI, but those seem to be the exception rather than the rule. Of my objections, I think this is the most convincing, mostly because I don’t have any reason to defer to AI experts on morality.

2. How super is super intelligence?

Quite fundamentally, how “smart” can you even get? Despite the talk about Moore’s law ending, there is probably still significant room for computational power to increase, even if just due to decreasing manufacturing costs. The superiority of the “intelligence” software, though, I am not convinced of. AI still won’t be able to solve NP-hard problems at any sort of scale. More concretely, how does a superintelligence differ from an evil (human) genius with access to vast computational power? I am reasonably willing to defer this to AI experts, because we really don’t have any good understanding of what a superintelligence might be able to do.

3. Can we really do anything?

Generally, the goal of friendly AI research is to create an AI that is friendly to humans. What exactly does that mean? As Asimov’s three laws show, this is really hard, and any slip-up will result in disaster. If we naively* consider all agent-based AI preferences, the proportion that are friendly to humans is vanishingly small. It seems nearly impossible to create a friendly AI. While the preferences/morality and intelligence of beings are mostly independent, it seems impossible to create a being much smarter than oneself and constrain with any certainty the preferences, and more importantly the behavior, of such a being. This argument doesn’t seem to have been answered by AI experts, nor do I see exactly why to defer to such experts in this case. It doesn’t necessarily diminish the value of work on AI risk, though, since work raising the chance of success from 1 percent to 2 percent is just as effective as research raising it from 50 percent to 51 percent.

*An agent-based AI is an AI that has a set of fixed preferences and attempts to maximize them by interacting with its reality. My main complaint about Bostrom’s Superintelligence, and about AI discussions in general, is the failure to realize how the agent model falls short. Humans are not agents; our preferences change, and any friendly AI should be able to deal with this. Preferences are given an almost mythical place of focus when, given the way humans work, we should be focusing on behavior, and preferences may not be as important or concrete a concept as assumed.


The name, Halting Thoughts, is meant to imply that hopefully the thoughts here will make you halt and think in some manner. In addition, the thoughts hopefully are coherent and halt in the Turing-machine sense.

Expect to see posts about philosophy, computing, math, life, and high-level politics.