Note: this is a follow-up to an earlier post in which I describe a hypothetical platform for matching donors to opposing campaigns and sending their money to charity instead. Click here to read the post (this post likely won’t make sense if you haven’t read that one).
In response to my post about a platform that would match opposing political donations and send them to charity, a few astute readers correctly pointed out a problem — one that I call the “apolitical altruist.”
The apolitical altruist (call him Albert) is someone who wants to give money to charity but doesn’t really care which candidate — Demi or Rebecca — wins the election. Say that substantially more Democrats than Republicans use the platform, so considerably more money has been pledged to Demi than to Rebecca. Let’s say that Albert has a $100 budget. He can give his $100 to charity. Or, he can contribute to Rebecca through the platform. If he does so, his $100 will be matched against $100 of a Demi supporter. In effect, Albert will have caused $200 to be donated to charity: $100 of his own and $100 that would have otherwise gone to Demi.
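Albert's arithmetic can be made concrete with a minimal sketch of the original one-to-one matching rule, under which every matched dollar on both sides goes to charity. The function name and the pledge totals are hypothetical:

```python
def full_match(demi_pledged, rebecca_pledged):
    """Original rule: all matched money (from both sides) goes to charity."""
    matched = min(demi_pledged, rebecca_pledged)
    charity = 2 * matched             # one dollar from each side per matched pair
    demi = demi_pledged - matched     # the unmatched surplus stays with Demi
    rebecca = rebecca_pledged - matched
    return charity, demi, rebecca

# Albert adds $100 to the (smaller) Rebecca side:
before = full_match(1000, 500)   # hypothetical pledge totals
after = full_match(1000, 600)
# Charity rises by $200: Albert's $100 plus $100 that Demi would have received.
```

Note that Rebecca's total is unchanged in both cases; the entire effect of Albert's $100 is to move money from Demi to charity.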
The apolitical altruist is a problem because one of the most important premises of the platform is that it biases elections as little as possible. If only Democrats and apolitical altruists use the platform (in equal amounts), then the platform is essentially cheating Demi out of money.
There’s another problem that no one pointed out but that some people likely noticed: the “selfish charity.”
The selfish charity (call it the Giving Eric Considerable Cash Organization, or GECCO) has one goal: to maximize the amount of money it takes in. Say that, as before, considerably more money has been pledged to Demi than to Rebecca. GECCO could donate to Rebecca, choosing itself as the charity that the money should be given to if matched. Since Rebecca is raising way less than Demi, GECCO’s money is guaranteed to be matched (and go back to GECCO). But GECCO also hopes that the matching money on the other side (which otherwise wouldn’t have been matched) is also pledged to GECCO. This gives GECCO potential upside with almost no risk, at Demi’s expense.1
Unlike the apolitical altruist, the selfish charity is highly unlikely to show up in practice, since the platform's charities would all be vetted, well-reputed organizations. But at least in theory it's a vulnerability.
It turns out that a simple change solves both problems: instead of giving all matched money to charity, only give half. For example, if Demi has $10 million pledged through the platform and Rebecca has $8 million, instead of sending $16 million ($8 million from each side) to charity and $2 million to Demi, send $8 million ($4 million from each side) to charity, $6 million to Demi, and $4 million to Rebecca.
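The half-matching rule can be checked against the $10 million / $8 million example with a small sketch (function name hypothetical):

```python
def half_match(demi_pledged, rebecca_pledged):
    """Modified rule: only half of each matched dollar goes to charity."""
    matched = min(demi_pledged, rebecca_pledged)   # $8M in the example
    charity = matched                  # matched/2 from each side
    demi = demi_pledged - matched / 2
    rebecca = rebecca_pledged - matched / 2
    return charity, demi, rebecca

# With $10M pledged to Demi and $8M to Rebecca:
# charity = $8M, Demi = $6M, Rebecca = $4M, as in the text.
```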
This has the unfortunate downside of rerouting only half as much money from politics to charity, but it does solve both of the problems we’ve discussed. To see that, consider Albert’s options. He can donate $100 to his favorite charity, or he can donate $100 to the platform. If he donates $100 to the platform, Rebecca will get $50 (Albert doesn’t care), Demi will lose $50 (Albert doesn’t care), Albert’s $50 will go to his favorite charity, and the $50 that Demi lost will go to some charity — not necessarily Albert’s favorite. So Albert is better off just donating to charity.
Similarly, if GECCO donates $100 to Rebecca, at best it will get back its $50 and also get the $50 that Demi loses — and that’s if it gets lucky. So GECCO is better off just keeping its own money.
But Dylan Mavrides, in a comment on the post where I described the platform, points out that all is not solved:
Unfortunately, the problem doesn’t go away completely, because now, let’s say I would normally apportion $100 to donate to charity and $50 to donate to politics from a paycheck or something. Now I can just put all the money into your platform and get $150 of political influence (75 to my candidate, -75 to the other) and $150 of charity donations. Since the amount that goes to charity stays the same as if I just donated all the money to charity, it’s like I can freely express my political opinion much more strongly/efficiently than I would normally be able to do so.
Put another way, it (maybe substantially, depending on how much I care about charitable donations) lowers the threshold of caring-about-politics that I need to have in order to get $x of political influence.
This wouldn’t necessarily be a big problem, if this happened on both sides (maybe it still would be, I haven’t thought about it). But if, say, Rebecca’s supporters used the platform much more than Demi’s, then it would just lower the amount that Demi’s supporters would have to care about making political donations, in order to donate.
In other words, we still have the problem of a slightly political altruist. The problem with the apolitical altruist is that they can affect the election even though they don’t care about the result. But it’s still a problem if you can affect the election substantially when you only care a little bit about the result, and it’s especially a problem if this option is only available to people on one side (as Dylan points out is possible in his last paragraph).
Ultimately I’m not too concerned about this problem, because it incentivizes balance in the platform. That is, say Rebecca is receiving more pledges than Demi. Then Demi’s supporters have a lower threshold for how much they need to care about politics to contribute to the platform (since a larger fraction of their money is matched). This causes more Demi supporters to use the platform, which reduces Rebecca’s edge and balances things out. I think that bad equilibria are possible here in theory, but in practice I think they’re not that likely.
One thing that bothers me about the platform I suggested is its inflexibility: it always matches donations at a one-to-one ratio. Sometimes it's objectively true that a dollar is more valuable for one candidate (usually the one who has raised less so far) than for the other. In such cases, the platform may be unfair to one side.
Perhaps the most natural way to deal with this is to create a market that decides the matching ratio. Donors to one candidate (call him Bill) submit bids and donors to the other candidate (call him Oscar) submit offers for the quantity "number of Bill-dollars to be matched against one Oscar-dollar." For example, if a Bill supporter bids "110 cents," that means they are willing to have their money matched against an Oscar-supporter's money and sent to charity at a ratio of up to 110 Bill-cents for every Oscar-dollar. If an Oscar supporter puts in an offer of "80 cents," that means they are willing to have their money matched against a Bill-supporter's money at a ratio of 80 Bill-cents (or more) for every Oscar-dollar. If both of these things happen, there is an opportunity for a trade: the Bill-supporter and Oscar-supporter have indicated that they are willing to match up at any ratio between 80 and 110 Bill-cents per Oscar-dollar.
Then, once per day, an algorithm searches for the matching ratio at which the most dollars can be matched and sent to charity. Put another way, the platform finds a particular number r*, with the guarantee that if you're a Bill supporter who bid more than r*, the money you donated will get matched at an r*-to-one ratio, and if you're an Oscar supporter who made an offer less than r*, the money you donated will also get matched at an r*-to-one ratio. It's not hard to see that only one r* satisfies this property (see footnote for elaboration).2
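One way the daily search could work is to scan the candidate ratios (the distinct bid and offer values) and pick the one that sends the most money to charity. This is an illustrative sketch, not the platform's actual algorithm; it implicitly rations the long side of the market pro rata:

```python
def clearing_ratio(bill_bids, oscar_offers):
    """bill_bids: list of (dollars, max ratio) from Bill supporters.
    oscar_offers: list of (dollars, min ratio) from Oscar supporters.
    A ratio r means r Bill-dollars are matched per Oscar-dollar.
    Returns the ratio maximizing total dollars sent to charity."""
    candidates = sorted({r for _, r in bill_bids} | {r for _, r in oscar_offers})
    best_r, best_charity = None, 0.0
    for r in candidates:
        bill_supply = sum(d for d, bid in bill_bids if bid >= r)
        oscar_supply = sum(d for d, offer in oscar_offers if offer <= r)
        bill_matched = min(bill_supply, r * oscar_supply)
        charity = bill_matched + bill_matched / r  # matched money from both sides
        if charity > best_charity:
            best_r, best_charity = r, charity
    return best_r, best_charity
```

For example, with $100 bids at ratios 0.9, 1.0, and 1.2 against $100 offers at 0.8, 1.0, and 1.1, the scan settles at a ratio of 1.0 and sends $400 to charity.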
A natural question to ask is whether this market is incentive-compatible. That is, say you want to donate to Bill through the platform. Is it in your interest to bid honestly, i.e. to bid the largest ratio that you would be okay with having your money matched at?
The intuitive answer is yes. Here's why: let's say that the matching ratio that day is going to be r* (you may or may not know r* ahead of time — it doesn't matter). The guarantee of the platform is to match you up if and only if your bid is at least r*. This means that you should never underbid: by doing so, you're risking that you won't be matched despite wanting to be, and you don't gain anything. Similarly, you should never overbid: by overbidding, you're risking that you will be matched at a ratio that you're not happy with, and you don't gain anything.
It turns out that this reasoning is wrong. Try and figure out why! If you think you know — or give up — scroll down and keep reading.
The reasoning is wrong because you have influence over the value of r* that the platform ends up choosing. By bidding, you're creating more demand, thus causing r* to rise — and remember, all else equal you'd prefer a lower r*. This is because a lower r* means that for every Bill-dollar that goes to charity, a larger number of Oscar-dollars go to charity (and you don't like Oscar).
We need to do more work to figure out whether the market is incentive-compatible, and that work starts with defining a utility function. I think the most natural utility function to give to a Bill supporter is something like this:
utility = a · ($ Bill) + b · ($ Oscar) + c · ($ Charity)

where a and c are positive and b is negative. Here, "$ Bill" represents the total amount of money that Bill will receive from the platform; "$ Oscar" represents the total amount of money Oscar will receive from the platform; and "$ Charity" represents the total amount of money that the platform will match on either side and send to charity. So the quantity ($ Bill) + ($ Oscar) + ($ Charity) is constant: it's just the total amount of all donations pledged through the platform. But, a Bill supporter values Bill-dollars and charity-dollars a positive amount and values Oscar-dollars a negative amount. Hence the coefficients a, b, and c in the formula above: they represent, respectively, how happy the Bill supporter is made by each additional dollar that Bill gets; how happy they are about each additional dollar that Oscar gets (hence why b is negative); and how happy they are about each additional dollar that goes to charity.3
So let's say you're a Bill supporter with a utility function that takes this form. It isn't too hard to figure out what you should bid if you're honest, i.e. the maximum ratio that you are willing to have your money matched at. If you do the math, you'll find that your honest bid is (c - b)/(a - c), a quantity we will denote with β.4 (Try confirming this for yourself!) So now the question of incentive-compatibility boils down to whether it's possible for you to increase your utility function by bidding something other than β.
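The honest bid falls out of asking when matching $x of your money at ratio r (against $x/r of Oscar money) increases your utility. A quick sketch, with coefficients a, b, c as above (all function names hypothetical):

```python
def utility_change_from_matching(x, r, a, b, c):
    """Change in a Bill supporter's utility when $x of their money is
    matched at ratio r: Bill loses x, Oscar loses x/r, charity gains both."""
    return a * (-x) + b * (-x / r) + c * (x + x / r)

def honest_bid(a, b, c):
    """Largest ratio at which matching still weakly helps you.
    Solving a*(-x) + b*(-x/r) + c*(x + x/r) >= 0 for r gives
    r <= (c - b) / (a - c), assuming a > c (see footnote 4)."""
    return (c - b) / (a - c)

# e.g. a = 3, b = -1, c = 1 gives an honest bid of (1 + 1) / (3 - 1) = 1.0:
# matching at any r below 1.0 raises your utility, any r above 1.0 lowers it.
```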
Unfortunately, the answer is that it's possible, meaning that it might pay to "lie" about your preferences via your bid. To see this, let's say that you have a pretty good idea of what bids and offers have been placed so far and so you roughly know what r* will be. Further, say that this value of r* is slightly below your honest bid β — meaning that you're willing to have your money matched at r*, but just barely. Consider what happens if you decide to bid 0 instead of β. This has two effects: first, it sends your money to Bill and causes some money on the other side to go to Oscar, instead of both bundles of money going to charity. You're very slightly sad about this: you preferred being matched at r* to not being matched, but just barely. The second effect is that it ever-so-slightly decreases r*, because you no longer exist on the demand side of the market.5 You're happy about this because it means that the money going to charity is more from Oscar-supporters and less from Bill-supporters than had you bid honestly. This means that Bill will receive more money and Oscar less.
The takeaway is that if r* is close enough to β, then you'd prefer to bid 0 (or equivalently, just donate directly to Bill) rather than bidding β. (How close to β does r* have to be for you to prefer lying? It turns out: not super close. It depends on the specific offers made by Oscar-supporters, but generally there's a pretty sizable interval such that if r* lies in that interval, you're better off bidding 0.6) Similarly, if you're an Oscar supporter who's only slightly willing to have your money matched at ratio r*, it makes sense for you to just donate directly to Oscar.
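This manipulation can be demonstrated in a self-contained toy simulation. It assumes a clearing rule that scans candidate ratios and maximizes charity dollars (one plausible reading of the daily algorithm), and every bid, offer, and utility coefficient below is made up for illustration:

```python
def clear(bill_bids, oscar_offers):
    """Pick the ratio (Bill-dollars per Oscar-dollar) maximizing charity money."""
    candidates = sorted({r for _, r in bill_bids} | {r for _, r in oscar_offers})
    best_r, best_charity = None, 0.0
    for r in candidates:
        bill_supply = sum(d for d, bid in bill_bids if bid >= r)
        oscar_supply = sum(d for d, offer in oscar_offers if offer <= r)
        bill_matched = min(bill_supply, r * oscar_supply)
        charity = bill_matched + bill_matched / r
        if charity > best_charity:
            best_r, best_charity = r, charity
    return best_r

def outcome(bill_bids, oscar_offers, bill_direct=0.0):
    """Totals ($ Bill, $ Oscar, $ Charity) after clearing; bill_direct is
    money sent straight to Bill outside the platform (i.e. a bid of 0)."""
    r = clear(bill_bids, oscar_offers)
    bill_supply = sum(d for d, bid in bill_bids if bid >= r)
    oscar_supply = sum(d for d, offer in oscar_offers if offer <= r)
    bill_matched = min(bill_supply, r * oscar_supply)
    oscar_matched = bill_matched / r
    total_bill = sum(d for d, _ in bill_bids) + bill_direct
    total_oscar = sum(d for d, _ in oscar_offers)
    return (total_bill - bill_matched,
            total_oscar - oscar_matched,
            bill_matched + oscar_matched)

a, b, c = 2.0, -0.2, 1.0                       # your honest bid: (c - b)/(a - c) = 1.2
others = [(150, 1.0), (150, 1.2)]              # other Bill supporters' bids
offers = [(100, 0.8), (100, 1.0), (100, 1.2)]  # Oscar supporters' offers

honest = outcome(others + [(100, 1.2)], offers)   # bid your true 1.2
lie = outcome(others, offers, bill_direct=100)    # bid 0 instead

def utility(o):
    return a * o[0] + b * o[1] + c * o[2]
# In this setup, bidding 0 yields strictly higher utility than bidding honestly.
```

Here honest bidding clears at a ratio of 1.2, while dropping out pulls the ratio down to 1.0; in this particular configuration the resulting gain to Bill outweighs the lost charity money and the small gain to Oscar.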
This is unfortunate because it might lead to a gradual "unraveling" of the market. Consider: the people on both sides who are only slightly happy with their money being matched drop out. Now there's a new matching ratio, and the people on both sides who are only slightly happy with this new ratio drop out. This continues, and I'm not sure what happens in the end. Maybe this iterative process stops and there's an equilibrium where people on both sides are quite happy to donate at some ratio. Or maybe the market unravels until almost no one's willing to participate.
Ultimately, the previous section was an intellectual exercise. In practice, a market-based platform would be too complicated and opaque for most people to be comfortable participating in. I think that it would be best for the platform to just match all donations at a one-to-one ratio.
That said, maybe it is possible to create a mechanism that is flexible and incentive compatible (or at least won’t unravel) — I just haven’t thought of how to do this yet! And maybe there’s even a way to implement such a mechanism in a way that most people would be able to understand and participate in. There’s a lot of thinking about the platform that remains to be done.
I’ve done some thinking beyond what I’ve written above. If you’re up for reading something more technical, click here to learn what else I’ve figured out (including a more thorough treatment of the math behind the market-based platform I described above)!
1. Dylan points out that I may have just invented a totally new type of fraud!↩
2. For every positive ratio r, let B(r) be the amount of money pledged to Bill by donors who bid ratios greater than or equal to r (or to put it another way, the amount of money pledged to Bill by donors who are willing to have their money matched at an r-to-one ratio). Let O(r) be the amount of money pledged to Oscar by donors who made offers that are less than or equal to r (i.e. ones who are willing to have their money matched at an r-to-one ratio). Note that B(r) monotonically decreases with r, whereas O(r) (and hence r · O(r)) monotonically increases with r. The two functions B(r) and r · O(r) therefore cross at a unique value, which we will call r*. That is, r* is the unique positive number satisfying B(r*) = r* · O(r*). Then by definition, the number of dollars donated by Bill supporters that are willing to be matched at ratio r* is equal to r* times the number of dollars donated by Oscar supporters that are willing to be matched at ratio r*. This means that we can precisely match all of these people at an r*-to-one ratio.
On the other hand, suppose we tried to use a different value r instead of r* — a larger one, say. The number of Bill-dollars okay with the new ratio, B(r), would be smaller, while the number of Oscar-dollars okay with it, O(r), would be larger. Not only that, but a larger ratio would demand more Bill-dollars for every Oscar-dollar. So a larger r would not have enough Bill-dollars to pair up all dollars from Oscar-supporters who are willing to be matched at r. Notice also that a larger r would necessarily send fewer dollars to charity. Both of these facts likewise hold if we try to replace r* by a smaller number. Hence, the r* satisfying B(r*) = r* · O(r*) is the optimal value to choose in two senses: it matches up everyone who wants to be matched at that ratio, and it maximizes the amount of money given to charity.↩
3. For simplicity of analysis, we’re assuming that there’s only one charity.↩
4. This assumes that a > c, i.e. that you value a marginal Bill-dollar more than a marginal charity-dollar (otherwise no ratio would be too high for you).↩
5. In practice, because people would be bidding integer numbers of cents instead of arbitrary real numbers, bidding 0 would have a tiny chance of decreasing r* by a lot (an entire cent) instead of decreasing it by a tiny amount for certain; but from the perspective of maximizing your utility, this is essentially equivalent unless you have complete certainty of what the market looks like.↩
6. Specifically, you should bid 0 whenever r* falls within a certain interval just below β; the width of that interval depends on O(r*), the total money donated by Oscar-supporters who are willing to have their money matched at r*, as in footnote 2.↩