In an earlier post, I praised a recent Ohio project giving citizens the tools to draw their own congressional district maps according to a set of carefully negotiated rules. The exercise was particularly valuable in demonstrating that reform is possible – and in demonstrating the degree of improvement that reform might achieve.
The rules for the exercise were straightforward. Yet even simplified to be accessible to the general public, they reflected a set of very sophisticated choices.
They began with a solid threshold: a proposed plan would be tossed if it didn’t live up to the two basic redistricting requirements of federal law. The first is the U.S. Constitution, which requires that each district have about the same population. The second is the Voting Rights Act, which keeps district lines from fragmenting substantial minority populations to dilute their voting power. So far, so good.
In addition to these two basic rules, there are many other objectives that people try to satisfy when they draw district lines, some of which are at odds with each other. One reform strategy is to lock in rigid priorities: first, do X; then, if it doesn’t conflict with X, do Y. Another strategy is to punt: choose trusted decisionmakers, throw in a bunch of different goals, and let the decisionmakers work out which goals matter most. The Ohio project chose a third path, and it’s an intriguing way to acknowledge and resolve the tension of multiple objectives.
The organizers chose four second-tier goals: community preservation, compactness, competitiveness, and partisan fairness (more about these, individually, here). They developed quantitative scales to evaluate plans based on each goal, and weighted the goals by relative importance. They then encouraged members of the public to find their own optimal balance among the goals, scoring each plan as it came in. Some plans, say, aimed more for compactness than competitiveness, or vice versa. But each effort to balance the competing goals was aiming for a high overall score. And notably, each offered an improvement on the status quo, which didn’t satisfy any identified goal particularly well.
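To make those mechanics concrete, here is a minimal sketch of weighted, multi-goal scoring. The goal names track the four chosen by the organizers, but the weights and per-goal scores are hypothetical stand-ins, not the actual scales used in the Ohio exercise.

```python
# Hypothetical weighted scoring in the spirit of the Ohio exercise.
# The weights and per-goal scores are invented for illustration only.

WEIGHTS = {
    "community preservation": 0.30,
    "compactness": 0.20,
    "competitiveness": 0.25,
    "partisan fairness": 0.25,
}

def overall_score(plan_scores):
    """Combine per-goal scores (each on a 0-100 scale) into one weighted total."""
    return sum(WEIGHTS[goal] * plan_scores[goal] for goal in WEIGHTS)

# Two submissions striking different balances among the goals.
plan_a = {"community preservation": 80, "compactness": 90,
          "competitiveness": 55, "partisan fairness": 70}
plan_b = {"community preservation": 75, "compactness": 60,
          "competitiveness": 85, "partisan fairness": 80}

print(overall_score(plan_a))  # 73.25 -- leans toward compactness
print(overall_score(plan_b))  # 75.75 -- leans toward competitiveness
```

Whatever the real weights were, the effect is the same: every submission boils down to a single number, so plans striking very different balances can be compared on a common scale.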
As I explain here, I have some quibbles with some of the goals that were chosen, and with some of the particular means of measuring progress toward those goals. But the structure of the enterprise – an open competition – is noteworthy. This isn’t the first time that a competition has been proposed: Sam Hirsch, among others, has suggested such a thing. But the Ohio exercise was a competition with an unusual – and very thoughtful – ending.
I find it most impressive, given the temptation in any contest to crown an ultimate champion, that the organizers in Ohio refused to automate victory. Rather than simply selecting the highest-scoring plan, they designated any plan scoring in the top 25% as a “winner.” In the real world, a trusted decisionmaking body would then have discretion to choose, from among those winners, the map most beneficial for Ohio voters overall.
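The selection rule itself is simple enough to sketch in a few lines; the plan names and scores below are hypothetical, and the crucial final step – choosing among the “winners” – is deliberately left to human judgment rather than to the code.

```python
import math

# Hypothetical overall scores for eight submitted plans.
scores = {"Plan A": 82.5, "Plan B": 79.0, "Plan C": 74.4, "Plan D": 71.9,
          "Plan E": 69.2, "Plan F": 66.8, "Plan G": 61.0, "Plan H": 54.3}

quartile_size = math.ceil(len(scores) * 0.25)               # top 25% of submissions
winners = sorted(scores, key=scores.get, reverse=True)[:quartile_size]

print(winners)  # ['Plan A', 'Plan B'] -- all forwarded to the decisionmaking body
```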
If you’re going to have a competition, this is an extremely thoughtful approach, for two reasons. First, it recognizes that even with clear scores, there may not be one clear “winner” (I owe a hat tip to Dan Goroff for this point). If two redistricting goals are equally important, but in conflict, it’s possible to have multiple winning plans with the same score – one sacrifices a bit of goal 1 to improve goal 2, and another does the opposite. In the math and economics worlds, this is known as the “Pareto frontier” – a whole set of outcomes, none of which can be improved on one goal without giving ground on another, all of them among the best available in a competition like the one that Ohio set up. By choosing the top 25%, the competition acknowledges that there might be a tie.
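A small invented example shows how that kind of tie arises. The two-goal scores below are made up, but with equal weights, three of the four plans end up with exactly the same overall score – and none of the three can be improved on one goal without giving ground on the other.

```python
# Invented scores on two conflicting goals: (goal 1, goal 2), each out of 100.
plans = {
    "Plan 1": (90, 60),
    "Plan 2": (60, 90),
    "Plan 3": (75, 75),
    "Plan 4": (55, 70),
}

def dominates(a, b):
    """Plan a dominates plan b if it is at least as good on every goal and better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# The Pareto frontier: plans that no other plan dominates.
frontier = [name for name, score in plans.items()
            if not any(dominates(other, score)
                       for other_name, other in plans.items() if other_name != name)]

print(frontier)  # ['Plan 1', 'Plan 2', 'Plan 3'] -- with equal weights, all score 75
```

Plan 4 is the only submission that drops out: Plan 3 beats it on both goals. The other three are all defensible “winners,” and the arithmetic alone offers no principled way to choose among them.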
Second, and probably more important, we know that the scoring system won’t be perfect. Even if it were possible to get perfect consensus on all of the goals of the redistricting process and their relative importance, translating that consensus to a mathematical score will involve a bit of noise. Both the measurements and the weights are approximate at best. Several of the traits being scored are just easily quantifiable proxies for elements of meaningful representation that are harder to measure. And there may be other intangible goals that don’t really have good proxies at all.
In this respect, the quest for the “best” redistricting plan is like the quest for the “best” supermarket produce. We’d want to take into account size, shelf life, cost, color, taste, and probably a bunch of other factors. Size and shelf life and cost can be easily measured and scored. There’s a scale for color, but we might have different opinions about what “best” looks like, and it’s going to be tough to score color blends. And our measurements for taste are approximate at best. If you’re going to set up a competition for the “best” produce, you’d want the results to be flexible enough to account for imperfection in the measurements, to get at the produce that’s near the quantifiable “Pareto frontier,” but perhaps not precisely on it.
So too with redistricting. Some goals are easily measured, but for others, any measure we might devise is at best a near miss. Since the score isn’t a perfect translation of the intended outcome, the highest score shouldn’t automatically win. In Ohio, the rules promote the top quartile of high-scoring plans – any of which improves on the status quo. Then a decisionmaking body would take a look, to see whether the number 4 plan actually serves Ohio voters better than the three plans with a higher numerical score. The decision to forgo simply appointing a single winner works out to be a big win for everyone.