The only threat “to disrupt the ability to run a timely election” in the latest fiasco from the Undergraduate Association’s Election Commission came from the leaders who decided to fire their computer guy three weeks before he was supposed to start running the elections.
The UA could avoid future messes like this by piling on committees and procedures to adjudicate claims of bias. Or it could preempt many concerns with a simple technical solution: publish the code.
Let students inspect the software that former Technical Coordinator Evan Broder ’10 was going to run, and let students themselves decide whether he could have altered the election’s outcome for political gain. If he could have tampered with votes or otherwise shown bias, then any possible replacement could too—and the fact that those replacements’ bias is not as well-known should not be reassuring. If the system allows for such bias to intervene, then it needs to be fixed.
Unless the elections system is public, there’s no way to gauge what threat, if any, is posed by having a political appointee run that system. I happen to think he hardly posed any threat at all, but how can I really know? In a closed system, there’s no way students can possibly trust that whoever runs the system will not affect the election’s outcome for whatever reason.
To be sure, there may be those who say that revealing the source code of a voting system will expose its flaws for anyone to abuse. But without scrutiny, how can a closed-source system ever improve? How do we even know now that votes are being counted properly?
When the UA forced the resignation of a perfectly good software engineer, it meant to ensure that students felt confident in the integrity of the election system. Instead, it increased the odds that something will go wrong by March 16, when online voting is to begin. That’s hardly a confidence-booster.
And I don’t feel very good about an election where I can’t see the software that counts the votes.