This is a follow-up to Privacy - What's Possible With the NSA Watching.
I'll try to stick to the technology side of things and openly punt on the legal ins and outs.
A brave coder could create a data service that just moves things securely in and out, on top of which other convenient, secure services could be built: secure email, text messaging, phone calls (voice service), whatever you like. With Congress and the NSA apparently unwilling to admit that what they're doing violates basic US rights, it may be possible to force public oversight onto this process with technology, despite laws and programs to the contrary.
Basic Security
First, the service itself could be secured with HTTPS using perfect forward secrecy (PFS), described in the previous article. That takes care of the connections. On the servers themselves, you'd have a regularly rotating key for encrypting user data - a key you share with no one, including the government.
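The rotating storage key could be sketched like this. Everything here is illustrative: the names, the daily rotation period, and especially the XOR keystream, which stands in for a real authenticated cipher (in practice you'd use an AEAD such as AES-GCM). The one structural point it shows is that the master secret never leaves the server and per-period keys are derived from it, so old data can be re-keyed or aged out on rotation.

```python
import hashlib
import hmac
import secrets
import time

# Generated on the server at first boot; shared with no one.
MASTER_SECRET = secrets.token_bytes(32)
ROTATION_PERIOD = 24 * 3600  # assumed: rotate the data key daily

def current_key(now=None):
    """Derive the storage key for the current rotation period from the
    master secret. Only the period number changes; the master never moves."""
    period = int((now or time.time()) // ROTATION_PERIOD)
    return hmac.new(MASTER_SECRET, str(period).encode(), hashlib.sha256).digest()

def xor_stream(key, data):
    """Toy XOR keystream for illustration only -- use a real AEAD in practice."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

ciphertext = xor_stream(current_key(), b"user message")
```

Decrypting is the same operation with the same period's key; once the master secret is destroyed, every derived key (and so every ciphertext) becomes unrecoverable.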
But there's still the sticky problem of the poor jerk who runs this site being served a secret warrant, with no legitimate legal challenge available because the actual person whose rights are being violated isn't allowed to know. To untangle this gross catch-22 the government has assembled, what you really need to enable are three basic principles: separation, anonymity, and civil disobedience.
Separation
If the author of the code doesn't actually control the service, but just maintains the code the service uses to operate, there's some insulation between the author and the service. The service could be designed to generate its own keys, keep them completely private, and expose them to no one - not even the author. If we assume the author can't avoid being identified, the trick is to ensure they can never be compelled to expose user data. Suppose the NSA says to the author: someone on your network is a person of interest, tell no one, go get their data for us. If the author has no access to the keys the data is stored under - if the software itself is the only entity with actual access to them - there's not much the author can be asked to do. Except one thing: they could be compelled to write code that modifies the software so it exposes those keys, or exposes what a specific person has said.
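The separation idea can be sketched as a service object whose key exists only in process memory and which deliberately exposes no API for reading it out - the author can audit this class, but can't be handed the key on demand. This is a toy (the class name, the XOR "cipher", and the in-memory blob store are all assumptions for illustration):

```python
import hashlib
import secrets

class SealedStore:
    """Toy sketch of a service that generates its own key and exposes only
    store/fetch operations. There is intentionally no method that returns
    the key, so no one -- including the code's author -- can request it."""

    def __init__(self):
        # Key lives only in this process's memory; it is never written out.
        self.__key = secrets.token_bytes(32)
        self._blobs = {}

    def _keystream(self, nonce, length):
        # Toy keystream; a real service would use an AEAD cipher.
        out = bytearray()
        counter = 0
        while len(out) < length:
            out.extend(hashlib.sha256(
                self.__key + nonce + counter.to_bytes(8, "big")).digest())
            counter += 1
        return bytes(out[:length])

    def store(self, user, message):
        nonce = secrets.token_bytes(16)
        stream = self._keystream(nonce, len(message))
        self._blobs[user] = (nonce, bytes(a ^ b for a, b in zip(message, stream)))

    def fetch(self, user):
        nonce, ciphertext = self._blobs[user]
        stream = self._keystream(nonce, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, stream))
```

The point isn't that Python's name mangling is a security boundary (it isn't) - it's that the *interface* offers nothing a warrant could demand except the operations users already get.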
Civil Disobedience and Anonymity
So, if the service were set up so that the only way it can be modified is via a public channel, like a public code repository, any malicious code like the above would be forced out into the open. If you further push every modification through a public review process - ideally by anonymous coders, so they can't be compelled to approve malicious code - you place a pretty strong lock and key on the code. To do this, the software updates itself periodically by pulling the latest approved code from the public repository, and is designed to destroy all its keys (turning the leftover data into garbage) if it's modified in any other way. That leaves the risk of the repository itself being attacked, so whoever hosts the repository would also be at risk of being compelled. Worst case, you host the repo with the rest of the service and have the software respond to an attack by destroying the keys.
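The update-or-self-destruct behavior might look like the sketch below. The signature here is an HMAC with a shared key purely to keep the example runnable; a real system would verify a public-key signature (e.g. Ed25519) from the reviewers, and the constants are placeholders:

```python
import hashlib
import hmac

# Placeholder for the reviewers' published signing key.
REVIEWERS_SIGNING_KEY = b"published-review-key"
# The keys the service protects; clearing them makes stored data garbage.
storage_keys = {"current": b"\x01" * 32}

def sign(code):
    """Stand-in for a real detached signature over the update bundle."""
    return hmac.new(REVIEWERS_SIGNING_KEY, code, hashlib.sha256).hexdigest()

def apply_update(code, signature):
    """Apply an update only if it carries a valid reviewer signature.
    Any unapproved modification is treated as an attack: destroy the keys."""
    if hmac.compare_digest(sign(code), signature):
        return True   # deploy `code` from the public repository here
    storage_keys.clear()  # keys gone -> leftover data is unrecoverable
    return False
```

Note the asymmetry this buys you: an attacker who compromises the repo or the host can still destroy the service, but can no longer quietly read its data.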
The coders who do these code reviews would have to accept serious legal risk by participating - whatever ensures their anonymity could always be pulled back, at which point they could be compelled to approve malicious code - it could get pretty ugly. That's civil disobedience. There may be other ways to protect the coders besides anonymity: for example, if only a small, random subset of the coders is allowed to perform a given code review, every coder gains plausible deniability about who actually said no to a malicious submission, and no single coder is an obvious target for threats aimed at getting malicious code approved.
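The random-subset idea is simple to sketch: draw a small panel with a cryptographically strong RNG so selection can't be predicted or steered, and require a majority of that panel to approve. The panel size of 3 and the function names are assumptions for illustration:

```python
import secrets

def pick_review_panel(reviewers, panel_size=3):
    """Choose a random subset of reviewers for one code review, so no
    single reviewer is a predictable coercion target and each keeps
    plausible deniability about who rejected a submission."""
    pool = list(reviewers)
    panel = []
    for _ in range(min(panel_size, len(pool))):
        # secrets.randbelow gives an unpredictable, unbiased index.
        panel.append(pool.pop(secrets.randbelow(len(pool))))
    return panel

def review_passes(votes):
    """A submission is approved only if a strict majority of the panel
    votes yes (votes is a list of booleans)."""
    return sum(votes) > len(votes) / 2
```

With an odd panel size there are no ties, and a coercer would have to compromise a majority of an unpredictable panel rather than one known gatekeeper.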
An Olive Branch
As I said earlier, the goal is not to build the one place actual terrorists can have a nice secure chat about blowing up a building. You still want it to be possible for warrants to be served on real, actual criminals - you just don't want that to happen outside the realm of public oversight, with nothing but a "Just Trust Us" PR campaign as a guarantee it isn't being abused.
So, the goal is to make it possible to serve warrants into this system - essentially to the software itself - with a group of people - a jury of your peers, in a sense - deciding whether each warrant is valid and reasonable.
- Make an electronic warrant filing system the only way to get access to private data in this system. From a technical perspective, the system could periodically email some government address a key that lets them validate themselves as government actors, and they can make up their own minds about how to gate use of it. They've shown themselves plenty resourceful in screwing us so far; I'm sure they can do smart things with this as well.
- Every user of the system is a member of the jury pool. When a secret warrant is issued, a small pool of members is selected and sent the warrant. Since it's a secret warrant, disclosing it to them is illegal - another piece of civil disobedience. But if you manage to keep step 1 airtight, you may be able to force the government into step 2. A lawyer would know better than I what the electronic warrant system would need for the NSA and others to feel comfortable using it, and to legally cover members as well as possible.
- The random pool of users decides whether the warrant should be honored. If they decide it should, the selected communications are turned over, simple as that. If they decide it should be honored, but there's no reasonable justification for it being secret, they can turn the records over but have the software publish the warrant publicly. If the warrant is completely unreasonable, they can turn nothing over and have the software publish it, to remind the government of its duties. You can ensure the pool is always an odd number, so a simple majority decides both the "turn records over" and "make public" votes.
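The jury mechanics above reduce to two small pieces: random selection of an odd-sized pool, and two independent majority tallies. A minimal sketch, with the jury size of 5 and the function names assumed for illustration:

```python
import secrets

def select_jury(users, size=5):
    """Select a random jury from the user base, forcing an odd size so
    neither vote can tie (sketch; size is an assumed parameter)."""
    if size % 2 == 0:
        size += 1
    pool = list(users)
    jury = []
    for _ in range(min(size, len(pool))):
        jury.append(pool.pop(secrets.randbelow(len(pool))))
    return jury

def decide_warrant(honor_votes, publish_votes):
    """Tally the two simple-majority votes described above: whether to
    turn the records over, and whether to publish the warrant."""
    return {
        "turn_over_records": sum(honor_votes) > len(honor_votes) / 2,
        "publish_warrant": sum(publish_votes) > len(publish_votes) / 2,
    }
```

So a jury that honors the warrant but finds its secrecy unjustified would produce `turn_over_records` true and `publish_warrant` true; a fully unreasonable warrant gets nothing turned over and the warrant published.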
That's it - a way to put a jury of your peers and public scrutiny back into the US legal process. It's possible parts of this just aren't viable inside the US - in fact, the author of the software could probably have some really terrible things happen to them regardless of where they live, so they'd need to be as anonymous as possible. Go America.