I’ve been tossing this around in my head and was wondering if it’s interesting to others. The problem I’m trying to solve is two-fold. First, how to ensure end-to-end security on data you store in the cloud. Second, how to give services limited access to that data so that once the online processing is complete, no one can decrypt it. This would allow for, say, webmail where even if the cloud server is eventually compromised, no one has access to the data.
The basic architecture is that of a sandbox. The sandbox has a journalling storage and retrieval service, a partial secret service and a monitored network service. Inside the sandbox are “apps”, which can only communicate via those three services.
Whenever the storage service is asked to store data, it is encrypted with a public key. You keep the private key on a remote device. This means that apps can freely store data, but it’s sort of a data black hole, and only a remote device can decrypt it with the private key. An app can forward encrypted data to a device and get it decrypted there.
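The write-only store can be sketched with textbook RSA (tiny toy numbers, no padding — insecure, illustration only; the function names are mine, not from any real system):

```python
# Textbook RSA with tiny primes -- illustration only, never use in practice.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def store(m: int) -> int:
    """Storage service: encrypts with the public key only."""
    return pow(m, e, n)

def client_decrypt(c: int) -> int:
    """Remote device: the only place the private key lives."""
    return pow(c, d, n)
```

Anything an app writes goes in encrypted; without `d`, reads come back as ciphertext — the “data black hole”.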
So far it’s just like an encrypted Dropbox, but here’s where it gets fancy. You then use secret splitting to cut the private key into two pieces. You give one piece to the Partial Secret Service and the other to the app. Neither can decrypt the data on its own, but together they can decrypt it for a limited time. You can remove the split secret from the Partial Secret Service (which you trust), which stops the app from being able to decrypt the data. The next time, you split off a different pair of secrets so that replay attacks can’t work.
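The two-way split can be done with a simple XOR scheme — a sketch only (a real system might prefer Shamir’s scheme, and the helper names here are mine):

```python
import secrets

def split(secret: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; each share alone is just random noise."""
    share_a = secrets.token_bytes(len(secret))
    share_b = bytes(x ^ y for x, y in zip(secret, share_a))
    # One share for the Partial Secret Service, one for the app.
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Both shares together recover the original key."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))
```

Because `share_a` is freshly random on every call, each session yields a different pair, so shares captured from an earlier session are useless for replay.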
While you trust the partial secret service, you also have some way of knowing when the system is compromised. If the system is compromised and either of the key halves is gone, the encrypted data is safe. Importantly, if you can shut the sandbox down on intrusion, your data is safe.
Thoughts? Are there any fatal flaws in my idea?
There’s a project on GitHub looking at implementing an open-source BitTorrent Sync style protocol called clearskies, which has an extensible architecture by design, so that might be worth looking into.
They’re working on an alpha implementation in C++ now.
The Maidsafe project is attempting a similar goal, but at a much larger scale, as a distributed autonomous corporation. It’s essentially a global storage network where all data is encrypted client-side and chunked; the chunks are then globally de-duplicated and replicated across the network. Resource consumption is managed by an internal blockchain-based cryptocurrency, either bought directly or earned by providing storage to the network.
Dual-licensed GPLv3 and a commercial license (1% of revenue).
Warning: my crypto knowledge is somewhat out of date, so please take this with a pinch of salt:
The item which causes me most concern in Sunny’s scheme is this: is there a risk of “leaking” bits of the private key each time a new pair of secrets is split off? If this is likely to be a frequent event, perhaps some thought might be usefully invested in some kind of time- or usage-based key-pair renewal/regeneration mechanism?
We (not PPAU) have started on our own version that acts as storage for other services to use. We are also working on a BitTorrent-driven web browsing system, so filtering can’t be applied to it. Hopefully we can have some information up soon (still ironing out the specs).
Haha, yeah, that’s my main concern as well. It would be nice to be able to generate session keys or something, but I couldn’t come up with a clever way. I did come up with a heavier-weight protocol where, when encrypting, an app (or in this case a new sandbox-side service) would generate a fresh private key on the fly for each block and encrypt the block with it. It would then encrypt that per-block key with the public key and put it on a queue for pick-up.
The client would then pick up all the per-block keys and store them on the client. It would then split the key for each “block” and hand the halves to each end when they ask for them. This would make bit leaking both less likely and less of a security risk, since the “blocks” all have different keys. The only downside is that it relies on a good random generator on the server end, and obviously it’s a noisier protocol, requiring more intervention from the client.
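A stdlib-only sketch of that per-block flow. The toy RSA (genuine Mersenne primes, but no padding) and the one-time pad are stand-ins for real primitives, and all the names here are my own invention:

```python
import secrets

# Toy RSA over two Mersenne primes -- illustrative only, unpadded, not secure.
p = 2**89 - 1
q = 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, held by the client

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad as a stand-in for a real symmetric cipher."""
    return bytes(a ^ b for a, b in zip(data, key))

stored = []     # sandbox-side encrypted blocks
key_queue = []  # wrapped per-block keys awaiting client pick-up

def store_block(block: bytes) -> None:
    """Sandbox side: fresh key per block, wrapped under the public key."""
    block_key = secrets.token_bytes(len(block))  # demo assumes short blocks
    stored.append(otp(block, block_key))
    key_queue.append(pow(int.from_bytes(block_key, "big"), e, n))

def client_unwrap(wrapped: int, length: int) -> bytes:
    """Client side: recover a block key, ready to be split into shares."""
    return pow(wrapped, d, n).to_bytes(length, "big")
```

Each recovered block key is what the client would then split and dole out, so a leak during one session exposes at most that one block.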
Sounds a bit like the Invisible Internet Project. There was also another peer-to-peer anonymous file-sharing thing using GPG keys floating around a while back that never took off.