>allowing valuable AIs to be trained in insecure environments without risking theft of their intelligence
If your system involves an unencrypted network and unencrypted data, it would be trivial to train an identical network.
The idea of controlling an "intelligence" with a private key is silly; you can achieve effectively the same thing by simply encrypting the weights after training.
Can't someone simply recover the weights of the network by looking at the changes in the encrypted loss? I don't think comparisons like "less than" or "greater than" can possibly exist in HE, or else pretty much any information one might be curious about could be recovered.
Great point. I don't think that LT or GT comparisons exist in this homomorphic scheme. :) Otherwise, it would be vulnerable. Checks like this are part of what goes into a good HE scheme.
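To make the concern above concrete, here is a minimal plaintext sketch of why even a one-bit "less than" leak on encrypted losses would be fatal. Everything here is an illustrative assumption, not part of any real HE library: `compare_encrypted_losses` stands in for the hypothetical leaky oracle, and the "model" is a single secret weight with a toy squared-error loss. Given only comparison bits, an attacker can pin down the weight by search:

```python
# Secret model: a single weight the attacker wants to recover.
SECRET_W = 0.7312

def loss(w):
    # Toy squared-error loss; the attacker never sees this value directly,
    # only the results of comparisons between two (notionally encrypted) losses.
    return (w - SECRET_W) ** 2

def compare_encrypted_losses(w_a, w_b):
    # Hypothetical leaky oracle: "is loss(a) < loss(b)?"
    # If an HE scheme exposed this, each query leaks one bit.
    return loss(w_a) < loss(w_b)

def recover_weight(lo=-10.0, hi=10.0, queries=60):
    # Ternary search for the loss minimum using only comparison bits;
    # works because the toy loss is unimodal in the single weight.
    for _ in range(queries):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if compare_encrypted_losses(m1, m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

print(abs(recover_weight() - SECRET_W) < 1e-6)  # → True
```

Real networks have many weights, but the same idea extends coordinate-by-coordinate, which is why good HE schemes deliberately make ciphertext comparisons impossible.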