Ahead of the 2019 TensorFlow Dev Summit, Google is announcing a new way for third-party developers to adopt differential privacy when training machine learning models. TensorFlow Privacy is designed to be easy to implement for developers already using the popular open-source ML library.
The goal of differential privacy for machine learning (via The Verge) is to “encode general patterns rather than facts about specific training examples.” This allows user data to remain private while the system as a whole still learns from, and improves on, general behavior.
In particular, when training on users’ data, those techniques offer strong mathematical guarantees that models do not learn or remember the details about any specific user. Especially for deep learning, the additional guarantees can usefully strengthen the protections offered by other privacy techniques, whether established ones, such as thresholding and data elision, or new ones, like TensorFlow Federated learning.
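For background on what those mathematical guarantees mean (this formulation is standard differential-privacy theory, not taken from Google’s announcement): a training procedure M is (ε, δ)-differentially private if, for any two training sets D and D′ that differ in one user’s records, and any set S of possible trained models,

```latex
\Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon}\,\Pr[\,M(D') \in S\,] + \delta
```

In other words, adding or removing any single user’s data can only change the likelihood of any training outcome by a small, bounded amount, which is why the model cannot memorize that user’s details.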
For the past several years, Google has been conducting foundational research on this technique, alongside other mechanisms like Federated Learning. The work also aligns with the Privacy tenet of Google’s Responsible AI Practices.
No privacy or mathematics expertise is needed to implement TensorFlow Privacy, with Google noting that developers “using standard TensorFlow mechanisms should not have to change their model architectures, training procedures, or processes.”
Instead, to train models that protect the privacy of their training data, it is often sufficient to make some simple code changes and tune the hyperparameters relevant to privacy.
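As a rough illustration of what those “simple code changes” might look like, the sketch below swaps a standard Keras optimizer for the differentially private DP-SGD optimizer that TensorFlow Privacy provides. The module path, class name, and hyperparameter values here are based on the library’s documented Keras API and may differ between versions; the model and numbers are placeholders, not Google’s example.

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import DPKerasSGDOptimizer

# A standard Keras model; the architecture itself does not change.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Swap the usual optimizer for its differentially private counterpart and
# tune the privacy-relevant hyperparameters: clipping norm, noise
# multiplier, and number of microbatches (placeholder values).
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,        # clip each per-example gradient to this L2 norm
    noise_multiplier=1.1,    # Gaussian noise scale relative to the clipping norm
    num_microbatches=256,    # must evenly divide the batch size
    learning_rate=0.15,
)

# The loss must be computed per example (no reduction) so gradients can be
# clipped individually before they are averaged and noised.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=256, epochs=5)
```

The rest of the training loop stays the same as ordinary Keras code, which is the point of the announcement: the privacy machinery lives in the optimizer and loss configuration rather than in the model architecture.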
TensorFlow Privacy is available on GitHub today, with Google recommending interested parties contribute.