Preventing Health Data Leaks with Federated Learning Using NVIDIA FLARE

- The goal of federated learning is to perform data analytics and machine learning without accessing the raw data of remote sites.
- NVIDIA FLARE lets data scientists supply custom training and validation code without compromising data security (see the executor sketch after this list).
- Encryption and transmission-channel security alone are not enough to guarantee protection against data leakage.
- FLARE 2.3.2 introduces features that ensure data protection in federated learning and analytics.
- Data owners can review submitted code before it is executed on their data, preventing malicious code from running (a hypothetical review check is sketched below).
- The job workflow is modified so that potentially malicious code cannot run during object initialization or construction, before review has taken place.
- This solution does not solve all data protection problems, but it is crucial when data owners have limited trust in remote data scientists.
- Other areas to consider for data protection include model inference attacks, differential privacy, transmission channel security, and output filters (see the filter sketch after this list).
- Defense in depth is necessary to protect data owners from potentially malicious code in federated scenarios where full trust between parties cannot be assumed.
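
To make the custom-code point concrete, the sketch below shows what a minimal client-side executor might look like, assuming the `Executor` API from FLARE 2.x. The class name `LocalTrainer` and the `train` task name are illustrative assumptions, not code from the post.

```python
from nvflare.apis.executor import Executor
from nvflare.apis.fl_constant import ReturnCode
from nvflare.apis.fl_context import FLContext
from nvflare.apis.shareable import Shareable, make_reply
from nvflare.apis.signal import Signal


class LocalTrainer(Executor):
    """Illustrative custom executor: local data is read and trained on in place,
    and only the resulting model update or metrics are returned to the server."""

    def execute(
        self, task_name: str, shareable: Shareable, fl_ctx: FLContext, abort_signal: Signal
    ) -> Shareable:
        if task_name != "train":
            return make_reply(ReturnCode.TASK_UNKNOWN)
        # Load the site's local data and run training here; the raw data
        # never leaves the site -- only weights and metrics are shared back.
        return make_reply(ReturnCode.OK)
```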
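The code-review point can be illustrated with a simple site-side check that is not part of FLARE itself: a hypothetical script that hashes the files of a submitted job and compares them against a manifest the data owner previously reviewed and approved, before the site agrees to run the job.

```python
import hashlib
import json
from pathlib import Path


def job_manifest(job_dir: str) -> dict:
    """Hash every file in a submitted job folder (hypothetical helper)."""
    return {
        str(p.relative_to(job_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(job_dir).rglob("*"))
        if p.is_file()
    }


def is_approved(job_dir: str, approved_manifest_path: str) -> bool:
    """Return True only if the job exactly matches the reviewed manifest."""
    approved = json.loads(Path(approved_manifest_path).read_text())
    return job_manifest(job_dir) == approved
```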
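Finally, a hedged sketch of an output filter: assuming the `Filter` and DXO APIs from FLARE 2.x, the filter below rounds outgoing model weights before they leave a site. The class name and the choice of rounding as the defense are illustrative assumptions, not the specific mechanism described in the post.

```python
import numpy as np

from nvflare.apis.dxo import DataKind, from_shareable
from nvflare.apis.filter import Filter
from nvflare.apis.fl_context import FLContext
from nvflare.apis.shareable import Shareable


class RoundWeightsFilter(Filter):
    """Illustrative output filter: coarsen outgoing weights to reduce leakage."""

    def __init__(self, decimals: int = 4):
        super().__init__()
        self.decimals = decimals

    def process(self, shareable: Shareable, fl_ctx: FLContext) -> Shareable:
        dxo = from_shareable(shareable)
        if dxo.data_kind not in (DataKind.WEIGHTS, DataKind.WEIGHT_DIFF):
            return shareable  # leave non-model payloads untouched
        dxo.data = {k: np.round(v, self.decimals) for k, v in dxo.data.items()}
        return dxo.update_shareable(shareable)
```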