An advanced differential privacy library for PyTorch, based on Opacus.
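As a rough illustration of what "based on Opacus" implies, the sketch below shows how Opacus typically wraps a standard PyTorch training setup. This assumes the current (>= 1.0) Opacus API, which may differ from what this repo pinned; the model, data, and hyperparameter values are placeholders, not this repo's actual code.

```python
# Hedged sketch: attaching Opacus (>= 1.0) to a plain PyTorch setup.
# The model, data, and hyperparameters below are illustrative only.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
data = TensorDataset(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=10)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # Gaussian noise added to averaged gradients
    max_grad_norm=1.0,     # per-sample gradient clipping threshold
)
```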
Implementation of the vanilla federated learning paper: [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629).
Experiments are run on MNIST, Fashion MNIST, and CIFAR10 (both IID and non-IID). In the non-IID case, the data can be split among users equally or unequally.
Since the purpose of these experiments is to illustrate the effectiveness of the federated learning paradigm, only simple models such as MLP and CNN are used.
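To make the IID/non-IID distinction concrete, here is a minimal sketch of the two standard sampling schemes from the paper (function names and shard sizes are illustrative, not necessarily those used in `src`):

```python
import numpy as np

def iid_split(num_items, num_users):
    """IID: each user gets an equal-size uniform random sample of indices."""
    per_user = num_items // num_users
    all_idxs = np.arange(num_items)
    np.random.shuffle(all_idxs)
    return {u: all_idxs[u * per_user:(u + 1) * per_user]
            for u in range(num_users)}

def noniid_split(labels, num_users, shards_per_user=2):
    """Non-IID: sort indices by label, cut them into shards, and deal a few
    shards to each user, so each user sees only a couple of classes."""
    num_shards = num_users * shards_per_user
    shard_size = len(labels) // num_shards
    idxs = np.argsort(labels)                      # group indices by label
    shard_ids = np.random.permutation(num_shards)  # deal shards randomly
    return {
        u: np.concatenate([
            idxs[s * shard_size:(s + 1) * shard_size]
            for s in shard_ids[u * shards_per_user:(u + 1) * shards_per_user]
        ])
        for u in range(num_users)
    }
```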
Install all the packages from requirments.txt, e.g. `pip install -r requirments.txt`.
The default values for the various parameters are given in `options.py`. Details on some of those parameters (a parser sketch follows this list):
* ```--model:``` Default: 'mlp'. Options: 'mlp', 'cnn'
* ```--gpu:``` Default: None (runs on CPU). Can also be set to a specific GPU id.
* ```--epochs:``` Number of rounds of training.
* ```--lr:``` Learning rate, set to 0.01 by default.
* ```--verbose:``` Detailed log output. Activated by default; set to 0 to deactivate.
* ```--seed:``` Random seed. Default set to 1.
* Federated Parameters
* ```--iid:``` Distribution of data amongst users. Default set to IID. Set to 0 for non-IID.
* ```--num_users:``` Number of users. Default is 100.
* ```--frac:``` Fraction of users to be used for federated updates. Default is 0.1.
* ```--local_ep:``` Number of local training epochs on each user. Default is 10.
* ```--local_bs:``` Batch size of local updates on each user. Default is 10.
* ```--unequal:``` Used in the non-IID setting. Option to split the data amongst users equally or unequally. Default set to 0 for equal splits; set to 1 for unequal splits.
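For orientation, these flags map naturally onto an argparse parser along the following lines. This is a sketch, not the repo's actual `options.py`; defaults mirror the list above where stated and are illustrative otherwise (e.g. `--epochs`).

```python
import argparse

def args_parser():
    # Sketch of a parser matching the flags documented above.
    parser = argparse.ArgumentParser()
    # model / training arguments
    parser.add_argument('--model', type=str, default='mlp', help="'mlp' or 'cnn'")
    parser.add_argument('--gpu', default=None, help="GPU id; None runs on CPU")
    parser.add_argument('--epochs', type=int, default=10, help="rounds of training")
    parser.add_argument('--lr', type=float, default=0.01, help="learning rate")
    parser.add_argument('--verbose', type=int, default=1, help="set 0 to silence logs")
    parser.add_argument('--seed', type=int, default=1, help="random seed")
    # federated arguments
    parser.add_argument('--iid', type=int, default=1, help="0 for non-IID")
    parser.add_argument('--num_users', type=int, default=100, help="number of users")
    parser.add_argument('--frac', type=float, default=0.1, help="fraction of users per round (C)")
    parser.add_argument('--local_ep', type=int, default=10, help="local epochs (E)")
    parser.add_argument('--local_bs', type=int, default=10, help="local batch size (B)")
    parser.add_argument('--unequal', type=int, default=0, help="1 for unequal non-IID splits")
    return parser.parse_args()
```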
## Running the experiments
* The baseline experiment trains the model in the conventional (non-federated) way.
* To run the baseline experiment with MNIST on MLP using CPU:
```
python baseline_main.py --model=mlp --dataset=mnist --gpu=None --epochs=10
```
* Or to run it on GPU (e.g. if gpu:0 is available):
```
python baseline_main.py --model=mlp --dataset=mnist --gpu=0 --epochs=10
```
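For contrast with the federated loop below, the baseline is just ordinary centralized training, roughly as sketched here (a generic loop, not the repo's exact script):

```python
import torch

def train_baseline(model, loader, epochs=10, lr=0.01):
    """Plain centralized SGD training: the non-federated baseline."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```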
-----
* The federated experiment trains a global model by aggregating many locally trained models.
* To run the federated experiment with CIFAR on CNN (using CPU):
```
python federated_main.py --model=cnn --dataset=cifar --gpu=None --epochs=10
```
* Or to run it on GPU (e.g. if gpu:0 is available):
```
python federated_main.py --model=cnn --dataset=cifar --gpu=0 --epochs=10
```
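The heart of the federated run is the FedAvg update from the paper: each round, a fraction `frac` of the `num_users` clients trains locally for `local_ep` epochs, and the server averages their weights. A minimal sketch of one round (helper names such as `local_update` are placeholders, not the repo's actual API):

```python
import copy
import numpy as np
import torch

def average_weights(local_weights):
    """FedAvg aggregation: element-wise mean of the clients' state_dicts."""
    avg = copy.deepcopy(local_weights[0])
    for key in avg.keys():
        for w in local_weights[1:]:
            avg[key] += w[key]
        avg[key] = torch.div(avg[key], len(local_weights))
    return avg

def run_round(global_model, clients, local_update, frac=0.1):
    """One communication round: sample clients, train locally, average."""
    m = max(int(frac * len(clients)), 1)
    chosen = np.random.choice(len(clients), m, replace=False)
    local_weights = []
    for idx in chosen:
        local_model = copy.deepcopy(global_model)
        # `local_update` stands in for `local_ep` epochs of SGD on
        # `clients[idx]`'s data with batch size `local_bs`.
        local_weights.append(local_update(local_model, clients[idx]))
    global_model.load_state_dict(average_weights(local_weights))
    return global_model
```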
## Results

### Baseline experiment
The baseline experiment trains a single model in the conventional way.

Parameters:
* Optimizer: SGD
* Learning rate: 0.01
Table 1: Test accuracy after training for 10 epochs
Model | Test Acc |
---|---|
MLP | 92.71% |
CNN | 98.42% |
### Federated experiment
The federated experiment trains a global model in the federated setting.

Federated parameters:
* Fraction of users (C): 0.1
* Local batch size (B): 10
* Local epochs (E): 10
* Optimizer: SGD
* Learning rate: 0.01
Table 2: Test accuracy after training for 10 global epochs
Model | IID | Non-IID (equal) |
---|---|---|
MLP | 88.38% | 73.49% |
CNN | 97.28% | 75.94% |