81 commits 82a98880a2 ... a3ab6c0c1a

Author SHA1 Message Date
  Jens Keim a3ab6c0c1a add functions for server side noise 4 years ago
  Jens Keim 69f06611d2 add privacy_engine_xl with modified _generate_noise 4 years ago
  Jens Keim d22bc3a22d 3 years ago
  wesleyjtan 82a98880a2 update requirements 5 years ago
  tanyksg 2b5c2270e2 Minor edits to README and requirements.txt 5 years ago
  wesleyjtan 074270857f update requirements 5 years ago
  wesleyjtan 70a78d4ebc update readme, requirements, and eval notebooks 5 years ago
  tanyksg bd11ec3ea9 updated eval notebooks 5 years ago
  tanyksg d6f6bd0394 Updated FP16 bash script, and added plots 5 years ago
  wesleyjtann e21e7007da add to scripts and plot figures in Eval notebook 5 years ago
  wesleyjtann 3ea4942121 clean 32bit code and fix cuda issue 5 years ago
  wesleyjtann 8662167bd3 Merge branch 'With_FP16' 5 years ago
  wesleyjtann b351618c26 add files to .gitignore 5 years ago
  tanyksg 42e7e9d52e Added FP16 experiments and results 5 years ago
  wesleyjtann 2c64fdafff run HFL experiments 5 years ago
  wesleyjtann 59ebd7820b change CNNCifar model to 917350 params and update evaluation results 5 years ago
  wesleyjtann 3a0e28ed92 debug HFL fl_train() eval 5 years ago
  wesleyjtann 357dfff83a change eval and shuffle dataset for training 5 years ago
  wesleyjtann a64a8792e4 fix HFL clustering bug 5 years ago
  wesleyjtann 06e8f098a6 fix bug and update readme 5 years ago
  wesleyjtann 185a6197ef change training loop to depend on test accuracy 5 years ago
  wesleyjtann d64b80080a add federated-hierarchical_main.py and update readme 5 years ago
  wesleyjtann 83c5219a4c fix error in federated_main.py->idxs=user_groups[c], and implement first hierarchical structure with 2 clusters 5 years ago
  wesleyjtann 730884a595 split dataset into clusters 5 years ago
  wesleyjtann fbe609c11c change mlp model to 2 hidden layers with 200 units 5 years ago
  WesleyJoonWieTann d1d89f4e6c fix utils.py get_dataset() cifar10 5 years ago
  Ashwin R Jadhav 8105c587d3 Merge pull request #1 from AshwinRJ/add-license-1 5 years ago
  Ashwin R Jadhav 36f33d7f01 Create LICENSE 5 years ago
  Ashwin R Jadhav 7a3578ddda Update README.md 5 years ago
  Ashwin R Jadhav 5e13165dac Update README.md 5 years ago
  Ashwin R Jadhav 59e214f29e Update README.md 5 years ago
  Ashwin R Jadhav 22d21b00ba Update README.md 5 years ago
  Ashwin R Jadhav 545e5df58b Update README.md 5 years ago
  Ashwin R Jadhav 778f45d0b7 Added exp_details function 5 years ago
  Ashwin R Jadhav ea11cd92a5 Update baseline_main.py 5 years ago
  Ashwin R Jadhav df2c9f86fc Minor bug fix 5 years ago
  Ashwin R Jadhav 2d3b7d2337 Minor update 5 years ago
  Ashwin R Jadhav 9250f8e0b8 Update baseline_main.py 5 years ago
  AshwinRJ ff15a0fc6b conform to pep8 5 years ago
  AshwinRJ 565fd6a756 Added baseline experiment 5 years ago
  AshwinRJ d933dd89b8 Minor fixes 5 years ago
  Ashwin R Jadhav 511e071a3e Update README.md 5 years ago
  AshwinRJ 6b9110cea9 Added readme 5 years ago
  AshwinRJ 9c97e403c1 Added dir for fashion mnist 5 years ago
  Ashwin R Jadhav 28140fb289 Rename main_fedavg.py to federated_main.py 5 years ago
  AshwinRJ 3abd0cc8af Added support for pytorch 1.2 5 years ago
  Ashwin R Jadhav 486bdb43a4 Delete averaging.py 5 years ago
  AshwinRJ 676dcdbd30 Improved Logging and Readability 5 years ago
  AshwinRJ a7ad5d04a9 Added averaging function 5 years ago
  AshwinRJ 2e1313f8f3 Added optimizer option 5 years ago
  AshwinRJ 2b46351813 Added test inference 5 years ago
  Ashwin R Jadhav 9beaecd14f Rename model.py to models.py 5 years ago
  Ashwin R Jadhav 0daec5578e Minor fix 5 years ago
  Ashwin R Jadhav 4cf63daad4 Update and rename Update.py to update.py 5 years ago
  AshwinRJ 5f6673f73a Major restructuring 5 years ago
  AshwinRJ eadd078244 Added utils 5 years ago
  AshwinRJ 089d168465 Added requirments 5 years ago
  AshwinRJ 53de533bd7 Remove duplicated directory 5 years ago
  AshwinRJ 391af6bd58 Merge branch 'master' of https://github.com/AshwinRJ/Federated-Learning 5 years ago
  AshwinRJ d99f9b155a Reorganized files 5 years ago
  AshwinRJ 6a3a0a19a4 .DS_Store ignored! 5 years ago
  Ashwin R Jadhav fb4c7b447f Delete .DS_Store 5 years ago
  Ashwin R Jadhav 5bbfb057d5 Delete .DS_Store 5 years ago
  Ashwin R Jadhav a0ebfa4430 Delete .DS_Store 5 years ago
  Ashwin R Jadhav 97ab2b1c29 Delete DS_Score 5 years ago
  AshwinRJ 4874c36d30 Added support for Fashion Mnist 5 years ago
  AshwinRJ 23c590a2a8 Minor fix 6 years ago
  AshwinRJ adba993623 Minor fix 6 years ago
  AshwinRJ 00c0211e4c Unequal data split support for non-iid 6 years ago
  AshwinRJ d47dc60fcc Updated figure save format 6 years ago
  AshwinRJ 418feb80dc Small Update 6 years ago
  AshwinRJ acb916a8d7 Minor fix 6 years ago
  AshwinRJ a52b69c93e Added avg acc plot 6 years ago
  AshwinRJ e00b27389e Added Federated Avg 6 years ago
  AshwinRJ 0e559f757a Added Regular NN Implementation 6 years ago
  Ashwin R Jadhav 97a488db17 Delete NN_Arch.py 6 years ago
  AshwinRJ 3c93e8265e Added Regular NN Implementation 6 years ago
  Ashwin R Jadhav 82adfa2df0 Update README.md 6 years ago
  AshwinRJ e9f57c2877 Added CNN Arch for CIFAR 6 years ago
  AshwinRJ e81afa9eed Added CNN MNIST 6 years ago
  AshwinRJ 7ad2e4503b First commit 6 years ago
100 changed files with 0 additions and 265 deletions
  1. + 0 - 12  .gitignore
  2. + 0 - 116  .ipynb_checkpoints/README-checkpoint.md
  3. + 0 - 21  LICENSE
  4. + 0 - 108  README.md
  5. + 0 - 4  data/README.md
  6. + 0 - 0  data/cifar/.gitkeep
  7. + 0 - 0  data/fashion_mnist/.gitkeep
  8. + 0 - 0  data/mnist/.gitkeep
  9. + 0 - 4  requirements.txt
  10. + 0 - 0  save/.gitkeep
  11. BIN  save/MNIST (MLP, IID) FP16 and FP32 Comparison_acc_FP16_32.png
  12. BIN  save/MNIST (MLP, IID) FP16 and FP32 Comparison_loss_FP16_32.png
  13. BIN  save/MNIST_CNN_IID FP16 and FP32 Comparison_acc_FP16_32.png
  14. BIN  save/MNIST_CNN_IID FP16 and FP32 Comparison_loss_FP16_32.png
  15. BIN  save/MNIST_CNN_IID_FP16_acc_FP16.png
  16. BIN  save/MNIST_CNN_IID_FP16_loss_FP16.png
  17. BIN  save/MNIST_CNN_IID_acc.png
  18. BIN  save/MNIST_CNN_IID_acc_FP16.png
  19. BIN  save/MNIST_CNN_IID_loss.png
  20. BIN  save/MNIST_CNN_IID_loss_FP16.png
  21. BIN  save/MNIST_CNN_NONIID FP16 and FP32 Comparison_acc_FP16_32.png
  22. BIN  save/MNIST_CNN_NONIID FP16 and FP32 Comparison_loss_FP16_32.png
  23. BIN  save/MNIST_CNN_NONIID_FP16_acc_FP16.png
  24. BIN  save/MNIST_CNN_NONIID_FP16_loss_FP16.png
  25. BIN  save/MNIST_CNN_NONIID_acc.png
  26. BIN  save/MNIST_CNN_NONIID_acc_FP16.png
  27. BIN  save/MNIST_CNN_NONIID_loss.png
  28. BIN  save/MNIST_CNN_NONIID_loss_FP16.png
  29. BIN  save/MNIST_MLP_IID FP16 and FP32 Comparison_acc_FP16_32.png
  30. BIN  save/MNIST_MLP_IID FP16 and FP32 Comparison_loss_FP16_32.png
  31. BIN  save/MNIST_MLP_IID_FP16_acc_FP16.png
  32. BIN  save/MNIST_MLP_IID_FP16_loss_FP16.png
  33. BIN  save/MNIST_MLP_IID_acc.png
  34. BIN  save/MNIST_MLP_IID_acc_FP16.png
  35. BIN  save/MNIST_MLP_IID_loss.png
  36. BIN  save/MNIST_MLP_IID_loss_FP16.png
  37. BIN  save/MNIST_MLP_NONIID FP16 and FP32 Comparison_acc_FP16_32.png
  38. BIN  save/MNIST_MLP_NONIID FP16 and FP32 Comparison_loss_FP16_32.png
  39. BIN  save/MNIST_MLP_NONIID_FP16_acc_FP16.png
  40. BIN  save/MNIST_MLP_NONIID_FP16_loss_FP16.png
  41. BIN  save/MNIST_MLP_NONIID_acc.png
  42. BIN  save/MNIST_MLP_NONIID_acc_FP16.png
  43. BIN  save/MNIST_MLP_NONIID_loss.png
  44. BIN  save/MNIST_MLP_NONIID_loss_FP16.png
  45. BIN  save/objects/.DS_Store
  46. BIN  save/objects/Bad/HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  47. BIN  save/objects/Bad/HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  48. BIN  save/objects/Bad/HFL4_mnist_mlp_100_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl
  49. BIN  save/objects/Bad/HFL4_mnist_mlp_100_lr[0.05]_C[0.1]_iid[1]_E[1]_B[10].pkl
  50. BIN  save/objects/Bad/HFL4_mnist_mlp_150_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  51. BIN  save/objects/Bad/HFL4_mnist_mlp_150_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl
  52. BIN  save/objects/Bad/clustersize25_HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  53. BIN  save/objects/Bad/clustersize25_HFL4_mnist_mlp_100_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl
  54. BIN  save/objects/Old/FL_cifar_cnn_200_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl
  55. BIN  save/objects/Old/FL_mnist_mlp_141_lr[0.1]_C[0.1]_iid[1]_E[1]_B[10].pkl
  56. BIN  save/objects/Old/FL_mnist_mlp_302_lr[0.1]_C[0.1]_iid[0]_E[1]_B[10].pkl
  57. BIN  save/objects/Old/[10]_FL_mnist_cnn_3160_C[0.1]_iid[0]_E[1]_B[10].pkl
  58. BIN  save/objects/Old/[1]FL_mnist_mlp_200_lr[0.05]_C[0.1]_iid[1]_E[1]_B[10].pkl
  59. BIN  save/objects/Old/[1]_FL_mnist_mlp_500_C[0.1]_iid[1]_E[1]_B[10].pkl
  60. BIN  save/objects/Old/[2]FL_mnist_mlp_302_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl
  61. BIN  save/objects/Old/[2]_FL_mnist_mlp_1468_C[0.1]_iid[0]_E[1]_B[10].pkl
  62. BIN  save/objects/Old/[3]HFL2_mnist_mlp_101_lr[0.05]_C[0.1]_iid[1]_E[1]_B[10].pkl
  63. BIN  save/objects/Old/[3]_HFL_mnist_mlp_500_C[0.1]_iid[1]_E[1]_B[10].pkl
  64. BIN  save/objects/Old/[4]HFL2_mnist_mlp_101_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl
  65. BIN  save/objects/Old/[9]_FL_mnist_cnn_1054_C[0.1]_iid[1]_E[1]_B[10].pkl
  66. BIN  save/objects/Old/clustersize50HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  67. BIN  save/objects/[10]FL_mnist_cnn_261_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  68. BIN  save/objects/[11]HFL2_mnist_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  69. BIN  save/objects/[12]HFL2_mnist_cnn_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  70. BIN  save/objects/[13]HFL4_mnist_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  71. BIN  save/objects/[14]HFL4_mnist_cnn_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  72. BIN  save/objects/[15]HFL8_mnist_cnn_30_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  73. BIN  save/objects/[16]HFL8_mnist_cnn_30_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  74. BIN  save/objects/[1]FL_mnist_mlp_468_C[0.1]_iid[1]_E[1]_B[10].pkl
  75. BIN  save/objects/[20]FL_cifar_cnn_300_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl
  76. BIN  save/objects/[21]HFL2_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl
  77. BIN  save/objects/[22]HFL4_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl
  78. BIN  save/objects/[23]HFL8_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl
  79. BIN  save/objects/[2]FL_mnist_mlp_1196_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  80. BIN  save/objects/[3]HFL2_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  81. BIN  save/objects/[4]HFL2_mnist_mlp_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  82. BIN  save/objects/[5]HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  83. BIN  save/objects/[6]HFL4_mnist_mlp_150_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  84. BIN  save/objects/[7]HFL8_mnist_mlp_30_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  85. BIN  save/objects/[9]FL_mnist_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl
  86. BIN  save/objects/clustersize50HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl
  87. BIN  save/objects_fp16/BaseSGD_cifar_cnn_epoch[9]_lr[0.01]_iid[1]_FP16.pkl
  88. BIN  save/objects_fp16/BaseSGD_mnist_cnn_epoch[9]_lr[0.01]_iid[1]_FP16.pkl
  89. BIN  save/objects_fp16/BaseSGD_mnist_mlp_epoch[9]_lr[0.01]_iid[1].pkl
  90. BIN  save/objects_fp16/BaseSGD_mnist_mlp_epoch[9]_lr[0.01]_iid[1]_FP16.pkl
  91. BIN  save/objects_fp16/FL_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl
  92. BIN  save/objects_fp16/FL_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50]_FP16.pkl
  93. BIN  save/objects_fp16/FL_cifar_cnn_100_lr[0.01]_C[0.5]_iid[1]_E[5]_B[50].pkl
  94. BIN  save/objects_fp16/FL_cifar_cnn_200_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50]_FP16.pkl
  95. BIN  save/objects_fp16/FL_cifar_cnn_300_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50]_FP16.pkl
  96. BIN  save/objects_fp16/FL_cifar_cnn_500_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50]_FP16.pkl
  97. BIN  save/objects_fp16/FL_mnist_cnn_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10]_FP16.pkl
  98. BIN  save/objects_fp16/FL_mnist_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10]_FP16.pkl
  99. BIN  save/objects_fp16/FL_mnist_cnn_261_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10]_FP16.pkl
  100. BIN  save/objects_fp16/FL_mnist_mlp_1196_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10]_FP16.pkl

+ 0 - 12
.gitignore

@@ -1,12 +0,0 @@
-.DS_Store
-
-# exclude everything
-logs/*
-data/mnist/*
-data/cifar/*
-
-src/.ipynb_checkpoints/Mixed Precision Training-test-checkpoint.ipynb
-src/.ipynb_checkpoints/federated_main-Mixed Precision Training-checkpoint.ipynb
-src/Mixed Precision Training-test.ipynb
-src/federated_main-Mixed Precision Training.ipynb
-

+ 0 - 116
.ipynb_checkpoints/README-checkpoint.md

@@ -1,116 +0,0 @@
-# Federated-Learning (PyTorch)
-
-Implementation of both hierarchical and vanilla federated learning based on the paper : [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629).
-Blog Post: https://ai.googleblog.com/2017/04/federated-learning-collaborative.html
-
-Experiments are conducted on the MNIST and CIFAR10 datasets. The data is split amongst users in both IID and non-IID fashion. In the non-IID case, the data can be split amongst users equally or unequally.
-
-Since the purpose of these experiments is to illustrate the effectiveness of the federated learning paradigm, only simple models such as MLP and CNN are used.
-
-## Requirements
-Install all the packages from requirements.txt
-* Python=3.7.3
-* Pytorch=1.2.0
-* Torchvision=0.4.0
-* Numpy=1.15.4
-* Tensorboardx=1.4
-* Matplotlib=3.0.1
-
-
-## Data
-* Download the train and test datasets manually, or they will be downloaded automatically from torchvision datasets.
-* Experiments are run on MNIST and CIFAR.
-* To use your own dataset: move your dataset to the data directory and write a wrapper over the PyTorch Dataset class.
-
-## Running the experiments
-The baseline experiment trains the model in the conventional way.
-
-* To run the baseline experiment with MNIST on MLP using CPU:
-```
-python baseline_main.py --model=mlp --dataset=mnist --epochs=10
-```
-* Or to run it on GPU (eg: if gpu:0 is available):
-```
-python baseline_main.py --model=mlp --dataset=mnist --gpu=1 --epochs=10
-```
------
-
-The federated experiment involves training a global model using many local models.
-
-* To run the federated experiment with CIFAR on CNN (IID):
-```
-python federated_main.py --local_ep=1 --local_bs=10 --frac=0.1 --model=cnn --dataset=cifar --iid=1 --test_acc=99 --gpu=1
-```
-* To run the same experiment under non-IID condition:
-```
-python federated_main.py --local_ep=1 --local_bs=10 --frac=0.1 --model=cnn --dataset=cifar --iid=0 --test_acc=99 --gpu=1
-```
------
-
-Hierarchical federated experiments involve training a global model using different clusters, each with many local models.
-
-* To run the hierarchical federated experiment with MNIST on MLP (IID):
-```
-python federated-hierarchical_main.py --local_ep=1 --local_bs=10 --frac=0.1 --Cepochs=5 --model=mlp --dataset=mnist --iid=1 --num_cluster=2 --test_acc=97  --gpu=1
-```
-* To run the same experiment under non-IID condition:
-```
-python federated-hierarchical_main.py --local_ep=1 --local_bs=10 --frac=0.1 --Cepochs=5 --model=mlp --dataset=mnist --iid=0 --num_cluster=2 --test_acc=97  --gpu=1
-```
-
-You can change the default values of other parameters to simulate different conditions. Refer to the options section.
-
-## Options
-The default values for the various parameters passed to the experiment are given in ```options.py```. Details of some of those parameters:
-
-* ```--dataset:```  Default: 'mnist'. Options: 'mnist', 'fmnist', 'cifar'
-* ```--model:```    Default: 'mlp'. Options: 'mlp', 'cnn'
-* ```--gpu:```      Default: None (runs on CPU). Can also be set to the specific gpu id.
-* ```--epochs:```   Number of rounds of training.
-* ```--lr:```       Learning rate set to 0.01 by default.
-* ```--verbose:```  Detailed log outputs. Activated by default, set to 0 to deactivate.
-* ```--seed:```     Random Seed. Default set to 1.
-
-#### Federated Parameters
-* ```--iid:```      Distribution of data amongst users. Default set to IID. Set to 0 for non-IID.
-* ```--num_users:``` Number of users. Default is 100.
-* ```--frac:```     Fraction of users to be used for federated updates. Default is 0.1.
-* ```--local_ep:``` Number of local training epochs in each user. Default is 10.
-* ```--local_bs:``` Batch size of local updates in each user. Default is 10.
-* ```--unequal:```  Used in non-iid setting. Option to split the data amongst users equally or unequally. Default set to 0 for equal splits. Set to 1 for unequal splits.
-* ```--num_clusters:```  Number of clusters in the hierarchy.
-* ```--Cepochs:```  Number of rounds of training in each cluster.
-
-## Results on MNIST
-#### Baseline Experiment:
-The experiment involves training a single model in the conventional way.
-
-Parameters: <br />
-* ```Optimizer:``` SGD
-* ```Learning Rate:``` 0.01
-
-```Table 1:``` Test accuracy after training for 10 epochs:
-
-| Model | Test Acc |
-| ----- | -----    |
-|  MLP  |  92.71%  |
-|  CNN  |  98.42%  |
-
-----
-
-#### Federated Experiment:
-The experiment involves training a global model in the federated setting.
-
-Federated parameters (default values):
-* ```Fraction of users (C)```: 0.1 
-* ```Local Batch size  (B)```: 10 
-* ```Local Epochs      (E)```: 10 
-* ```Optimizer            ```: SGD 
-* ```Learning Rate        ```: 0.01 <br />
-
-```Table 2:``` Test accuracy after training for 10 global epochs:
-
-| Model |    IID   | Non-IID (equal)|
-| ----- | -----    |----            |
-|  MLP  |  88.38%  |     73.49%     |
-|  CNN  |  97.28%  |     75.94%     |
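
The deleted README above describes the federated experiment as training a global model from many local models, and the history includes an "Added averaging function" commit (a7ad5d04a9). As a hedged illustration only, not this repository's actual code, a minimal FedAvg-style weight average might look like the following; the names `average_weights` and `local_weights` are assumptions:

```python
# Illustrative FedAvg-style averaging; names are assumptions, not the repo's API.
import copy
import torch

def average_weights(local_weights):
    """Element-wise average of a list of model state_dicts."""
    avg = copy.deepcopy(local_weights[0])
    for key in avg:
        for w in local_weights[1:]:
            avg[key] += w[key]
        avg[key] = torch.div(avg[key], len(local_weights))
    return avg

# Usage sketch: global_model.load_state_dict(average_weights(local_weights))
```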

+ 0 - 21
LICENSE

@@ -1,21 +0,0 @@
-MIT License
-
-Copyright (c) 2019 Ashwin R Jadhav
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.

+ 0 - 108
README.md

@@ -1,108 +0,0 @@
-# Hierarchical Federated-Learning (PyTorch)
-
-Implementation of both hierarchical and vanilla federated learning based on the paper : [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629).
-
-Experiments are conducted on the MNIST and CIFAR10 datasets. The data is split amongst users in both IID and non-IID fashion. In the non-IID case, the data can be split amongst users equally or unequally.
-
-Since the purpose of these experiments is to illustrate the effectiveness of the federated learning paradigm, only simple models such as MLP and CNN are used.
-
-## Requirements
-Install all the packages from requirements.txt
-* Python==3.7.3
-* Pytorch==1.2.0
-* Torchvision==0.4.0
-* Numpy==1.15.4
-* Tensorboardx==1.4
-* Matplotlib==3.0.1
-* Tqdm==4.39.0 
-
-## Steps to set up a Python environment
-1. Create the environment:
-```
-conda create -n myenv python=3.7.3
-```
-2. Install PyTorch and torchvision:
-```
-conda install pytorch==1.2.0 torchvision==0.4.0 -c pytorch
-```
-3. Install the other package requirements:
-```
-pip install -r requirements.txt
-```
-
-
-## Data
-* Download train and test datasets manually or they will be automatically downloaded to the [data](/data/) folder from torchvision datasets.
-* Experiments are run on MNIST and CIFAR.
-* To use your own dataset: move your dataset to the data directory and write a wrapper over the PyTorch Dataset class.
-
-## Running the experiments
-All the experiments behind the reported results are in the [scripts](/src/) below:
-* script_bash_FL_diffFP_mnist_mlp.sh
-* script_bash_FL_diffFP_mnist_cnn.sh
-* script_bash_FL_diffFP_cifar.sh
-* script_bash_FL_diffFP.sh
------
-The baseline experiment trains the model with conventional federated learning.
-
-* To run the baseline federated experiment with MNIST on MLP using CPU:
-```
-python federated_main.py --local_ep=1 --local_bs=10 --frac=0.1 --model=mlp --dataset=mnist --iid=1 --gpu=0 --lr=0.01 --test_acc=95 --mlpdim=200 --epochs=600
-```
-* Or to run it on GPU (eg: if gpu:0 is available):
-```
-python federated_main.py --local_ep=1 --local_bs=10 --frac=0.1 --model=mlp --dataset=mnist --iid=1 --gpu=1 --lr=0.01 --test_acc=95 --mlpdim=200 --epochs=600
-```
------
-
-The hierarchical federated experiment involves training a global model using many local models.
-
-* To run the hierarchical federated experiment with 2 clusters on MNIST using CNN (IID):
-```
-python federated-hierarchical2_main.py --local_ep=1 --local_bs=10 --frac=0.1 --Cepochs=10 --model=cnn --dataset=mnist --iid=1 --num_cluster=2 --gpu=1 --lr=0.01 --epochs=100
-```
-* To run the same experiment under non-IID condition:
-```
-python federated-hierarchical2_main.py --local_ep=1 --local_bs=10 --frac=0.1 --Cepochs=10 --model=cnn --dataset=mnist --iid=0 --num_cluster=2 --gpu=1 --lr=0.01 --epochs=100
-```
------
-Hierarchical federated experiments involve training a global model using different clusters, each with many local models (16-bit).
-
-* To run the hierarchical federated experiment with 2 clusters on CIFAR using CNN (IID):
-```
-python ./federated-hierarchical2_main_fp16.py --local_ep=5 --local_bs=50 --frac=0.1 --Cepochs=10 --model=cnn --dataset=cifar --iid=1 --num_cluster=2 --gpu=1 --lr=0.01 --epochs=100 
-```
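
The 16-bit script above runs training in half precision. As a hedged illustration of that idea only (not the actual code in federated-hierarchical2_main_fp16.py), a common PyTorch pattern is:

```python
# Hedged sketch: cast a model to FP16 for training, keeping BatchNorm in FP32
# for numerical stability. Not taken from this repository.
import torch.nn as nn

def to_fp16(model: nn.Module) -> nn.Module:
    model.half()
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.float()  # keep BN parameters/statistics in FP32
    return model
```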
-
-
-You can change the default values of other parameters to simulate different conditions. Refer to the options section.
-
-## Options
-The default values for the various parameters passed to the experiment are given in ```options.py```. Details of some of those parameters:
-
-* ```--dataset:```  Default: 'mnist'. Options: 'mnist', 'cifar'
-* ```--model:```    Default: 'mlp'. Options: 'mlp', 'cnn'
-* ```--gpu:```      Default: 1 (runs on gpu:0). Select 0 if using CPU only
-* ```--gpu_id:```	Default: 'cuda:0' (this specifies which GPU to use)
-* ```--epochs:```   Number of rounds of training.
-* ```--lr:```       Learning rate set to 0.01 by default.
-* ```--verbose:```  Detailed log outputs. Activated by default, set to 0 to deactivate.
-* ```--seed:```     Random Seed. Default set to 1.
-
-#### Federated Parameters
-* ```--iid:```      Distribution of data amongst users. Default set to IID. Set to 0 for non-IID.
-* ```--num_users:``` Number of users. Default is 100.
-* ```--frac:```     Fraction of users to be used for federated updates. Default is 0.1.
-* ```--local_ep:``` Number of local training epochs in each user. Default is 1.
-* ```--local_bs:``` Batch size of local updates in each user. Default is 10.
-* ```--num_clusters:```  Number of clusters in the hierarchy.
-* ```--Cepochs:```  Number of rounds of training in each cluster.
-
-## Experimental Results 
-The results and figures can be found in [evaluation notebooks](/src/)
-* Eval.ipynb
-* Eval_fp16.ipynb
-* Eval_fp16-32-compare.ipynb
-
-
-
-
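
Putting the deleted READMEs together: the hierarchy averages client models within each cluster for some number of cluster rounds (```--Cepochs```), then averages the cluster models into the global model. A self-contained, hedged sketch of that two-level aggregation follows; the helper names are assumptions, not this repository's code:

```python
# Two-level (cluster -> global) FedAvg-style aggregation; illustrative only.
import torch

def _avg(state_dicts):
    """Element-wise mean of a list of model state_dicts."""
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
            for k in state_dicts[0]}

def hierarchical_round(clusters):
    """clusters: a list of clusters, each a list of client state_dicts."""
    cluster_models = [_avg(sds) for sds in clusters]  # intra-cluster averaging
    return _avg(cluster_models)                       # global averaging
```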

+ 0 - 4
data/README.md

@@ -1,4 +0,0 @@
-## Datasets
-
-- Download the required datasets in the respective directories
-- Add your custom dataset in a separate directory.
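
Both deleted READMEs suggest adding a custom dataset by writing a wrapper over the PyTorch Dataset class. A minimal, hedged sketch of such a wrapper (the class and field names are hypothetical, not part of the repository):

```python
# Hypothetical custom-dataset wrapper; illustrates the pattern only.
import torch
from torch.utils.data import Dataset

class CustomDataset(Dataset):
    def __init__(self, samples, labels, transform=None):
        self.samples = samples      # e.g. images already loaded into memory
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        x, y = self.samples[idx], self.labels[idx]
        if self.transform is not None:
            x = self.transform(x)
        return x, torch.tensor(y)
```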

+ 0 - 0
data/cifar/.gitkeep


+ 0 - 0
data/fashion_mnist/.gitkeep


+ 0 - 0
data/mnist/.gitkeep


+ 0 - 4
requirements.txt

@@ -1,4 +0,0 @@
-tqdm==4.39.0 
-numpy==1.15.4
-matplotlib==3.0.1
-tensorboardx==1.4

+ 0 - 0
save/.gitkeep


BIN
save/MNIST (MLP, IID) FP16 and FP32 Comparison_acc_FP16_32.png


BIN
save/MNIST (MLP, IID) FP16 and FP32 Comparison_loss_FP16_32.png


BIN
save/MNIST_CNN_IID FP16 and FP32 Comparison_acc_FP16_32.png


BIN
save/MNIST_CNN_IID FP16 and FP32 Comparison_loss_FP16_32.png


BIN
save/MNIST_CNN_IID_FP16_acc_FP16.png


BIN
save/MNIST_CNN_IID_FP16_loss_FP16.png


BIN
save/MNIST_CNN_IID_acc.png


BIN
save/MNIST_CNN_IID_acc_FP16.png


BIN
save/MNIST_CNN_IID_loss.png


BIN
save/MNIST_CNN_IID_loss_FP16.png


BIN
save/MNIST_CNN_NONIID FP16 and FP32 Comparison_acc_FP16_32.png


BIN
save/MNIST_CNN_NONIID FP16 and FP32 Comparison_loss_FP16_32.png


BIN
save/MNIST_CNN_NONIID_FP16_acc_FP16.png


BIN
save/MNIST_CNN_NONIID_FP16_loss_FP16.png


BIN
save/MNIST_CNN_NONIID_acc.png


BIN
save/MNIST_CNN_NONIID_acc_FP16.png


BIN
save/MNIST_CNN_NONIID_loss.png


BIN
save/MNIST_CNN_NONIID_loss_FP16.png


BIN
save/MNIST_MLP_IID FP16 and FP32 Comparison_acc_FP16_32.png


BIN
save/MNIST_MLP_IID FP16 and FP32 Comparison_loss_FP16_32.png


BIN
save/MNIST_MLP_IID_FP16_acc_FP16.png


BIN
save/MNIST_MLP_IID_FP16_loss_FP16.png


BIN
save/MNIST_MLP_IID_acc.png


BIN
save/MNIST_MLP_IID_acc_FP16.png


BIN
save/MNIST_MLP_IID_loss.png


BIN
save/MNIST_MLP_IID_loss_FP16.png


BIN
save/MNIST_MLP_NONIID FP16 and FP32 Comparison_acc_FP16_32.png


BIN
save/MNIST_MLP_NONIID FP16 and FP32 Comparison_loss_FP16_32.png


BIN
save/MNIST_MLP_NONIID_FP16_acc_FP16.png


BIN
save/MNIST_MLP_NONIID_FP16_loss_FP16.png


BIN
save/MNIST_MLP_NONIID_acc.png


BIN
save/MNIST_MLP_NONIID_acc_FP16.png


BIN
save/MNIST_MLP_NONIID_loss.png


BIN
save/MNIST_MLP_NONIID_loss_FP16.png


BIN
save/objects/.DS_Store


BIN
save/objects/Bad/HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Bad/HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Bad/HFL4_mnist_mlp_100_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Bad/HFL4_mnist_mlp_100_lr[0.05]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Bad/HFL4_mnist_mlp_150_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Bad/HFL4_mnist_mlp_150_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Bad/clustersize25_HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Bad/clustersize25_HFL4_mnist_mlp_100_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Old/FL_cifar_cnn_200_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl


BIN
save/objects/Old/FL_mnist_mlp_141_lr[0.1]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Old/FL_mnist_mlp_302_lr[0.1]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Old/[10]_FL_mnist_cnn_3160_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Old/[1]FL_mnist_mlp_200_lr[0.05]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Old/[1]_FL_mnist_mlp_500_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Old/[2]FL_mnist_mlp_302_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Old/[2]_FL_mnist_mlp_1468_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Old/[3]HFL2_mnist_mlp_101_lr[0.05]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Old/[3]_HFL_mnist_mlp_500_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Old/[4]HFL2_mnist_mlp_101_lr[0.05]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/Old/[9]_FL_mnist_cnn_1054_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/Old/clustersize50HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/[10]FL_mnist_cnn_261_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/[11]HFL2_mnist_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/[12]HFL2_mnist_cnn_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/[13]HFL4_mnist_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/[14]HFL4_mnist_cnn_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/[15]HFL8_mnist_cnn_30_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/[16]HFL8_mnist_cnn_30_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/[1]FL_mnist_mlp_468_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/[20]FL_cifar_cnn_300_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl


BIN
save/objects/[21]HFL2_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl


BIN
save/objects/[22]HFL4_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl


BIN
save/objects/[23]HFL8_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl


BIN
save/objects/[2]FL_mnist_mlp_1196_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/[3]HFL2_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/[4]HFL2_mnist_mlp_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/[5]HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/[6]HFL4_mnist_mlp_150_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects/[7]HFL8_mnist_mlp_30_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/[9]FL_mnist_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10].pkl


BIN
save/objects/clustersize50HFL4_mnist_mlp_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10].pkl


BIN
save/objects_fp16/BaseSGD_cifar_cnn_epoch[9]_lr[0.01]_iid[1]_FP16.pkl


BIN
save/objects_fp16/BaseSGD_mnist_cnn_epoch[9]_lr[0.01]_iid[1]_FP16.pkl


BIN
save/objects_fp16/BaseSGD_mnist_mlp_epoch[9]_lr[0.01]_iid[1].pkl


BIN
save/objects_fp16/BaseSGD_mnist_mlp_epoch[9]_lr[0.01]_iid[1]_FP16.pkl


BIN
save/objects_fp16/FL_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50].pkl


BIN
save/objects_fp16/FL_cifar_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50]_FP16.pkl


BIN
save/objects_fp16/FL_cifar_cnn_100_lr[0.01]_C[0.5]_iid[1]_E[5]_B[50].pkl


BIN
save/objects_fp16/FL_cifar_cnn_200_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50]_FP16.pkl


BIN
save/objects_fp16/FL_cifar_cnn_300_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50]_FP16.pkl


BIN
save/objects_fp16/FL_cifar_cnn_500_lr[0.01]_C[0.1]_iid[1]_E[5]_B[50]_FP16.pkl


BIN
save/objects_fp16/FL_mnist_cnn_100_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10]_FP16.pkl


BIN
save/objects_fp16/FL_mnist_cnn_100_lr[0.01]_C[0.1]_iid[1]_E[1]_B[10]_FP16.pkl


BIN
save/objects_fp16/FL_mnist_cnn_261_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10]_FP16.pkl


BIN
save/objects_fp16/FL_mnist_mlp_1196_lr[0.01]_C[0.1]_iid[0]_E[1]_B[10]_FP16.pkl


Too many files were changed in this diff, so some files are not shown.