Ashwin R Jadhav 4 years ago
parent
commit
59e214f29e
1 changed file with 28 additions and 29 deletions
README.md
@@ -17,26 +17,6 @@ Install all the packages from requirments.txt
 * Experiments are run on MNIST, Fashion-MNIST and CIFAR.
 * To use your own dataset: move it to the ```data``` directory and write a wrapper around the PyTorch ```Dataset``` class (see the sketch below).
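
A minimal sketch of such a wrapper, assuming your data is already loaded as tensors; the class name, file location, and data layout are placeholders, not part of this repo:

```python
# Hypothetical wrapper (e.g. data/my_dataset.py); the data layout is an assumption.
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    """Exposes pre-loaded feature/label tensors through the PyTorch Dataset API."""
    def __init__(self, features, labels):
        self.features = features  # e.g. FloatTensor of shape (N, D)
        self.labels = labels      # e.g. LongTensor of shape (N,)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]
```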
 
 
-## Options
-The default values for the various parameters parsed to the experiment are given in ```options.py```. Details of some of those parameters:
-
-* ```--dataset:```  Default: 'mnist'. Options: 'mnist', 'fmnist', 'cifar'
-* ```--model:```    Default: 'mlp'. Options: 'mlp', 'cnn'
-* ```--gpu:```      Default: None (runs on CPU). Can also be set to a specific GPU ID.
-* ```--epochs:```   Number of rounds of training.
-* ```--lr:```       Learning rate. Default is 0.01.
-* ```--verbose:```  Detailed log outputs. Activated by default; set to 0 to deactivate.
-* ```--seed:```     Random seed. Default is 1.
-
-#### Federated Parameters
-* ```--iid:```      Distribution of data amongst users. Default is IID; set to 0 for non-IID.
-* ```--num_users:``` Number of users. Default is 100.
-* ```--frac:```     Fraction of users to be used for federated updates. Default is 0.1.
-* ```--local_ep:``` Number of local training epochs on each user. Default is 10.
-* ```--local_bs:``` Batch size of local updates on each user. Default is 10.
-* ```--unequal:```  Used in the non-IID setting; splits the data amongst users equally or unequally. Default is 0 for equal splits; set to 1 for unequal splits.
-
-
 ## Running the experiments
 The baseline experiment trains the model in the conventional way.
 
 
@@ -63,13 +43,32 @@ python federated_main --model=cnn --dataset=cifar --gpu=0 --epochs=10
 
 
 You can change the default values of other parameters to simulate different conditions; refer to the Options section below.
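
For instance, a hedged example of a non-IID run with an unequal split, combining flags documented in the Options section (the invocation mirrors the examples above; your checkout may place the script under a different path):

```
python federated_main --model=mlp --dataset=mnist --iid=0 --unequal=1 --epochs=10
```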
 
 
+## Options
+The default values for the various parameters parsed to the experiment are given in ```options.py```. Details of some of those parameters are given below, followed by a sketch of a matching parser:
+
+* ```--dataset:```  Default: 'mnist'. Options: 'mnist', 'fmnist', 'cifar'
+* ```--model:```    Default: 'mlp'. Options: 'mlp', 'cnn'
+* ```--gpu:```      Default: None (runs on CPU). Can also be set to a specific GPU ID.
+* ```--epochs:```   Number of rounds of training.
+* ```--lr:```       Learning rate. Default is 0.01.
+* ```--verbose:```  Detailed log outputs. Activated by default; set to 0 to deactivate.
+* ```--seed:```     Random seed. Default is 1.
+
+#### Federated Parameters
+* ```--iid:```      Distribution of data amongst users. Default is IID; set to 0 for non-IID.
+* ```--num_users:``` Number of users. Default is 100.
+* ```--frac:```     Fraction of users to be used for federated updates. Default is 0.1.
+* ```--local_ep:``` Number of local training epochs on each user. Default is 10.
+* ```--local_bs:``` Batch size of local updates on each user. Default is 10.
+* ```--unequal:```  Used in the non-IID setting; splits the data amongst users equally or unequally. Default is 0 for equal splits; set to 1 for unequal splits.
+
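A minimal sketch of how ```options.py``` might define these flags with argparse; the flag names and defaults follow the list above, but the help strings and the ```--epochs``` default are assumptions:

```python
# Hypothetical sketch of options.py; defaults mirror the documented list above.
import argparse

def args_parser():
    parser = argparse.ArgumentParser()
    # model and training
    parser.add_argument('--dataset', type=str, default='mnist', help="'mnist', 'fmnist' or 'cifar'")
    parser.add_argument('--model', type=str, default='mlp', help="'mlp' or 'cnn'")
    parser.add_argument('--gpu', default=None, help='GPU id; None runs on CPU')
    parser.add_argument('--epochs', type=int, default=10, help='rounds of training (assumed default)')
    parser.add_argument('--lr', type=float, default=0.01, help='learning rate')
    parser.add_argument('--verbose', type=int, default=1, help='set to 0 to deactivate logs')
    parser.add_argument('--seed', type=int, default=1, help='random seed')
    # federated parameters
    parser.add_argument('--iid', type=int, default=1, help='1 for IID, 0 for non-IID')
    parser.add_argument('--num_users', type=int, default=100, help='number of users (K)')
    parser.add_argument('--frac', type=float, default=0.1, help='fraction of users per round (C)')
    parser.add_argument('--local_ep', type=int, default=10, help='local epochs per user (E)')
    parser.add_argument('--local_bs', type=int, default=10, help='local batch size (B)')
    parser.add_argument('--unequal', type=int, default=0, help='1 for unequal non-IID splits')
    return parser.parse_args()
```
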
 ## Results on MNIST
 #### Baseline Experiment:
 The experiment involves training a single model in the conventional way.
 
 
-```Parameters:``` <br />
-    Optimizer    : SGD <br />
-    Learning Rate: 0.01
+Parameters:
+* ```Optimizer:``` SGD
+* ```Learning Rate:``` 0.01
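
The equivalent optimizer setup in PyTorch, shown with a placeholder model (the repo's MLP or CNN would be used instead):

```python
import torch
import torch.nn as nn

model = nn.Linear(784, 10)  # placeholder; stands in for the repo's MLP/CNN
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # SGD at the listed rate
```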
 
 
 ```Table 1:``` Test accuracy after training for 10 epochs:
 
 
@@ -83,12 +82,12 @@ The experiment involves training a single model in the conventional way.
 #### Federated Experiment:
 The experiment involves training a global model in the federated setting.
 
 
-* ```Federated parameters:```` <br />
-    Fraction of users (C): 0.1 <br />
-    Local Batch size  (B): 10 <br />
-    Local Epochs      (E): 10 <br />
-    Optimizer            : SGD <br />
-    Learning Rate        : 0.01 <br />
+Federated parameters (default values):
+* ```Fraction of users (C):``` 0.1
+* ```Local Batch size (B):``` 10
+* ```Local Epochs (E):``` 10
+* ```Optimizer:``` SGD
+* ```Learning Rate:``` 0.01
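
Each round, a fraction C of the users trains locally for E epochs with batch size B, and the server averages the returned weights (FedAvg). A hedged sketch of that aggregation step; the helper name and signature are illustrative, not necessarily this repo's API:

```python
# Illustrative FedAvg aggregation; assumes all users share one model architecture.
import copy
import torch

def average_weights(local_weights):
    """Element-wise average of a list of model state_dicts."""
    avg = copy.deepcopy(local_weights[0])
    for key in avg.keys():
        for w in local_weights[1:]:
            avg[key] += w[key]
        avg[key] = torch.div(avg[key], len(local_weights))
    return avg
```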
 
 
 ```Table 2:``` Test accuracy after training for 10 global epochs with: