
Update README.md

Ashwin R Jadhav, 4 years ago
Commit 22d21b00ba
1 file changed, 26 insertions and 26 deletions

README.md (+26, -26)

@@ -18,27 +18,27 @@ Install all the packages from requirements.txt
 * To use your own dataset: move your dataset to the data directory and write a wrapper over the PyTorch Dataset class.
 
 ## Options
-* The default values for various paramters parsed to the experiment are given in ```options.py```. Details are given some of those parameters:
+The default values for the various parameters passed to the experiment are given in ```options.py```. Details on some of those parameters are given below:
 
-```--dataset:```  Default: 'mnist'. Options: 'mnist', 'fmnist', 'cifar'
-```--model:```    Default: 'mlp'. Options: 'mlp', 'cnn'
-```--gpu:```      Default: None (runs on CPU). Can also be set to the specific gpu id.
-```--epochs:```   Number of rounds of training.
-```--lr:```       Learning rate set to 0.01 by default.
-```--verbose:```  Detailed log outputs. Activated by default, set to 0 to deactivate.
-```--seed:```     Random Seed. Default set to 1.
+* ```--dataset:```  Default: 'mnist'. Options: 'mnist', 'fmnist', 'cifar'
+* ```--model:```    Default: 'mlp'. Options: 'mlp', 'cnn'
+* ```--gpu:```      Default: None (runs on CPU). Can also be set to the specific gpu id.
+* ```--epochs:```   Number of rounds of training.
+* ```--lr:```       Learning rate set to 0.01 by default.
+* ```--verbose:```  Detailed log outputs. Activated by default, set to 0 to deactivate.
+* ```--seed:```     Random Seed. Default set to 1.
 
-* Federated Parameters
-```--iid:```      Distribution of data amongst users. Default set to IID. Set to 0 for non-IID.
-```--num_users:```Number of users. Default is 100.
-```--frac:```     Fraction of users to be used for federated updates. Default is 0.1.
-```--local_ep:``` Number of local training epochs in each user. Default is 10.
-```--local_bs:``` Batch size of local updates in each user. Default is 10.
-```--unequal:```  Used in non-iid setting. Option to split the data amongst users equally or unequally. Default set to 0 for equal splits. Set to 1 for unequal splits.
+#### Federated Parameters
+* ```--iid:```      Distribution of data amongst users. Default set to IID. Set to 0 for non-IID.
+* ```--num_users:``` Number of users. Default is 100.
+* ```--frac:```     Fraction of users to be used for federated updates. Default is 0.1.
+* ```--local_ep:``` Number of local training epochs on each user. Default is 10.
+* ```--local_bs:``` Batch size of local updates on each user. Default is 10.
+* ```--unequal:```  Used in the non-IID setting. Option to split the data amongst users equally or unequally. Default set to 0 for equal splits. Set to 1 for unequal splits.
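
As a rough illustration, these flags could be defined with Python's argparse along the following lines. This is a hypothetical sketch, not the actual contents of ```options.py```; only the flag names and stated defaults mirror the lists above.

```
# Hypothetical sketch of an options.py-style parser; the real file may differ.
import argparse

def args_parser():
    parser = argparse.ArgumentParser()
    # Model and data arguments (defaults as listed above).
    parser.add_argument('--dataset', type=str, default='mnist',
                        help="'mnist', 'fmnist' or 'cifar'")
    parser.add_argument('--model', type=str, default='mlp',
                        help="'mlp' or 'cnn'")
    parser.add_argument('--gpu', default=None,
                        help="GPU id; None runs on CPU")
    parser.add_argument('--epochs', type=int, default=10,
                        help="number of rounds of training (default assumed)")
    parser.add_argument('--lr', type=float, default=0.01, help="learning rate")
    parser.add_argument('--verbose', type=int, default=1,
                        help="detailed logs; set to 0 to deactivate")
    parser.add_argument('--seed', type=int, default=1, help="random seed")
    # Federated arguments (defaults as listed above).
    parser.add_argument('--iid', type=int, default=1,
                        help="1 for IID split, 0 for non-IID")
    parser.add_argument('--num_users', type=int, default=100,
                        help="number of users")
    parser.add_argument('--frac', type=float, default=0.1,
                        help="fraction of users per round")
    parser.add_argument('--local_ep', type=int, default=10,
                        help="local epochs per user")
    parser.add_argument('--local_bs', type=int, default=10,
                        help="local batch size")
    parser.add_argument('--unequal', type=int, default=0,
                        help="0 for equal splits, 1 for unequal (non-IID only)")
    return parser.parse_args()
```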
 
 
 ## Running the experiments
-* The baseline experiment trains the model in the conventional way.
+The baseline experiment trains the model in the conventional way.
 
 * To run the baseline experiment with MNIST on MLP using CPU:
 ```
@@ -50,7 +50,7 @@ python baseline_main.py --model=mlp --dataset=mnist --gpu=0 --epochs=10
 ```
 -----
 
-* Federated experiment involves training a global model using many local models.
+The federated experiment involves training a global model using many local models (a sketch of the averaging step follows).
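
For intuition: each round, the selected users train locally and the server averages their weights into the global model. Below is a minimal FedAvg-style sketch of that averaging step, assuming PyTorch state_dicts; it is an illustration, not the repository's exact code.

```
# Sketch of federated averaging over PyTorch state_dicts (illustrative only).
import copy
import torch

def average_weights(local_weights):
    """Element-wise average of a list of model state_dicts."""
    avg = copy.deepcopy(local_weights[0])
    for key in avg.keys():
        for w in local_weights[1:]:
            avg[key] += w[key]
        avg[key] = torch.div(avg[key], len(local_weights))
    return avg

# Usage: global_model.load_state_dict(average_weights(local_weights))
```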
 
 * To run the federated experiment with CIFAR on CNN (using CPU):
 ```
@@ -61,14 +61,14 @@ python federated_main.py --model=cnn --dataset=cifar --gpu=None --epochs=10
 python federated_main.py --model=cnn --dataset=cifar --gpu=0 --epochs=10
 ```
 
-* You can change the default values of other parameters to simulate different conditions. Refer to the options section.
+You can change the default values of other parameters to simulate different conditions. Refer to the options section.
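
For instance, a non-IID run with an unequal split and a larger fraction of users per round might look like this (the flag values here are illustrative, not defaults):

```
python federated_main.py --model=cnn --dataset=cifar --gpu=0 --epochs=10 --iid=0 --unequal=1 --frac=0.2
```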
 
 ## Results on MNIST
 #### Baseline Experiment:
 The experiment involves training a single model in the conventional way.
 
-```Parameters:```
-    Optimizer    : SGD
+```Parameters:``` <br />
+    Optimizer    : SGD <br />
     Learning Rate: 0.01
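
In PyTorch these settings correspond to a one-line optimizer setup; the model below is a stand-in for illustration:

```
import torch
import torch.nn as nn

model = nn.Linear(784, 10)  # stand-in model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # SGD with lr 0.01
```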
 
 ```Table 1:``` Test accuracy after training for 10 epochs:
@@ -83,12 +83,12 @@ The experiment involves training a single model in the conventional way.
 #### Federated Experiment:
 The experiment involves training a global model in the federated setting.
 
-```Federated parameters:````
-    Fraction of users (C): 0.1
-    Local Batch size  (B): 10
-    Local Epochs      (E): 10
-    Optimizer            : SGD
-    Learning Rate        : 0.01
+```Federated parameters:``` <br />
+    Fraction of users (C): 0.1 <br />
+    Local Batch size  (B): 10 <br />
+    Local Epochs      (E): 10 <br />
+    Optimizer            : SGD <br />
+    Learning Rate        : 0.01 <br />
 
 ```Table 2:``` Test accuracy after training for 10 global epochs with: