Flower Example for Federated Variational Autoencoder using PyTorch
This example demonstrates how a variational autoencoder (VAE) can be trained in a federated way using the Flower framework.
Project Setup
Start by cloning the example project. We prepared a single-line command that you can copy into your shell and that will check out the example for you:
git clone --depth=1 https://github.com/adap/flower.git && mv flower/examples/pytorch_federated_variational_autoencoder . && rm -rf flower && cd pytorch_federated_variational_autoencoder
This will create a new directory called pytorch_federated_variational_autoencoder containing the following files:
-- pyproject.toml
-- requirements.txt
-- client.py
-- server.py
-- README.md
-- models.py
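The model itself lives in models.py. As a rough, hedged sketch of what a variational autoencoder in PyTorch can look like (the example's actual architecture, layer sizes, and class name may differ):

```python
import torch
import torch.nn as nn


class Net(nn.Module):
    """Minimal fully connected VAE; illustrative only, not the example's exact code."""

    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
            nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std  # sample z while keeping gradients w.r.t. mu and logvar

    def forward(self, x):
        h = self.encoder(x.flatten(start_dim=1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar
```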
Installing Dependencies
Project dependencies (such as torch and flwr) are defined in pyproject.toml and requirements.txt. We recommend Poetry to install those dependencies and manage your virtual environment (Poetry installation) or pip, but feel free to use a different way of installing dependencies and managing virtual environments if you have other preferences.
Poetry
poetry install
poetry shell
Poetry will install all your dependencies in a newly created virtual environment. To verify that everything works correctly you can run the following command:
poetry run python3 -c "import flwr"
If you don’t see any errors you’re good to go!
pip
Run the command below in your terminal to install the dependencies listed in requirements.txt.
pip install -r requirements.txt
Federating the Variational Autoencoder Model
Afterwards you are ready to start the Flower server as well as the clients. You can simply start the server in a terminal as follows:
poetry run python3 server.py
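server.py is typically only a few lines. A minimal sketch, assuming a flwr 1.x release and the default FedAvg strategy with an illustrative number of rounds (not necessarily the example's exact settings):

```python
import flwr as fl

# Plain FedAvg: average the VAE weights returned by the clients each round.
strategy = fl.server.strategy.FedAvg()

fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(num_rounds=3),  # number of rounds is illustrative
    strategy=strategy,
)
```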
Now you are ready to start the Flower clients, which will participate in the learning. To do so, simply open two more terminals and run the following command in each:
poetry run python3 client.py
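client.py is where the local PyTorch training is wrapped into a Flower client. A hedged sketch, assuming a flwr 1.x API and hypothetical helpers load_data, train, and test (those names are assumptions, not necessarily the ones the example uses):

```python
from collections import OrderedDict

import flwr as fl
import torch

from models import Net  # the VAE defined in models.py

# load_data, train, and test stand in for whatever the example's client.py
# actually provides; they are assumed helpers for data loading, local
# training, and local evaluation.


def get_parameters(net):
    return [val.cpu().numpy() for _, val in net.state_dict().items()]


def set_parameters(net, parameters):
    params_dict = zip(net.state_dict().keys(), parameters)
    state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict})
    net.load_state_dict(state_dict, strict=True)


class FlowerClient(fl.client.NumPyClient):
    def __init__(self, net, trainloader, testloader):
        self.net = net
        self.trainloader = trainloader
        self.testloader = testloader

    def get_parameters(self, config):
        return get_parameters(self.net)

    def fit(self, parameters, config):
        set_parameters(self.net, parameters)
        train(self.net, self.trainloader, epochs=1)  # assumed helper
        return get_parameters(self.net), len(self.trainloader.dataset), {}

    def evaluate(self, parameters, config):
        set_parameters(self.net, parameters)
        loss = test(self.net, self.testloader)  # assumed helper
        return float(loss), len(self.testloader.dataset), {}


if __name__ == "__main__":
    net = Net()
    trainloader, testloader = load_data()  # assumed helper
    fl.client.start_numpy_client(
        server_address="127.0.0.1:8080",
        client=FlowerClient(net, trainloader, testloader),
    )
```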
Alternatively you can run all of it in one shell as follows:
poetry run python3 server.py &
poetry run python3 client.py &
poetry run python3 client.py
You will see that the federated training of the variational autoencoder has started. If you just want to verify that everything works without waiting for full client-side training to finish, you can shorten the training each client performs, for example by reducing the number of local epochs or by breaking out of the batch loop after a few batches in the client-side training code (this will save you a lot of time during development).
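As a concrete illustration of such a shortcut, here is a hedged sketch of a shortened VAE training loop; the function name train, the cap of three batches, and the forward() signature returning (recon, mu, logvar) are assumptions, not necessarily the example's exact code:

```python
import torch
import torch.nn.functional as F

MAX_BATCHES = 3  # illustrative cap for quick smoke tests


def train(net, trainloader, optimizer, epochs=1, device="cpu"):
    """Shortened VAE training loop: stop after MAX_BATCHES batches per epoch."""
    net.train()
    for _ in range(epochs):
        for batch_idx, (images, _) in enumerate(trainloader):
            if batch_idx >= MAX_BATCHES:
                break  # early exit keeps each federated round fast
            images = images.to(device)
            optimizer.zero_grad()
            recon, mu, logvar = net(images)  # assumed forward() signature
            targets = images.flatten(start_dim=1)
            recon_loss = F.mse_loss(recon, targets, reduction="sum")
            kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            (recon_loss + kld).backward()
            optimizer.step()
```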