run_simulation#
- run_simulation(server_app: ServerApp, client_app: ClientApp, num_supernodes: int, backend_name: str = 'ray', backend_config: Dict[str, int | float | str | bytes | bool | List[int] | List[float] | List[str] | List[bytes] | List[bool]] | None = None, enable_tf_gpu_growth: bool = False, verbose_logging: bool = False) → None [source]#
Run a Flower App using the Simulation Engine.
- Parameters:
server_app (ServerApp) – The ServerApp to be executed. It will send messages to different ClientApp instances running on different (virtual) SuperNodes.
client_app (ClientApp) – The ClientApp to be executed by each of the SuperNodes. It will receive messages sent by the ServerApp.
num_supernodes (int) – Number of nodes that run a ClientApp. They can be sampled by a Driver in the ServerApp and receive a Message describing the task the ClientApp should perform.
backend_name (str (default: ray)) – A simulation backend that runs `ClientApp`s.
backend_config (Optional[Dict[str, ConfigsRecordValues]]) – A dictionary, e.g. {"<keyA>": <value>, "<keyB>": <value>}, to configure a backend. The types supported for <value> are those included in flwr.common.typing.ConfigsRecordValues.
enable_tf_gpu_growth (bool (default: False)) – A boolean indicating whether to enable GPU memory growth on the main thread. This is desirable if you use a TensorFlow model in your ServerApp while your ClientApp runs on the same GPU. Without enabling this, you might encounter an out-of-memory error because TensorFlow, by default, allocates all GPU memory. Read more about how tf.config.experimental.set_memory_growth() works in the TensorFlow documentation: https://www.tensorflow.org/api/stable.
verbose_logging (bool (default: False)) – When disabled, only INFO, WARNING, and ERROR log messages are shown. When enabled, DEBUG-level logs are also displayed.
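A sketch of what a call to run_simulation might look like, together with a check of the value types that backend_config accepts. The server_app and client_app placeholders stand in for real Flower app definitions (see the Flower quickstart examples); the config keys below (num_cpus, gpu_fraction, labels) are illustrative, not documented backend options. Only the backend_config construction and type check run without flwr installed.

```python
# Hypothetical backend_config; keys are illustrative, values demonstrate
# the scalar and homogeneous-list types that ConfigsRecordValues allows.
backend_config = {
    "num_cpus": 2,           # scalar int
    "gpu_fraction": 0.5,     # scalar float
    "labels": ["a", "b"],    # homogeneous list of str
}

# Per the signature above, values must be int/float/str/bytes/bool or a
# homogeneous list of one of those types.
SCALARS = (int, float, str, bytes, bool)

def is_valid_value(v):
    """Return True if v matches the ConfigsRecordValues-style union."""
    if isinstance(v, SCALARS):
        return True
    return (
        isinstance(v, list)
        and len({type(x) for x in v}) <= 1
        and all(isinstance(x, SCALARS) for x in v)
    )

assert all(is_valid_value(v) for v in backend_config.values())

# With flwr installed and server_app / client_app defined, the call
# would look like this (hedged sketch, not executed here):
#
#     from flwr.simulation import run_simulation
#     run_simulation(
#         server_app=server_app,        # a ServerApp instance
#         client_app=client_app,        # a ClientApp instance
#         num_supernodes=10,
#         backend_config=backend_config,
#     )
```

The type check mirrors the Dict[str, ...] union in the signature: a mixed-type list such as [1, "a"] is rejected, while any single-type list passes.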