DifferentialPrivacyClientSideFixedClipping#
- class DifferentialPrivacyClientSideFixedClipping(strategy: Strategy, noise_multiplier: float, clipping_norm: float, num_sampled_clients: int)[source]#
Bases:
Strategy
Strategy wrapper for central DP with client-side fixed clipping.
Use the fixedclipping_mod modifier on the client side.
In comparison to DifferentialPrivacyServerSideFixedClipping, which performs clipping on the server side, DifferentialPrivacyClientSideFixedClipping expects clipping to happen on the client side, usually by using the built-in fixedclipping_mod.
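Client-side fixed clipping bounds each client's contribution by rescaling the model update so its L2 norm never exceeds clipping_norm. The following is an illustrative sketch of that step, assuming updates are plain NumPy arrays; the function name `fixed_clip` is a hypothetical stand-in, and the real fixedclipping_mod operates on Flower's message/parameter types instead.

```python
import numpy as np

def fixed_clip(update, clipping_norm):
    """Scale a model update so its global L2 norm is at most clipping_norm.

    Illustrative sketch only: the actual fixedclipping_mod works on Flower's
    internal parameter representation, not raw NumPy arrays.
    """
    flat = np.concatenate([layer.ravel() for layer in update])
    norm = float(np.linalg.norm(flat))
    # Scale down only when the norm exceeds the clipping threshold.
    scale = min(1.0, clipping_norm / max(norm, 1e-12))
    return [layer * scale for layer in update]

update = [np.array([3.0, 4.0])]  # L2 norm = 5.0
clipped = fixed_clip(update, clipping_norm=1.0)
print(np.linalg.norm(clipped[0]))  # 1.0 (scaled down to the clipping norm)
```

An update already within the norm bound is returned unchanged, so clipping never inflates small contributions.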
- Parameters:
strategy (Strategy) – The strategy to which DP functionalities will be added by this wrapper.
noise_multiplier (float) – The noise multiplier for the Gaussian mechanism for model updates. A value of 1.0 or higher is recommended for strong privacy.
clipping_norm (float) – The value of the clipping norm.
num_sampled_clients (int) – The number of clients that are sampled in each round.
Examples
Create a strategy:
>>> strategy = fl.server.strategy.FedAvg(...)
Wrap the strategy with the DifferentialPrivacyClientSideFixedClipping wrapper:
>>> dp_strategy = DifferentialPrivacyClientSideFixedClipping(
>>>     strategy, cfg.noise_multiplier, cfg.clipping_norm, cfg.num_sampled_clients
>>> )
On the client, add the fixedclipping_mod to the client-side mods:
>>> app = fl.client.ClientApp(
>>>     client_fn=client_fn, mods=[fixedclipping_mod]
>>> )
Methods
aggregate_evaluate(server_round, results, ...)
    Aggregate evaluation losses using the given strategy.
aggregate_fit(server_round, results, failures)
    Add noise to the aggregated parameters.
configure_evaluate(server_round, parameters, ...)
    Configure the next round of evaluation.
configure_fit(server_round, parameters, ...)
    Configure the next round of training.
evaluate(server_round, parameters)
    Evaluate model parameters using an evaluation function from the strategy.
initialize_parameters(client_manager)
    Initialize global model parameters using given strategy.
- aggregate_evaluate(server_round: int, results: List[Tuple[ClientProxy, EvaluateRes]], failures: List[Tuple[ClientProxy, EvaluateRes] | BaseException]) → Tuple[float | None, Dict[str, bool | bytes | float | int | str]] [source]#
Aggregate evaluation losses using the given strategy.
- aggregate_fit(server_round: int, results: List[Tuple[ClientProxy, FitRes]], failures: List[Tuple[ClientProxy, FitRes] | BaseException]) → Tuple[Parameters | None, Dict[str, bool | bytes | float | int | str]] [source]#
Add noise to the aggregated parameters.
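The noise step can be sketched as follows. This is an illustrative sketch, not Flower's implementation: the helper name `add_gaussian_noise` is hypothetical, arrays stand in for Flower's Parameters type, and the calibration shown (stddev = noise_multiplier × clipping_norm / num_sampled_clients, i.e. per-client sensitivity divided by the averaging denominator) is the standard one for averaged, clipped updates — check the library source for the exact formula used.

```python
import numpy as np

def add_gaussian_noise(aggregated, noise_multiplier, clipping_norm,
                       num_sampled_clients, rng=None):
    """Add Gaussian noise to aggregated parameters (illustrative sketch).

    Assumed calibration: each noise value is drawn with standard deviation
    noise_multiplier * clipping_norm / num_sampled_clients, since clipping
    bounds each client's influence on the average by clipping_norm / n.
    """
    rng = rng if rng is not None else np.random.default_rng()
    stddev = noise_multiplier * clipping_norm / num_sampled_clients
    return [layer + rng.normal(0.0, stddev, size=layer.shape)
            for layer in aggregated]
```

With noise_multiplier set to 0 the parameters pass through unchanged, which is a convenient way to sanity-check the rest of the pipeline before enabling privacy noise.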
- configure_evaluate(server_round: int, parameters: Parameters, client_manager: ClientManager) → List[Tuple[ClientProxy, EvaluateIns]] [source]#
Configure the next round of evaluation.
- configure_fit(server_round: int, parameters: Parameters, client_manager: ClientManager) → List[Tuple[ClientProxy, FitIns]] [source]#
Configure the next round of training.
- evaluate(server_round: int, parameters: Parameters) → Tuple[float, Dict[str, bool | bytes | float | int | str]] | None [source]#
Evaluate model parameters using an evaluation function from the strategy.
- initialize_parameters(client_manager: ClientManager) → Parameters | None [source]#
Initialize global model parameters using the given strategy.