DifferentialPrivacyServerSideAdaptiveClipping#
- class DifferentialPrivacyServerSideAdaptiveClipping(strategy: Strategy, noise_multiplier: float, num_sampled_clients: int, initial_clipping_norm: float = 0.1, target_clipped_quantile: float = 0.5, clip_norm_lr: float = 0.2, clipped_count_stddev: float | None = None)[source]#
Bases: Strategy
Strategy wrapper for central DP with server-side adaptive clipping.
- Parameters:
strategy (Strategy) -- The strategy to which DP functionalities will be added by this wrapper.
noise_multiplier (float) -- The noise multiplier for the Gaussian mechanism applied to model updates.
num_sampled_clients (int) -- The number of clients sampled in each round.
initial_clipping_norm (float) -- The initial value of the clipping norm. Defaults to 0.1. Andrew et al. recommend setting it to 0.1.
target_clipped_quantile (float) -- The desired quantile of updates that should be clipped. Defaults to 0.5.
clip_norm_lr (float) -- The learning rate for the clipping-norm adaptation. Defaults to 0.2. Andrew et al. recommend setting it to 0.2.
clipped_count_stddev (float) -- The standard deviation of the noise added to the count of updates below the estimate. Andrew et al. recommend setting it to expected_num_records / 20.
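Together, noise_multiplier, the current clipping norm, and num_sampled_clients determine the scale of the Gaussian noise added to the averaged update. The sketch below illustrates that relationship; it is a standalone illustration, not Flower's internal code, and assumes the per-coordinate noise standard deviation for the averaged update is noise_multiplier * clipping_norm / num_sampled_clients.

```python
import random


def add_gaussian_noise(
    aggregated_update: list[float],
    noise_multiplier: float,
    clipping_norm: float,
    num_sampled_clients: int,
) -> list[float]:
    """Add central-DP Gaussian noise to an averaged model update.

    With each client update clipped to L2 norm `clipping_norm`, the
    assumed per-coordinate noise std for the average of
    `num_sampled_clients` updates is
    noise_multiplier * clipping_norm / num_sampled_clients.
    """
    std = noise_multiplier * clipping_norm / num_sampled_clients
    return [v + random.gauss(0.0, std) for v in aggregated_update]
```

Note that the noise scales with the clipping norm, which is why the adaptive scheme below tries to keep the norm as small as the target quantile allows.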
Examples
Create a strategy:
>>> strategy = fl.server.strategy.FedAvg( ... )
Wrap the strategy with the DifferentialPrivacyServerSideAdaptiveClipping wrapper:
>>> dp_strategy = DifferentialPrivacyServerSideAdaptiveClipping(
...     strategy, cfg.noise_multiplier, cfg.num_sampled_clients, ...
... )
Methods
aggregate_evaluate(server_round, results, ...)
    Aggregate evaluation losses using the given strategy.
aggregate_fit(server_round, results, failures)
    Aggregate training results and update clip norms.
configure_evaluate(server_round, parameters, ...)
    Configure the next round of evaluation.
configure_fit(server_round, parameters, ...)
    Configure the next round of training.
evaluate(server_round, parameters)
    Evaluate model parameters using the strategy's evaluation function.
initialize_parameters(client_manager)
    Initialize global model parameters using the given strategy.
- aggregate_evaluate(server_round: int, results: List[Tuple[ClientProxy, EvaluateRes]], failures: List[Tuple[ClientProxy, EvaluateRes] | BaseException]) Tuple[float | None, Dict[str, bool | bytes | float | int | str]] [source]#
Aggregate evaluation losses using the given strategy.
- aggregate_fit(server_round: int, results: List[Tuple[ClientProxy, FitRes]], failures: List[Tuple[ClientProxy, FitRes] | BaseException]) Tuple[Parameters | None, Dict[str, bool | bytes | float | int | str]] [source]#
Aggregate training results and update clip norms.
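The clip-norm update follows the geometric scheme of Andrew et al.: after each round, the noised fraction of client updates whose norm fell at or below the current clipping norm is compared to the target quantile, and the norm is scaled multiplicatively. The sketch below is a minimal standalone illustration of that scheme, not Flower's internal implementation; the function name and signature are hypothetical.

```python
import math
import random


def update_clipping_norm(
    clipping_norm: float,
    update_norms: list[float],
    target_quantile: float = 0.5,
    clip_norm_lr: float = 0.2,
    clipped_count_stddev: float = 0.0,
) -> float:
    """Geometric update of the clipping norm (after Andrew et al., 2021).

    `update_norms` holds the L2 norms of this round's client updates.
    The count of updates at or below the current norm is noised with
    Gaussian noise of std `clipped_count_stddev` before computing the
    fraction, protecting the count itself.
    """
    count_below = sum(1 for n in update_norms if n <= clipping_norm)
    noised_fraction = (
        count_below + random.gauss(0.0, clipped_count_stddev)
    ) / len(update_norms)
    # Multiplicative step: shrink the norm when too many updates fit
    # under it, grow it when too few do.
    return clipping_norm * math.exp(
        -clip_norm_lr * (noised_fraction - target_quantile)
    )
```

With this rule the norm converges toward the value at which the target quantile of update norms is clipped, which in turn keeps the added Gaussian noise proportionate to the actual update magnitudes.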
- configure_evaluate(server_round: int, parameters: Parameters, client_manager: ClientManager) List[Tuple[ClientProxy, EvaluateIns]] [source]#
Configure the next round of evaluation.
- configure_fit(server_round: int, parameters: Parameters, client_manager: ClientManager) List[Tuple[ClientProxy, FitIns]] [source]#
Configure the next round of training.
- evaluate(server_round: int, parameters: Parameters) Tuple[float, Dict[str, bool | bytes | float | int | str]] | None [source]#
Evaluate model parameters using the strategy's evaluation function.
- initialize_parameters(client_manager: ClientManager) Parameters | None [source]#
Initialize global model parameters using the given strategy.