$$
\begin{align*}
\eta_t &= \eta_{\min} + \frac{1}{2}(\eta_{\max} - \eta_{\min})\left(1 + \cos\left(\frac{T_{\text{cur}}}{T_i} \pi \right)\right) \\
&\text{When } T_{\text{cur}} = T_i, \text{ set } \eta_t = \eta_{\min}. \\
&\text{When } T_{\text{cur}} = 0 \text{ after restart, set } \eta_t = \eta_{\max}.
\end{align*}
$$
It has been proposed in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983).
This class is similar to the [Pytorch CosineAnnealingWarmRestarts LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html#torch.optim.lr_scheduler.CosineAnnealingWarmRestarts).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial\_learning\_rate** (_float_) – The initial learning rate.
* **T_0** (_int_) – Number of iterations for the first restart.
* **T_mult** (_int_) – A factor by which T\_i increases after a restart. Currently, T_mult must be set to 1.0
* **eta_min** (_float_) – Minimum learning rate.
**_property_** `initial_val`[#](#cerebras.pytorch.optim.lr_scheduler.CosineAnnealingWarmRestarts.initial_val "Permalink to this definition")
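The schedule value at a given step can be sketched in plain Python. This is a minimal illustration of the formula above, not the library's implementation; it assumes `T_mult == 1` (as currently required), so every period has length `T_0`:

```python
import math

# Minimal sketch of the cosine annealing warm restarts formula above.
# Assumes T_mult == 1, so every restart period has the same length T_0.
def cosine_annealing_warm_restarts(step, eta_max, eta_min, T_0):
    T_cur = step % T_0  # steps since the last (implicit) restart
    return eta_min + 0.5 * (eta_max - eta_min) * (
        1.0 + math.cos(math.pi * T_cur / T_0)
    )

# step 0 -> eta_max; just before step T_0 -> close to eta_min;
# step T_0 -> eta_max again (warm restart)
```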
### optim.lr_scheduler.MultiplicativeLR[#](#optim-lr-scheduler-multiplicativelr "Permalink to this headline")
**_class_ cerebras.pytorch.optim.lr_scheduler.**`MultiplicativeLR`**(_*args_, _**kwargs_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/lr_scheduler.html#MultiplicativeLR)[#](#cerebras.pytorch.optim.lr_scheduler.MultiplicativeLR "Permalink to this definition")
Multiply the learning rate of each parameter group by the supplied coefficient.
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial\_learning\_rate** (_float_) – The initial learning rate.
* **coefficient** (_float_) – Multiplicative factor of learning rate.
**_property_** `initial_val`[#](#cerebras.pytorch.optim.lr_scheduler.MultiplicativeLR.initial_val "Permalink to this definition")
### optim.lr_scheduler.ChainedScheduler[#](#optim-lr-scheduler-chainedscheduler "Permalink to this headline")
**_class_ cerebras.pytorch.optim.lr_scheduler.**`ChainedScheduler`**(_*args_, _**kwargs_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/lr_scheduler.html#ChainedScheduler)[#](#cerebras.pytorch.optim.lr_scheduler.ChainedScheduler "Permalink to this definition")
### optim.lr_scheduler.CyclicLR[#](#optim-lr-scheduler-cycliclr "Permalink to this headline")
**_class_ cerebras.pytorch.optim.lr_scheduler.**`CyclicLR`**(_*args_, _**kwargs_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/lr_scheduler.html#CyclicLR)[#](#cerebras.pytorch.optim.lr_scheduler.CyclicLR "Permalink to this definition")
Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/abs/1506.01186). The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.
The cyclical learning rate policy changes the learning rate after every batch; `step` should be called after a batch has been used for training.
This class has three built-in policies, as put forth in the paper:
* “triangular”: A basic triangular cycle without amplitude scaling.
* “triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle.
* “exp_range”: A cycle that scales initial amplitude by $$\text{gamma}^{\text{cycle iterations}}$$ at each cycle iteration.
This class is similar to the [Pytorch CyclicLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html#torch.optim.lr_scheduler.CyclicLR).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule.
* **base_lr** (_float_) – Initial learning rate which is the lower boundary in the cycle.
* **max_lr** (_float_) – Upper learning rate boundaries in the cycle.
* **step\_size\_up** (_int_) – Number of training iterations in the increasing half of a cycle.
* **step\_size\_down** (_int_) – Number of training iterations in the decreasing half of a cycle.
* **mode** (_str_) – One of `{‘triangular’, ‘triangular2’, ‘exp_range’}`.
* **gamma** (_float_) – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations).
* **scale_mode** (_str_) – `{‘cycle’, ‘iterations’}` Defines whether scale_fn is evaluated on cycle number or cycle iterations.
**_property_** `base_val`[#](#cerebras.pytorch.optim.lr_scheduler.CyclicLR.base_val "Permalink to this definition")
**_property_** `max_val`[#](#cerebras.pytorch.optim.lr_scheduler.CyclicLR.max_val "Permalink to this definition")
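For reference, a minimal sketch of the cyclical policy described above (not the library's implementation); the three built-in amplitude-scaling modes follow the descriptions in the list:

```python
# Minimal sketch of the cyclical schedule described above; `mode`
# selects one of the three built-in amplitude-scaling policies.
def cyclic_value(step, base_lr, max_lr, step_size_up, step_size_down=None,
                 mode="triangular", gamma=1.0):
    if step_size_down is None:
        step_size_down = step_size_up
    cycle_len = step_size_up + step_size_down
    cycle = step // cycle_len                   # completed cycles so far
    pos = step % cycle_len                      # position within this cycle
    if pos < step_size_up:
        frac = pos / step_size_up               # increasing half
    else:
        frac = 1.0 - (pos - step_size_up) / step_size_down  # decreasing half
    amplitude = max_lr - base_lr
    if mode == "triangular2":
        amplitude *= 0.5 ** cycle               # halve the amplitude each cycle
    elif mode == "exp_range":
        amplitude *= gamma ** step              # gamma**(cycle iterations)
    return base_lr + amplitude * frac
```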
### optim.lr_scheduler.OneCycleLR[#](#optim-lr-scheduler-onecyclelr "Permalink to this headline")
**_class_ cerebras.pytorch.optim.lr_scheduler.**`OneCycleLR`**(_*args_, _**kwargs_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/lr_scheduler.html#OneCycleLR)[#](#cerebras.pytorch.optim.lr_scheduler.OneCycleLR "Permalink to this definition")
Sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate. This policy was initially described in the paper [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120).
This scheduler is not chainable.
This class is similar to the [Pytorch OneCycleLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html#torch.optim.lr_scheduler.OneCycleLR).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial\_learning\_rate** (_float_) – Initial learning rate. Compared with PyTorch, this is equivalent to max\_lr / div\_factor.
* **max_lr** (_float_) – Upper learning rate boundaries in the cycle.
* **total_steps** (_int_) – The total number of steps in the cycle.
* **pct_start** (_float_) – The percentage of the cycle (in number of steps) spent increasing the learning rate.
* **final\_div\_factor** (_float_) – Determines the minimum learning rate via min\_lr = initial\_lr/final\_div\_factor.
* **three_phase** (_bool_) – If True, use a third phase of the schedule to annihilate the learning rate
* **anneal_strategy** (_str_) – Specifies the annealing strategy: “cos” for cosine annealing, “linear” for linear annealing.
**_property_** `initial_val`[#](#cerebras.pytorch.optim.lr_scheduler.OneCycleLR.initial_val "Permalink to this definition")
**_property_** `max_val`[#](#cerebras.pytorch.optim.lr_scheduler.OneCycleLR.max_val "Permalink to this definition")
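A minimal sketch of the two-phase 1cycle shape described above (not the library's implementation; `three_phase` is ignored here). It warms up from the initial learning rate to `max_lr` over the first `pct_start` fraction of steps, then anneals down to `initial_lr / final_div_factor`:

```python
import math

# Minimal sketch of the two-phase 1cycle schedule; three_phase is ignored.
def one_cycle_value(step, initial_lr, max_lr, total_steps, pct_start=0.3,
                    final_div_factor=1e4, anneal_strategy="cos"):
    min_lr = initial_lr / final_div_factor
    warmup_steps = pct_start * total_steps

    def interp(start, end, frac):
        if anneal_strategy == "cos":
            return end + (start - end) * (1.0 + math.cos(math.pi * frac)) / 2
        return start + (end - start) * frac     # "linear"

    if step < warmup_steps:
        return interp(initial_lr, max_lr, step / warmup_steps)
    frac = min((step - warmup_steps) / (total_steps - warmup_steps), 1.0)
    return interp(max_lr, min_lr, frac)
```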
## Weight Decay Schedulers in `cerebras.pytorch`[#](#weight-decay-schedulers-in-cerebras-pytorch "Permalink to this headline")
Available weight decay schedulers in the `cerebras.pytorch` package
| | |
| --- | --- |
| [`ConstantWD`](#cerebras.pytorch.optim.weight_decay_scheduler.ConstantWD "cerebras.pytorch.optim.weight_decay_scheduler.ConstantWD") | [`PolynomialWD`](#cerebras.pytorch.optim.weight_decay_scheduler.PolynomialWD "cerebras.pytorch.optim.weight_decay_scheduler.PolynomialWD") |
| [`LinearWD`](#cerebras.pytorch.optim.weight_decay_scheduler.LinearWD "cerebras.pytorch.optim.weight_decay_scheduler.LinearWD") | [`ExponentialWD`](#cerebras.pytorch.optim.weight_decay_scheduler.ExponentialWD "cerebras.pytorch.optim.weight_decay_scheduler.ExponentialWD") |
| [`InverseExponentialTimeDecayWD`](#cerebras.pytorch.optim.weight_decay_scheduler.InverseExponentialTimeDecayWD "cerebras.pytorch.optim.weight_decay_scheduler.InverseExponentialTimeDecayWD") | [`InverseSquareRootDecayWD`](#cerebras.pytorch.optim.weight_decay_scheduler.InverseSquareRootDecayWD "cerebras.pytorch.optim.weight_decay_scheduler.InverseSquareRootDecayWD") |
| [`CosineDecayWD`](#cerebras.pytorch.optim.weight_decay_scheduler.CosineDecayWD "cerebras.pytorch.optim.weight_decay_scheduler.CosineDecayWD") | [`SequentialWD`](#cerebras.pytorch.optim.weight_decay_scheduler.SequentialWD "cerebras.pytorch.optim.weight_decay_scheduler.SequentialWD") |
| [`PiecewiseConstantWD`](#cerebras.pytorch.optim.weight_decay_scheduler.PiecewiseConstantWD "cerebras.pytorch.optim.weight_decay_scheduler.PiecewiseConstantWD") | [`MultiStepWD`](#cerebras.pytorch.optim.weight_decay_scheduler.MultiStepWD "cerebras.pytorch.optim.weight_decay_scheduler.MultiStepWD") |
| [`StepWD`](#cerebras.pytorch.optim.weight_decay_scheduler.StepWD "cerebras.pytorch.optim.weight_decay_scheduler.StepWD") | [`CosineAnnealingWD`](#cerebras.pytorch.optim.weight_decay_scheduler.CosineAnnealingWD "cerebras.pytorch.optim.weight_decay_scheduler.CosineAnnealingWD") |
| [`LambdaWD`](#cerebras.pytorch.optim.weight_decay_scheduler.LambdaWD "cerebras.pytorch.optim.weight_decay_scheduler.LambdaWD") | [`CosineAnnealingWarmRestartsWD`](#cerebras.pytorch.optim.weight_decay_scheduler.CosineAnnealingWarmRestartsWD "cerebras.pytorch.optim.weight_decay_scheduler.CosineAnnealingWarmRestartsWD") |
| [`MultiplicativeWD`](#cerebras.pytorch.optim.weight_decay_scheduler.MultiplicativeWD "cerebras.pytorch.optim.weight_decay_scheduler.MultiplicativeWD") | [`ChainedWD`](#cerebras.pytorch.optim.weight_decay_scheduler.ChainedWD "cerebras.pytorch.optim.weight_decay_scheduler.ChainedWD") |
### optim.weight\_decay\_scheduler.WeightDecayScheduler[#](#optim-weight-decay-scheduler-weightdecayscheduler "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`WeightDecayScheduler`**(_optimizer_, _total_iters_, _last_epoch=- 1_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#WeightDecayScheduler)[#](#cerebras.pytorch.optim.weight_decay_scheduler.WeightDecayScheduler "Permalink to this definition")
**_property_** `param_group_key`[#](#cerebras.pytorch.optim.weight_decay_scheduler.WeightDecayScheduler.param_group_key "Permalink to this definition")
### optim.weight\_decay\_scheduler.ConstantWD[#](#optim-weight-decay-scheduler-constantwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`ConstantWD`**(_optimizer_, _val_, _total_iters=None_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#ConstantWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.ConstantWD "Permalink to this definition")
Maintains a constant weight decay for each parameter group (no decaying).
**Parameters:**
* **optimizer** – The optimizer to schedule
* **val** (_float_) – The weight decay value to maintain
* **total_iters** (_int_) – The number of steps to decay for
### optim.weight\_decay\_scheduler.PolynomialWD[#](#optim-weight-decay-scheduler-polynomialwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`PolynomialWD`**(_optimizer_, _initial_val_, _end_val_, _total_iters_, _power=1.0_, _cycle=False_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#PolynomialWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.PolynomialWD "Permalink to this definition")
Decays the weight decay of each parameter group using a polynomial function in the given total_iters.
This class is similar to the [Pytorch PolynomialLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.PolynomialLR.html#torch.optim.lr_scheduler.PolynomialLR).
**Parameters:**
* **optimizer** – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay
* **end_val** (_float_) – The final weight decay
* **total_iters** (_int_) – Number of steps to perform the decay
* **power** (_float_) – Exponent to apply to “x” (as in y = mx + b), where x is the ratio of step completion (1 for linear). Default: 1.0 (only linear is supported at the moment)
* **cycle** (_bool_) – Whether to cycle
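A minimal sketch of the decay (not the library's implementation; the `cycle` option is omitted here). With `power=1` this reduces to the linear schedule that `LinearWD` aliases:

```python
# Minimal sketch of the polynomial decay from initial_val to end_val
# over total_iters steps; the `cycle` option is omitted.
def polynomial_wd(step, initial_val, end_val, total_iters, power=1.0):
    frac = min(step, total_iters) / total_iters   # ratio of step completion
    return (initial_val - end_val) * (1.0 - frac) ** power + end_val
```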
### optim.weight\_decay\_scheduler.LinearWD[#](#optim-weight-decay-scheduler-linearwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`LinearWD`**(_optimizer_, _initial_val_, _end_val_, _total_iters_, _cycle=False_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#LinearWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.LinearWD "Permalink to this definition")
Alias for the polynomial scheduler with a power of 1.
### optim.weight\_decay\_scheduler.ExponentialWD[#](#optim-weight-decay-scheduler-exponentialwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`ExponentialWD`**(_optimizer_, _initial_val_, _total_iters_, _decay_rate_, _staircase=False_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#ExponentialWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.ExponentialWD "Permalink to this definition")
Decays the weight decay of each parameter group by decay_rate every step.
This class is similar to the [Pytorch ExponentialLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ExponentialLR.html#torch.optim.lr_scheduler.ExponentialLR).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay.
* **total_iters** (_int_) – Number of steps to perform the decay
* **decay_rate** (_float_) – The decay rate
* **staircase** (_bool_) – If True decay the weight decay at discrete intervals
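One plausible reading of this schedule, sketched for illustration only (an assumption, not the library's implementation): the value decays by `decay_rate` over each span of `total_iters` steps, and `staircase` applies the decay at discrete intervals:

```python
import math

# Assumed TF-style exponential decay: initial_val * decay_rate**(step / total_iters).
# staircase=True floors the exponent so the decay happens in discrete jumps.
def exponential_wd(step, initial_val, total_iters, decay_rate, staircase=False):
    exponent = step / total_iters
    if staircase:
        exponent = math.floor(exponent)
    return initial_val * decay_rate ** exponent
```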
### optim.weight\_decay\_scheduler.InverseExponentialTimeDecayWD[#](#optim-weight-decay-scheduler-inverseexponentialtimedecaywd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`InverseExponentialTimeDecayWD`**(_optimizer_, _initial_val_, _step_exponent_, _total_iters_, _decay_rate_, _staircase=False_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#InverseExponentialTimeDecayWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.InverseExponentialTimeDecayWD "Permalink to this definition")
Decays the weight decay inverse-exponentially over time, as described in the [Keras InverseTimeDecay class](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/schedules/InverseTimeDecay).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay.
* **step_exponent** (_int_) – Exponential weight decay.
* **total_iters** (_int_) – Number of steps to perform the decay.
* **decay_rate** (_float_) – The decay rate.
* **staircase** (_bool_) – If True decay the weight decay at discrete intervals.
### optim.weight\_decay\_scheduler.InverseSquareRootDecayWD[#](#optim-weight-decay-scheduler-inversesquarerootdecaywd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`InverseSquareRootDecayWD`**(_optimizer_, _initial_val=1.0_, _scale=1.0_, _warmup_steps=1.0_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#InverseSquareRootDecayWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.InverseSquareRootDecayWD "Permalink to this definition")
Decays the weight decay inverse-squareroot over time, as described in the following equation:
$$
wd_t = \frac{\text{scale}}{\sqrt{\max\{t, \text{warmup\_steps}\}}}
$$
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay.
* **scale** (_float_) – Multiplicative factor to scale the result.
* **warmup_steps** (_int_) – Use initial\_val for the first warmup\_steps steps.
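The equation above can be transcribed directly (illustrative only, not the library's implementation; the role of `initial_val` is not shown in the equation, so it is omitted from this sketch):

```python
import math

# Direct transcription of wd_t = scale / sqrt(max(t, warmup_steps)):
# constant during warmup, then inverse-square-root decay.
def inverse_sqrt_wd(step, scale=1.0, warmup_steps=1.0):
    return scale / math.sqrt(max(step, warmup_steps))
```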
### optim.weight\_decay\_scheduler.CosineDecayWD[#](#optim-weight-decay-scheduler-cosinedecaywd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`CosineDecayWD`**(_optimizer_, _initial_val_, _end_val_, _total_iters_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#CosineDecayWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.CosineDecayWD "Permalink to this definition")
Applies the cosine decay schedule as described in the [Keras CosineDecay class](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/schedules/CosineDecay).
**Parameters:**
* **optimizer** – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay
* **end_val** (_float_) – The final weight decay
* **total_iters** (_int_) – Number of steps to perform the decay
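A minimal sketch following the Keras CosineDecay shape (an assumption, not the library's implementation): the value follows half a cosine from `initial_val` down to `end_val` over `total_iters` steps, then holds `end_val`:

```python
import math

# Assumed Keras-style cosine decay from initial_val to end_val.
def cosine_decay_wd(step, initial_val, end_val, total_iters):
    frac = min(step, total_iters) / total_iters
    cosine = 0.5 * (1.0 + math.cos(math.pi * frac))
    return end_val + (initial_val - end_val) * cosine
```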
### optim.weight\_decay\_scheduler.SequentialWD[#](#optim-weight-decay-scheduler-sequentialwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`SequentialWD`**(_optimizer_, _schedulers_, _milestones_, _last_epoch=- 1_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#SequentialWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.SequentialWD "Permalink to this definition")
Receives a list of schedulers that are expected to be called sequentially during the optimization process, and a list of milestone points that give the exact intervals indicating which scheduler is supposed to be called at a given step.
This class is similar to [Pytorch SequentialLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html#torch.optim.lr_scheduler.SequentialLR).
**Parameters:**
* **optimizer** – Wrapped optimizer
* **schedulers** (_list_) – List of chained schedulers.
* **milestones** (_list_) – List of integers that reflects milestone points.
* **last_epoch** (_int_) – The index of last epoch. Default: -1.
### optim.weight\_decay\_scheduler.PiecewiseConstantWD[#](#optim-weight-decay-scheduler-piecewiseconstantwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`PiecewiseConstantWD`**(_optimizer_, _vals_, _milestones_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#PiecewiseConstantWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.PiecewiseConstantWD "Permalink to this definition")
Adjusts the weight decay to a predefined constant at each milestone and holds this value until the next milestone. Notice that such adjustment can happen simultaneously with other changes to the weight decays from outside this scheduler.
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **vals** (_List__\[__float__\]_) – List of weight decays to maintain before/during each milestone.
* **milestones** (_List__\[__int__\]_) – List of step indices. Must be increasing.
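A minimal sketch of the piecewise-constant rule (not the library's implementation). It assumes `vals` has one more entry than `milestones` (the final value is held after the last milestone); the exact boundary convention at a milestone is also an assumption:

```python
import bisect

# Assumed convention: len(vals) == len(milestones) + 1; vals[i] is held
# up to and including milestones[i], then vals[i + 1] takes over.
def piecewise_constant_wd(step, vals, milestones):
    return vals[bisect.bisect_left(milestones, step)]
```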
### optim.weight\_decay\_scheduler.MultiStepWD[#](#optim-weight-decay-scheduler-multistepwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`MultiStepWD`**(_optimizer_, _initial_val_, _gamma_, _milestones_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#MultiStepWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.MultiStepWD "Permalink to this definition")
Decays the weight decay of each parameter group by gamma once the number of steps reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the weight decay from outside this scheduler.
This class is similar to the [Pytorch MultiStepLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiStepLR.html#torch.optim.lr_scheduler.MultiStepLR).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay.
* **gamma** (_float_) – Multiplicative factor of weight decay decay.
* **milestones** (_List__\[__int__\]_) – List of step indices. Must be increasing.
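A minimal sketch of the rule (not the library's implementation): multiply the initial value by `gamma` once for every milestone already reached:

```python
import bisect

# Decay by gamma at each milestone: the exponent is the number of
# milestones that the current step has reached.
def multistep_wd(step, initial_val, gamma, milestones):
    n_reached = bisect.bisect_right(sorted(milestones), step)
    return initial_val * gamma ** n_reached
```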
### optim.weight\_decay\_scheduler.StepWD[#](#optim-weight-decay-scheduler-stepwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`StepWD`**(_optimizer_, _initial_val_, _step_size_, _gamma_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#StepWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.StepWD "Permalink to this definition")
Decays the weight decay of each parameter group by gamma every step_size. Notice that such decay can happen simultaneously with other changes to the weight decay from outside this scheduler.
This class is similar to the [Pytorch StepLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html#torch.optim.lr_scheduler.StepLR).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial val.
* **step_size** (_int_) – Period of decay.
* **gamma** (_float_) – Multiplicative factor of decay.
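A minimal sketch of the rule (not the library's implementation): decay by `gamma` once every `step_size` steps:

```python
# Decay by gamma every step_size steps.
def step_wd(step, initial_val, step_size, gamma):
    return initial_val * gamma ** (step // step_size)
```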
### optim.weight\_decay\_scheduler.CosineAnnealingWD[#](#optim-weight-decay-scheduler-cosineannealingwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`CosineAnnealingWD`**(_optimizer_, _initial_val_, _T_max_, _eta_min=0.0_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#CosineAnnealingWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.CosineAnnealingWD "Permalink to this definition")
Set the weight decay of each parameter group using a cosine annealing schedule, where $$\eta_{\max}$$ is set to the initial wd and $$T_{\text{cur}}$$ is the number of steps since the last restart in SGDR:
$$
\begin{align*}
\eta_t &= \eta_{\min} + \frac{1}{2}(\eta_{\max} - \eta_{\min})\left(1 + \cos\left(\frac{T_{\text{cur}}}{T_{\max}} \pi \right)\right), \quad T_{\text{cur}} \neq (2k + 1)T_{\max} \\
\eta_{t+1} &= \eta_t + \frac{1}{2}(\eta_{\max} - \eta_{\min})\left(1 - \cos\left(\frac{1}{T_{\max}} \pi \right)\right), \quad T_{\text{cur}} = (2k + 1)T_{\max}
\end{align*}
$$
Notice that because the schedule is defined recursively, the weight decay can be simultaneously modified outside this scheduler by other operators. If the weight decay is set solely by this scheduler, the weight decay at each step becomes:
$$
\begin{align*}
\eta_t &= \eta_{\min} + \frac{1}{2}(\eta_{\max} - \eta_{\min})\left(1 + \cos\left(\frac{T_{\text{cur}}}{T_{\max}} \pi \right)\right)
\end{align*}
$$
It has been proposed in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983). Note that this only implements the cosine annealing part of SGDR, and not the restarts.
This class is similar to the [Pytorch CosineAnnealingLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html#torch.optim.lr_scheduler.CosineAnnealingLR).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay.
* **T_max** (_int_) – Maximum number of iterations.
* **eta_min** (_float_) – Minimum weight decay.
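The closed-form expression above can be transcribed directly (illustrative only, not the library's implementation), with eta_max taken as `initial_val`; clamping the behavior past `T_max` is an assumption:

```python
import math

# Direct transcription of the closed-form cosine annealing expression,
# with eta_max = initial_val. Clamping at T_max is an assumption.
def cosine_annealing_wd(step, initial_val, T_max, eta_min=0.0):
    T_cur = min(step, T_max)
    return eta_min + 0.5 * (initial_val - eta_min) * (
        1.0 + math.cos(math.pi * T_cur / T_max)
    )
```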
### optim.weight\_decay\_scheduler.LambdaWD[#](#optim-weight-decay-scheduler-lambdawd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`LambdaWD`**(_optimizer_, _initial_val_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#LambdaWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.LambdaWD "Permalink to this definition")
Sets the weight decay of each parameter group to the initial wd times a given function (which is specified by overriding set\_value\_lambda).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay.
### optim.weight\_decay\_scheduler.CosineAnnealingWarmRestartsWD[#](#optim-weight-decay-scheduler-cosineannealingwarmrestartswd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`CosineAnnealingWarmRestartsWD`**(_optimizer_, _initial_val_, _T_0_, _T_mult=1_, _eta_min=0.0_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#CosineAnnealingWarmRestartsWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.CosineAnnealingWarmRestartsWD "Permalink to this definition")
Set the weight decay of each parameter group using a cosine annealing schedule, where $$\eta_{\max}$$ is set to the initial wd, $$T_{\text{cur}}$$ is the number of steps since the last restart, and $$T_i$$ is the number of steps between two warm restarts in SGDR:
$$
\begin{align*}
\eta_t &= \eta_{\min} + \frac{1}{2} (\eta_{\max} - \eta_{\min}) \left( 1 + \cos\left( \frac{T_{\text{cur}}}{T_i} \pi \right) \right) \\
&\text{When } T_{\text{cur}} = T_i, \text{ set } \eta_t = \eta_{\min}. \\
&\text{When } T_{\text{cur}} = 0 \text{ after restart, set } \eta_t = \eta_{\max}.
\end{align*}
$$
It has been proposed in [SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983).
This class is similar to the [Pytorch CosineAnnealingWarmRestarts LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html#torch.optim.lr_scheduler.CosineAnnealingWarmRestarts).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay.
* **T_0** (_int_) – Number of iterations for the first restart.
* **T_mult** (_int_) – A factor by which T\_i increases after a restart. Currently, T_mult must be set to 1.0
* **eta_min** (_float_) – Minimum weight decay.
### optim.weight\_decay\_scheduler.MultiplicativeWD[#](#optim-weight-decay-scheduler-multiplicativewd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`MultiplicativeWD`**(_optimizer_, _initial_val_, _coefficient_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#MultiplicativeWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.MultiplicativeWD "Permalink to this definition")
Multiply the weight decay of each parameter group by the supplied coefficient.
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – The initial weight decay.
* **coefficient** (_float_) – Multiplicative factor of weight decay.
### optim.weight\_decay\_scheduler.ChainedWD[#](#optim-weight-decay-scheduler-chainedwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`ChainedWD`**(_schedulers_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#ChainedWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.ChainedWD "Permalink to this definition")
Chains a list of weight decay schedulers. It takes a list of chainable weight decay schedulers and performs their consecutive `step()` calls in just one call.
### optim.weight\_decay\_scheduler.CyclicWD[#](#optim-weight-decay-scheduler-cyclicwd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`CyclicWD`**(_optimizer_, _base_val_, _max_val_, _step\_size\_up=2000_, _step\_size\_down=None_, _mode='triangular'_, _gamma=1.0_, _scale_mode='cycle'_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#CyclicWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.CyclicWD "Permalink to this definition")
Sets the weight decay of each parameter group according to a cyclical weight decay policy, adapted from the cyclical learning rate (CLR) policy. The policy cycles the weight decay between two boundaries with a constant frequency, as detailed in the paper [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/abs/1506.01186). The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.
The cyclical weight decay policy changes the weight decay after every batch; `step` should be called after a batch has been used for training.
This class has three built-in policies, as put forth in the paper:
* “triangular”: A basic triangular cycle without amplitude scaling.
* “triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle.
* “exp_range”: A cycle that scales initial amplitude by $$\text{gamma}^{\text{cycle iterations}}$$ at each cycle iteration.
This class is similar to the [Pytorch CyclicLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html#torch.optim.lr_scheduler.CyclicLR).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule.
* **base_val** (_float_) – Initial weight decay which is the lower boundary in the cycle.
* **max_val** (_float_) – Upper weight decay boundaries in the cycle.
* **step\_size\_up** (_int_) – Number of training iterations in the increasing half of a cycle.
* **step\_size\_down** (_int_) – Number of training iterations in the decreasing half of a cycle.
* **mode** (_str_) – One of `{‘triangular’, ‘triangular2’, ‘exp_range’}`.
* **gamma** (_float_) – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations).
* **scale_mode** (_str_) – `{‘cycle’, ‘iterations’}` Defines whether scale_fn is evaluated on cycle number or cycle iterations.
### optim.weight\_decay\_scheduler.OneCycleWD[#](#optim-weight-decay-scheduler-onecyclewd "Permalink to this headline")
**_class_ cerebras.pytorch.optim.weight\_decay\_scheduler.**`OneCycleWD`**(_optimizer_, _initial_val_, _max_val_, _total_steps=1000_, _pct_start=0.3_, _final\_div\_factor=10000.0_, _three_phase=False_, _anneal_strategy='cos'_, _param\_group\_tags=None_)**[\[source\]](../../../_modules/cerebras/pytorch/optim/weight_decay_scheduler.html#OneCycleWD)[#](#cerebras.pytorch.optim.weight_decay_scheduler.OneCycleWD "Permalink to this definition")
Sets the weight decay of each parameter group according to the 1cycle weight decay policy. The 1cycle policy anneals the weight decay from an initial weight decay to some maximum weight decay and then from that maximum weight decay to some minimum weight decay much lower than the initial weight decay. This policy was initially described in the paper [Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates](https://arxiv.org/abs/1708.07120).
This scheduler is not chainable.
This class is similar to the [Pytorch OneCycleLR LRS](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html#torch.optim.lr_scheduler.OneCycleLR).
**Parameters:**
* **optimizer** ([_torch.optim.Optimizer_](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer "(in PyTorch v2.4)")) – The optimizer to schedule
* **initial_val** (_float_) – Initial weight decay. Compared with PyTorch, this is equivalent to max\_val / div\_factor.
* **max_val** (_float_) – Upper weight decay boundaries in the cycle.
* **total_steps** (_int_) – The total number of steps in the cycle.
* **pct_start** (_float_) – The percentage of the cycle (in number of steps) spent increasing the weight decay.
* **final\_div\_factor** (_float_) – Determines the minimum weight decay via min\_val = initial\_val/final\_div\_factor.
* **three_phase** (_bool_) – If True, use a third phase of the schedule to annihilate the weight decay
* **anneal_strategy** (_str_) – Specifies the annealing strategy: “cos” for cosine annealing, “linear” for linear annealing.