Module ml4opf.loss_functions
ML4OPF Loss Functions
Sub-modules
ml4opf.loss_functions.ldf - Formulation-agnostic Lagrangian Dual Framework Loss Function.
ml4opf.loss_functions.objective - Use the formulation objective as the loss function.
ml4opf.loss_functions.penalty - Penalize constraint violations.
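A minimal import sketch; the exact import paths are assumed from the sub-module listing above (the classes may also be re-exported at the ml4opf.loss_functions level):

    # Assumed import locations based on the sub-module names above.
    from ml4opf.loss_functions.ldf import LDFLoss
    from ml4opf.loss_functions.objective import ObjectiveLoss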
Classes
class LDFLoss(v: OPFViolation, step_size: float, kickin: int, update_freq: int, divide_by_counter: bool = True, exclude_keys: str | list[str] | None = None)
LDFLoss implements the Lagrangian Dual Framework. exclude_keys is either None to use all violations, "all" to skip all violations, or a list of keys to skip specific violations.
Initialize the LDFLoss module.
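A construction sketch based on the signature above. The OPFViolation instance `v` is assumed to already exist (how it is obtained is not covered here), and the numeric values and the interpretations of kickin and update_freq in the comments are illustrative assumptions, not documented behavior:

    # Sketch only: `v` is an existing OPFViolation for the chosen formulation.
    ldf_loss = LDFLoss(
        v,
        step_size=1e-3,     # illustrative step size for the multiplier updates
        kickin=10,          # assumed: epoch after which multiplier updates begin
        update_freq=5,      # assumed: update multipliers every 5 epochs
        divide_by_counter=True,
        exclude_keys=None,  # use all violation terms
    )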
Ancestors
- torch.nn.modules.module.Module
Methods
def end_epoch(self)
Call this method at the end of each epoch.
def forward(self, base_loss: torch.Tensor, exclude_keys: str | list[str] | None = None, **calc_violation_inputs: torch.Tensor) -> torch.Tensor
Compute the LDF loss for a batch of samples.
def init_mults(self, shapes=None)
Initialize λ and the violation trackers to zeros.
def reset_trackers(self)
Reset the violation trackers to zeros.
def start_epoch(self, epoch) -> str | None
Call this method at the start of each epoch.
def update(self)
Update the Lagrangian dual multipliers (λ).
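A minimal training-loop sketch showing where the epoch hooks fit. The model, dataloader, optimizer, and the keyword names passed through **calc_violation_inputs (pg, vm, ...) are hypothetical and formulation-dependent, and whether base_loss should be per-sample or already reduced is not specified here:

    # Sketch only; names of tensors and violation inputs are illustrative.
    for epoch in range(num_epochs):
        ldf_loss.start_epoch(epoch)                    # start-of-epoch hook
        for batch in dataloader:
            pred = model(batch["input"])               # hypothetical model output dict
            base_loss = objective_loss(pg=pred["pg"])  # e.g. ObjectiveLoss (see below)
            loss = ldf_loss(base_loss, pg=pred["pg"], vm=pred["vm"])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        ldf_loss.end_epoch()                           # end-of-epoch hook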
class ObjectiveLoss(v: OPFViolation, reduction: str | None = 'mean')
ObjectiveLoss is the original objective of the OPF. It takes the same arguments as the corresponding formulation's objective method (see ml4opf.loss_functions.objective) and returns the objective value.
Initialize the ObjectiveLoss module.
Args
v : OPFViolation - OPFViolation module.
reduction : Optional[str] - Reduction operation. Default: "mean".
Ancestors
- torch.nn.modules.module.Module
Class variables
var SUPPORTED_REDUCTIONS
Methods
def forward(self, *objective_args, **objective_kwargs) -> torch.Tensor
Compute the objective value for a batch of samples.
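A usage sketch under the same caveats as above: the arguments forwarded to the formulation's objective (pg here) are formulation-specific placeholders, and "mean" is simply the documented default reduction:

    # Sketch only; the objective arguments depend on the formulation.
    objective_loss = ObjectiveLoss(v, reduction="mean")
    base_loss = objective_loss(pg=pred["pg"])  # forwarded to the formulation's objective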