Module ml4opf.layers
ML4OPF Layers
Sub-modules
ml4opf.layers.bound_repair
-
Differentiable repair layer for satisfying bound constraints x̲≤x≤x̅.
ml4opf.layers.hypersimplex_repair
-
Differentiable repair layer for the hyper-simplex constraint, ∑x=X, x̲≤x≤x̅.
ml4opf.layers.slackbus_repair
ml4opf.layers.voltagedifference_repair
Classes
class BoundRepair (xmin: torch.Tensor | None,
                   xmax: torch.Tensor | None,
                   method: str = 'relu',
                   sanity_check: bool = True,
                   memory_efficient: int = 0)
-
An activation function that clips the output to a given range.
Initializes the BoundRepair module.
If both xmin and xmax are None, per-sample bounds must be provided as input to the forward method. In this case memory_efficient is ignored (it is set to 2 regardless).
Args
xmin : Tensor - Lower bounds for clipping.
xmax : Tensor - Upper bounds for clipping.
method : str - The method to use for clipping. One of ["relu", "sigmoid", "clamp", "softplus", "tanh", "none"].
sanity_check : bool - If True, performs sanity checks on the input.
memory_efficient : int - 0: pre-compute masks and pre-index bounds; 1: pre-compute masks only; 2: do not pre-compute anything.
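A minimal usage sketch of the constructor arguments above (the import path is assumed from the sub-module listing; tensor values are illustrative):
import torch
from ml4opf.layers.bound_repair import BoundRepair  # assumed import path

xmin = torch.tensor([0.0, -1.0, 0.5])
xmax = torch.tensor([1.0,  1.0, 2.0])
repair = BoundRepair(xmin, xmax, method="relu")

x = torch.tensor([[-0.3, 0.2, 3.0]])  # raw model output, partially out of bounds
y = repair(x)                         # elementwise within [xmin, xmax]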
Ancestors
- torch.nn.modules.module.Module
Class variables
var SUPPORTED_METHODS
Static methods
def double_relu(x: torch.Tensor, xmin: torch.Tensor, xmax: torch.Tensor)
-
ReLU bound repair function for double-sided bounds.
\text{relu}(x - \underline{x}) - \text{relu}(x - \overline{x}) + \underline{x}
Args
x : Tensor - Input tensor.
xmin : Tensor - Lower bound.
xmax : Tensor - Upper bound.
Returns
Tensor - Output tensor satisfying the bounds.
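A worked sketch of the double-sided ReLU formula above in plain torch; the standalone helper below is illustrative and mirrors what the static method computes:
import torch

def double_relu_sketch(x, xmin, xmax):
    # relu(x - xmin) - relu(x - xmax) + xmin
    return torch.relu(x - xmin) - torch.relu(x - xmax) + xmin

x = torch.tensor([-2.0, 0.5, 3.0])
lo = torch.zeros(3)
hi = torch.ones(3)
double_relu_sketch(x, lo, hi)  # tensor([0.0000, 0.5000, 1.0000])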
def double_sigmoid(x: torch.Tensor, xmin: torch.Tensor, xmax: torch.Tensor)
-
Sigmoid bound repair function for double-sided bounds.
\text{sigmoid}(x) \cdot (\overline{x} - \underline{x}) + \underline{x}
Args
x : Tensor - Input tensor.
xmin : Tensor - Lower bound.
xmax : Tensor - Upper bound.
Returns
Tensor - Output tensor satisfying the bounds.
def double_softplus(x: torch.Tensor, xmin: torch.Tensor, xmax: torch.Tensor)
-
Softplus bound repair function for double-sided bounds.
\text{softplus}(x - \underline{x}) - \text{softplus}(x - \overline{x}) + \underline{x}
Args
x : Tensor - Input tensor.
xmin : Tensor - Lower bound.
xmax : Tensor - Upper bound.
Returns
Tensor - Output tensor satisfying the bounds.
def double_tanh(x: torch.Tensor, xmin: torch.Tensor, xmax: torch.Tensor)
-
Tanh bound repair function for double-sided bounds.
(\frac{1}{2} \tanh(x) + \frac{1}{2}) \cdot (\overline{x} - \underline{x}) + \underline{x}
Args
x : Tensor - Input tensor.
xmin : Tensor - Lower bound.
xmax : Tensor - Upper bound.
Returns
Tensor - Output tensor satisfying the bounds.
def lower_relu(x: torch.Tensor, xmin: torch.Tensor)
-
ReLU bound repair function for lower bounds.
\text{relu}(x - \underline{x}) + \underline{x}
Args
x : Tensor - Input tensor.
xmin : Tensor - Lower bound.
Returns
Tensor - Output tensor satisfying the bounds.
def lower_softplus(x: torch.Tensor, xmin: torch.Tensor)
-
Softplus bound repair function for lower bounds.
\text{softplus}(x - \underline{x}) + \underline{x}
Args
x : Tensor - Input tensor.
xmin : Tensor - Lower bound.
Returns
Tensor - Output tensor satisfying the bounds.
def upper_relu(x: torch.Tensor, xmax: torch.Tensor)
-
ReLU bound repair function for upper bounds.
-\text{relu}(\overline{x} - x) + \overline{x}
Args
x : Tensor - Input tensor.
xmax : Tensor - Upper bound.
Returns
Tensor - Output tensor satisfying the bounds.
def upper_softplus(x: torch.Tensor, xmax: torch.Tensor)
-
Softplus bound repair function for upper bounds.
-\text{softplus}(\overline{x} - x) + \overline{x}
Args
x : Tensor - Input tensor.
xmax : Tensor - Upper bound.
Returns
Tensor - Output tensor satisfying the bounds.
Methods
def clamp(self, x: torch.Tensor)
-
Bound repair function that uses torch.clamp.
\text{clamp}(x, \underline{x}, \overline{x})
Args
x : Tensor - Input tensor.
Returns
Tensor - Output tensor satisfying the bounds.
def forward(self,
            x: torch.Tensor,
            xmin: torch.Tensor | None = None,
            xmax: torch.Tensor | None = None) -> Callable[..., Any]
-
Applies the bound clipping function to the input.
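A sketch of the per-sample-bounds mode described in the class docstring, where both bounds are left as None at construction and passed to forward instead (import path assumed as before; shapes are illustrative):
import torch
from ml4opf.layers.bound_repair import BoundRepair  # assumed import path

repair = BoundRepair(None, None, method="softplus")

x = torch.randn(8, 3)      # batch of raw outputs
xmin = torch.zeros(8, 3)   # per-sample lower bounds
xmax = torch.ones(8, 3)    # per-sample upper bounds
y = repair(x, xmin, xmax)  # each row repaired against its own bounds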
def load_state_dict(self, state_dict: dict, strict: bool = True)
-
Loads the state dictionary and re-initializes the pre-computed quantities.
def none(self, x: torch.Tensor)
-
No-op; returns x unchanged.
def preprocess_bounds(self, memory_efficient: int)
-
Pre-computes masks and pre-indexes bounds depending on the memory_efficient level.
Args
memory_efficient : int
0: (fastest, most memory) pre-compute masks and index bounds
1: pre-compute masks only
2: (slowest, least memory) do not pre-compute anything
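An illustrative way to trade memory for speed after construction, by re-running preprocess_bounds with a different level (continuing the construction sketch above):
repair = BoundRepair(xmin, xmax, method="relu", memory_efficient=2)  # least memory
repair.preprocess_bounds(0)  # switch to level 0: pre-compute masks and index bounds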
def relu(self, x: torch.Tensor)
-
Apply the ReLU-based bound repair functions to the input, supporting any combination of single- or double-sided bounds.
Args
x : Tensor - Input tensor.
Returns
Tensor - Output tensor satisfying the bounds.
def sigmoid(self, x: torch.Tensor)
-
Apply the sigmoid bound repair function to the input, supporting only unbounded or double-sided bounds.
Args
x : Tensor - Input tensor.
Returns
Tensor - Output tensor satisfying the bounds.
def softplus(self, x: torch.Tensor)
-
Apply the softplus bound repair function to the input, supporting any combination of single- or double-sided bounds.
Args
x : Tensor - Input tensor.
Returns
Tensor - Output tensor satisfying the bounds.
def tanh(self, x: torch.Tensor)
-
Apply the tanh bound repair function to the input, supporting only unbounded or double-sided bounds.
Args
x : Tensor - Input tensor.
Returns
Tensor - Output tensor satisfying the bounds.
class HyperSimplexRepair (xmin: torch.Tensor | None = None,
                          xmax: torch.Tensor | None = None,
                          X: torch.Tensor | None = None)
-
Repair layer for the hyper-simplex constraint ∑x=X, x̲≤x≤x̅.
Initialize internal Module state, shared by both nn.Module and ScriptModule.
Ancestors
- torch.nn.modules.module.Module
Methods
def forward(self,
            x: torch.Tensor,
            xmin: torch.Tensor | None = None,
            xmax: torch.Tensor | None = None,
            X: torch.Tensor | None = None) -> Callable[..., Any]
-
Project onto ∑x=X, x̲≤x≤x̅.
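A minimal usage sketch (import path assumed from the sub-module listing; shapes and values are illustrative), e.g. repairing a dispatch-style vector so it meets a total equality target while staying within its bounds:
import torch
from ml4opf.layers.hypersimplex_repair import HyperSimplexRepair  # assumed import path

repair = HyperSimplexRepair()        # bounds and X supplied per call here

x = torch.tensor([[0.2, 0.5, 0.9]])  # raw prediction
xmin = torch.zeros(1, 3)
xmax = torch.ones(1, 3)
X = torch.tensor([2.0])              # required row sum
y = repair(x, xmin, xmax, X)         # rows sum to X and respect the bounds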