Module ml4opf.models.ldf_nn.ldf_nn

Classes

class LDFNN (opfmodel: OPFModel,
slices: list[slice],
optimizer: str = 'adam',
loss: str = 'mse',
hidden_sizes: list[int] = [100, 100],
activation: str = 'relu',
boundrepair: str = 'none',
learning_rate: float = 0.001,
step_size: float = 1e-05,
kickin: int = 0,
update_freq: int = 500,
divide_by_counter: bool = True,
exclude_keys: list[str] = [],
weight_init_seed: int = 42)

Base class for LDF containing formulation-agnostic methods.
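
The constructor exposes the LDF-specific hyperparameters step_size, kickin, update_freq, and divide_by_counter. The sketch below illustrates the standard Lagrangian-dual scheme these names suggest: constraint violations are added to the supervised loss weighted by multipliers, and the multipliers are periodically increased in proportion to the observed violations. The function names and the exact update rule are illustrative assumptions, not this class's implementation.

import torch

def ldf_loss(base_loss: torch.Tensor,
             violations: torch.Tensor,
             multipliers: torch.Tensor) -> torch.Tensor:
    # Supervised loss plus multiplier-weighted constraint violations (assumed form).
    return base_loss + (multipliers * violations).sum()

def update_multipliers(multipliers: torch.Tensor,
                       accumulated_violations: torch.Tensor,
                       counter: int,
                       step_size: float = 1e-5,
                       divide_by_counter: bool = True) -> torch.Tensor:
    # Dual-ascent step, run every update_freq steps once training passes the
    # kickin epoch: grow each multiplier in proportion to the (optionally
    # averaged) violation accumulated since the last update.
    violation = accumulated_violations / counter if divide_by_counter else accumulated_violations
    return multipliers + step_size * violation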

Ancestors

  • BasicNN
  • pytorch_lightning.core.module.LightningModule
  • lightning_fabric.utilities.device_dtype_mixin._DeviceDtypeModuleMixin
  • pytorch_lightning.core.mixins.hparams_mixin.HyperparametersMixin
  • pytorch_lightning.core.hooks.ModelHooks
  • pytorch_lightning.core.hooks.DataHooks
  • pytorch_lightning.core.hooks.CheckpointHooks
  • torch.nn.modules.module.Module

Subclasses

Class variables

var opfmodel : OPFModel
var violation : OPFViolation

Methods

def on_train_epoch_end(self)

Called in the training loop at the very end of the epoch.

To access all batch outputs at the end of the epoch, you can cache step outputs as an attribute of the LightningModule and access them in this hook:

.. code-block:: python

import lightning as L
import torch


class MyLightningModule(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.training_step_outputs = []

    def training_step(self, batch, batch_idx):
        loss = ...
        self.training_step_outputs.append(loss)
        return loss

    def on_train_epoch_end(self):
        # do something with all training_step outputs, for example:
        epoch_mean = torch.stack(self.training_step_outputs).mean()
        self.log("training_epoch_mean", epoch_mean)
        # free up the memory
        self.training_step_outputs.clear()

def on_train_epoch_start(self)

Called in the training loop at the very beginning of the epoch.

def set_loss(self, loss)
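
The supported values for loss are "mse" and "l1" (see the constructor). A plausible sketch of set_loss, assuming it simply selects the matching torch loss module; the actual implementation may differ:

import torch.nn as nn

def set_loss(self, loss: str) -> None:
    # Illustrative mapping only (assumption): pick the torch criterion by name.
    self.loss_fn = {"mse": nn.MSELoss(), "l1": nn.L1Loss()}[loss]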

Inherited members

class LDFNeuralNet (config: dict,
problem: OPFProblem)

A basic feed-forward neural network.

Args

config : dict
Dictionary containing the model configuration (see the usage sketch after this argument list). Keys:

  • optimizer (str): Optimizer. Supported: "adam", "adamw", "sgd".
  • loss (str): Loss function. Supported: "mse", "l1".
  • hidden_sizes (list[int]): List of hidden layer sizes.
  • activation (str): Activation function. Supported: "relu", "tanh", "sigmoid".
  • boundrepair (str): Bound clipping method. Supported: "none", "relu", "clamp", "sigmoid".
  • learning_rate (float): Learning rate.

problem : OPFProblem
The OPFProblem object.
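
A hedged usage sketch based on the arguments above. The import path follows the module documented on this page; how the OPFProblem instance is constructed is not shown and is left as a placeholder.

from ml4opf.models.ldf_nn.ldf_nn import LDFNeuralNet

config = {
    "optimizer": "adam",         # "adam", "adamw", or "sgd"
    "loss": "mse",               # "mse" or "l1"
    "hidden_sizes": [100, 100],  # two hidden layers of width 100
    "activation": "relu",        # "relu", "tanh", or "sigmoid"
    "boundrepair": "none",       # "none", "relu", "clamp", or "sigmoid"
    "learning_rate": 1e-3,
}

problem = ...  # an OPFProblem built from your dataset (construction not shown here)
model = LDFNeuralNet(config, problem)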

Ancestors

Subclasses

Class variables

var model : LDFNN

Inherited members