Module ml4opf.formulations.problem

Abstract base class for each OPF problem.

Each formulation should inherit from OPFProblem and implement the following (a minimal subclass sketch follows this list):

  • _parse_sanity_check: Use self.train_data, self.test_data, and self.json_data to perform sanity checks, ensuring they correspond to the same dataset.

  • feasibility_check: Dictionary of keys and expected values used to check the feasibility of each sample; each key is checked to have the corresponding value.

  • default_combos: A dictionary whose keys name the elements of the tuple returned by the TensorDataset and whose values are lists of train_data keys whose tensors are concatenated. Used by make_dataset.

  • default_order: The order of the keys in the default_combos dictionary.
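
A minimal subclass sketch, using hypothetical data keys and checks (consult an existing formulation such as ACOPFProblem for the real ones):

    from ml4opf.formulations.problem import OPFProblem

    class MyOPFProblem(OPFProblem):
        def _parse_sanity_check(self):
            # Hypothetical check: the parsed splits describe the same grid.
            n_bus = len(self.json_data["bus"])  # key name is an assumption
            assert self.train_data["input/pd"].shape[1] == n_bus

        @property
        def feasibility_check(self) -> dict[str, str]:
            # Hypothetical key/value pair; see ACOPFProblem.feasibility_check.
            return {"meta/termination_status": "OPTIMAL"}

        @property
        def default_combos(self) -> dict[str, list[str]]:
            # Hypothetical: loads form the input, generator setpoints the target.
            return {"input": ["input/pd", "input/qd"],
                    "target": ["primal/pg", "primal/vm"]}

        @property
        def default_order(self) -> list[str]:
            return ["input", "target"]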

Classes

class OPFProblem (data_directory: str, dataset_name: str, **parse_kwargs)

OPF Problem

This class parses the JSON/HDF5 files on initialization, providing a standard interface for accessing OPF data.

OPFProblem also includes methods for creating input/target tensors from the HDF5 data by concatenating keys, though more complex datasets (e.g., for graph neural networks) can be created by accessing train_data and json_data directly.

By default, initializing OPFProblem parses the HDF5/JSON files, removes infeasible samples, and sets aside 5000 samples for testing. The test data can be accessed via test_data; train_data will then contain only the training data. Models should split the training data into training/validation sets downstream.
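
For example (the import path and dataset name below are placeholders for illustration):

    from ml4opf.formulations.acopf import ACOPFProblem  # import path is an assumption

    problem = ACOPFProblem("path/to/data", "my_dataset")
    train = problem.train_data   # dict of training data, infeasible samples removed
    test = problem.test_data     # dict holding the 5000 held-out test samples
    meta = problem.json_data     # parsed JSON metadata for the network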

Initialization Arguments:

  • data_directory (str): Path to the folder containing the problem files

  • dataset_name (str): Name of the problem to use

  • primal (bool): Whether to parse the primal data (default: True)

  • dual (bool): Whether to parse the dual data (default: True)

  • train_set (bool): Whether to parse the training set (default: True)

  • test_set (bool): Whether to parse the test set (default: True)

  • convert_to_float32 (bool): Whether to convert the data to float32 (default: True)

  • sanity_check (bool): Whether to perform a sanity check on the parsed data (default: True)

Attributes:

  • path (Path): Path to the problem file folder

  • name (str): Name of the problem to use

  • train_data (dict): Dictionary of parsed HDF5 data. If test_set is True, this contains only the training set.

  • test_data (dict): Dictionary of parsed HDF5 data for the test set. If test_set is False, this is None.

  • json_data (dict): Dictionary of parsed JSON data.

  • violation (OPFViolation): OPFViolation object for computing constraint violations for this problem.

Methods:

  • parse: Parse the JSON and HDF5 files for the problem

  • make_dataset: Create input/target tensors by concatenating keys from the HDF5 data. Returns the TensorDataset and slices for extracting the original components.

  • slice_batch: Extract the original components from a batch of data given the slices.

  • slice_tensor: Extract the original components from a tensor given the slices.

Ancestors

  • abc.ABC

Static methods

def slice_batch(batch: tuple[torch.Tensor, ...], slices: list[dict[str, slice]])

Slice the batch tensors into the original tensors

Args

batch : tuple[Tensor, …]
    Batch of tensors from the TensorDataset
slices : list[dict[str, slice]]
    List of dictionaries of slices

Returns

tuple[dict[str, Tensor], …]
    Sliced tensors
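
A minimal sketch with a hand-built batch and slices, assuming the slices index the feature (last) dimension; all key names are hypothetical:

    import torch
    from ml4opf.formulations.problem import OPFProblem

    batch = (torch.randn(32, 5), torch.randn(32, 4))   # (input, target) pair
    slices = [{"pd": slice(0, 3), "qd": slice(3, 5)},  # input components
              {"pg": slice(0, 2), "vm": slice(2, 4)}]  # target components
    inputs, targets = OPFProblem.slice_batch(batch, slices)
    # inputs["pd"].shape == (32, 3); targets["vm"].shape == (32, 2)
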
def slice_tensor(tensor: torch.Tensor, slices: dict[str, slice])

Slice the tensor into the original tensors

Args

tensor : Tensor
    Tensor to slice
slices : dict[str, slice]
    Dictionary of slices

Returns

dict[str, Tensor]
    Sliced tensors
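
The single-tensor variant follows the same pattern (key names hypothetical, slicing assumed to act on the last dimension):

    import torch
    from ml4opf.formulations.problem import OPFProblem

    x = torch.randn(32, 5)
    parts = OPFProblem.slice_tensor(x, {"pd": slice(0, 3), "qd": slice(3, 5)})
    # parts["pd"].shape == (32, 3); parts["qd"].shape == (32, 2)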

Instance variables

prop default_combos : dict[str, list[str]]

A dictionary whose keys name the elements of the tuple returned by the TensorDataset and whose values are lists of train_data keys whose tensors are concatenated. Used by make_dataset.

prop default_order : list[str]

The order of the keys in the default_combos dictionary.

prop feasibility_check : dict[str, str]

Dictionary of keys and values to check feasibility of the problem.

Each key is checked to have the corresponding value. If any value does not match, the sample is removed from the dataset by PGLearnParser. See ACOPFProblem.feasibility_check for an example.
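
As an illustration of the filtering rule (the key and expected value are hypothetical stand-ins):

    import numpy as np

    data = {"meta/termination_status": np.array(["OPTIMAL", "INFEASIBLE", "OPTIMAL"]),
            "primal/pg": np.random.rand(3, 2)}
    expected = {"meta/termination_status": "OPTIMAL"}  # hypothetical feasibility_check
    mask = np.ones(3, dtype=bool)
    for key, value in expected.items():
        mask &= data[key] == value
    data = {k: v[mask] for k, v in data.items()}  # samples 0 and 2 survive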

Methods

def make_dataset(self,
combos: dict[str, list[str]] | None = None,
order: list[str] | None = None,
data: dict[str, torch.Tensor] | None = None,
test_set: bool = False,
sanity_check: bool = True) -> tuple[dict[str, torch.Tensor], list[dict[str, slice]]]

Make a TensorDataset from self.train_data given the keys in combos and the order of the keys in order.
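
A usage sketch, continuing from the problem instance above, assuming make_dataset returns the (TensorDataset, slices) pair described here and that the default combos yield an (input, target) pair:

    from torch.utils.data import DataLoader

    dataset, slices = problem.make_dataset()  # uses default_combos/default_order
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    for batch in loader:
        inputs, targets = problem.slice_batch(batch, slices)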

def parse(self,
primal: bool = True,
dual: bool = True,
train_set: bool = True,
test_set: bool = True,
convert_to_float32: bool = True,
sanity_check: bool = True)

Parse the JSON and HDF5 files for the problem
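
Since initialization forwards **parse_kwargs to parse, these flags are typically set at construction; continuing the placeholder setup above:

    problem = ACOPFProblem("path/to/data", "my_dataset",
                           dual=False, convert_to_float32=False)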