Module ml4opf.parsers.pglearn
Parser for the PGLearn datasets
Classes
class MaybeGunzipH5File (name: str, *args, **kwargs)
-
Represents an HDF5 file.
Create a new file object.
See the h5py user guide for a detailed explanation of the options.
- name: Name of the file on disk, or file-like object. Note: for files created with the 'core' driver, HDF5 still requires this be non-empty.
- mode: 'r' read-only, file must exist (default); 'r+' read/write, file must exist; 'w' create file, truncate if exists; 'w-' or 'x' create file, fail if exists; 'a' read/write if exists, create otherwise.
- driver: Name of the driver to use. Legal values are None (default, recommended), 'core', 'sec2', 'direct', 'stdio', 'mpio', 'ros3'.
- libver: Library version bounds. Supported values: 'earliest', 'v108', 'v110', 'v112' and 'latest'.
- userblock_size: Desired size of user block. Only allowed when creating a new file (mode w, w- or x).
- swmr: Open the file in SWMR read mode. Only used when mode = 'r'.
- rdcc_nbytes: Total size of the dataset chunk cache in bytes. The default size is 1024**2 (1 MiB) per dataset. Applies to all datasets unless individually changed.
- rdcc_w0: The chunk preemption policy for all datasets. Must be between 0 and 1 inclusive; it indicates the weighting according to which chunks that have been fully read or written are penalized when determining which chunks to flush from cache. A value of 0 means fully read or written chunks are treated no differently than other chunks (the preemption is strictly LRU), while a value of 1 means fully read or written chunks are always preempted before other chunks. If your application only reads or writes data once, this can safely be set to 1; otherwise it should be set lower depending on how often you re-read or re-write the same data. The default value is 0.75. Applies to all datasets unless individually changed.
- rdcc_nslots: The number of chunk slots in the raw data chunk cache for this file. Increasing this value reduces the number of cache collisions, but slightly increases the memory used. Due to the hashing strategy, this value should ideally be a prime number. As a rule of thumb, it should be at least 10 times the number of chunks that can fit in rdcc_nbytes bytes; for maximum performance, roughly 100 times that number of chunks. The default value is 521. Applies to all datasets unless individually changed.
- track_order: Track dataset/group/attribute creation order under root group if True. If None, use the global default h5.get_config().track_order.
- fs_strategy: The file space handling strategy to be used. Only allowed when creating a new file (mode w, w- or x). Defined as: "fsm" (FSM, Aggregators, VFD); "page" (Paged FSM, VFD); "aggregate" (Aggregators, VFD); "none" (VFD). If None, use HDF5 defaults.
- fs_page_size: File space page size in bytes. Only used when fs_strategy="page". If None, use the HDF5 default (4096 bytes).
- fs_persist: A boolean value indicating whether free space should be persistent or not. Only allowed when creating a new file. The default value is False.
- fs_threshold: The smallest free-space section size that the free space manager will track. Only allowed when creating a new file. The default value is 1.
- page_buf_size: Page buffer size in bytes. Only allowed for HDF5 files created with fs_strategy="page". Must be a power of two and greater than or equal to the file space page size when creating the file. Not used by default.
- min_meta_keep: Minimum percentage of metadata to keep in the page buffer before allowing pages containing metadata to be evicted. Applicable only if page_buf_size is set. Default value is zero.
- min_raw_keep: Minimum percentage of raw data to keep in the page buffer before allowing pages containing raw data to be evicted. Applicable only if page_buf_size is set. Default value is zero.
- locking: The file locking behavior. Defined as: False (or "false") disable file locking; True (or "true") enable file locking; "best-effort" enable file locking but ignore some errors; None use HDF5 defaults. Only available with HDF5 >= 1.12.1 or 1.10.x >= 1.10.7.
  !!! warning "Warning"
      The HDF5_USE_FILE_LOCKING environment variable can override this parameter.
- alignment_threshold: Together with alignment_interval, this property ensures that any file object greater than or equal in size to the alignment threshold (in bytes) will be aligned on an address which is a multiple of the alignment interval.
- alignment_interval: This property should be used in conjunction with alignment_threshold. See the description above. For more details, see https://portal.hdfgroup.org/display/HDF5/H5P_SET_ALIGNMENT
- meta_block_size: Set the current minimum size, in bytes, of new metadata block allocations. See https://portal.hdfgroup.org/display/HDF5/H5P_SET_META_BLOCK_SIZE
- Additional keywords: Passed on to the selected file driver.
Ancestors
- h5py._hl.files.File
- h5py._hl.group.Group
- h5py._hl.base.HLObject
- h5py._hl.base.CommonStateObject
- h5py._hl.base.MutableMappingHDF5
- h5py._hl.base.MappingHDF5
- collections.abc.MutableMapping
- collections.abc.Mapping
- collections.abc.Collection
- collections.abc.Sized
- collections.abc.Iterable
- collections.abc.Container
Methods
def close(self)
-
Close the file. All open objects become invalid.
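A hedged usage sketch, relying only on the h5py.File interface inherited above; the file path is hypothetical:

```python
from ml4opf.parsers.pglearn import MaybeGunzipH5File

# Open a (possibly gzipped) HDF5 file read-only and list its top-level groups.
with MaybeGunzipH5File("ACOPF/primal.h5.gz", "r") as f:  # a plain .h5 path should work the same way
    print(list(f.keys()))
```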
class PGLearnParser (data_path: str | pathlib.Path)
-
Parser for the PGLearn dataset.
Initialize the parser by validating and setting the path.
Class variables
var padval
Static methods
def convert_to_float32(dat: dict[str, torch.Tensor | numpy.ndarray | numpy.str_])
-
Convert all float64 data to float32 in-place.
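A brief usage sketch; the keys, shapes, and values below are illustrative, not part of the dataset schema:

```python
import numpy as np
import torch

from ml4opf.parsers.pglearn import PGLearnParser

dat = {
    "solution/primal/pg": torch.zeros(4, 2, dtype=torch.float64),
    "meta/config": np.str_("case14"),  # non-float64 entries are presumably left untouched
}
PGLearnParser.convert_to_float32(dat)  # modifies dat in place
assert dat["solution/primal/pg"].dtype == torch.float32
```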
def make_tree(dat: dict[str, torch.Tensor | numpy.ndarray | numpy.str_], delimiter: str = '/')
-
Convert a flat dictionary to a tree. Note that the keys of dat must have a tree structure where data is only at the leaves. Assumes keys are delimited by "/", e.g. "solution/primal/pg".
Args
dat : dict
- Flat dictionary of data.
delimiter : str, optional
- Delimiter to use for splitting keys. Defaults to "/".
Returns
dict
- Tree dictionary of data from dat.
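A usage sketch; the keys and tensor shapes are illustrative:

```python
import torch

from ml4opf.parsers.pglearn import PGLearnParser

flat = {
    "solution/primal/pg": torch.zeros(10, 3),
    "solution/primal/vm": torch.ones(10, 14),
    "meta/seed": torch.tensor([42]),
}
tree = PGLearnParser.make_tree(flat)

# Leaves of the tree hold the values from the flat dictionary.
assert torch.equal(tree["solution"]["primal"]["pg"], flat["solution/primal/pg"])
```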
def pad_to_dense(array, padval, dtype=builtins.int)
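pad_to_dense has no docstring; judging by its name, signature, and the padval class variable, it presumably pads a ragged sequence of rows into a dense rectangular array filled with padval. A stand-in sketch of that behavior (an assumption, not the library's implementation):

```python
import numpy as np

def pad_to_dense_sketch(array, padval, dtype=int):
    """Hypothetical stand-in: pad ragged rows to a dense 2D array using padval as fill."""
    width = max(len(row) for row in array)
    out = np.full((len(array), width), padval, dtype=dtype)
    for i, row in enumerate(array):
        out[i, : len(row)] = row
    return out

pad_to_dense_sketch([[1, 2], [3]], padval=-1)  # -> array([[ 1,  2], [ 3, -1]])
```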
Methods
def open_json(self)
-
Open the JSON file, supporting gzip and bz2 compression based on the file suffix.
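The suffix-based dispatch might look roughly like the sketch below; open_json itself takes no arguments and uses the parser's own path, so this is only an illustration of the idea, not the library's code:

```python
import bz2
import gzip
import json
from pathlib import Path

def read_maybe_compressed_json(path: Path) -> dict:
    # Pick an opener based on the file suffix, then parse the JSON payload.
    if path.suffix == ".gz":
        opener = gzip.open
    elif path.suffix == ".bz2":
        opener = bz2.open
    else:
        opener = open
    with opener(path, "rt") as f:
        return json.load(f)
```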
def parse_h5(self,
dataset_name: str,
split: str = 'train',
primal: bool = True,
dual: bool = False,
convert_to_float32: bool = True) ‑> dict[str, torch.Tensor | numpy.ndarray | numpy.str_] | tuple[dict[str, torch.Tensor | numpy.ndarray | numpy.str_], dict[str, torch.Tensor | numpy.ndarray | numpy.str_]]
-
Parse the HDF5 file.
Args
dataset_name : str
- The name of the dataset. Typically the formulation ("ACOPF", "DCOPF", etc.).
split : str, optional
- The split to return. Defaults to "train".
primal : bool, optional
- If True, parse the primal file. Defaults to True.
dual : bool, optional
- If True, parse the dual file. Defaults to False.
convert_to_float32 : bool, optional
- If True, convert all float64 data to torch.float32. Defaults to True.
Returns
dict
- Flattened dictionary of HDF5 data with PyTorch tensors for numerical data and NumPy arrays for string/object data.
If make_test_set is True, then this function will return a tuple of two dictionaries. The first dictionary is the training set and the second dictionary is the test set. The test set is a random 10% sample of the training set.
This parser returns a single-level dictionary where the keys are in the form of solution/primal/pg, where solution is the group, primal is the subgroup, and pg is the dataset from the HDF5 file. The values are PyTorch tensors. This parser uses h5py.File.visititems to iterate over the HDF5 file quickly.
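A hedged usage sketch; the dataset directory is hypothetical, and the formulation name and key follow the examples above:

```python
from ml4opf.parsers.pglearn import PGLearnParser

parser = PGLearnParser("path/to/pglearn_case")  # hypothetical dataset directory
train = parser.parse_h5("ACOPF", split="train", primal=True)

pg = train["solution/primal/pg"]  # torch.Tensor of primal active generation, one row per sample
```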
def parse_json(self, model_type: str | Sequence[str] = None)
-
Parse the JSON file from PGLearn.
Args
model_type : Union[str, Sequence[str]], optional
- The reference solutions to save. Defaults to None (no reference solutions saved).
Returns
dict
- Dictionary containing the parsed data.
In the JSON file, the data is stored by each individual component. So to get generator 1's upper bound on active generation, you'd look at: raw_json['data']['gen']['1']['pmax'] and get a float.
In the parsed version, we aggregate each component's attributes into torch.Tensor arrays. So to get generator 1's upper bound on active generation, you'd look at: dat['gen']['pmax'][0] and get a float. Note that the index is 0-based and an integer, not 1-based and a string.
To access the reference solution, pass a model_type (or multiple) and then access dat["ref_solutions"][model_type].
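A hedged usage sketch contrasting the access patterns described above; the dataset directory is hypothetical:

```python
from ml4opf.parsers.pglearn import PGLearnParser

parser = PGLearnParser("path/to/pglearn_case")  # hypothetical dataset directory
dat = parser.parse_json(model_type="ACOPF")

pmax_gen1 = dat["gen"]["pmax"][0]       # generator "1" in the raw JSON -> index 0 here
ref = dat["ref_solutions"]["ACOPF"]     # reference solution for the requested model_type
```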
def validate_path(self, path: str | pathlib.Path) ‑> pathlib.Path
-
Validate the path to the HDF5 file.