PGLearn.E2ELRReserveScalerType
E2ELRReserveScaler

Samples reserve requirements following the procedure below:

  1. Sample a minimum reserve requirement MRR from a uniform distribution U(lb, ub) (mrr_dist).
  2. Compute the upper bound of reserve requirements for each generator as rmax = α * (pmax - pmin).
  3. Fix the lower bound of reserve requirement per generator to zero.
  4. Fix the reserve cost of each generator to zero.

The scaling factor α determines each generator's maximum reserve capacity. It is computed as the factor parameter multiplied by the ratio of the largest generator's capacity to the total dispatchable capacity of all generators.
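A minimal sketch of this procedure is shown below; the variable names and the exact definition of "capacity" used for α are assumptions for illustration, not the actual PGLearn implementation.

```julia
using Random, Distributions

# Illustrative sketch of the reserve-sampling procedure described above.
# `pmin`/`pmax` are the generators' dispatch limits; all names are assumptions.
function sample_reserves(rng::AbstractRNG, mrr_dist::Uniform, pmin, pmax, factor)
    MRR  = rand(rng, mrr_dist)                   # 1. minimum reserve requirement ~ U(lb, ub)
    α    = factor * maximum(pmax) / sum(pmax)    #    scaling factor (assumed definition of capacity)
    rmax = α .* (pmax .- pmin)                   # 2. per-generator reserve upper bounds
    rmin = zeros(length(pmax))                   # 3. lower bounds fixed to zero
    cost = zeros(length(pmax))                   # 4. reserve costs fixed to zero
    return MRR, rmin, rmax, cost
end
```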

source
PGLearn.GlocalType
Glocal{G,L}

A glocal distribution with global/local factors α::G and η::L.

This distribution represents a random variable of the form ϵ = α×η, where

  • α is a scalar random variable, with distribution d_α::G
  • η is a vector random variable, with distribution d_η::L
  • α and η are independent random variables
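As an illustration, a random variable of this form can be sampled as below using standard Distributions.jl constructors; this is not the Glocal API itself.

```julia
using Random, Distributions

d_α = Uniform(0.8, 1.2)                                         # global factor (scalar distribution)
d_η = product_distribution([Uniform(0.95, 1.05) for _ in 1:3])  # local factors (vector distribution)

rng = MersenneTwister(42)
α = rand(rng, d_α)     # sample the scalar global factor
η = rand(rng, d_η)     # sample the vector of local factors, independently of α
ϵ = α .* η             # glocal noise ϵ = α × η
```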
source
PGLearn.LoadScalerType
LoadScaler{D}

Scales loads with multiplicative noise sampled from d::D.

The distribution d::D is 2L-dimensional, where L is the number of loads. The sampled active (resp. reactive) demand for load $i$ is denoted by $\tilde{p}_{i}$ (resp. $\tilde{q}_{i}$) and has the form

\[\tilde{p}_{i} = \epsilon_{i} \bar{p}_{i}, \quad \tilde{q}_{i} = \epsilon_{i+L} \bar{q}_{i},\]

where $\bar{p}_{i}, \bar{q}_{i}$ are the reference active/reactive demand for load $i$, and $\epsilon \in \mathbb{R}^{2L}$ is multiplicative noise sampled from distribution d::D.
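For illustration, the formula above can be applied as follows; the reference loads and the noise distribution are placeholders.

```julia
L = 3
p_ref = [100.0, 80.0, 60.0]     # reference active demands p̄
q_ref = [ 20.0, 15.0, 10.0]     # reference reactive demands q̄

ϵ = 0.9 .+ 0.2 .* rand(2L)      # a 2L-dimensional multiplicative noise, here ~ U(0.9, 1.1)

p_new = ϵ[1:L]    .* p_ref      # p̃ᵢ = ϵᵢ   ⋅ p̄ᵢ
q_new = ϵ[L+1:2L] .* q_ref      # q̃ᵢ = ϵᵢ₊L ⋅ q̄ᵢ
```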

source
PGLearn.Nminus1StatusSamplerType
Nminus1StatusSampler

Samples a branch or generator to be inactive following the procedure below:

  1. With probability 1/2, decide whether it is a branch or a generator that will be inactive in this instance.

If a branch is inactive:

  1. Identify the set of branches which are not bridges. Removing a bridge results in a disconnected network.
  2. Sample a branch to be inactive from the set of non-bridge branches.
  3. Construct the status vector to be all ones except for zero at the sampled branch.

If a generator is inactive:

  1. Sample a generator to be inactive.
  2. Construct the status vector to be all ones except for zero at the sampled generator.
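A minimal sketch of this procedure is given below; it uses the bridges helper documented further down this page, and all other names are assumptions.

```julia
using Random

function sample_n_minus_1(rng::AbstractRNG, data::Dict)
    nbranch, ngen = length(data["branch"]), length(data["gen"])
    br_status, gen_status = ones(Int, nbranch), ones(Int, ngen)

    if rand(rng) < 0.5
        # Branch outage: restrict to non-bridge branches so the network stays connected.
        is_bridge  = bridges(data)                       # Dict{String,Bool}, see `bridges` below
        candidates = [k for (k, b) in is_bridge if !b]
        br_status[parse(Int, rand(rng, candidates))] = 0
    else
        # Generator outage: any generator may be sampled.
        gen_status[rand(rng, 1:ngen)] = 0
    end
    return br_status, gen_status
end
```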
source
PGLearn.OPFDataMethod
OPFData(network::Dict{String,Any})

Convert a PowerModels data dictionary to OPFData structure.

The PowerModels data dictionary must be in basic format.
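For example, a network parsed with PowerModels can be converted as follows; the case path is hypothetical.

```julia
using PowerModels, PGLearn

network = PowerModels.parse_file("path/to/case.m")    # hypothetical path
network = PowerModels.make_basic_network(network)     # OPFData requires basic format
data = PGLearn.OPFData(network)
```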

source
PGLearn._dedupe_and_sort_h5!Method
_dedupe_and_sort_h5!(D)

De-duplicate and sort dataset D in increasing order of random seeds.

Equivalent to _dedupe_h5!(D); _sort_h5!(D), but more efficient.

source
PGLearn._get_case_infoMethod
_get_case_info(config)

Extract case file and name from input config.

To be valid, the input config should include:

  • either a case_file or pglib_case entry
  • if no case_file is provided, pglib_case should be a valid, unique PGLib case name.

The case name will be set to the generic "case" value if none is provided.

Warning

If case_file is provided, pglib_case will be ignored. Therefore, users should provide case_name when supplying case_file.
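For illustration, a valid config could look like either of the following; the case name and file path are placeholders.

```julia
config = Dict("pglib_case" => "pglib_opf_case14_ieee")   # a valid, unique PGLib case name

# or, with an explicit case file (supplying case_name as recommended above):
config = Dict(
    "case_file" => "path/to/mycase.m",                   # hypothetical path
    "case_name" => "mycase",
)
```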

source
PGLearn._merge_h5Method
_merge_h5(V::Vector{Array{T,N}})

Concatenate a collection of N-dimensional arrays along their last dimension.

This function is semantically equivalent to cat(V...; dims=ndims(first(V))), but uses a more efficient, splatting-free implementation. All elements of V must have the same size in the first N-1 dimensions.
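For instance, merging three 2×4 matrices yields a 2×12 matrix (illustrative):

```julia
V = [rand(2, 4) for _ in 1:3]
M = PGLearn._merge_h5(V)     # same result as cat(V...; dims=2), without splatting
size(M)                      # (2, 12)
```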

source
PGLearn._select_h5!Method
_select_h5!(D, p)

Select data points in D as indicated by p.

D should be a dictionary in h5-compatible format, and p is either a vector of indices, or a logical vector of the same length as D["meta"]["seed"].

  • If p is a vector of indices, then all values of p should be integers between 1 and the number of data points in D
  • If p is a logical vector, then it should have the same length as D["meta"]["seed"]. Only datapoints i for which p[i] is true are selected.
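Illustrative usage, assuming D is a dataset in h5-compatible format:

```julia
_select_h5!(D, [1, 3, 5])                    # keep data points 1, 3, and 5
_select_h5!(D, D["meta"]["seed"] .<= 100)    # keep data points whose seed is at most 100
```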
source
PGLearn._sort_h5!Method
_sort_h5!(D)

Sort dataset D in increasing order of random seeds.

The dictionary D should be in h5-compatible format. It is modified in-place.

The function expects D["meta"]["seed"] to exist and be a Vector{Int}. An error is thrown if such an entry is not found.

source
PGLearn.bridgesMethod
bridges(data)

Identify whether each branch is a bridge.

The input data must be in basic format.

A branch is a bridge if removing it renders the network disconnected. Returns a dictionary res::Dict{String,Bool} such that res[br] is true if branch br is a bridge, and false otherwise.
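Illustrative usage; the case path is hypothetical.

```julia
using PowerModels, PGLearn

data = PowerModels.make_basic_network(PowerModels.parse_file("path/to/case.m"))  # hypothetical path
res  = PGLearn.bridges(data)
res["1"]    # true if branch "1" is a bridge, false otherwise
```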

source
PGLearn.compute_flow!Method
compute_flow!(pf, pg, Φ::FullPTDF)

Compute power flow pf = Φ*pg given PTDF matrix Φ and nodal injections pg.

source
PGLearn.compute_flow!Method
compute_flow!(pf, pg, Φ::LazyPTDF)

Compute power flow pf = Φ*pg lazily, without forming the PTDF matrix.

Namely, pf is computed as pf = BA * (F \ pg), where F is an LDLᵀ factorization of AᵀBA.

source
PGLearn.convert_float_dataMethod
convert_float_data(D, F)

Convert all floating-point scalars and arrays to F.

Arguments

  • D: Should be a JSON-serializable dictionary, which generally means that all keys are String and all values are JSON-compatible.
  • F: Must be a subtype of AbstractFloat

Returns

  • d::Dict{String,Any}: a dictionary with same nested structure as D, with all floating-point scalars and arrays converted to F.
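Illustrative usage:

```julia
D = Dict("a" => 1.0, "b" => Dict("c" => [1.0, 2.0], "n" => 3))
D32 = convert_float_data(D, Float32)
D32["a"]          # 1.0f0
D32["b"]["c"]     # Float32[1.0, 2.0]
D32["b"]["n"]     # 3 (not floating-point, so presumably left untouched)
```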
source
PGLearn.save_h5Method
save_h5(filename, D; warn=true)

Saves dictionary D to HDF5 file filename.

Arguments

  • filename::AbstractString: Path to the HDF5 file; must be a valid path.
  • D: Dictionary to save to the file. All keys in D must be of String type, and it must be HDF5-compatible. Additional restrictions are enforced on the values of D, see below.
  • warn::Bool=true: Whether to raise a warning when converting numerical data.
Warning

Only the following types are supported:

  • String
  • (un)signed integers up to 64-bit precision
  • Float32 and Float64
  • Complex versions of the above numeric types
  • Dense Arrays of the above scalar types

Numerical data whose type is not listed above will be converted to Float64, which may incur a loss of precision. A warning will be displayed if this happens unless warn is set to false. If conversion to Float64 is not possible, an error is thrown.
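Illustrative usage; the file name is a placeholder.

```julia
D = Dict(
    "meta"  => Dict("seed" => collect(1:4)),
    "input" => Dict("pd" => rand(Float32, 3, 4)),
)
save_h5("dataset.h5", D)               # write D to dataset.h5
save_h5("dataset.h5", D; warn=false)   # same, silencing conversion warnings
```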

source
PGLearn.save_jsonMethod
save_json(filename::AbstractString, data; indent)

Save data into JSON file filename. The following formats are supported:

  • uncompressed JSON .json
  • Gzip-compressed JSON .json.gz
  • Bzip2-compressed JSON .json.bz2

If the file extension does not match one of the above, an error is thrown.
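Illustrative usage; file names are placeholders, and the indent keyword is assumed to control pretty-printing.

```julia
save_json("instance.json", data)                  # uncompressed JSON
save_json("instance.json.gz", data)               # Gzip-compressed JSON
save_json("instance.json.bz2", data; indent=2)    # Bzip2-compressed, pretty-printed (assumed)
```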

source
PGLearn.tensorizeMethod
tensorize(V)

Concatenate elements of V into a higher-dimensional tensor.

Similar to Base.stack, with one major difference: if V is a vector of scalars, the result is a 2D array M whose last dimension is length(V), and such that M[:, i] == V[i].

This function is only defined for Vector{T} and Vector{Array{T,N}} inputs, to avoid any unexpected behavior of Base.stack.
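Illustrative examples of the two supported input types, assuming the behavior documented above:

```julia
tensorize([1.0, 2.0, 3.0])             # 1×3 Matrix: column i holds the i-th scalar
tensorize([rand(2, 3) for _ in 1:5])   # 2×3×5 Array: last index selects the i-th element
```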

source
Random.rand!Method
rand!(rng::AbstractRNG, s::AbstractOPFSampler, data::Dict)

Sample one new OPF instance and modify data in-place.

data must be a Dict in PowerModels format, representing the same network (i.e., same grid components with same indexing) as the one used to create s.
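Illustrative usage; the case path is hypothetical and SomeOPFSampler stands in for any concrete AbstractOPFSampler, e.g. one of the samplers documented above.

```julia
using Random, PowerModels, PGLearn

data = PowerModels.make_basic_network(PowerModels.parse_file("path/to/case.m"))  # hypothetical path
s    = SomeOPFSampler(data)       # hypothetical: any concrete AbstractOPFSampler built from `data`
rng  = MersenneTwister(42)
rand!(rng, s, data)               # `data` now holds one newly sampled OPF instance
```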

source