AdaptativeBlockLearning

Documentation for AdaptativeBlockLearning.jl

AdaptativeBlockLearning.AutoAdaptativeHyperParamsType
AutoAdaptativeHyperParams

Hyperparameters for the method auto_adaptative_block_learning

@with_kw struct AutoAdaptativeHyperParams
    samples::Int64 = 1000
    epochs::Int64 = 100
    η::Float64 = 1e-3
    max_k::Int64 = 10
    transform = Normal(0.0f0, 1.0f0)
end;
AdaptativeBlockLearning.HyperParamsType
HyperParams

Hyperparameters for the method adaptative_block_learning

@with_kw struct HyperParams
    samples::Int64 = 1000               # number of samples per histogram
    K::Int64 = 2                        # number of simulated observations
    epochs::Int64 = 100                 # number of epochs
    η::Float64 = 1e-3                   # learning rate
    transform = Normal(0.0f0, 1.0f0)    # transform to apply to the data
end;
AdaptativeBlockLearning.adaptative_block_learningMethod
adaptative_block_learning(model, data, hparams)

Custom loss function for the model. model is a Flux neural network model, data is a Flux DataLoader object, and hparams is a HyperParams object.

Arguments

  • nn_model::Flux.Chain: is a Flux neural network model
  • data::Flux.DataLoader: is a Flux DataLoader object
  • hparams::HyperParams: is a HyperParams object
AdaptativeBlockLearning.auto_adaptative_block_learningMethod
auto_adaptative_block_learning(model, data, hparams)

Custom loss function for the model.

This method gradually adapts K (starting from 2) up to max_k (inclusive). The value of K is chosen based on a simple two-sample test between the histogram associated with the obtained result and the uniform distribution.

To see the value of K used in the test, set the logger level to debug before executing.

Arguments

  • model::Flux.Chain: is a Flux neural network model
  • data::Flux.DataLoader: is a Flux DataLoader object
  • hparams::AutoAdaptativeHyperParams: is an AutoAdaptativeHyperParams object
AdaptativeBlockLearning.generate_aₖMethod
generate_aₖ(ŷ, y)

Generate a one-step histogram (Vector{Float}) from the given vector ŷ of K simulated observations and the real data y: generate_aₖ(ŷ, y) = ∑ₖ γ(ŷ, y, k)

\[\vec{aₖ} = ∑_{k=0}^K γ(ŷ, y, k) = ∑_{k=0}^K ∑_{i=1}^N ψₖ \circ ϕ(ŷ, yᵢ)\]

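As a minimal self-contained sketch of the formula above (the steep sigmoid σ, its orientation, and the triangular bump ψₘ are plausible choices assumed here for illustration, not necessarily the package's exact definitions):

```julia
# Sketch reconstructed from the formulas in this documentation. The sigmoid
# steepness (10) and the triangular bump ψ are assumptions, not the
# package's actual definitions.
σ(x) = 1 / (1 + exp(-10x))                    # steep sigmoid: soft step
ϕ(yₖ, yₙ) = sum(σ(yₙ - yᵢ) for yᵢ in yₖ)      # soft count of yₖ entries below yₙ
ψ(x, m) = max(0.0, 1.0 - abs(x - m))          # triangular bump selecting bin m
γ(yₖ, yₙ, m) = ψ(ϕ(yₖ, yₙ), m)                # contribution of yₙ to bin m

# Histogram over bins 0..K for K simulated values ŷ and N real observations y.
generate_aₖ(ŷ, y) = [sum(γ(ŷ, yᵢ, m) for yᵢ in y) for m in 0:length(ŷ)]

ŷ = [0.0, 1.0]             # K = 2 simulated observations
y = [-1.0, 0.5, 2.0]       # N = 3 real observations
aₖ = generate_aₖ(ŷ, y)     # length K + 1; entries sum to ≈ N
```

For a well-calibrated model, each real observation is equally likely to fall below any number of the K simulated values, so aₖ approaches the uniform vector (N/(K+1), …, N/(K+1)).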
AdaptativeBlockLearning.scalar_diffMethod
scalar_diff(aₖ)

Scalar difference between the aₖ vector and the uniform distribution vector.

\[loss(weights) = \langle (a₀ - N/(K+1), \cdots, aₖ - N/(K+1)), (a₀ - N/(K+1), \cdots, aₖ - N/(K+1))\rangle = ∑_{k=0}^{K}(a_{k} - (N/(K+1)))^2\]

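A numeric sketch of this loss. Here N and K are passed explicitly for clarity; the documented scalar_diff(aₖ) takes only the histogram vector:

```julia
# Squared distance between the histogram aₖ and the uniform histogram
# N/(K+1) expected from a well-calibrated model. Passing N and K
# explicitly is a simplification for illustration.
scalar_diff(aₖ, N, K) = sum((aₖ .- N / (K + 1)) .^ 2)

scalar_diff([3.0, 3.0, 3.0], 9, 2)   # → 0.0 (perfectly uniform)
scalar_diff([5.0, 3.0, 1.0], 9, 2)   # → 8.0 = (5-3)² + 0 + (1-3)²
```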
AdaptativeBlockLearning.γMethod
γ(yₖ, yₙ, m)

Calculate the contribution of ψₘ ∘ ϕ(yₖ, yₙ) to the m-th bin of the histogram (Vector{Float}).

\[γ(yₖ, yₙ, m) = ψₘ \circ ϕ(yₖ, yₙ)\]

AdaptativeBlockLearning.γ_fastMethod
γ_fast(yₖ, yₙ, m)

Apply the γ function to the given parameters. This function is faster than the original γ function because it uses StaticArrays. However, because Zygote does not support StaticArrays, it cannot be used in the training process.

AdaptativeBlockLearning.ϕMethod
ϕ(yₖ, yₙ)

Sum of the sigmoid function centered at yₙ applied to the vector yₖ.

\[ϕ(yₖ, yₙ) = ∑_{i=1}^K σ(yₖ^i, yₙ)\]

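With a steep sigmoid, ϕ behaves as a soft rank: it approximately counts how many entries of yₖ lie below yₙ. A sketch, assuming a steepness factor and orientation that are illustrative choices rather than the package's definitions:

```julia
σ(x) = 1 / (1 + exp(-10x))                 # steep sigmoid; steepness is assumed
ϕ(yₖ, yₙ) = sum(σ(yₙ - yᵢ) for yᵢ in yₖ)   # soft count of yₖ entries below yₙ

ϕ([0.0, 1.0, 2.0], 1.5)   # ≈ 2.0: two of the three values lie below 1.5
```

Because ϕ is a smooth surrogate for the rank, it is differentiable, which is what lets the histogram aₖ be used inside a gradient-based training loop.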