ForwardPass

ParallelTemperingMonteCarlo.MachineLearningPotential.ForwardPass.NeuralNetworkPotential — Type
NeuralNetworkPotential

The basic struct containing the parameters of the neural network itself. n_layers and n_params define the lengths of the vectors; both are required by the Fortran program. num_nodes is a vector containing the number of nodes per layer, also required to assign the parameters to the correct nodes. activation_functions should usually be [1 2 2 1], meaning "linear, tanh, tanh, linear". Last is the vector of parameters assigned to each connection.
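As a rough illustration of how these fields fit together, the sketch below mirrors the description above. The struct and helper are hypothetical stand-ins, not the package's actual definition, and the assumption that each layer transition carries one weight per connection plus one bias per output node may differ from the real parameter layout:

```julia
# Hypothetical mirror of NeuralNetworkPotential; field names are taken
# from the description above and may differ from the actual source.
struct NeuralNetworkPotentialSketch
    n_layers::Int                      # number of layers; fixes the vector lengths
    n_params::Int                      # total number of network parameters
    num_nodes::Vector{Int}             # nodes per layer
    activation_functions::Vector{Int}  # e.g. [1, 2, 2, 1] = linear, tanh, tanh, linear
    parameters::Vector{Float64}        # one parameter per connection
end

# Assumed parameter count for a fully connected network:
# (nodes_in + 1) * nodes_out per layer transition (weights plus one bias
# per output node).
count_params(num_nodes) =
    sum((num_nodes[i] + 1) * num_nodes[i+1] for i in 1:length(num_nodes)-1)

nodes = [5, 20, 20, 1]
nnp = NeuralNetworkPotentialSketch(length(nodes), count_params(nodes), nodes,
                                   [1, 2, 2, 1], zeros(count_params(nodes)))
```

Under these assumptions a [5, 20, 20, 1] architecture carries 561 parameters, which is the length the parameters vector would need to have.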

ParallelTemperingMonteCarlo.MachineLearningPotential.ForwardPass.forward_pass — Method
forward_pass(input::AbstractArray, batchsize, num_layers, num_nodes, activation_functions, num_parameters, parameters)
forward_pass(input::AbstractArray, batchsize, nnparams::NeuralNetworkPotential)
forward_pass(eatom, input::AbstractArray, batchsize, nnparams::NeuralNetworkPotential; directory = pwd())
forward_pass(eatom, input::AbstractArray, batchsize, num_layers, num_nodes, activation_functions, num_parameters, parameters, dir)

Calls the RuNNer forward-pass module written by A. Knoll, located in directory. The first two methods allocate the output eatom, a vector of the atomic energies, internally; batchsize is the number of atoms whose energies we want to determine. The remaining inputs are contained in nnparams; details can be found in the definition of the NeuralNetworkPotential struct. The last two methods are identical to the first two except that eatom is passed in as an input rather than allocated during the calculation, which can save memory in the long run.
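For illustration, a pure-Julia sketch of what a forward pass of this shape computes is given below. This re-implements the logic in plain Julia; the real function delegates to the compiled Fortran module, and forward_pass_sketch, weights, and biases here are hypothetical names, with the split of the flat parameter vector into per-layer matrices assumed to follow the num_nodes layout:

```julia
# Activation codes as described above: 1 = linear, 2 = tanh.
act(code, x) = code == 2 ? tanh.(x) : x

# Minimal dense forward pass: one input column per atom in the batch,
# one (W, b) pair per layer transition. Returns one energy per atom.
function forward_pass_sketch(input::AbstractMatrix, weights, biases,
                             activation_functions)
    x = act(activation_functions[1], input)      # input-layer activation
    for (i, (W, b)) in enumerate(zip(weights, biases))
        x = act(activation_functions[i + 1], W * x .+ b)
    end
    return vec(x)   # atomic energies, one per column of `input`
end

# Tiny all-linear example: a 2-input, 1-output net that sums its inputs,
# applied to a batch of two atoms.
eatom = forward_pass_sketch([1.0 2.0; 3.0 4.0], [ones(1, 2)], [zeros(1)], [1, 1])
```

In the example the single weight matrix of ones simply sums each atom's inputs, so eatom comes out as [4.0, 6.0]. Preallocating this output vector and reusing it across calls is what the eatom-as-input methods above achieve.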
