API Reference

Core Functions

ADCME.add_collectionMethod
add_collection(name::String, v::PyObject)

Adds v to the collection with name name. If name does not exist, a new one is created.

source
ADCME.add_collectionMethod
add_collection(name::String, vs::PyObject...)

Adds operators vs to the collection with name name. If name does not exist, a new one is created.

source
ADCME.control_dependenciesMethod
control_dependencies(f, ops::Union{Array{PyObject}, PyObject})

Executes all operations in ops before any operations created inside the block.

op1 = tf.print("print op1")
op3 = tf.print("print op3")
control_dependencies(op1) do
    global op2 = tf.print("print op2")
end
run(sess, [op2,op3])

In this example, op1 must be executed before op2, but there is no guarantee when op3 will be executed. There are several possible outputs of the program, such as

print op3
print op1
print op2

or

print op1
print op3
print op2
source
ADCME.get_collectionFunction
get_collection(name::Union{String, Missing})

Returns the collection with name name. If name is missing, returns all the trainable variables.
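
A minimal sketch (not from the original docstring) pairing add_collection with get_collection:

v = Variable(1.0)
add_collection("my_collection", v)    # register v under the name "my_collection"
vs = get_collection("my_collection")  # retrieve all tensors stored in that collection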

source
ADCME.has_gpuMethod
has_gpu()

Checks if GPU is available.

Note

ADCME will use the GPU automatically if one is available. To disable GPU usage, set the environment variable ENV["CUDA_VISIBLE_DEVICES"]="" before importing ADCME.
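
For example, a minimal sketch of disabling the GPU:

ENV["CUDA_VISIBLE_DEVICES"] = ""   # must be set before `using ADCME`
using ADCME
has_gpu()                          # expected to return false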

source
ADCME.if_elseMethod
if_else(condition::Union{PyObject,Array,Bool}, fn1, fn2, args...;kwargs...)
  • If condition is a scalar boolean, it outputs fn1 or fn2 (a function with no input argument or a tensor) based on whether condition is true or false.
  • If condition is a boolean array, it returns condition .* fn1 + (1 - condition) .* fn2, as shown in the sketch below.
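
A minimal sketch of the array form (assuming a session sess exists):

cond = rand(10) .> 0.5        # a plain boolean array is also accepted
a = constant(ones(10))
b = constant(zeros(10))
c = if_else(cond, a, b)       # elementwise: cond .* a + (1 - cond) .* b
run(sess, c)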
source
ADCME.stop_gradientMethod
stop_gradient(o::PyObject, args...;kwargs...)

Disconnects o from gradient backpropagation, i.e., gradients do not flow through o.
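
A minimal sketch (assuming a session sess exists):

x = Variable(2.0)
y = x^2 + stop_gradient(x)^2   # the second term is treated as a constant
g = gradients(y, x)            # only the first term contributes, i.e., g = 2x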

source
ADCME.save_profileFunction
save_profile(filename::String="default_timeline.json")

Save the timeline information to file filename.

  • Open Chrome and navigate to chrome://tracing
  • Load the timeline file
source
Base.bindMethod
bind(op::PyObject, ops...)

Adds operations ops to the dependencies of op. The function is useful when we want to execute ops but ops are not in the dependency path of the final output. For example, to print i each time i is evaluated:

i = constant(1.0)
op = tf.print(i)
i = bind(i, op)
source

Variables

ADCME.VariableMethod
Variable(initial_value;kwargs...)

Constructs a ref tensor from initial_value.
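
For example:

a = Variable(1.0)           # a trainable scalar
W = Variable(rand(10, 5))   # a trainable 10×5 matrix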

source
ADCME.cellMethod
cell(arr::Array, args...;kwargs...)

Construct a cell tensor.

Example

julia> r = cell([[1.],[2.,3.]])
julia> run(sess, r[1])
1-element Array{Float32,1}:
 1.0
julia> run(sess, r[2])
2-element Array{Float32,1}:
 2.0
 3.0
source
ADCME.constantMethod
constant(value; kwargs...)

Constructs a non-trainable tensor from value.

source
ADCME.convert_to_tensorMethod
convert_to_tensor(o::Union{PyObject, Number, Array{T}, Missing, Nothing}; dtype::Union{Type, Missing}=missing) where T<:Number

Converts the input o to tensor. If o is already a tensor and dtype (if provided) is the same as that of o, the operator does nothing. Otherwise, convert_to_tensor converts the numerical array to a constant tensor or casts the data type.
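
A minimal sketch:

x = convert_to_tensor(rand(10))                  # wraps the array in a constant tensor
y = convert_to_tensor(x)                         # x is already a tensor; returned unchanged
z = convert_to_tensor(rand(10), dtype = Float32) # also casts the data type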

source
ADCME.gradient_checkpointingFunction
gradient_checkpointing(type::String="speed")

Uses checkpointing scheme for gradients.

  • 'speed': checkpoint all outputs of convolutions and matmuls. These ops are usually the most expensive, so checkpointing them maximizes the running speed (this is a good option if nonlinearities, concats, batchnorms, etc. are taking up a lot of memory)
  • 'memory': try to minimize the memory usage (currently using a very simple strategy that identifies a number of bottleneck tensors in the graph to checkpoint)
  • 'collection': look for a tensorflow collection named 'checkpoints', which holds the tensors to checkpoint
source
ADCME.gradientsMethod
gradients(ys::PyObject, xs::PyObject; kwargs...)

Computes the gradients of ys w.r.t xs.

  • If ys is a scalar, gradients returns the gradients with the same shape as xs.
  • If ys is a vector, gradients returns the Jacobian $\frac{\partial y}{\partial x}$
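
A minimal sketch of the first usage (assuming a session sess exists):

x = constant(rand(10))
y = sum(x)            # scalar output
g = gradients(y, x)   # g has the same shape as x
run(sess, g)          # a vector of ones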
Note

The second usage is not recommended, since ADCME adopts reverse-mode automatic differentiation. Although, in the case where ys is a vector and xs is a scalar, gradients cleverly uses forward-mode automatic differentiation, this requires that second-order gradients are implemented for the relevant operators.

source
ADCME.hessianMethod

hessian computes the Hessian of a scalar function f with respect to vector inputs xs.

source
ADCME.tensorMethod
tensor(v::Array{T,2}; dtype=Float64, sparse=false) where T

Converts a generic array v (which may contain a mix of numbers and tensors) to a tensor. For example,

v = [0.0 constant(1.0) 2.0
    constant(2.0) 0.0 1.0]
u = tensor(v)

u will be a $2\times 3$ tensor.

Note

This function is expensive. Use with caution.

source

Random Variables

ADCME.categoricalMethod

categorical(n::Union{PyObject, Integer}; kwargs...)

kwargs has a keyword argument logits, a 2-D Tensor with shape [batch_size, num_classes]. Each slice [i, :] represents the unnormalized log-probabilities for all classes.

source
ADCME.choiceMethod

choice(inputs::Union{PyObject, Array}, n_samples::Union{PyObject, Integer};replace::Bool=false)

Choose n_samples samples from inputs with/without replacement.

source

Sparse Matrix

ADCME.SparseTensorMethod
SparseTensor(I::Union{PyObject,Array{T,1}}, J::Union{PyObject,Array{T,1}}, V::Union{Array{Float64,1}, PyObject}, m::Union{S, PyObject, Nothing}=nothing, n::Union{S, PyObject, Nothing}=nothing) where {T<:Integer, S<:Integer}

Constructs a sparse tensor. Examples:

ii = [1;2;3;4]
jj = [1;2;3;4]
vv = [1.0;1.0;1.0;1.0]
s = SparseTensor(ii, jj, vv, 4, 4)
s = SparseTensor(sprand(10,10,0.3))
source
ADCME.SparseAssemblerFunction
SparseAssembler(handle::Union{PyObject, <:Integer}, n::Union{PyObject, <:Integer}, tol::Union{PyObject, <:Real}=0.0)

Creates a SparseAssembler for accumulating row, col, val for sparse matrices.

  • handle: an integer handle for creating a sparse matrix. If the handle already exists, SparseAssembler returns the existing sparse matrix handle. If you are creating different sparse matrices, the handles should be different.
  • n: Number of rows of the sparse matrix.
  • tol (optional): Tolerance. SparseAssembler will treat any value less than tol as zero.

Example 1

handle = SparseAssembler(100, 5, 1e-8)
op1 = accumulate(handle, 1, [1;2;3], [1.0;2.0;3.0])
op2 = accumulate(handle, 2, [1;2;3], [1.0;2.0;3.0])
J = assemble(5, 5, [op1;op2])

J will be a SparseTensor object.

Example 2

handle = SparseAssembler(0, 5)
op1 = accumulate(handle, 1, [1;2;3], ones(3))
op2 = accumulate(handle, 1, [3], [1.])
op3 = accumulate(handle, 2, [1;3], ones(2))
J = assemble(5, 5, [op1;op2;op3]) # op1, op2, op3 are parallel
Array(run(sess, J))≈[1.0  1.0  2.0  0.0  0.0
                1.0  0.0  1.0  0.0  0.0
                0.0  0.0  0.0  0.0  0.0
                0.0  0.0  0.0  0.0  0.0
                0.0  0.0  0.0  0.0  0.0]
source
ADCME.assembleMethod
assemble(m::Union{PyObject, <:Integer}, n::Union{PyObject, <:Integer}, ops::PyObject)

Assembles the sparse matrix from the ops created by accumulate. ops is either a single output from accumulate, or the concatenation of several outputs:

op1 = accumulate(handle, 1, [1;2;3], [1.0;2.0;3.0])
op2 = accumulate(handle, 2, [1;2;3], [1.0;2.0;3.0])
op = [op1;op2] # equivalent to `vcat([op1, op2]...)`

m and n are rows and columns of the sparse matrix.

See SparseAssembler for an example.

source
ADCME.findMethod
find(s::SparseTensor)

Returns the row, column and values for sparse tensor s.
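
A minimal sketch (assuming a session sess exists):

using SparseArrays
s = SparseTensor(sprand(10, 10, 0.3))
ii, jj, vv = find(s)   # row indices, column indices, and values, each a tensor
run(sess, vv)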

source
ADCME.spdiagMethod
spdiag(n::Int64)

Constructs a sparse identity matrix of size $n\times n$.

source
ADCME.spdiagMethod
spdiag(o::PyObject)

Constructs a sparse diagonal matrix whose diagonal entries are o.
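
A minimal sketch covering both methods:

A = spdiag(5)         # 5×5 sparse identity matrix
d = constant(rand(5))
B = spdiag(d)         # 5×5 sparse matrix with d on its diagonal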

source
ADCME.spzeroFunction
spzero(m::Int64, n::Union{Missing, Int64}=missing)

Constructs an empty sparse matrix of size $m\times n$. If n is missing, n = m.

source
Base.accumulateMethod
accumulate(handle::PyObject, row::Union{PyObject, <:Integer}, cols::Union{PyObject, Array{<:Integer}}, vals::Union{PyObject, Array{<:Real}})

Accumulates entries into the row-th row of the sparse matrix. It adds the values as follows:

for k = 1:length(cols)
    A[row, cols[k]] += vals[k]
end

handle is the handle created by SparseAssembler.

See SparseAssembler for an example.

Note

The function accumulate returns an op::PyObject. The nonzero values are populated into the sparse matrix only when op is executed.

source

Operations

ADCME.pmapMethod
pmap(fn::Function, o::Union{Array{PyObject}, PyObject})

Parallel for loop. There should be no data dependency between different iterations.

Example

x = constant(ones(10))
y1 = pmap(x->2.0*x, x)
y2 = pmap(x->x[1]+x[2], [x,x])
y3 = pmap(1:10, x) do z
    i = z[1]
    xi = z[2]
    xi + cast(Float64, i)
end
run(sess, y1)
run(sess, y2)
run(sess, y3)
source
ADCME.vectorMethod
vector(i::Union{Array{T}, PyObject, UnitRange, StepRange}, v::Union{Array{Float64},PyObject},s::Union{Int64,PyObject})

Returns a vector V with length s such that

V[i] = v
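
A minimal sketch (assuming a session sess exists): place two values into a length-5 vector.

V = vector([2;4], [10.0;20.0], 5)
run(sess, V)    # expected: [0.0, 10.0, 0.0, 20.0, 0.0]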
source
LinearAlgebra.svdMethod
svd(o::PyObject, args...; kwargs...)

Returns a TFSVD structure that holds the following fields:

S::PyObject
U::PyObject
V::PyObject
Vt::PyObject

We have the equality $o = USV'$
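
A minimal sketch that checks the factorization numerically (assuming a session sess exists):

using LinearAlgebra
A = rand(10, 5)
r = svd(constant(A))
U, S, Vt = run(sess, [r.U, r.S, r.Vt])
norm(U * Diagonal(S) * Vt - A)   # ≈ 0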

source

IO

ADCME.DiaryType
Diary(suffix::Union{String, Nothing}=nothing)

Creates a diary at a temporary directory path. It returns a writer and the corresponding directory path.

source
ADCME.loadFunction
load(sess::PyObject, file::String, vars::Union{PyObject, Nothing, Array{PyObject}}=nothing, args...; kwargs...)

Loads the values of variables into the session sess from the file file. If vars is nothing, it loads values into all the trainable variables. See also save.

source
ADCME.psaveMethod
psave(o::PyObject, file::String)

Saves a Python object o to file. See also pload.

source
ADCME.saveFunction
save(sess::PyObject, file::String, vars::Union{PyObject, Nothing, Array{PyObject}}=nothing, args...; kwargs...)

Saves the values of vars in the session sess. The result is written into file as a dictionary. If vars is nothing, it saves all the trainable variables. See also load.
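
A minimal sketch (the file name is hypothetical):

a = Variable(rand(10))
sess = Session(); init(sess)
save(sess, "variables.mat")   # write all trainable variables to disk
load(sess, "variables.mat")   # restore the saved values later, e.g., in a new session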

source
ADCME.scalarFunction
scalar(o::PyObject, name::String)

Returns a scalar summary object.

source
Base.writeMethod
write(sw::Diary, step::Int64, cnt::Union{String, Array{String}})

Writes to Diary.
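
A hedged sketch of a typical logging loop; the tensors train_op and loss are assumed to exist:

d = Diary("run1")
s = scalar(loss, "loss")          # scalar summary of the loss tensor
for iter = 1:100
    _, summary = run(sess, [train_op, s])
    write(d, iter, summary)       # record the serialized summary at this step
end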

source

Optimization

ADCME.BFGS!Function
BFGS!(sess::PyObject, loss::PyObject, max_iter::Int64=15000; 
vars::Array{PyObject}=PyObject[], callback::Union{Function, Nothing}=nothing, kwargs...)

BFGS! is a simplified interface for the BFGS optimizer. See also ScipyOptimizerInterface. callback is a callback function with signature

callback(vs::Array{Float64}, iter::Int64, loss::Float64)

vars is an array of tensors; their values will be passed to the callback argument vs.

Example

a = Variable(1.0)
loss = (a - 10.0)^2
BFGS!(sess, loss)
source
ADCME.BFGS!Function
BFGS!(value_and_gradients_function::Function, initial_position::Union{PyObject, Array{Float64}}, max_iter::Int64=50, args...;kwargs...)

Applies the BFGS optimizer to value_and_gradients_function.

source
ADCME.BFGS!Method
BFGS!(sess::PyObject, loss::PyObject, grads::Union{Array{T},Nothing,PyObject}, 
    vars::Union{Array{PyObject},PyObject}; kwargs...) where T<:Union{Nothing, PyObject}

Runs the BFGS algorithm $\min_{\texttt{vars}} \texttt{loss}(\texttt{vars})$. The gradients grads must be provided. Typically, grads[i] = gradients(loss, vars[i]). grads[i] can live on different devices (GPU or CPU).

source
ADCME.CustomOptimizerMethod
CustomOptimizer(opt::Function, name::String)

Creates a custom optimizer with struct name name. For example, we can integrate NLopt.jl with ADCME by constructing a new optimizer:

CustomOptimizer("Con") do f, df, c, dc, x0, nineq, neq, x_L, x_U
    opt = Opt(:LD_MMA, length(x0))
    bd = zeros(length(x0)); bd[end-1:end] = [-Inf, 0.0]
    opt.lower_bounds = bd
    opt.xtol_rel = 1e-4
    opt.min_objective = (x,g)->(g[:]= df(x); return f(x)[1])
    inequality_constraint!(opt, (x,g)->( g[:]= dc(x);c(x)[1]), 1e-8)
    (minf,minx,ret) = NLopt.optimize(opt, x0)
    minx
end

Then we can create an optimizer with

opt = Con(loss, inequalities=[c1], equalities=[c2])

To trigger the optimization, use

opt.minimize(sess)

or

minimize(opt, sess)

Note that, thanks to the global variable scope of Julia, step_callback and optimizer_kwargs can be passed from the Julia environment directly.

source
ADCME.NonlinearConstrainedProblemMethod
NonlinearConstrainedProblem(f::Function, L::Function, θ::PyObject, u0::Union{PyObject, Array{Float64}}; options::Union{Dict{String, T}, Missing}=missing) where T<:Integer

Computes the gradients $\frac{\partial L}{\partial \theta}$

\[\min \ L(u) \quad \mathrm{s.t.} \ F(\theta, u) = 0\]

u0 is the initial guess for the numerical solution u, see newton_raphson.

Caveats: Assuming r, A = f(θ, u) and θ are the unknown parameters, gradients(r, θ) must be defined (i.e., backpropagation through f works properly).

Returns: It returns a tuple of the loss, the solution, and the gradient:

\[\left(L(u), u, \frac{\partial L}{\partial θ}\right)\]
source
ADCME.ScipyOptimizerMinimizeMethod
ScipyOptimizerMinimize(sess::PyObject, opt::PyObject; kwargs...)

Minimizes a scalar Tensor. Variables subject to optimization are updated in-place at the end of optimization.

Note that this method does not just return a minimization Op, unlike minimize; instead it actually performs minimization by executing commands to control a Session (see https://www.tensorflow.org/api_docs/python/tf/contrib/opt/ScipyOptimizerInterface). See also ScipyOptimizerInterface and BFGS!.

  • feed_dict: A feed dict to be passed to calls to session.run.
  • fetches: A list of Tensors to fetch and supply to loss_callback as positional arguments.
  • step_callback: A function to be called at each optimization step; arguments are the current values of all optimization variables flattened into a single vector.
  • loss_callback: A function to be called every time the loss and gradients are computed, with evaluated fetches supplied as positional arguments.
  • run_kwargs: kwargs to pass to session.run.
source
ADCME.newton_raphsonMethod
newton_raphson(f::Function, u::Union{Array,PyObject}, θ::Union{Missing,PyObject}; options::Union{Dict{String, T}, Missing}=missing)

Newton Raphson solver for solving a nonlinear equation. f has the signature

  • f(θ::Union{Missing,PyObject}, u::PyObject)->(r::PyObject, A::Union{PyObject,SparseTensor}) (if linesearch is off)
  • f(θ::Union{Missing,PyObject}, u::PyObject)->(fval::PyObject, r::PyObject, A::Union{PyObject,SparseTensor}) (if linesearch is on)

where r is the residual and A is the Jacobian matrix. When linesearch is on, the function value fval must also be supplied. θ holds the external parameters, and u is the initial guess for the solution (a usage sketch follows the option lists below). Available options:

  • max_iter: maximum number of iterations (default=100)
  • verbose: whether details are printed (default=false)
  • rtol: relative tolerance for termination (default=1e-12)
  • tol: absolute tolerance for termination (default=1e-12)
  • LM: a float number, Levenberg-Marquardt modification $x^{k+1} = x^k - (J^k + \mu^k)^{-1}g^k$ (default=0.0)
  • linesearch: whether linesearch is used (default=false)

Currently, the backtracking algorithm is implemented. The parameters for linesearch are also supplied via options:

  • ls_c1: stop criterion, $f(x^k) < f(0) + \alpha c_1 f'(0)$
  • ls_ρ_hi: the new step size $\alpha_1\leq \rho_{hi}\alpha_0$
  • ls_ρ_lo: the new step size $\alpha_1\geq \rho_{lo}\alpha_0$
  • ls_iterations: maximum number of iterations for linesearch
  • ls_maxstep: maximum allowable steps
  • ls_αinitial: initial guess for the step size $\alpha$
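
A hedged sketch of the residual/Jacobian callback, solving u^3 = 1 elementwise (the problem is purely illustrative):

function f(θ, u)
    r = u^3 - 1.0           # residual
    A = spdiag(3 * u^2)     # Jacobian of r with respect to u (diagonal)
    return r, A
end
u0 = constant(rand(5))
nr = newton_raphson(f, u0, missing)   # no external parameters, so θ is missing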
source

Neural Networks

ADCME.aeFunction
ae(x::PyObject, output_dims::Array{Int64}, scope::String = "default")

Creates a fully connected neural network whose hidden-layer and output sizes are given by output_dims.

source
ADCME.aeMethod
ae(x::Union{Array{Float64}, PyObject}, output_dims::Array{Int64}, θ::Union{Array{Float64}, PyObject})

Creates a fully connected neural network whose hidden-layer and output sizes are given by output_dims. The weights are given by θ.

Example 1: Explicitly construct weights and biases

x = constant(rand(10,2))
n = ae_num([2,20,20,20,2])
θ = Variable(randn(n)*0.001)
y = ae(x, [20,20,20,2], θ)

Example 2: Implicitly construct weights and biases

θ = ae_init([10,20,20,20,2]) 
x = constant(rand(10,10))
y = ae(x, [20,20,20,2], θ)

See also ae_num, ae_init.

source
ADCME.ae_initMethod
ae_init(output_dims::Array{Int64}; T::Type=Float64, method::String="xavier")

Returns the initial weights and bias values as a flattened vector. Three types of random initializers are provided:

  • xavier (default). It is useful for tanh fully connected neural networks.
\[W^l_i \sim \sqrt{\frac{1}{n_{l-1}}}\]
  • xavier_avg. A variant of xavier.
\[W^l_i \sim \sqrt{\frac{2}{n_l + n_{l-1}}}\]
  • he. This is the activation-aware initialization of weights and helps mitigate the problem of vanishing/exploding gradients.

\[W^l_i \sim \sqrt{\frac{2}{n_{l-1}}}\]
source
ADCME.ae_numMethod
ae_num(output_dims::Array{Int64})

Estimates the number of weights and biases for the neural network. Note the first dimension should be the feature dimension (this is different from ae since in ae the feature dimension can be inferred), and the last dimension should be the output dimension.

source
ADCME.ae_to_codeMethod
ae_to_code(file::String, scope::String)

Returns the code string from the feed-forward neural network data in file. Usually, we can immediately evaluate the code string in the Julia session by

eval(Meta.parse(s))
source
ADCME.bnMethod
bn(args...;center = true, scale=true, kwargs...)

bn accepts a keyword parameter is_training.

Example

bn(inputs, name="batch_norm", is_training=true)
Note

bn should be used with control_dependencies, for example:

update_ops = get_collection(UPDATE_OPS)
control_dependencies(update_ops) do 
    global train_step = AdamOptimizer().minimize(loss)
end 
source

Generative Neural Nets

ADCME.GANType
GAN(dat::PyObject, 
    generator::Function, 
    discriminator::Function,
    loss::Union{String, Function, Missing}=missing; 
    latent_dim::Union{Missing, Int64}=missing, 
    batch_size::Union{Missing, Int64}=missing)

Creates a GAN instance.

  • dat $\in \mathbb{R}^{n\times d}$ is the training data for the GAN, where $n$ is the number of training samples and $d$ is the dimension of each sample.
  • generator$:\mathbb{R}^{d'} \rightarrow \mathbb{R}^d$ is the generator function, where $d'$ is the hidden dimension.
  • discriminator$:\mathbb{R}^{d} \rightarrow \mathbb{R}$ is the discriminator function.
  • loss is the loss function. See klgan, rklgan, wgan, lsgan for examples.
  • latent_dim (default=$d$) is the latent dimension.
  • batch_size (default=32) is the batch size in training.
source
ADCME.klganMethod
klgan(gan::GAN)

Computes the KL-divergence GAN loss function.

source
ADCME.lsganMethod
lsgan(gan::GAN)

Computes the least squares GAN loss function.

source
ADCME.predictMethod
predict(gan::GAN, input::Union{PyObject, Array})

Predicts the GAN gan output given input input.

source
ADCME.rklganMethod
rklgan(gan::GAN)

Computes the reverse KL-divergence GAN loss function.

source
ADCME.wganMethod
wgan(gan::GAN)

Computes the Wasserstein GAN loss function.

source
ADCME.build!Method
build!(gan::GAN)

Builds the GAN instances. This function returns gan for convenience.

source

Tools

ADCME.compile_opMethod
compile_op(oplibpath::String; check::Bool=false)

Forcibly compiles the operator library at oplibpath.

source
ADCME.customopMethod
customop()

Creates a new custom operator.

Example

julia> customop() # create an editable `custom_op.txt` file
[ Info: Edit custom_op.txt for custom operators
julia> customop() # after editing `custom_op.txt`, call it again to generate interface files.
source
ADCME.installMethod
install(s::String; force::Bool = false)

Installs a custom operator from s, which can be

  • A URL. ADCME will download the directory through git
  • A string. ADCME will search for the associated package on https://github.com/ADCMEMarket
source
ADCME.load_opMethod
load_op(oplibpath::String, opname::String)

Loads the operator opname from library oplibpath.

source
ADCME.load_op_and_gradMethod
load_op_and_grad(oplibpath::String, opname::String; multiple::Bool=false)

Loads the operator opname from library oplibpath; gradients are also imported. If multiple is true, the operator is assumed to have multiple outputs.

source
ADCME.load_system_opFunction
load_system_op(s::String, oplib::String, grad::Bool=true)

Loads a custom operator from the CustomOps directory (shipped with ADCME instead of TensorFlow). For example,

s = "SparseOperator"
oplib = "libSO"
grad = true

this will direct Julia to find the library CustomOps/SparseOperator/libSO.dylib on macOS.

source
ADCME.test_jacobianMethod
test_jacobian(f::Function, x0::Array{Float64}; scale::Float64 = 1.0)

Tests the gradients of a vector function f: y, J = f(x), where y is a vector output and J is the Jacobian.

source
ADCME.xavier_initFunction
xavier_init(size, dtype=Float64)

Returns a matrix of size size whose values are drawn from the Xavier initialization.

source

Datasets