API Reference
Core Functions
ADCME.control_dependencies
— Methodcontrol_dependencies(f, ops::Union{Array{PyObject}, PyObject})
Executes all operations in `ops` before any operations created inside the block.
op1 = tf.print("print op1")
op3 = tf.print("print op3")
control_dependencies(op1) do
global op2 = tf.print("print op2")
end
run(sess, [op2,op3])
In this example, `op1` must be executed before `op2`. But there is no guarantee when `op3` will be executed. There are several possible outputs of the program, such as
print op3
print op1
print op2
or
print op1
print op3
print op2
ADCME.get_collection
— Functionget_collection(name::Union{String, Missing})
Returns the collection with name `name`. If `name` is `missing`, returns all the trainable variables.
ADCME.get_mpi
— Methodget_mpi()
Returns the MPI include directory and shared library.
ADCME.get_mpirun
— Methodget_mpirun()
Returns the default mpirun executable.
ADCME.has_mpi
— Functionhas_mpi(verbose::Bool = true)
Determines whether MPI is installed.
ADCME.if_else
— Methodif_else(condition::Union{PyObject,Array,Bool}, fn1, fn2, args...;kwargs...)
- If `condition` is a scalar boolean, it outputs `fn1` or `fn2` (a function with no input argument, or a tensor) based on whether `condition` is true or false.
- If `condition` is a boolean array, it returns `condition .* fn1 + (1 - condition) .* fn2`.
If you encounter an error like this:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Retval[0] does not have value
it probably means that your code within `if_else` is not valid.
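A hedged sketch of the elementwise branch (it assumes comparison operators on tensors produce boolean tensors):
a = constant([1.0, 2.0, 3.0])
b = constant([3.0, 2.0, 1.0])
c = if_else(a > b, a, b)      # elementwise: condition .* a + (1 - condition) .* b
sess = Session(); init(sess)
run(sess, c)                  # expected: [3.0, 2.0, 3.0]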
ADCME.independent
— Methodindependent(o::PyObject, args...; kwargs...)
Returns `o`, but when computing the gradients, the top gradients will not be back-propagated into the dependent variables of `o`.
ADCME.reset_default_graph
— Methodreset_default_graph()
Resets the graph by removing all the operators.
ADCME.tensor
— Methodtensor(s::String)
Returns the tensor with name s
. See tensorname
.
ADCME.tensorname
— Methodtensorname(o::PyObject)
Returns the name of the tensor. See tensor
.
ADCME.while_loop
— Methodwhile_loop(condition::Union{PyObject,Function}, body::Function, loop_vars::Union{PyObject, Array{Any}, Array{PyObject}};
parallel_iterations::Int64=10, kwargs...)
Loops over loop_vars
while condition
is true. This operator only creates one extra node to mark the loops in the computational graph.
Example
The following script computes
\[\sum_{i=1}^{10} i\]
function condition(i, ta)
i <= 10
end
function body(i, ta)
u = read(ta, i-1)
ta = write(ta, i, u+cast(Float64, i))   # ta[i] stores the partial sum 1 + 2 + ... + i
i+1, ta
end
ta = TensorArray(10)
ta = write(ta, 1, constant(1.0))
i = constant(2, dtype=Int32)
_, out = while_loop(condition, body, [i, ta])
summation = stack(out)[10]
Base.bind
— Methodbind(op::PyObject, ops...)
Adds the operations `ops` to the dependencies of `op`; `ops` are guaranteed to be executed before `op`. The function is useful when we want to execute `ops`, but `ops` is not in the dependencies of the final output. For example, if we want to print `i` each time `i` is evaluated:
i = constant(1.0)
op = tf.print(i)
i = bind(i, op)
ADCME.Session
— MethodSession(args...; kwargs...)
Create an ADCME session. By default, ADCME will take up all the GPU resources at the start. If you want the GPU usage to grow on a need basis, before starting ADCME, you need to set the environment variable via
ENV["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
Configuration
Session accepts some runtime optimization configurations:
- `intra`: number of threads used within an individual op for parallelism
- `inter`: number of threads used for parallelism between independent operations
- `CPU`: maximum number of CPUs to use
- `GPU`: maximum number of GPU devices to use
- `soft`: set to true/enabled to facilitate operations being placed on CPU instead of GPU
Note that `CPU` limits the number of CPUs being used, not the number of cores or threads.
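For example, a session restricted to a single CPU device with two threads per category might be created as follows (a hedged sketch; passing the configurations above as keyword arguments is assumed):
sess = Session(CPU=1, intra=2, inter=2, soft=true)
init(sess)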
ADCME.run_profile
— Methodrun_profile(args...;kwargs...)
Runs the session with tracing information.
ADCME.save_profile
— Functionsave_profile(filename::String="default_timeline.json")
Save the timeline information to file filename
.
- Open Chrome and navigate to chrome://tracing
- Load the timeline file
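A possible profiling workflow (a sketch; `loss` stands for any tensor in the current graph):
sess = Session(); init(sess)
run_profile(sess, loss)             # run the session with tracing information
save_profile("my_timeline.json")    # write the timeline, then load it in chrome://tracing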
Variables
ADCME.TensorArray
— FunctionTensorArray(size_::Int64=0, args...;kwargs...)
Constructs a tensor array for while_loop
.
ADCME.Variable
— MethodVariable(initial_value;kwargs...)
Constructs a trainable tensor from `initial_value`.
ADCME.cell
— Methodcell(arr::Array, args...;kwargs...)
Construct a cell tensor.
Example
julia> r = cell([[1.],[2.,3.]])
julia> run(sess, r[1])
1-element Array{Float32,1}:
1.0
julia> run(sess, r[2])
2-element Array{Float32,1}:
2.0
3.0
ADCME.constant
— Methodconstant(value; kwargs...)
Constructs a non-trainable tensor from value
.
ADCME.convert_to_tensor
— Methodconvert_to_tensor(o::Union{PyObject, Number, Array{T}, Missing, Nothing}; dtype::Union{Type, Missing}=missing) where T<:Number
convert_to_tensor(os::Array, dtypes::Array)
Converts the input `o` to a tensor. If `o` is already a tensor and `dtype` (if provided) is the same as that of `o`, the operator does nothing. Otherwise, `convert_to_tensor` converts the numerical array to a constant tensor or casts the data type. `convert_to_tensor` also accepts multiple tensors.
Example
convert_to_tensor([1.0, constant(rand(2)), rand(10)], [Float32, Float64, Float32])
ADCME.get_variable
— Methodget_variable(o::Union{PyObject, Bool, Array{<:Number}};
name::Union{String, Missing} = missing,
scope::String = "")
Creates a new variable with initial value `o`. If `name` exists, `get_variable` returns the existing variable instead of creating a new one.
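A small sketch of the name-reuse behavior (the expected result follows from the docstring):
w1 = get_variable(ones(5), name="w")
w2 = get_variable(zeros(5), name="w")   # "w" already exists, so the existing variable is returned
sess = Session(); init(sess)
run(sess, w2)                           # expected: ones(5)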
ADCME.get_variable
— Methodget_variable(dtype::Type;
shape::Union{Array{<:Integer}, NTuple{N, <:Integer}},
name::Union{Missing,String} = missing,
scope::String = "")
Creates a new variable of type `dtype` with shape `shape`. If `name` exists, `get_variable` returns the existing variable instead of creating a new one.
ADCME.gradient_checkpointing
— Functiongradient_checkpointing(type::String="speed")
Uses checkpointing scheme for gradients.
- 'speed': checkpoint all outputs of convolutions and matmuls. these ops are usually the most expensive, so checkpointing them maximizes the running speed (this is a good option if nonlinearities, concats, batchnorms, etc are taking up a lot of memory)
- 'memory': try to minimize the memory usage (currently using a very simple strategy that identifies a number of bottleneck tensors in the graph to checkpoint)
- 'collection': look for a tensorflow collection named 'checkpoints', which holds the tensors to checkpoint
ADCME.gradient_magnitude
— Methodgradient_magnitude(l::PyObject, o::Union{Array, PyObject})
Returns the gradient magnitude
\[\sqrt{\sum_{i=1}^n \|\frac{\partial l}{\partial o_i}\|^2}\]
This function is useful for debugging the training.
ADCME.gradients
— Methodgradients(ys::PyObject, xs::PyObject; kwargs...)
Computes the gradients of `ys` w.r.t. `xs`.
- If `ys` is a scalar, `gradients` returns the gradients with the same shape as `xs`.
- If `ys` is a vector, `gradients` returns the Jacobian $\frac{\partial y}{\partial x}$.
The second usage is not recommended, since ADCME adopts reverse-mode automatic differentiation. Although `gradients` cleverly uses forward-mode automatic differentiation in the case where `ys` is a vector and `xs` is a scalar, this requires that second-order gradients are implemented for the relevant operators.
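A minimal example of the scalar case:
x = constant(rand(10))
l = sum(x^2)
g = gradients(l, x)     # same shape as x
sess = Session(); init(sess)
run(sess, g)            # expected: 2 * run(sess, x)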
ADCME.gradients_colocate
— Methodgradients_colocate(loss::PyObject, xs::Union{PyObject, Array{PyObject}}, args...;use_locking::Bool = true, kwargs...)
Computes the gradients of a scalar loss function `loss` with respect to `xs`. The gradients are colocated with the corresponding forward-pass operations. This function is usually used in distributed computing.
ADCME.hessian
— Methodhessian(ys::PyObject, xs::PyObject; kwargs...)
hessian
computes the hessian of a scalar function f with respect to vector inputs xs.
Example
x = constant(rand(10))
y = 0.5 * sum(x^2)
o = hessian(y, x)
sess = Session(); init(sess)
run(sess, o) # should be an identity matrix
ADCME.is_variable
— Methodis_variable(o::PyObject)
Determines whether o
is a trainable variable.
ADCME.jacobian
— Methodjacobian(y::PyObject, x::PyObject)
Computes the Jacobian matrix $J_{ij} = \frac{\partial y_i}{\partial x_j}$
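A small sketch: for $y = 3x$ the Jacobian is $3I$.
x = constant(rand(3))
y = 3 * x
J = jacobian(y, x)
sess = Session(); init(sess)
run(sess, J)            # expected: a 3×3 matrix with 3.0 on the diagonal and 0.0 elsewhere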
ADCME.ones_like
— Methodones_like(o::Union{PyObject,Real, Array{<:Real}}, args...; kwargs...)
Returns an all-one tensor of the same size as `o`.
Example
a = rand(100,10)
b = ones_like(a)
@assert run(sess, b)≈ones(100,10)
ADCME.placeholder
— Methodplaceholder(dtype::Type; kwargs...)
Creates a placeholder of the type dtype
.
Example
a = placeholder(Float64, shape=[20,10])
b = placeholder(Float64, shape=[]) # a scalar
c = placeholder(Float64, shape=[nothing]) # a vector
ADCME.placeholder
— Methodplaceholder(o::Union{Number, Array, PyObject}; kwargs...)
Creates a placeholder of the same type and size as o
. o
is the default value.
ADCME.tensor
— Methodtensor(v::Array{T,2}; dtype=Float64, sparse=false) where T
Convert a generic array v
to a tensor. For example,
v = [0.0 constant(1.0) 2.0
constant(2.0) 0.0 1.0]
u = tensor(v)
u
will be a $2\times 3$ tensor.
This function is expensive. Use with caution.
ADCME.zeros_like
— Methodzeros_like(o::Union{PyObject,Real, Array{<:Real}}, args...; kwargs...)
Returns an all-zero tensor of the same size as `o`.
Example
a = rand(100,10)
b = zeros_like(a)
@assert run(sess, b)≈zeros(100,10)
Base.copy
— Methodcopy(o::PyObject)
Creates a tensor that has the same value that is currently stored in a variable.
The output is a graph node that will have that value when evaluated. Any time you evaluate it, it will grab the current value of o
.
Base.read
— Methodread(ta::PyObject, i::Union{PyObject,Integer})
Reads data from TensorArray
at index i
.
Base.write
— Methodwrite(ta::PyObject, i::Union{PyObject,Integer}, obj)
Writes data obj
to TensorArray
at index i
.
Random Variables
ADCME.categorical
— Methodcategorical(n::Union{PyObject, Integer}; kwargs...)
`kwargs` has a keyword argument `logits`, a 2-D tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` represents the unnormalized log-probabilities for all classes.
ADCME.choice
— Methodchoice(inputs::Union{PyObject, Array}, n_samples::Union{PyObject, Integer};replace::Bool=false)
Chooses `n_samples` samples from `inputs`, with or without replacement.
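A minimal sketch (sampling along the leading dimension is assumed):
inputs = constant(rand(10))
s = choice(inputs, 3)   # three samples drawn without replacement
sess = Session(); init(sess)
run(sess, s)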
ADCME.logpdf
— Methodlogpdf(dist::T, x) where T<:ADCMEDistribution
Returns the log(prob) for a distribution `dist`.
Sparse Matrix
ADCME.SparseTensor
— TypeSparseTensor
A sparse matrix object. It has two fields:
- `o`: internal data structure
- `_diag`: `true` if the sparse matrix is marked as "diagonal"
ADCME.SparseTensor
— MethodSparseTensor(A::SparseMatrixCSC)
SparseTensor(A::Array{Float64, 2})
Creates a SparseTensor
from numerical arrays.
ADCME.SparseTensor
— MethodSparseTensor(I::Union{PyObject,Array{T,1}}, J::Union{PyObject,Array{T,1}}, V::Union{Array{Float64,1}, PyObject}, m::Union{S, PyObject, Nothing}=nothing, n::Union{S, PyObject, Nothing}=nothing) where {T<:Integer, S<:Integer}
Constructs a sparse tensor. Examples:
ii = [1;2;3;4]
jj = [1;2;3;4]
vv = [1.0;1.0;1.0;1.0]
s = SparseTensor(ii, jj, vv, 4, 4)
s = SparseTensor(sprand(10,10,0.3))
Core.Array
— MethodArray(A::SparseTensor)
Converts a sparse tensor A
to dense matrix.
ADCME.RawSparseTensor
— MethodRawSparseTensor(indices::Union{PyObject,Array{T,2}}, value::Union{PyObject,Array{Float64,1}},
m::Union{PyObject,Int64}, n::Union{PyObject,Int64}; is_diag::Bool=false) where T<:Integer
A convenient wrapper for making custom operators. Here indices
is 0-based.
ADCME.SparseAssembler
— FunctionSparseAssembler(handle::Union{PyObject, <:Integer}, n::Union{PyObject, <:Integer}, tol::Union{PyObject, <:Real}=0.0)
Creates a SparseAssembler for accumulating `row`, `col`, `val` for sparse matrices.
- `handle`: an integer handle for creating a sparse matrix. If the handle already exists, `SparseAssembler` returns the existing sparse matrix handle. If you are creating different sparse matrices, the handles should be different.
- `n`: number of rows of the sparse matrix.
- `tol` (optional): tolerance. `SparseAssembler` will treat any values less than `tol` as zero.
Example 1
handle = SparseAssembler(100, 5, 1e-8)
op1 = accumulate(handle, 1, [1;2;3], [1.0;2.0;3.0])
op2 = accumulate(handle, 2, [1;2;3], [1.0;2.0;3.0])
J = assemble(5, 5, [op1;op2])
J
will be a SparseTensor
object.
Example 2
handle = SparseAssembler(0, 5)
op1 = accumulate(handle, 1, [1;2;3], ones(3))
op2 = accumulate(handle, 1, [3], [1.])
op3 = accumulate(handle, 2, [1;3], ones(2))
J = assemble(5, 5, [op1;op2;op3]) # op1, op2, op3 are parallel
Array(run(sess, J))≈[1.0 1.0 2.0 0.0 0.0
1.0 0.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0]
ADCME.assemble
— Methodassemble(m::Union{PyObject, <:Integer}, n::Union{PyObject, <:Integer}, ops::PyObject)
Assembles the sparse matrix from the `ops` created by `accumulate`. `ops` is either a single output from `accumulate`, or concatenated from several `ops`:
op1 = accumulate(handle, 1, [1;2;3], [1.0;2.0;3.0])
op2 = accumulate(handle, 2, [1;2;3], [1.0;2.0;3.0])
op = [op1;op2] # equivalent to `vcat([op1, op2]...)`
`m` and `n` are the numbers of rows and columns of the sparse matrix. See `SparseAssembler` for an example.
ADCME.compress
— Methodcompress(A::SparseTensor)
Compresses the duplicated index in A
.
Example
using ADCME
indices = [
1 1
1 1
2 2
3 3
]
v = [1.0;1.0;1.0;1.0]
A = SparseTensor(indices[:,1], indices[:,2], v, 3, 3)
Ac = compress(A)
sess = Session(); init(sess)
run(sess, A.o.indices) # expected: [0 0;0 0;1 1;2 2]
run(sess, A.o.values) # expected: [1.0;1.0;1.0;1.0]
run(sess, Ac.o.indices) # expected: [0 0;1 1;2 2]
run(sess, Ac.o.values) # expected: [2.0;1.0;1.0]
The indices of `A` should be sorted; `compress` does not check the validity of the input arguments.
ADCME.find
— Methodfind(s::SparseTensor)
Returns the row, column and values for sparse tensor s
.
ADCME.scatter_add
— Methodscatter_add(A::Union{SparseTensor, SparseMatrixCSC{Float64,Int64}},
i1::Union{Integer, Colon, UnitRange{T}, PyObject,Array{S,1}},
i2::Union{Integer, Colon, UnitRange{T}, PyObject,Array{T,1}},
B::Union{SparseTensor, SparseMatrixCSC{Float64,Int64}}) where {S<:Real,T<:Real}
Adds B
to a subblock of a sparse matrix A
. Equivalently,
A[i1, i2] += B
ADCME.scatter_update
— Methodscatter_update(A::Union{SparseTensor, SparseMatrixCSC{Float64,Int64}},
i1::Union{Integer, Colon, UnitRange{T}, PyObject,Array{S,1}},
i2::Union{Integer, Colon, UnitRange{T}, PyObject,Array{T,1}},
B::Union{SparseTensor, SparseMatrixCSC{Float64,Int64}}) where {S<:Real,T<:Real}
Updates a subblock of a sparse matrix by B
. Equivalently,
A[i1, i2] = B
ADCME.solve
— Methodsolve(A_factorized::Tuple{SparseTensor, PyObject}, rhs::Union{Array{Float64,1}, PyObject})
Solves the equation `A_factorized * x = rhs` using the factorized sparse matrix. See `factorize`.
ADCME.spdiag
— Methodspdiag(n::Int64)
Constructs a sparse identity matrix of size $n\times n$, which is equivalent to spdiag(n, 0=>ones(n))
ADCME.spdiag
— Methodspdiag(m::Integer, pair::Pair...)
Constructs a square $m\times m$ SparseTensor
from pairs of the form
offset => array
Example
Suppose we want to construct a $10\times 10$ tridiagonal matrix, where the lower off-diagonals are all -2, the diagonals are all 2, and the upper off-diagonals are all 3, the corresponding Julia code is
spdiag(10, -1=>-2*ones(9), 0=>2*ones(10), 1=>3ones(9))
ADCME.spdiag
— Methodspdiag(o::PyObject)
Constructs a sparse diagonal matrix whose diagonal entries are `o`, which is equivalent to spdiag(length(o), 0=>o)
ADCME.spzero
— Functionspzero(m::Int64, n::Union{Missing, Int64}=missing)
Constructs an empty sparse matrix of size $m\times n$. `n=m` if `n` is `missing`.
ADCME.trisolve
— Methodtrisolve(a::Union{PyObject, Array{Float64,1}},b::Union{PyObject, Array{Float64,1}},
c::Union{PyObject, Array{Float64,1}},d::Union{PyObject, Array{Float64,1}})
Solves a tridiagonal matrix linear system. The equation is as follows
\[a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i\]
In the matrix format,
\[\begin{bmatrix} b_1 & c_1 & &0 \\ a_2 & b_2 & c_2 & \\ & a_3 & b_3 & &\\ & & & & c_{n-1}\\ 0 & & &a_n & b_n \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} d_1\\ d_2\\ \vdots\\ d_n\end{bmatrix}\]
ADCME.zero_out_row
— Methodzero_out_row(A::Union{SparseMatrixCSC, SparseTensor}, bd::Array{Int64, 1})
Zeros out the rows `bd` of a sparse tensor `A`. The numerical-array counterpart is
\[A[bd,:] .= 0.0\]
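A small sketch:
using ADCME, SparseArrays
A = SparseTensor(sprand(5, 5, 0.6))
B = zero_out_row(A, [1; 3])        # rows 1 and 3 of B are all zeros
sess = Session(); init(sess)
Array(run(sess, B))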
Base.:\
— Function\(A::SparseTensor, o::PyObject, method::String="SparseLU")
Solves the linear equation $A x = o$
Method
For square matrices $A$, one of the following methods is available:
- `auto`: uses the solver specified by `ADCME.options.sparse.solver`
- `SparseLU`
- `SparseQR`
- `SimplicialLDLT`
- `SimplicialLLT`
In the case `o` is 2-dimensional, `\` is understood as a "batched solve". `o` must have size $n_{b} \times m$, and $A$ has size $m\times n$. It returns the solution matrix of size $n_b \times n$,
\[s_{i,:} = A^{-1} o_{i,:}\]
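A minimal sketch of the square-matrix case:
using ADCME, SparseArrays, LinearAlgebra
A = SparseTensor(sprand(10, 10, 0.6) + 10I)    # shift the diagonal so A is well conditioned
b = constant(rand(10))
x = A \ b
sess = Session(); init(sess)
run(sess, x)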
Base.:\
— MethodBase.:\(A_factorized::Tuple{SparseTensor, PyObject}, rhs::Union{Array{Float64,1}, PyObject})
Base.accumulate
— Methodaccumulate(handle::PyObject, row::Union{PyObject, <:Integer}, cols::Union{PyObject, Array{<:Integer}}, vals::Union{PyObject, Array{<:Real}})
Accumulates values into the `row`-th row. It adds the values to the sparse matrix via
for k = 1:length(cols)
A[row, cols[k]] += vals[k]
end
`handle` is the handle created by `SparseAssembler`. See `SparseAssembler` for an example.
The function `accumulate` returns an `op::PyObject`. Only when `op` is executed are the nonzero values populated into the sparse matrix.
LinearAlgebra.factorize
— Functionfactorize(A::Union{SparseTensor, SparseMatrixCSC}, max_cache_size::Int64 = 999999)
Factorizes $A$ for sparse matrix solutions. `max_cache_size` specifies the maximum cache size in the C++ kernels, which determines the maximum number of factorized matrices. The function returns the factorized matrix, which is essentially a `Tuple{SparseTensor, PyObject}`.
Example
A = sprand(10,10,0.7)
Afac = factorize(A) # factorizing the matrix
run(sess, Afac\rand(10)) # no factorization, solving the equation
run(sess, Afac\rand(10)) # no factorization, solving the equation
Operations
ADCME.argsort
— Methodargsort(o::PyObject;
stable::Bool = false, rev::Bool=false, dims::Integer=-1, name::Union{Nothing,String}=nothing)
Returns the indices of a tensor that give its sorted order along an axis.
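A minimal sketch:
a = constant([3.0, 1.0, 2.0])
idx = argsort(a)        # indices that sort `a` in ascending order
sess = Session(); init(sess)
run(sess, idx)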
ADCME.batch_matmul
— Methodbatch_matmul(o1::PyObject, o2::PyObject)
Computes `o1[i,:,:] * o2[i, :]` or `o1[i,:,:] * o2[i, :, :]` for each index `i`.
ADCME.clip
— Methodclip(o::Union{Array{Any}, Array{PyObject}}, vmin, vmax, args...;kwargs...)
Clips the values of `o` to the range [`vmin`, `vmax`].
Example
a = constant(3.0)
a = clip(a, 1.0, 2.0)
b = constant(rand(3))
b = clip(b, 0.5, 1.0)
ADCME.cvec
— Methodcvec(o::PyObject; kwargs...)
Vectorizes the tensor `o` to a column vector, assuming column major order.
ADCME.pad
— Methodpad(o::PyObject, paddings::Array{Int64, 2}, args...; kwargs...)
Pads o
with values on the boundary.
Example
o = rand(3,3)
o = pad(o, [1 4 # first dimension
2 3]) # second dimension
run(sess, o)
Expected:
8×8 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.250457 0.666905 0.823611 0.0 0.0 0.0
0.0 0.0 0.23456 0.625145 0.646713 0.0 0.0 0.0
0.0 0.0 0.552415 0.226417 0.67802 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
ADCME.pmap
— Methodpmap(fn::Function, o::Union{Array{PyObject}, PyObject})
Parallel for loop. There should be no data dependency between different iterations.
Example
x = constant(ones(10))
y1 = pmap(x->2.0*x, x)
y2 = pmap(x->x[1]+x[2], [x,x])
y3 = pmap(1:10, x) do z
i = z[1]
xi = z[2]
xi + cast(Float64, i)
end
run(sess, y1)
run(sess, y2)
run(sess, y3)
ADCME.rollmean
— Methodrollmean(u, window::Int64)
Returns the rolling mean given a window size m
\[o_k = \frac{\sum_{i=k}^{k+m-1} u_i}{m}\]
Rolling functions in ADCME: `rollmean`, `rollstd`, `rollsum`, `rollvar`.
ADCME.rollstd
— Methodrollstd(u, window::Int64)
Returns the rolling standard deviation given a window size m
\[o_k = \sqrt{\frac{\sum_{i=k}^{k+m-1} (u_i - m_i)^2}{m-1}}\]
Here $m_i$ is the rolling mean computed using rollmean
Rolling functions in ADCME: `rollmean`, `rollstd`, `rollsum`, `rollvar`.
ADCME.rollsum
— Methodrollsum(u, window::Int64)
Returns the rolling sum given a window size m
\[o_k = \sum_{i=k}^{k+m-1} u_i\]
Rolling functions in ADCME: `rollmean`, `rollstd`, `rollsum`, `rollvar`.
ADCME.rollvar
— Methodrollvar(u, window::Int64)
Returns the rolling variance given a window size m
\[o_k = \frac{\sum_{i=k}^{k+m-1} (u_i - m_i)^2}{m-1}\]
Here $m_i$ is the rolling mean computed using rollmean
Rolling functions in ADCME: `rollmean`, `rollstd`, `rollsum`, `rollvar`.
ADCME.rvec
— Methodrvec(o::PyObject; kwargs...)
Vectorizes the tensor o
to a row vector, assuming column major.
ADCME.scatter_add
— Methodscatter_add(A::PyObject,
xind::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
yind::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
updates::Union{Array{<:Real}, Real, PyObject})
A[xind, yind] += updates
ADCME.scatter_add
— Methodscatter_add(a::PyObject,
indices::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
updates::Union{Array{<:Real}, Real, PyObject})
Updates the array `a` via
a[indices] += updates
Example
Julia:
A[[1;2;3]] += rand(3)
A[2] += 1.0
ADCME:
A = scatter_add(A, [1;2;3], rand(3))
A = scatter_add(A, 2, 1.0)
ADCME.scatter_sub
— Methodscatter_sub(A::PyObject,
xind::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
yind::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
updates::Union{Array{<:Real}, Real, PyObject})
A[xind, yind] -= updates
ADCME.scatter_sub
— Methodscatter_sub(a::PyObject,
indices::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
updates::Union{Array{<:Real}, Real, PyObject})
Updates array a
a[indices] -= updates
Example
Julia:
A[[1;2;3]] -= rand(3)
A[2] -= 1.0
ADCME:
A = scatter_sub(A, [1;2;3], rand(3))
A = scatter_sub(A, 2, 1.0)
ADCME.scatter_update
— Methodscatter_update(A::PyObject,
xind::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
yind::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
updates::Union{Array{<:Real}, Real, PyObject})
A[xind, yind] = updates
ADCME.scatter_update
— Methodscatter_update(a::PyObject,
indices::Union{Colon, Int64, Array{Int64}, BitArray{1}, Array{Bool,1}, UnitRange{Int64}, StepRange{Int64, Int64}, PyObject},
updates::Union{Array{<:Real}, Real, PyObject})
Updates array a
a[indices] = updates
Example
Julia:
A[[1;2;3]] = rand(3)
A[2] = 1.0
ADCME:
A = scatter_update(A, [1;2;3], rand(3))
A = scatter_update(A, 2, 1.0)
ADCME.set_shape
— Methodset_shape(o::PyObject, s::Union{Array{<:Integer}, Tuple{Vararg{<:Integer, N}}}) where N
set_shape(o::PyObject, s::Integer...)
Sets the shape of `o` to `s`. `s` must be the actual shape of `o`. This function is used to convert a tensor with unknown dimensions into a tensor with concrete dimensions.
Example
a = placeholder(Float64, shape=[nothing, 10])
b = set_shape(a, 3, 10)
run(sess, b, a=>rand(3,10)) # OK
run(sess, b, a=>rand(5,10)) # Error
run(sess, b, a=>rand(10,3)) # Error
ADCME.softmax_cross_entropy_with_logits
— Methodsoftmax_cross_entropy_with_logits(logits::Union{Array, PyObject}, labels::Union{Array, PyObject})
Computes the softmax cross entropy between `logits` and `labels`. `logits` is typically the output of a linear layer. For example,
logits = [
0.124575 0.511463 0.945934
0.538054 0.0749339 0.187802
0.355604 0.052569 0.177009
0.896386 0.546113 0.456832
]
labels = [2;1;2;3]
The values of `labels` are from {1, 2, ..., `num_classes`}, where `num_classes` is the number of columns in `logits`.
The predicted labels associated with `logits` are
argmax(softmax(logits), dims = 2)
Labels can also be one-hot vectors:
labels = [0 1
1 0
0 1
0 1]
ADCME.solve_batch
— Methodsolve_batch(A::Union{PyObject, Array{<:Real, 2}}, rhs::Union{PyObject, Array{<:Real,2}})
Solves $Ax = b$ for a batch of right hand sides.
- `A`: an $m\times n$ matrix, where $m\geq n$
- `rhs`: an $n_b\times m$ matrix. Each row is a new right hand side to solve.
The returned value is an $n_b\times n$ matrix.
Example
a = rand(10,5)
b = rand(100, 10)
sol = solve_batch(a, b)
@assert run(sess, sol) ≈ (a\b')'
Internally, the matrix $A$ is factorized first, and then the factorization is used to solve multiple right hand sides.
ADCME.stack
— Methodstack(o::PyObject)
Converts a `TensorArray` `o` into a normal tensor. The leading dimension is the size of the tensor array.
ADCME.topk
— Functiontopk(o::PyObject, k::Union{PyObject,Integer}=1;
sorted::Bool=true, name::Union{Nothing,String}=nothing)
Finds the values and indices of the `k` largest entries along the last dimension. If `sorted=true`, the resulting `k` elements are sorted by value in descending order.
ADCME.vector
— Methodvector(i::Union{Array{T}, PyObject, UnitRange, StepRange}, v::Union{Array{Float64},PyObject},s::Union{Int64,PyObject})
Returns a vector `V` with length `s` such that
V[i] = v
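A minimal sketch (entries whose indices are not listed in `i` are assumed to be zero):
V = vector([2; 4; 6], [1.0; 2.0; 3.0], 10)
sess = Session(); init(sess)
run(sess, V)            # expected: a length-10 vector with V[2]=1.0, V[4]=2.0, V[6]=3.0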
Base.adjoint
— Methodadjoint(o::PyObject; kwargs...)
Returns the conjugate adjoint of `o`. When the dimension of `o` is greater than 2, only the last two dimensions are permuted, i.e., `permutedims(o, [1,2,...,n,n-1])`.
Base.map
— Methodmap(fn::Function, o::Union{Array{PyObject},PyObject};
kwargs...)
Applies `fn` to each element of `o`.
- `o` ∈ `Array{PyObject}`: returns `[fn(x) for x in o]`
- `o` ∈ `PyObject`: splits `o` along the first dimension and then applies `fn`.
Example
a = constant(rand(10,5))
b = map(x->sum(x), a) # equivalent to `sum(a, dims=2)`
If `fn` is a multivariate function, we need to specify the output type using the `dtype` keyword. For example,
a = constant(ones(10))
b = constant(ones(10))
fn = x->x[1]+x[2]
c = map(fn, [a, b], dtype=Float64)
Base.reshape
— Methodreshape(o::PyObject, s::Union{Array{<:Integer}, Tuple{Vararg{<:Integer, N}}}) where N
reshape(o::PyObject, s::Integer; kwargs...)
reshape(o::PyObject, m::Integer, n::Integer; kwargs...)
reshape(o::PyObject, ::Colon, n::Integer)
reshape(o::PyObject, n::Integer, ::Colon)
Reshapes the tensor according to row major if the "TensorFlow style" syntax is used; otherwise reshaping according to column major is assumed.
Example
reshape(a, [10,5]) # row major
reshape(a, 10, 5) # column major
Base.reverse
— Methodreverse(o::PyObject, kwargs...)
Reverses a tensor `o` along the set of dimensions given by the index `dims`.
Example
a = rand(10,2)
A = constant(a)
@assert run(sess, reverse(A, dims=1)) == reverse(a, dims=1)
@assert run(sess, reverse(A, dims=2)) == reverse(a, dims=2)
@assert run(sess, reverse(A, dims=-1)) == reverse(a, dims=2)
Base.sort
— MethodBase.:sort(o::PyObject;
rev::Bool=false, dims::Integer=-1, name::Union{Nothing,String}=nothing)
Sorts a multidimensional array `o` along the given dimension.
- `rev`: `true` for descending and `false` (default) for ascending order
- `dims`: `-1` for the last dimension
Base.split
— Methodsplit(o::PyObject,
num_or_size_splits::Union{Integer, Array{<:Integer}, PyObject}; kwargs...)
Splits `o` according to `num_or_size_splits`.
Example 1
a = constant(rand(10,8,6))
split(a, 5)
Expected output:
5-element Array{PyCall.PyObject,1}:
PyObject <tf.Tensor 'split_5:0' shape=(2, 8, 6) dtype=float64>
PyObject <tf.Tensor 'split_5:1' shape=(2, 8, 6) dtype=float64>
PyObject <tf.Tensor 'split_5:2' shape=(2, 8, 6) dtype=float64>
PyObject <tf.Tensor 'split_5:3' shape=(2, 8, 6) dtype=float64>
PyObject <tf.Tensor 'split_5:4' shape=(2, 8, 6) dtype=float64>
Example 2
a = constant(rand(10,8,6))
split(a, [4,3,1], dims=2)
Expected output:
3-element Array{PyCall.PyObject,1}:
PyObject <tf.Tensor 'split_6:0' shape=(10, 4, 6) dtype=float64>
PyObject <tf.Tensor 'split_6:1' shape=(10, 3, 6) dtype=float64>
PyObject <tf.Tensor 'split_6:2' shape=(10, 1, 6) dtype=float64>
Example 3
a = constant(rand(10,8,6))
split(a, 3, dims=3)
Expected output:
3-element Array{PyCall.PyObject,1}:
PyObject <tf.Tensor 'split_7:0' shape=(10, 8, 2) dtype=float64>
PyObject <tf.Tensor 'split_7:1' shape=(10, 8, 2) dtype=float64>
PyObject <tf.Tensor 'split_7:2' shape=(10, 8, 2) dtype=float64>
Base.vec
— Methodvec(o::PyObject;kwargs...)
Vectorizes the tensor o
assuming column major.
LinearAlgebra.svd
— Methodsvd(o::PyObject, args...; kwargs...)
Returns a `TFSVD` structure which holds the following data:
- `S::PyObject`
- `U::PyObject`
- `V::PyObject`
- `Vt::PyObject`
We have the equality $o = USV'$.
Example
A = rand(10,20)
r = svd(constant(A))
A2 = r.U*diagm(r.S)*r.Vt # The value of `A2` should be equal to `A`
IO
ADCME.Diary
— TypeDiary(suffix::Union{String, Nothing}=nothing)
Creates a diary at a temporary directory path. It returns a writer and the corresponding directory path
ADCME.activate
— Functionactivate(sw::Diary, port::Int64=6006)
Runs `Diary` at http://localhost:port.
ADCME.load
— Functionload(sess::PyObject, file::String, vars::Union{PyObject, Nothing, Array{PyObject}}=nothing, args...; kwargs...)
Loads the values of variables into the session `sess` from the file `file`. If `vars` is nothing, it loads values into all the trainable variables. See also `save`, `load`.
— Methodload(sw::Diary, dirp::String)
Loads Diary
from dirp
.
ADCME.logging
— Methodlogging(file::Union{Nothing,String}, o::PyObject...; summarize::Int64 = 3, sep::String = " ")
Logs `o` to `file`. This operator must be used with `bind`.
ADCME.pload
— Methodpload(file::String)
Loads a Python object from `file`. See also `psave`.
ADCME.print_tensor
— Functionprint_tensor(in::Union{PyObject, Array{Float64,2}})
Prints the tensor `in`.
ADCME.psave
— Methodpsave(o::PyObject, file::String)
Saves a Python object `o` to `file`. See also `pload`.
ADCME.save
— Functionsave(sess::PyObject, file::String, vars::Union{PyObject, Nothing, Array{PyObject}}=nothing, args...; kwargs...)
Saves the values of `vars` in the session `sess`. The result is written into `file` as a dictionary. If `vars` is nothing, it saves all the trainable variables. See also `save`, `load`.
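A hedged sketch of a save/restore round trip (the file name is arbitrary):
w = Variable(rand(10))
sess = Session(); init(sess)
save(sess, "weights.mat")       # save all trainable variables as a dictionary
# ... later, possibly after the variables have been re-initialized ...
load(sess, "weights.mat")       # restore the saved values into the session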
ADCME.save
— Methodsave(sw::Diary, dirp::String)
Saves Diary
to dirp
.
ADCME.scalar
— Functionscalar(o::PyObject, name::String)
Returns a scalar summary object.
Base.write
— Methodwrite(sw::Diary, step::Int64, cnt::Union{String, Array{String}})
Writes to Diary
.
Optimization
ADCME.AdadeltaOptimizer
— FunctionAdadeltaOptimizer(learning_rate=1e-3;kwargs...)
See AdamOptimizer
for descriptions.
ADCME.AdagradDAOptimizer
— FunctionAdagradDAOptimizer(learning_rate=1e-3; global_step, kwargs...)
See AdamOptimizer
for descriptions.
ADCME.AdagradOptimizer
— FunctionAdagradOptimizer(learning_rate=1e-3;kwargs...)
See AdamOptimizer
for descriptions.
ADCME.AdamOptimizer
— FunctionAdamOptimizer(learning_rate=1e-3;kwargs...)
Constructs an ADAM optimizer.
Example
learning_rate = 1e-3
opt = AdamOptimizer(learning_rate).minimize(loss)
sess = Session(); init(sess)
for i = 1:1000
_, l = run(sess, [opt, loss])
@info "Iteration $i, loss = $l"
end
Dynamical Learning Rate
We can also use dynamical learning rate. For example, if we want to use a learning rate $l_t = \frac{1}{1+t}$, we have
learning_rate = placeholder(1.0)
opt = AdamOptimizer(learning_rate).minimize(loss)
sess = Session(); init(sess)
for i = 1:1000
_, l = run(sess, [opt, loss], learning_rate=>1/(1+i))
@info "Iteration $i, loss = $l"
end
The usage of other optimizers such as GradientDescentOptimizer
, AdadeltaOptimizer
, and so on is similar: we can just replace AdamOptimizer
with the corresponding ones.
ADCME.BFGS!
— FunctionBFGS!(sess::PyObject, loss::PyObject, max_iter::Int64=15000;
vars::Array{PyObject}=PyObject[], callback::Union{Function, Nothing}=nothing, method::String = "L-BFGS-B", kwargs...)
`BFGS!` is a simplified interface for the L-BFGS-B optimizer. See also `ScipyOptimizerInterface`. `callback` is a callback function with the signature
callback(vs::Array, iter::Int64, loss::Float64)
`vars` is an array of tensors; their values will be passed to `vs`.
Example 1
a = Variable(1.0)
loss = (a - 10.0)^2
sess = Session(); init(sess)
BFGS!(sess, loss)
Example 2
θ1 = Variable(1.0)
θ2 = Variable(1.0)
loss = (θ1-1)^2 + (θ2-2)^2
cb = (vs, iter, loss)->begin
printstyled("[#iter $iter] θ1=$(vs[1]), θ2=$(vs[2]), loss=$loss\n", color=:green)
end
sess = Session(); init(sess)
cb(run(sess, [θ1, θ2]), 0, run(sess, loss))
BFGS!(sess, loss, 100; vars=[θ1, θ2], callback=cb)
Example 3
Use `bounds` to specify the upper and lower bounds of a variable.
x = Variable(2.0)
loss = x^2
sess = Session(); init(sess)
BFGS!(sess, loss, bounds=Dict(x=>[1.0,3.0]))
Users can also use other SciPy optimization algorithms by providing the `method` keyword argument. For example, you can use the BFGS optimizer:
BFGS!(sess, loss, method = "BFGS")
ADCME.BFGS!
— FunctionBFGS!(value_and_gradients_function::Function, initial_position::Union{PyObject, Array{Float64}}, max_iter::Int64=50, args...;kwargs...)
Applies the BFGS optimizer to value_and_gradients_function
ADCME.BFGS!
— MethodBFGS!(sess::PyObject, loss::PyObject, grads::Union{Array{T},Nothing,PyObject},
vars::Union{Array{PyObject},PyObject}; kwargs...) where T<:Union{Nothing, PyObject}
Runs the BFGS algorithm $\min_{\texttt{vars}} \texttt{loss}(\texttt{vars})$. The gradients `grads` must be provided. Typically, `grads[i] = gradients(loss, vars[i])`. `grads[i]` can exist on different devices (GPU or CPU).
Example 1
import Optim # required
a = Variable(0.0)
loss = (a-1)^2
g = gradients(loss, a)
sess = Session(); init(sess)
BFGS!(sess, loss, g, a)
Example 2
import Optim # required
a = Variable(0.0)
loss = (a^2+a-1)^2
g = gradients(loss, a)
sess = Session(); init(sess)
cb = (vs, iter, loss)->begin
printstyled("[#iter $iter] a = $vs, loss=$loss\n", color=:green)
end
BFGS!(sess, loss, g, a; callback = cb)
ADCME.CustomOptimizer
— MethodCustomOptimizer(opt::Function, name::String)
Creates a custom optimizer with struct name `name`. For example, we can integrate NLopt with ADCME by constructing a new optimizer:
CustomOptimizer("Con") do f, df, c, dc, x0, x_L, x_U
opt = Opt(:LD_MMA, length(x0))
bd = zeros(length(x0)); bd[end-1:end] = [-Inf, 0.0]
opt.lower_bounds = bd
opt.xtol_rel = 1e-4
opt.min_objective = (x,g)->(g[:]= df(x); return f(x)[1])
inequality_constraint!(opt, (x,g)->( g[:]= dc(x);c(x)[1]), 1e-8)
(minf,minx,ret) = NLopt.optimize(opt, x0)
minx
end
Here
∘ `f`: a function that returns $f(x)$
∘ `df`: a function that returns $\nabla f(x)$
∘ `c`: a function that returns the constraints $c(x)$
∘ `dc`: a function that returns $\nabla c(x)$
∘ `x0`: initial guess
∘ `nineq`: number of inequality constraints
∘ `neq`: number of equality constraints
∘ `x_L`: lower bounds of the optimizable variables
∘ `x_U`: upper bounds of the optimizable variables
Then we can create an optimizer with
opt = Con(loss, inequalities=[c1], equalities=[c2])
To trigger the optimization, use
minimize(opt, sess)
Note that, thanks to the global variable scope of Julia, `step_callback` and `optimizer_kwargs` can actually be passed from the Julia environment directly.
ADCME.GradientDescentOptimizer
— FunctionGradientDescentOptimizer(learning_rate=1e-3;kwargs...)
See AdamOptimizer
for descriptions.
ADCME.NonlinearConstrainedProblem
— MethodNonlinearConstrainedProblem(f::Function, L::Function, θ::PyObject, u0::Union{PyObject, Array{Float64}}; options::Union{Dict{String, T}, Missing}=missing) where T<:Integer
Computes the gradients $\frac{\partial L}{\partial \theta}$
\[\min \ L(u) \quad \mathrm{s.t.} \ F(\theta, u) = 0\]
`u0` is the initial guess for the numerical solution `u`; see `newton_raphson`.
Caveats: assume `r, A = f(θ, u)` and `θ` are the unknown parameters; `gradients(r, θ)` must be defined (i.e., backprop works properly).
Returns: a tuple (`L`: loss, `C`: constraints, and gradients)
\[\left(L(u), u, \frac{\partial L}{\partial θ}\right)\]
Example
We want to solve the following constrained optimization problem $\begin{aligned}\min_\theta &\; L(u) = (u-1)^3\\ \text{s.t.} &\; u^3 + u = \theta\end{aligned}$ The solution is $\theta = 2$. The Julia code is
function f(θ, u)
u^3 + u - θ, spdiag(3u^2+1)
end
function L(u)
sum((u-1)^2)
end
pl = Variable(ones(1))
l, θ, dldθ = NonlinearConstrainedProblem(f, L, pl, ones(1))
We can couple it with a mathematical optimizer:
using Optim
sess = Session(); init(sess)
BFGS!(sess, l, dldθ, pl)
ADCME.Optimize!
— MethodOptimize!(sess::PyObject, loss::PyObject, max_iter::Int64 = 15000;
vars::Union{Array{PyObject},PyObject, Missing} = missing,
grads::Union{Array{T},Nothing,PyObject, Missing} = missing,
optimizer = missing,
callback::Union{Function, Missing}=missing,
x_tol::Union{Missing, Float64} = missing,
f_tol::Union{Missing, Float64} = missing,
g_tol::Union{Missing, Float64} = missing, kwargs...) where T<:Union{Nothing, PyObject}
An interface for using optimizers in the Optim package or custom optimizers.
- `sess`: a session
- `loss`: a loss function
- `max_iter`: maximum number of iterations
- `vars`, `grads`: optimizable variables and gradients
- `optimizer`: an Optim optimizer (default: LBFGS)
- `callback`: callback after each linesearch completion (NOT after each step within the linesearch)
Other arguments are passed to `Options` in the Optim optimizers.
We can also construct a custom optimizer. For example, to construct an optimizer out of Ipopt:
import Ipopt
x = Variable(rand(2))
loss = (1-x[1])^2 + 100(x[2]-x[1]^2)^2
function opt(f, g, fg, x0, kwargs...)
prob = createProblem(2, -100ones(2), 100ones(2), 0, Float64[], Float64[], 0, 0,
f, (x,g)->nothing, (x,G)->g(G, x), (x, mode, rows, cols, values)->nothing, nothing)
prob.x = x0
Ipopt.addOption(prob, "hessian_approximation", "limited-memory")
status = Ipopt.solveProblem(prob)
println(Ipopt.ApplicationReturnStatus[status])
println(prob.x)
Ipopt.freeProblem(prob)
nothing
end
sess = Session(); init(sess)
Optimize!(sess, loss, optimizer = opt)
ADCME.RMSPropOptimizer
— FunctionRMSPropOptimizer(learning_rate=1e-3;kwargs...)
See AdamOptimizer
for descriptions.
ADCME.ScipyOptimizerInterface
— MethodScipyOptimizerInterface(loss; method="L-BFGS-B", options=Dict("maxiter"=> 15000, "ftol"=>1e-12, "gtol"=>1e-12), kwargs...)
A simple interface for Scipy Optimizer. See also ScipyOptimizerMinimize
and BFGS!
.
ADCME.ScipyOptimizerMinimize
— MethodScipyOptimizerMinimize(sess::PyObject, opt::PyObject; kwargs...)
Minimizes a scalar tensor. Variables subject to optimization are updated in place at the end of optimization.
Note that this method does not just return a minimization op, unlike `minimize`; instead it actually performs minimization by executing commands to control a Session (https://www.tensorflow.org/api_docs/python/tf/contrib/opt/ScipyOptimizerInterface). See also `ScipyOptimizerInterface` and `BFGS!`.
- feed_dict: A feed dict to be passed to calls to session.run.
- fetches: A list of Tensors to fetch and supply to loss_callback as positional arguments.
- step_callback: A function to be called at each optimization step; arguments are the current values of all optimization variables packed into a single vector.
- loss_callback: A function to be called every time the loss and gradients are computed, with evaluated fetches supplied as positional arguments.
- run_kwargs: kwargs to pass to session.run.
ADCME.newton_raphson
— Methodnewton_raphson(func::Function,
u0::Union{Array,PyObject},
θ::Union{Missing,PyObject, Array{<:Real}}=missing,
args::PyObject...) where T<:Real
Newton-Raphson solver for solving a nonlinear equation.
∘ `func` has the signature
- `func(θ::Union{Missing,PyObject}, u::PyObject)->(r::PyObject, A::Union{PyObject,SparseTensor})` (if `linesearch` is off)
- `func(θ::Union{Missing,PyObject}, u::PyObject)->(fval::PyObject, r::PyObject, A::Union{PyObject,SparseTensor})` (if `linesearch` is on)
where `r` is the residual and `A` is the Jacobian matrix; in the case where `linesearch` is on, the function value `fval` must also be supplied.
∘ `θ` are external parameters.
∘ `u0` is the initial guess for `u`.
∘ `args`: additional inputs to `func`.
∘ `kwargs`: keyword arguments to `func`.
The solver can be configured via `ADCME.options.newton_raphson`:
- `max_iter`: maximum number of iterations (default=100)
- `rtol`: relative tolerance for termination (default=1e-12)
- `tol`: absolute tolerance for termination (default=1e-12)
- `LM`: a float number, the Levenberg-Marquardt modification $x^{k+1} = x^k - (J^k + \mu^k)^{-1}g^k$ (default=0.0)
- `linesearch`: whether linesearch is used (default=false)
Currently, the backtracking algorithm is implemented. The parameters for `linesearch` are supplied via `options.newton_raphson.linesearch_options`:
- `c1`: stopping criterion, $f(x^k) < f(0) + \alpha c_1 f'(0)$
- `ρ_hi`: the new step size $\alpha_1\leq \rho_{hi}\alpha_0$
- `ρ_lo`: the new step size $\alpha_1\geq \rho_{lo}\alpha_0$
- `iterations`: maximum number of iterations for linesearch
- `maxstep`: maximum allowable step size
- `αinitial`: initial guess for the step size $\alpha$
ADCME.newton_raphson_with_grad
— Methodnewton_raphson_with_grad(f::Function,
u0::Union{Array,PyObject},
θ::Union{Missing,PyObject, Array{<:Real}}=missing,
args::PyObject...) where T<:Real
Differentiable Newton-Raphson algorithm. See `newton_raphson`. Use `ADCME.options.newton_raphson` to supply options.
Example
function f(θ, x)
x^3 - θ, 3spdiag(x^2)
end
θ = constant([2. .^3;3. ^3; 4. ^3])
x = newton_raphson_with_grad(f, constant(ones(3)), θ)
run(sess, x)≈[2.;3.;4.]
run(sess, gradients(sum(x), θ))
Neural Networks
ADCME.BatchNormalization
— TypeBatchNormalization(dims::Int64=2; kwargs...)
Creates a batch normalization layer.
Example
b = BatchNormalization(2)
x = rand(10,2)
training = placeholder(true)
y = b(x, training)
run(sess, y)
ADCME.Conv1D
— TypeConv1D(filters, kernel_size, strides, activation, args...;kwargs...)
c = Conv1D(32, 3, 1, "relu")
x = rand(100, 6, 128) # 128-length vectors with 6 timesteps ("channels")
y = c(x) # shape=(100, 4, 32)
ADCME.Conv2D
— TypeConv2D(filters, kernel_size, strides, activation, args...;kwargs...)
The arrangement is (samples, rows, cols, channels) (data_format='channels_last')
Conv2D(32, 3, 1, "relu")
ADCME.Conv3D
— TypeConv3D(filters, kernel_size, strides, activation, args...;kwargs...)
The arrangement is (samples, rows, cols, channels) (data_format='channels_last')
c = Conv3D(32, 3, 1, "relu")
x = constant(rand(100, 10, 10, 10, 16))
y = c(x)
# shape=(100, 8, 8, 8, 32)
ADCME.Dense
— TypeDense(units::Int64, activation::Union{String, Function, Nothing} = nothing,
args...;kwargs...)
Creates a callable dense neural network.
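A small sketch: stacking two callable Dense layers.
d1 = Dense(20, "tanh")
d2 = Dense(1)
x = constant(rand(100, 10))
y = d2(d1(x))           # output shape: (100, 1)
sess = Session(); init(sess)
size(run(sess, y))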
ADCME.Resnet1D
— TypeResnet1D(out_features::Int64, hidden_features::Int64;
num_blocks::Int64=2, activation::Union{String, Function, Nothing} = "relu",
dropout_probability::Float64 = 0.0, use_batch_norm::Bool = false, name::Union{String, Missing} = missing)
Creates a 1D residual network. If `name` is not missing, `Resnet1D` does not create a new entity.
Example
resnet = Resnet1D(20)
x = rand(1000,10)
y = resnet(x)
Example: Digit recognition
using MLDatasets
using ADCME
# load data
train_x, train_y = MNIST.traindata()
train_x = reshape(Float64.(train_x), :, size(train_x,3))'|>Array
test_x, test_y = MNIST.testdata()
test_x = reshape(Float64.(test_x), :, size(test_x,3))'|>Array
# construct loss function
ADCME.options.training.training = placeholder(true)
x = placeholder(rand(64, 784))
l = placeholder(rand(Int64, 64))
resnet = Resnet1D(10, num_blocks=10)
y = resnet(x)
loss = mean(sparse_softmax_cross_entropy_with_logits(labels=l, logits=y))
# train the neural network
opt = AdamOptimizer().minimize(loss)
sess = Session(); init(sess)
for i = 1:10000
idx = rand(1:60000, 64)
_, loss_ = run(sess, [opt, loss], feed_dict=Dict(l=>train_y[idx], x=>train_x[idx,:]))
@info i, loss_
end
# test
for i = 1:10
idx = rand(1:10000,100)
y0 = resnet(test_x[idx,:])
y0 = run(sess, y0, ADCME.options.training.training=>false)
pred = [x[2]-1 for x in argmax(y0, dims=2)]
@info "Accuracy = ", sum(pred .== test_y[idx])/100
end
ADCME.ae
— Functionae(x::PyObject, output_dims::Array{Int64}, scope::String = "default";
activation::Union{Function,String} = "tanh")
Alias: fc
, ae
Creates a neural network with intermediate numbers of neurons output_dims
.
ADCME.ae
— Methodae(x::Union{Array{Float64}, PyObject},
output_dims::Array{Int64},
θ::Union{Array{Array{Float64}}, Array{PyObject}};
activation::Union{Function,String} = "tanh")
Alias: fc
, ae
Constructs a neural network with given weights and biases θ
Example
x = constant(rand(10,30))
θ = ae_init([30, 20, 20, 5])
y = ae(x, [20, 20, 5], θ) # 10×5
ADCME.ae
— Methodae(x::Union{Array{Float64}, PyObject}, output_dims::Array{Int64}, θ::Union{Array{Float64}, PyObject};
activation::Union{Function,String, Nothing} = "tanh")
Alias: fc
, ae
Creates a neural network with intermediate numbers of neurons output_dims
. The weights are given by θ
Example 1: Explicitly construct weights and biases
x = constant(rand(10,2))
n = ae_num([2,20,20,20,2])
θ = Variable(randn(n)*0.001)
y = ae(x, [20,20,20,2], θ)
Example 2: Implicitly construct weights and biases
θ = ae_init([10,20,20,20,2])
x = constant(rand(10,10))
y = ae(x, [20,20,20,2], θ)
ADCME.ae_init
— Methodae_init(output_dims::Array{Int64}; T::Type=Float64, method::String="xavier")
fc_init(output_dims::Array{Int64})
Return the initial weights and bias values by TensorFlow as a vector. The neural network architecture is
\[o_1 (\text{Input layer}) \rightarrow o_2 \rightarrow \ldots \rightarrow o_n (\text{Output layer})\]
Three types of random initializers are provided:
- `xavier` (default). It is useful for `tanh` fully connected neural networks.
\[W^l_i \sim \mathcal{N}\left(0, \sqrt{\frac{1}{n_{l-1}}}\right)\]
- `xavier_avg`. A variant of `xavier`.
\[W^l_i \sim \mathcal{N}\left(0, \sqrt{\frac{2}{n_l + n_{l-1}}}\right)\]
- `he`. This is the activation-aware initialization of weights and helps mitigate the problem of vanishing/exploding gradients.
\[W^l_i \sim \mathcal{N}\left(0, \sqrt{\frac{2}{n_{l-1}}}\right)\]
Example
x = constant(rand(10,30))
θ = fc_init([30, 20, 20, 5])
y = fc(x, [20, 20, 5], θ) # 10×5
ADCME.ae_num
— Methodae_num(output_dims::Array{Int64})
fc_num(output_dims::Array{Int64})
Estimates the number of weights and biases for the neural network. Note that the first dimension should be the feature dimension (this is different from `ae`, where the feature dimension can be inferred), and the last dimension should be the output dimension.
Example
x = constant(rand(10,30))
θ = ae_init([30, 20, 20, 5])
@assert ae_num([30, 20, 20, 5])==length(θ)
y = ae(x, [20, 20, 5], θ)
ADCME.ae_to_code
— Functionae_to_code(file::String, scope::String; activation::String = "tanh")
Returns the code string from the feed-forward neural network data in `file`. Usually we can immediately evaluate the code string in a Julia session via
eval(Meta.parse(s))
If activation
is not specified, tanh
is the default.
ADCME.bn
— Methodbn(args...;center = true, scale=true, kwargs...)
`bn` accepts a keyword parameter `is_training`.
Example
bn(inputs, name="batch_norm", is_training=true)
`bn` should be used with `control_dependencies`:
update_ops = get_collection(UPDATE_OPS)
control_dependencies(update_ops) do
global train_step = AdamOptimizer().minimize(loss)
end
ADCME.dense
— Methoddense(inputs::Union{PyObject, Array{<:Real}}, units::Int64, args...;
activation::Union{String, Function} = nothing, kwargs...)
Creates a fully connected layer with the activation function specified by activation
ADCME.dropout
— Functiondropout(x::Union{PyObject, Real, Array{<:Real}},
rate::Union{Real, PyObject}, training::Union{PyObject,Bool} = true; kwargs...)
Randomly drops out entries of `x` at a rate of `rate`.
ADCME.fc
— Functionae(x::PyObject, output_dims::Array{Int64}, scope::String = "default";
activation::Union{Function,String} = "tanh")
Alias: fc
, ae
Creates a neural network with intermediate numbers of neurons output_dims
.
ae(x::Union{Array{Float64}, PyObject}, output_dims::Array{Int64}, θ::Union{Array{Float64}, PyObject};
activation::Union{Function,String, Nothing} = "tanh")
Alias: fc
, ae
Creates a neural network with intermediate numbers of neurons output_dims
. The weights are given by θ
Example 1: Explicitly construct weights and biases
x = constant(rand(10,2))
n = ae_num([2,20,20,20,2])
θ = Variable(randn(n)*0.001)
y = ae(x, [20,20,20,2], θ)
Example 2: Implicitly construct weights and biases
θ = ae_init([10,20,20,20,2])
x = constant(rand(10,10))
y = ae(x, [20,20,20,2], θ)
ae(x::Union{Array{Float64}, PyObject},
output_dims::Array{Int64},
θ::Union{Array{Array{Float64}}, Array{PyObject}};
activation::Union{Function,String} = "tanh")
Alias: fc
, ae
Constructs a neural network with given weights and biases θ
Example
x = constant(rand(10,30))
θ = ae_init([30, 20, 20, 5])
y = ae(x, [20, 20, 5], θ) # 10×5
ADCME.fc_init
— Functionae_init(output_dims::Array{Int64}; T::Type=Float64, method::String="xavier")
fc_init(output_dims::Array{Int64})
Return the initial weights and bias values by TensorFlow as a vector. The neural network architecture is
\[o_1 (\text{Input layer}) \rightarrow o_2 \rightarrow \ldots \rightarrow o_n (\text{Output layer})\]
Three types of random initializers are provided:
- `xavier` (default). It is useful for `tanh` fully connected neural networks.
\[W^l_i \sim \mathcal{N}\left(0, \sqrt{\frac{1}{n_{l-1}}}\right)\]
- `xavier_avg`. A variant of `xavier`.
\[W^l_i \sim \mathcal{N}\left(0, \sqrt{\frac{2}{n_l + n_{l-1}}}\right)\]
- `he`. This is the activation-aware initialization of weights and helps mitigate the problem of vanishing/exploding gradients.
\[W^l_i \sim \mathcal{N}\left(0, \sqrt{\frac{2}{n_{l-1}}}\right)\]
Example
x = constant(rand(10,30))
θ = fc_init([30, 20, 20, 5])
y = fc(x, [20, 20, 5], θ) # 10×5
ADCME.fc_num
— Functionae_num(output_dims::Array{Int64})
fc_num(output_dims::Array{Int64})
Estimates the number of weights and biases for the neural network. Note the first dimension should be the feature dimension (this is different from ae
since in ae
the feature dimension can be inferred), and the last dimension should be the output dimension.
Example
x = constant(rand(10,30))
θ = ae_init([30, 20, 20, 5])
@assert ae_num([30, 20, 20, 5])==length(θ)
y = ae(x, [20, 20, 5], θ)
ADCME.fcx
— Methodfcx(x::Union{Array{Float64,2},PyObject}, output_dims::Array{Int64,1},
θ::Union{Array{Float64,1}, PyObject};
activation::String = "tanh")
Creates a fully connected neural network with output dimensions `output_dims` and inputs $x\in \mathbb{R}^{m\times n}$.
\[x \rightarrow o_1 \rightarrow o_2 \rightarrow \ldots \rightarrow o_k\]
`θ` is the weights and biases of the neural network, e.g., `θ = ae_init(output_dims)`.
`fcx` outputs two tensors:
- the output of the neural network: $u\in \mathbb{R}^{m\times o_k}$.
- the sensitivity of the neural network per sample: $\frac{\partial u}{\partial x}\in \mathbb{R}^{m \times o_k \times n}$
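A minimal sketch, mirroring the `ae`/`ae_init` usage above: a 2-20-20-3 network together with its per-sample sensitivity.
x = rand(50, 2)
θ = ae_init([2, 20, 20, 3])
u, du = fcx(x, [20, 20, 3], θ)
sess = Session(); init(sess)
size(run(sess, u))      # expected: (50, 3)
size(run(sess, du))     # expected: (50, 3, 2)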
Generative Neural Nets
ADCME.GAN
— TypeGAN(dat::Union{Array,PyObject}, generator::Function, discriminator::Function,
loss::Union{Missing, Function}=missing; latent_dim::Union{Missing, Int64}=missing,
batch_size::Int64=32)
Creates a GAN instance.
- `dat` $\in \mathbb{R}^{n\times d}$: the training data for the GAN, where $n$ is the number of training data and $d$ is the dimension per training datum.
- `generator` $:\mathbb{R}^{d'} \rightarrow \mathbb{R}^d$: the generator function, where $d'$ is the hidden dimension.
- `discriminator` $:\mathbb{R}^{d} \rightarrow \mathbb{R}$: the discriminator function.
- `loss`: the loss function. See `klgan`, `rklgan`, `wgan`, `lsgan` for examples.
- `latent_dim` (default=$d$, the same as the output dimension): the latent dimension.
- `batch_size` (default=32): the batch size in training.
Example: Constructing a GAN
dat = rand(10000,10)
generator = (z, gan)->10*z
discriminator = (x, gan)->sum(x)
gan = GAN(dat, generator, discriminator, "wgan_stable")
Example: Learning a Gaussian random variable
using ADCME
using PyPlot
using Distributions
dat = randn(10000, 1) * 0.5 .+ 3.0
function gen(z, gan)
ae(z, [20,20,20,1], "generator_$(gan.ganid)", activation = "relu")
end
function disc(x, gan)
squeeze(ae(x, [20,20,20,1], "discriminator_$(gan.ganid)", activation = "relu"))
end
gan = GAN(dat, gen, disc, g->wgan_stable(g, 0.001); latent_dim = 10)
dopt = AdamOptimizer(0.0002, beta1=0.5, beta2=0.9).minimize(gan.d_loss, var_list=gan.d_vars)
gopt = AdamOptimizer(0.0002, beta1=0.5, beta2=0.9).minimize(gan.g_loss, var_list=gan.g_vars)
sess = Session(); init(sess)
for i = 1:5000
batch_x = rand(1:10000, 32)
batch_z = randn(32, 10)
for n_critic = 1:1
global _, dl = run(sess, [dopt, gan.d_loss],
feed_dict=Dict(gan.ids=>batch_x, gan.noise=>batch_z))
end
_, gl, gm, dm, gp = run(sess, [gopt, gan.g_loss,
gan.STORAGE["g_grad_magnitude"], gan.STORAGE["d_grad_magnitude"],
gan.STORAGE["gradient_penalty"]],
feed_dict=Dict(gan.ids=>batch_x, gan.noise=>batch_z))
mod(i, 100)==0 && (@info i, dl, gl, gm, dm, gp)
end
hist(run(sess, squeeze(rand(gan,10000))), bins=50, density = true)
nm = Normal(3.0,0.5)
x0 = 1.0:0.01:5.0
y0 = pdf.(nm, x0)
plot(x0, y0, "g")
ADCME.build!
— Methodbuild!(gan::GAN)
Builds the GAN instance. This function returns `gan` for convenience.
ADCME.jsgan
— Methodjsgan(gan::GAN)
Computes the vanilla GAN loss function.
ADCME.klgan
— Methodklgan(gan::GAN)
Computes the KL-divergence GAN loss function.
ADCME.lsgan
— Methodlsgan(gan::GAN)
Computes the least square GAN loss function.
ADCME.predict
— Methodpredict(gan::GAN, input::Union{PyObject, Array})
Predicts the GAN gan
output given input input
.
ADCME.rklgan
— Methodrklgan(gan::GAN)
Computes the reverse KL-divergence GAN loss function.
ADCME.sample
— Methodsample(gan::GAN, n::Int64)
rand(gan::GAN, n::Int64)
Samples n
instances from gan
.
ADCME.wgan
— Methodwgan(gan::GAN)
Computes the Wasserstein GAN loss function.
ADCME.wgan_stable
— Functionwgan_stable(gan::GAN, λ::Float64)
Returns the discriminator and generator loss for the Wasserstein GAN loss with penalty parameter $\lambda$
The objective function is
\[L = E_{\tilde x\sim P_g} [D(\tilde x)] - E_{x\sim P_r} [D(x)] + \lambda E_{\hat x\sim P_{\hat x}}[(||\nabla_{\hat x}D(\hat x)||^2-1)^2]\]
Tools
ADCME.MCMCSimple
— TypeMCMCSimple(obs::Array{Float64, 1}, h::Function,
σ::Float64, θ0::Array{Float64,1}, lb::Float64, ub::Float64)
A very simple yet useful interface for MCMC simulation in many scientific computing problems.
- `obs`: observations
- `h`: forward computation function
- `σ`: noise standard deviation for the observed data
- `ub`, `lb`: upper and lower bounds
- `θ0`: initial guess
The mathematical model is
\[y_{obs} = h(\theta)\]
and we have a hard constraint $lb \leq \theta \leq ub$.
ADCME.cmake
— Functioncmake(DIR::String=".."; CMAKE_ARGS::Union{Array{String}, String} = "")
The built-in CMake command for building C/C++ libraries. If extra CMake arguments are needed, please specify them through `CMAKE_ARGS`.
Example
ADCME.cmake(CMAKE_ARGS=["SHARED=YES", "STATIC=NO"])
The executed command might be:
/home/darve/kailaix/.julia/adcme/bin/cmake -G Ninja -DCMAKE_MAKE_PROGRAM=/home/darve/kailaix/.julia/adcme/bin/ninja -DJULIA=/home/darve/kailaix/julia-1.3.1/bin/julia -DCMAKE_C_COMPILER=/home/darve/kailaix/.julia/adcme/bin/x86_64-conda_cos6-linux-gnu-gcc -DCMAKE_CXX_COMPILER=/home/darve/kailaix/.julia/adcme/bin/x86_64-conda_cos6-linux-gnu-g++ SHARED=YES STATIC=NO ..
ADCME.compile
— Methodcompile(s::String; force::Bool=false)
Compiles the library given by the path `deps/s`. If `force` is false, `compile` first checks whether the binary product exists: if it does, `compile` returns 2. Otherwise, `compile` tries to compile the binary product, returning 0 if successful and 1 otherwise.
ADCME.compile
— Methodcompile()
Compile a custom operator in the current directory. A CMakeLists.txt
must be present.
ADCME.customop
— Methodcustomop(;with_mpi::Bool = false)
Creates a new custom operator. Typically users call `customop` twice: the first call generates a `customop.txt`, whose content the user edits; the second call generates the C++ source code, CMakeLists.txt, and gradtest.jl from `customop.txt`.
Example
julia> customop() # create an editable `customop.txt` file
[ Info: Edit custom_op.txt for custom operators
julia> customop() # after editing `customop.txt`, call it again to generate interface files.
Options
- `with_mpi`: whether the custom operator uses MPI
ADCME.debug
— Functiondebug(libfile::String = "")
Loads a custom operator shared library. If the loading fails, a detailed error message is printed.
ADCME.debug
— Methoddebug(sess::PyObject, o::PyObject)
In the case a session run yields an error from the TensorFlow backend, this function can help print the exact error. For example, you might encounter InvalidArgumentError()
with no detailed error information, and this function can be useful for debugging.
ADCME.doctor
— Methoddoctor()
Reports health of the current installed ADCME package. If some components are broken, possible fix is proposed.
ADCME.get_library_symbols
— Methodget_library_symbols(file::Union{String, PyObject})
Returns the symbols in the custom op library file
.
ADCME.get_placement
— Methodget_placement()
Returns the operation placements.
ADCME.install
— Methodinstall(s::String; force::Bool = false, islocal::Bool = false)
Install a custom operator from a URL, a directory (when islocal
is true), or a string. In any of the three cases, install
copies the folder to /home/runner/work/ADCME.jl/ADCME.jl/deps/CustomOps/Plugin. When s
is a string, s
is converted to
https://github.com/ADCMEMarket/<s>
ADCME.load_library
— Methodload_library(filename::String)
Load custom operator libraries. If used with
ADCME.load_op
— Methodload_op(oplibpath::Union{PyObject, String}, opname::String; verbose::Union{Missing, Bool} = missing)
Loads the operator opname
from library oplibpath
.
ADCME.load_op_and_grad
— Methodload_op_and_grad(oplibpath::Union{PyObject, String}, opname::String; multiple::Bool=false)
Loads the operator opname
from library oplibpath
; gradients are also imported. If multiple
is true, the operator is assumed to have multiple outputs.
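A typical usage sketch; the library path and operator name below are hypothetical, and the returned object is assumed to be callable on tensors:
my_solve = load_op_and_grad("./build/libMySolve", "my_solve")
u = my_solve(constant(rand(10)))   # behaves like a differentiable operator in the graph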
ADCME.load_system_op
— Functionload_system_op(opname::String, grad::Bool=true; multiple::Bool=false)
Loads a custom operator from the CustomOps directory (shipped with ADCME rather than TensorFlow). For example,
s = "SparseOperator"
oplib = "libSO"
grad = true
this will direct Julia to find the library CustomOps/SparseOperator/libSO.dylib
on macOS.
ADCME.make_library
— Methodmake_library(Libdir::String)
Makes a shared library in Libdir
. The expected structure of the source files is:
- Libdir
- *.cpp
- *.h
- CMakeLists
- build (Optional)
ADCME.nnuq
— Methodnnuq(H::Array{Float64,2}, invR::Union{Float64, Array{Float64,2}}, invQ::Union{Float64, Array{Float64,2}})
Returns the covariance matrix for the Bayesian inversion.
The negative log likelihood function is
\[l(s) =\frac{1}{2} (y-h(s))^T R^{-1} (y-h(s)) + \frac{1}{2} s^T Q^{-1} s\]
The covariance matrix is computed by first linearizing $h(s)$
\[h(s)\approx h(s_0) + \nabla h(s_0) (s-s_0)\]
and then computing the second order derivative
\[V = \left(\frac{\partial^2 l}{\partial s^T\partial s}\right)^{-1} = (H^T R^{-1} H + Q^{-1})^{-1}\]
Note the result is independent of $s_0$ and $y_0$, and only depends on $\nabla h(s_0)$.
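A small sketch based only on the formula above; the linearization H and the scalar precisions are illustrative placeholders:
H = rand(20, 5)           # linearized forward operator ∇h(s₀)
invR = 1.0 / 0.01^2       # isotropic observation precision R⁻¹
invQ = 1.0                # isotropic prior precision Q⁻¹
V = nnuq(H, invR, invQ)   # 5 × 5 covariance matrix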
ADCME.register
— Methodregister(forward::Function, backward::Function; multiple::Bool=false)
Registers a function forward
with the back-propagation gradient rule backward
. ∘ forward
: it takes $n$ inputs and outputs $m$ tensors. When $m>1$, the keyword multiple
must be true. ∘ backward
: it takes $\tilde m$ top gradients from float/double output tensors of forward
, $m$ outputs of the forward
, and $n$ inputs of the forward
. backward
outputs $n$ gradients, one for each input of forward
. When input $i$ of forward
is not float/double, backward
should return nothing
for the corresponding gradients.
Example
forward = x->log(1+exp(x))
backward = (dy, y, x)->dy*(1-1/(1+exp(x)))
f = register(forward, backward)
ADCME.sleep_for
— Methodsleep_for(t::Union{PyObject, <:Real})
Sleeps for t
seconds.
ADCME.test_gpu
— Methodtest_gpu()
Tests the GPU utilities.
ADCME.timestamp
— Functiontimestamp(deps::Union{PyObject, <:Real, Missing}=missing)
This function is usually used with bind
for profiling. Note the timing is not very accurate in a multithreaded environment.
- deps: deps is always executed before returning the timestamp.
Example
a = constant(3.0)
t0 = timestamp(a)
sleep_time = sleep_for(a)
t1 = timestamp(sleep_time)
sess = Session(); init(sess)
t0_, t1_ = run(sess, [t0, t1])
time = t1_ - t0_
ADCME.xavier_init
— Functionxavier_init(size, dtype=Float64)
Returns a matrix of size size
whose values are drawn from the Xavier initialization.
Base.precompile
— Functionprecompile(force::Bool=false)
Precompile the built-in custom operators.
ADCME.animate
— Methodanimate(update::Function, frames; kwargs...)
Creates an animation using update function update
.
Example
θ = LinRange(0, 2π, 100)
x = cos.(θ)
y = sin.(θ)
pl, = plot([], [], "o-")
t = title("0")
xlim(-1.2,1.2)
ylim(-1.2,1.2)
function update(i)
t.set_text("$i")
pl.set_data([x[1:i] y[1:i]]'|>Array)
end
animate(update, 1:100)
clf()
can be used to clear figures within the update
function.
ADCME.gradview
— Methodgradview(sess::PyObject, pl::PyObject, loss::PyObject, u0; scale::Float64 = 1.0)
Visualizes the convergence of automatic differentiation and finite difference gradients. For correctly implemented differentiable code, the convergence rate should be 2 for AD and 1 for FD (if not evaluated at a stationary point).
- scale: controls the step size of the perturbation.
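A sketch that mirrors the lineview example below; pl is a placeholder that loss depends on:
pl = placeholder(Float64, shape=[2])
loss = sum(pl^2 - pl*0.1)
sess = Session(); init(sess)
gradview(sess, pl, loss, rand(2))   # plots AD vs. finite-difference convergence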
ADCME.jacview
— Methodjacview(sess::PyObject, f::Function, θ::Union{Array{Float64}, PyObject, Missing},
u0::Array{Float64}, args...)
Performs gradient test for a vector function. f
has the signature
f(θ, u) -> r, J
Here θ
is a nuisance parameter, u
is the state variable (w.r.t. which the Jacobian is computed), r
is the residual vector, and J
is the Jacobian matrix (a dense matrix or a SparseTensor
).
Example 1
u0 = rand(10)
function verify_jacobian_f(θ, u)
r = u^3+u - u0
r, spdiag(3u^2+1.0)
end
jacview(sess, verify_jacobian_f, missing, u0)
Example 2
u0 = rand(10)
rs = rand(10)
function verify_jacobian_f(θ, u)
r = [u^2;u] - [rs;rs]
r, [spdiag(2*u); spdiag(10)]
end
jacview(sess, verify_jacobian_f, missing, u0); close("all")
ADCME.lineview
— Functionlineview(sess::PyObject, pl::PyObject, loss::PyObject, θ1, θ2=nothing; n::Integer = 10)
Plots the function
\[h(α) = f((1-α)θ_1 + αθ_2)\]
Example
pl = placeholder(Float64, shape=[2])
l = sum(pl^2-pl*0.1)
sess = Session(); init(sess)
lineview(sess, pl, l, rand(2))
ADCME.saveanim
— Methodsaveanim(anim::PyObject, filename::String; kwargs...)
Saves the animation produced by animate
ADCME.test_gradients
— Methodtest_gradients(f::Function, x0::Array{Float64, 1}; scale::Float64 = 1.0, showfig::Bool = true)
Tests the gradient of a scalar function f
: y, J = f(x)
, where y
is a scalar output and J
is the gradient vector.
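A sketch with a hand-coded gradient for the function $y = \|x\|^2$, whose gradient is $J = 2x$:
f = x -> (sum(x.^2), 2x)   # returns (scalar value, gradient vector)
test_gradients(f, rand(5))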
ADCME.test_hessian
— Methodtest_hessian(f::Function, x0::Array{Float64, 1}; scale::Float64 = 1.0)
Tests the Hessian of a scalar function f
: g, H = f(x)
, where g
is the gradient vector and H
is the Hessian.
ADCME.test_jacobian
— Methodtest_jacobian(f::Function, x0::Array{Float64, 1}; scale::Float64 = 1.0, showfig::Bool = true)
Tests the Jacobian of a vector function f
: y, J = f(x)
where y
is a vector output and J
is the Jacobian.
ADCME.Database
— TypeDatabase(filename::Union{Missing, String} = missing;
commit_after_execute::Bool = true)
Creates a database from filename
. If filename
is not provided, an in-memory database is created. If commit_after_execute
is false, no commit operation is performed after each execute
.
- do block syntax:
Database() do db
execute(db, "create table mytable (a real, b real)")
end
The database is automatically closed after execution. Therefore, if execute is a query operation, users need to store the results in a global variable.
- Query meta information
keys(db) # all tables
keys(db, "mytable") # all column names in `db.mytable`
ADCME.commit
— Methodcommit(db::Database)
Commits changes to db
.
ADCME.execute
— Methodexecute(db::Database, sql::String, args...)
Executes the SQL statement sql
in db
. Users can also use the do block syntax.
execute(db) do
"create table mytable (a real, b real)"
end
execute
can also be used to insert a batch of records
t1 = rand(10)
t2 = rand(10)
param = collect(zip(t1, t2))
execute(db, "INSERT TO mytable VALUES (?,?)", param)
or
execute(db, "INSERT TO mytable VALUES (?,?)", t1, t2)
ODE
ADCME.ExplicitNewmark
— TypeExplicitNewmark(M::Union{SparseTensor, SparseMatrixCSC}, Z1::Union{Missing, SparseTensor, SparseMatrixCSC}, Z2::Union{Missing, SparseTensor, SparseMatrixCSC}, Δt::Float64)
An explicit Newmark integrator for
\[M \ddot{\mathbf{d}} + Z_1 \dot{\mathbf{d}} + Z_2 \mathbf{d} + f = 0\]
The numerical scheme is
\[\left(\frac{1}{\Delta t^2} M + \frac{1}{2\Delta t}Z_1\right)d^{n+1} = \left(\frac{2}{\Delta t^2} M - Z_2\right)d^n - \left(\frac{1}{\Delta t^2} M - \frac{1}{2\Delta t}Z_1\right) d^{n-1} - f\]
To use this integrator,
en = ExplicitNewmark(M, Z1, Z2, Δt)
d2 = step(en, d0, d1, f)
ADCME.Newmark
— TypeNewmark(M::Union{SparseTensor, PyObject, Array{Float64, 2}, SparseMatrixCSC},
C::Union{SparseTensor, PyObject, Array{Float64, 2}, SparseMatrixCSC},
K::Union{SparseTensor, PyObject, Array{Float64, 2}, SparseMatrixCSC},
Δt::Float64; β::Float64 = 1/4, γ::Float64 = 1/2, factorize_S::Bool = true)
Creates a Newmark algorithm object. This function uses the "a-form" implementation for
\[M\ddot{\mathbf{d}} + C \dot{\mathbf{d}} + K \mathbf{d} + f = 0\]
The numerical scheme is
\[\begin{aligned} \tilde d_{n+1} &= d_n + \Delta t\, v_n + \frac{\Delta t^2}{2}(1-2\beta)a_n \\ \tilde v_{n+1} &= v_n + (1-\gamma)\Delta t\, a_n \\ (M+\gamma\Delta t\, C+\beta \Delta t^2 K)a_{n+1} &= F_{n+1} - C\tilde v_{n+1} - K\tilde d_{n+1}\\ d_{n+1} &= \tilde d_{n+1} + \beta \Delta t^2 a_{n+1}\\ v_{n+1} &= \tilde v_{n+1} + \gamma \Delta t\, a_{n+1} \end{aligned}\]
For unconditional stability $2\beta \geq \gamma \geq \frac12$
For an explicit form, see ExplicitNewmark
, which corresponds to $\beta = 0, \gamma = \frac12$.
A common choice for the Newmark algorithm is $\beta = \frac14, \gamma = \frac12$ (a.k.a. average acceleration, or the trapezoidal rule), which is unconditionally stable and second-order accurate.
Internally, $M+\gamma\Delta t\, C+\beta \Delta t^2 K$ is represented by S
. Users can elect to factorize $S$ (default: true).
To perform time marching, use
step(nm::Newmark, d::PyObject, v::PyObject, a::PyObject,
F::Union{Array{Float64, 1}, PyObject})
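A hedged time-marching sketch: M, C, K, the load F, the number of steps NT, and the initial d, v, a are assumed to be assembled elsewhere, and step is assumed to return the updated (d, v, a) triple:
nm = Newmark(M, C, K, Δt)          # β = 1/4, γ = 1/2 (average acceleration) by default
for i = 1:NT
    d, v, a = step(nm, d, v, a, F)  # one time step with external load F
end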
ADCME.TR_BDF2
— TypeTR_BDF2(D0::Union{SparseTensor, SparseMatrixCSC},
D1::Union{SparseTensor, SparseMatrixCSC},
Δt::Float64)
Constructs a TR-BDF2 (the Trapezoidal Rule with Second Order Backward Difference Formula) handler for the DAE
\[D_1 \dot y + D_0 y = f\]
The struct is a functor, which performs one step simulation
(tr::TR_BDF2)(y::Union{PyObject, Array{Float64, 1}},
f1::Union{PyObject, Array{Float64, 1}},
f2::Union{PyObject, Array{Float64, 1}},
f3::Union{PyObject, Array{Float64, 1}})
Here f1
, f2
, and f3
correspond to the right hand side at time step $n$, $n+\frac12$, and $n+1$.
Or we can pass a batched F
defined as a (2NT+1) × DOF
array
(tr::TR_BDF2)(y0::Union{PyObject, Array{Float64, 1}},
F::Union{PyObject, Array{Float64, 2}})
The output will be the entire solution of size (NT+1) × DOF
.
The scheme takes the following form for $n = 0, 1, \ldots$
\[\begin{aligned} D_1(y^{n+\frac12}-y^n) &= \frac12\frac{\Delta t}{2}\left(f^{n+\frac12} + f^n - D_0 \left(y^{n+\frac12} + y^n\right)\right)\\ \left(\frac{\Delta t}{2}\right)^{-1} D_1 \left(\frac32y^{n+1} - 2y^{n+\frac12} + \frac12 y^n\right) + D_0 y^{n+1} &= f^{n+1}\end{aligned}\]
ADCME.constant
— Methodconstant(tr::TR_BDF2)
Converts tr
to a symbolic solver.
ADCME.ode45
— Methodode45(y::Union{PyObject, Float64, Array{Float64}}, T::Union{PyObject, Float64},
NT::Union{PyObject,Int64}, f::Function, θ::Union{PyObject, Missing}=missing)
Solves
\[\frac{dy}{dt} = f(y, t, \theta)\]
with a six-stage, fifth-order Runge-Kutta method.
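A sketch integrating $\dot y = -y$ on $[0, 1]$ with 100 steps; the return value is assumed to be the discrete trajectory:
f = (y, t, θ) -> -y                    # right-hand side, independent of θ here
y = ode45(constant(1.0), 1.0, 100, f)
sess = Session(); init(sess)
run(sess, y)                           # approximates exp(-t) on the time grid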
ADCME.rk4
— Methodrk4(y::Union{PyObject, Float64, Array{Float64}}, T::Union{PyObject, Float64},
NT::Union{PyObject,Int64}, f::Function, θ::Union{PyObject, Missing}=missing)
Solves
\[\frac{dy}{dt} = f(y, t, \theta)\]
with the fourth-order Runge-Kutta method.
ADCME.runge_kutta
— Functionrunge_kutta(f::Function, T::Union{PyObject, Float64},
NT::Union{PyObject,Int64}, y::Union{PyObject, Float64, Array{Float64}}, θ::Union{PyObject, Missing}=missing; method::String="rk4")
Solves
\[\frac{dy}{dt} = f(y, t, \theta)\]
with a Runge-Kutta method.
For example, the default solver, RK4
, has the following numerical scheme per time step
\[\begin{aligned} k_1 &= \Delta t f(t_n, y_n, \theta)\\ k_2 &= \Delta t f(t_n+\Delta t/2, y_n + k_1/2, \theta)\\ k_3 &= \Delta t f(t_n+\Delta t/2, y_n + k_2/2, \theta)\\ k_4 &= \Delta t f(t_n+\Delta t, y_n + k_3, \theta)\\ y_{n+1} &= y_n + \frac{k_1}{6} +\frac{k_2}{3} +\frac{k_3}{3} +\frac{k_4}{6} \end{aligned}\]
ADCME.αscheme
— Methodαscheme(M::Union{SparseTensor, SparseMatrixCSC},
C::Union{SparseTensor, SparseMatrixCSC},
K::Union{SparseTensor, SparseMatrixCSC},
Force::Union{Array{Float64}, PyObject},
d0::Union{Array{Float64, 1}, PyObject},
v0::Union{Array{Float64, 1}, PyObject},
a0::Union{Array{Float64, 1}, PyObject},
Δt::Array{Float64};
solve::Union{Missing, Function} = missing,
extsolve::Union{Missing, Function} = missing,
ρ::Float64 = 1.0)
Generalized α-scheme. $M u_{tt} + C u_{t} + K u = F$
- Force: an array of size n×p, where d0, v0, and a0 have size p.
- Δt: an array (variable time steps).
The generalized α scheme solves the equation by the time stepping
\[\begin{aligned} \bf d_{n+1} &= \bf d_n + h\bf v_n + h^2 \left(\left(\frac{1}{2}-\beta_2 \right)\bf a_n + \beta_2 \bf a_{n+1} \right)\\ \bf v_{n+1} &= \bf v_n + h((1-\gamma_2)\bf a_n + \gamma_2 \bf a_{n+1})\\ \bf F(t_{n+1-\alpha_{f_2}}) &= M \bf a _{n+1-\alpha_{m_2}} + C \bf v_{n+1-\alpha_{f_2}} + K \bf{d}_{n+1-\alpha_{f_2}} \end{aligned}\]
where
\[\begin{aligned} \bf d_{n+1-\alpha_{f_2}} &= (1-\alpha_{f_2})\bf d_{n+1} + \alpha_{f_2} \bf d_n\\ \bf v_{n+1-\alpha_{f_2}} &= (1-\alpha_{f_2}) \bf v_{n+1} + \alpha_{f_2} \bf v_n \\ \bf a_{n+1-\alpha_{m_2} } &= (1-\alpha_{m_2}) \bf a_{n+1} + \alpha_{m_2} \bf a_n\\ t_{n+1-\alpha_{f_2}} & = (1-\alpha_{f_2}) t_{n+1} + \alpha_{f_2}t_n \end{aligned}\]
Here the parameters are computed using
\[\begin{aligned} \gamma_2 &= \frac{1}{2} - \alpha_{m_2} + \alpha_{f_2}\\ \beta_2 &= \frac{1}{4} (1-\alpha_{m_2}+\alpha_{f_2})^2 \\ \alpha_{m_2} &= \frac{2\rho_\infty-1}{\rho_\infty+1}\\ \alpha_{f_2} &= \frac{\rho_\infty}{\rho_\infty+1} \end{aligned}\]
∘ solve
: users can provide a solver function, solve(A, rhs)
for solving Ax = rhs
∘ extsolve
: similar to solve
, but the signature has the form
extsolve(A, rhs, i)
This provides the users with more control, e.g., (time-dependent) Dirichlet boundary conditions. See Generalized α Scheme for details.
In the case $u$ has a nonzero essential boundary condition $u_b$, we let $\tilde u=u-u_b$; then $M \tilde u_{tt} + C \tilde u_t + K \tilde u = F - K u_b - C \dot u_b$.
ADCME.αscheme_time
— Methodαscheme_time(Δt::Array{Float64}; ρ::Float64 = 1.0)
Returns the integration time $t_{i+1-\alpha_{f_2}}$ between $[t_i, t_{i+1}]$ using the alpha scheme. If $\Delta t$ has length $n$, the output will also have length $n$.
Function Approximators
ADCME.RBF2D
— Typefunction RBF2D(xc::Union{PyObject, Array{Float64, 1}}, yc::Union{PyObject, Array{Float64, 1}};
c::Union{PyObject, Array{Float64, 1}, Missing} = missing,
eps::Union{PyObject, Array{Float64, 1}, Real, Missing} = missing,
d::Union{PyObject, Array{Float64, 1}} = zeros(0),
kind::Int64 = 0)
Constructs a radial basis function representation on a 2D domain
\[f(x, y) = \sum_{i=1}^N c_i \phi(r; \epsilon_i) + d_0 + d_1 x + d_2 y\]
Here d
can have length 0, 1 (only $d_0$ is present), or 3 ($d_0$, $d_1$, and $d_2$ are all present).
kind
determines the type of radial basis functions
- 0:Gaussian
\[\phi(r; \epsilon) = e^{-(\epsilon r)^2}\]
- 1:Multiquadric
\[\phi(r; \epsilon) = \sqrt{1+(\epsilon r)^2}\]
- 2:Inverse quadratic
\[\phi(r; \epsilon) = \frac{1}{1+(\epsilon r)^2}\]
- 3:Inverse multiquadric
\[\phi(r; \epsilon) = \frac{1}{\sqrt{1+(\epsilon r)^2}}\]
Returns a callable struct; i.e., to evaluate the function at locations $(x, y)$ (x
and y
are both vectors), run
rbf(x, y)
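A construction sketch with Gaussian kernels (kind = 0); the centers, shape parameter, and evaluation points are illustrative, and c is left at its default:
xc, yc = rand(20), rand(20)                 # RBF centers
rbf = RBF2D(xc, yc; eps=10.0, d=zeros(3))   # include the d₀ + d₁x + d₂y terms
x, y = rand(100), rand(100)                 # evaluation points
z = rbf(x, y)                               # tensor of values f(x, y)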
ADCME.RBF3D
— TypeRBF3D(xc::Union{PyObject, Array{Float64, 1}}, yc::Union{PyObject, Array{Float64, 1}},
zc::Union{PyObject, Array{Float64, 1}};
c::Union{PyObject, Array{Float64, 1}, Missing} = missing,
eps::Union{PyObject, Array{Float64, 1}, Real, Missing} = missing,
d::Union{PyObject, Array{Float64, 1}} = zeros(0),
kind::Int64 = 0)
Constructs a radial basis function representation on a 3D domain
\[f(x, y, z) = \sum_{i=1}^N c_i \phi(r; \epsilon_i) + d_0 + d_1 x + d_2 y + d_3 z\]
Here d
can have length 0, 1 (only $d_0$ is present), or 4 ($d_0$, $d_1$, $d_2$, and $d_3$ are all present).
kind
determines the type of radial basis functions
- 0:Gaussian
\[\phi(r; \epsilon) = e^{-(\epsilon r)^2}\]
- 1:Multiquadric
\[\phi(r; \epsilon) = \sqrt{1+(\epsilon r)^2}\]
- 2:Inverse quadratic
\[\phi(r; \epsilon) = \frac{1}{1+(\epsilon r)^2}\]
- 3:Inverse multiquadric
\[\phi(r; \epsilon) = \frac{1}{\sqrt{1+(\epsilon r)^2}}\]
Returns a callable struct; i.e., to evaluate the function at locations $(x, y, z)$ (x
, y
, and z
are all vectors), run
rbf(x, y, z)
ADCME.interp1
— Functioninterp1(x::Union{Array{Float64, 1}, PyObject},v::Union{Array{Float64, 1}, PyObject},xq::Union{Array{Float64, 1}, PyObject})
Returns interpolated values of a 1-D function at specific query points using linear interpolation. The vector x contains the sample points, v contains the corresponding values v(x), and xq contains the coordinates of the query points.
x
should be sorted in ascending order.
Example
x = sort(rand(10))
y = constant(@. x^2 + 1.0)
z = [x[1]; x[2]; rand(5) * (x[end]-x[1]) .+ x[1]; x[end]]
u = interp1(x,y,z)
Optimal Transport
ADCME.dtw
— Functiondtw(s::Union{PyObject, Array{Float64}}, t::Union{PyObject, Array{Float64}},
use_fast::Bool = false)
Computes the dynamic time warping (DTW) distance between two time series s
and t
. Returns the distance and path. use_fast
specifies whether the fast algorithm is used. Note the fast algorithm may not be accurate.
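A minimal sketch on two synthetic series; the outputs are assumed to be tensors evaluated through a session:
s = sin.(LinRange(0, 2π, 100))
t = sin.(LinRange(0, 2π, 80) .+ 0.3)
distance, path = dtw(s, t)        # pass true as the third argument for the fast variant
sess = Session(); init(sess)
run(sess, distance)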
ADCME.emd
— Methodemd(a::Union{PyObject, Array{Float64}}, b::Union{PyObject, Array{Float64}}, M::Union{PyObject, Array{Float64}};
iter::Int64 = 1000, tol::Float64 = 1e-9, returnall::Bool=false)
Computes the Earth Mover's Distance, which is defined as
\[D(P) = \sum_{i=1}^m \sum_{j=1}^n P_{ij} M_{ij}\]
Here $M \in \mathbb{R}^{m\times n}$ is the ground distance matrix and $P$ is the transport plan. The algorithm solves the following optimization problem
\[\begin{aligned}\min_{P} &\ D(P)\\\text{s.t.} & \ \sum_{i=1}^m P_{ij} = b_j\\ &\ \sum_{j=1}^n P_{ij} = a_i \end{aligned}\]
The internal solver for the optimization problem is a network flow solver. The algorithm requires $\sum_i a_i = \sum_j b_j = 1$.
ADCME.empirical_emd
— Methodempirical_emd(x::Union{PyObject, Array{Float64}}, y::Union{PyObject, Array{Float64}};
iter::Int64 = 1000, tol::Float64 = 1e-9, dist::Union{Integer,Function}=2, returnall::Bool=false)
Same as empirical_sinkhorn
, except that the Earth Mover's Distance is computed.
ADCME.empirical_sinkhorn
— Methodempirical_sinkhorn(x::Union{PyObject, Array{Float64}}, y::Union{PyObject, Array{Float64}};
reg::Union{PyObject,Float64} = 1.0, iter::Int64 = 1000, tol::Float64 = 1e-9, method::String="sinkhorn", dist::Function=dist, returnall::Bool=false)
Computes the empirical Sinkhorn distance with the Sinkhorn algorithm. Here $x$ and $y$ are samples from two distributions.
- reg (default = 1.0): entropy regularization parameter
- tol (default = 1e-9), iter (default = 1000): tolerance and maximum number of iterations for the Sinkhorn algorithm
- dist (default = 2): Integer or Function, the distance function between two samples; if dist is an integer, the $L$-dist norm is used
- returnall: returns (TransportMatrix, Loss) if true; otherwise, only Loss is returned
The implementation is adapted from https://github.com/rflamary/POT.
ADCME.ot_dist
— Functionot_dist(x::Union{PyObject, Array{Float64}}, y::Union{PyObject, Array{Float64}}, order::Union{Int64, PyObject}=2)
Computes the pairwise distances with norm order
. ot_dist
returns an $n\times m$ matrix, where $x\in \mathbb{R}^{n\times d}$ and $y\in \mathbb{R}^{m\times d}$, and the returned $M\in \mathbb{R}^{n\times m}$ satisfies
\[M_{ij} = ||x_i - y_j||_{o}\]
ADCME.ot_plot1D
— Methodot_plot1D(a::Array{Float64, 1}, b::Array{Float64, 1}, M::Array{Float64, 2})
Plots the optimal transport matrix for 1D distributions.
ADCME.sinkhorn
— Methodsinkhorn(a::Union{PyObject, Array{Float64}}, b::Union{PyObject, Array{Float64}}, M::Union{PyObject, Array{Float64}};
reg::Float64 = 1.0, iter::Int64 = 1000, tol::Float64 = 1e-9, method::String="sinkhorn")
Computes the optimal transport with Sinkhorn algorithm. The mathematical formulation is
\[\begin{aligned} \arg\min_{\Gamma} &\ \left\langle \Gamma, M\right\rangle + \lambda \Omega(\Gamma)\\ \text{s.t.} &\ \Gamma 1 = a\\ &\ \Gamma^T 1 = b\\ & \Gamma \geq 0 \end{aligned}\]
Here $\Omega$ is the entropic regularization. Note if $\lambda$ is very small, the algorithm may encounter numerical instabilities.
The implementation is adapted from https://github.com/rflamary/POT.
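A sketch combining ot_dist and sinkhorn for two discrete distributions supported on random samples; the data are illustrative, and sinkhorn is assumed to return only the loss by default:
x = rand(10, 2); y = rand(15, 2)        # samples in R²
a = ones(10)/10; b = ones(15)/15        # uniform weights, each summing to 1
M = ot_dist(x, y, 2)                    # 10 × 15 ground distance matrix
loss = sinkhorn(a, b, M; reg=0.1)
sess = Session(); init(sess)
run(sess, loss)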
MPI
ADCME.mpi_SparseTensor
— Typemutable struct mpi_SparseTensor
rows::PyObject
ncols::PyObject
cols::PyObject
values::PyObject
ilower::Int64
iupper::Int64
N::Int64
oplibpath::String
end
A structure to hold the local data of a sparse matrix. The global matrix is assumed to be an $M\times N$ matrix. The current processor owns rows from ilower
to iupper
(inclusive). The data is specified by
- rows: an array indicating the rows that contain nonzero values. Note rows ≥ ilower.
- ncols: an array indicating the number of nonzero values for each row in rows.
- cols: the column indices of the nonzero values. Its length is $\sum_{i} \mathrm{ncols}_i$.
- vals: the nonzero values corresponding to each column index in cols.
- oplibpath: the backend library (returned by ADCME.load_plugin_MPITensor).
All data structures are 0-based. Note if we work with a linear solver, $M=N$.
For example, consider the sparse matrix
[ 1 0 0 1 ]
[ 0 1 2 1 ]
We have
rows = Int32[0;1]
ncols = Int32[2;3]
cols = Int32[0;3;1;2;3]
values = [1.;1.;1.;2.;1.]
iupper = ilower + 1
ADCME.mpi_SparseTensor
— Typempi_SparseTensor(sp::Union{SparseTensor, SparseMatrixCSC{Float64,Int64}},
ilower::Union{Int64, Missing} = missing,
iupper::Union{Int64, Missing} = missing)
Constructing mpi_SparseTensor
from a SparseTensor
or a sparse Array.
ADCME.mpi_SparseTensor
— Methodmpi_SparseTensor(rows::Union{Array{Int32,1}, PyObject}, ncols::Union{Array{Int32,1}, PyObject}, cols::Union{Array{Int32,1}, PyObject},
vals::Union{Array{Float64,1}, PyObject}, ilower::Int64, iupper::Int64, N::Int64)
Create a $N\times N$ distributed sparse tensor A
for the current MPI processor. The current MPI processor owns rows with indices [ilower, iupper]
. The submatrix is specified using the CSR format.
- rows: an array indicating the rows that contain nonzero values. Note rows ≥ ilower.
- ncols: an array indicating the number of nonzero values for each row in rows.
- cols: the column indices of the nonzero values. Its length is $\sum_{i} \mathrm{ncols}_i$.
- vals: the nonzero values corresponding to each column index in cols.
Note that by default the indices are zero-based.
ADCME.mpi_barrier
— Methodmpi_barrier(o::PyObject)
Returns 0. Imposes a barrier on the object o
.
using ADCME
mpi_init()
s = constant(rand(10))
t = mpi_bcast(s)
t = t + 1.0 + mpi_barrier(t)
q = mpi_bcast(t)
sess = Session(); init(sess)
run(sess, q)
ADCME.mpi_bcast
— Functionmpi_bcast(a::Union{Array{Float64}, Float64, PyObject}, root::Int64 = 0)
Broadcast a
from processor root
to all other processors.
ADCME.mpi_finalize
— Methodmpi_finalize()
Finalize the MPI call.
ADCME.mpi_finalized
— Methodmpi_finalized()
Returns a boolean indicating whether the current MPI session is finalized.
ADCME.mpi_gather
— Functionmpi_gather(u::Union{Array{Float64, 1}, PyObject}, deps::Union{Missing, PyObject} = missing)
Gathers all the vectors from different processes to the root process. The function returns a long vector that concatenates the local vectors in the order of process IDs.
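A sketch in the same style as the mpi_barrier example above; every rank contributes a local vector of length 5:
using ADCME
mpi_init()
u = constant(rand(5))
v = mpi_gather(u)           # on the root process, v concatenates all local vectors
sess = Session(); init(sess)
run(sess, v)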
ADCME.mpi_halo_exchange
— Methodmpi_halo_exchange(u::Union{Array{Float64, 2}, PyObject},m::Int64,n::Int64; deps::Union{Missing, PyObject} = missing,
fill_value::Float64 = 0.0, tag::Union{PyObject, Int64} = 0)
Performs a halo exchange on u
(a $k \times k$ matrix). The output has shape $(k+2)\times (k+2)$.
- fill_value: value used for the boundaries
- tag: message tag
- deps: a scalar tensor; it can be used to serialize the MPI calls
ADCME.mpi_halo_exchange2
— Methodmpi_halo_exchange2(u::Union{Array{Float64, 2}, PyObject},m::Int64,n::Int64; deps::Union{Missing, PyObject} = missing,
fill_value::Float64 = 0.0, tag::Union{PyObject, Int64} = 0)
Similar to mpi_halo_exchange
, but the reach is 2, i.e., for an $N\times N$ matrix $u$, the output will be an $(N+4)\times (N+4)$ matrix.
ADCME.mpi_init
— Methodmpi_init()
Initializes the MPI session. mpi_init
must be called before any run(sess, ...)
.
ADCME.mpi_initialized
— Methodmpi_initialized()
Returns a boolean indicating whether the current MPI session is initialized.
ADCME.mpi_optimize
— Methodmpi_optimize(_f::Function, _g!::Function, x0::Array{Float64};
method::String = "LBFGS", options = missing)
Performs the MPI-enabled optimization.
Example
mpirun -n 4 julia adcme_inverse.jl
You can find the file adcme_inverse.jl
here.
ADCME.mpi_rank
— Methodmpi_rank()
Returns the rank of the current MPI process (0-based).
ADCME.mpi_recv
— Functionmpi_recv(a::Union{Array{Float64}, Float64, PyObject}, src::Int64, tag::Int64 = 0)
Receives an array from processor src
. mpi_recv
requires an input for gradient backpropagation. Typically we can write
r = mpi_rank()
a = constant(Float64(r))
if r==1
a = mpi_send(a, 0)
end
if r==0
a = mpi_recv(a, 1)
end
Then a=1
on both processor 0 and processor 1.
ADCME.mpi_send
— Functionmpi_send(a::Union{Array{Float64}, Float64, PyObject}, dest::Int64,root::Int64 = 0)
Sends a
to processor dest
. a
itself is returned so that the send action can be added to the computational graph.
ADCME.mpi_sendrecv
— Functionmpi_sendrecv(a::Union{Array{Float64}, Float64, PyObject}, dest::Int64, src::Int64, tag::Int64=0)
A convenient wrapper for mpi_send
followed by mpi_recv
.
ADCME.mpi_size
— Methodmpi_size()
Returns the size of MPI world.
ADCME.mpi_sum
— Functionmpi_sum(a::Union{Array{Float64}, Float64, PyObject}, root::Int64 = 0)
Sums a
across all MPI processors and returns the result on processor root
.
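A sketch where each rank contributes its own rank number; the reduced value is available on the root processor:
using ADCME
mpi_init()
a = constant(Float64(mpi_rank()))
s = mpi_sum(a)              # on rank 0: 0 + 1 + ... + (mpi_size() - 1)
sess = Session(); init(sess)
run(sess, s)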
ADCME.mpi_sync!
— Functionmpi_sync!(message::Array{Int64,1}, root::Int64 = 0)
mpi_sync!(message::Array{Float64,1}, root::Int64 = 0)
Sync message
across all MPI processors.
ADCME.require_mpi
— Methodrequire_mpi()
Throws an error if mpi_init()
has not been called.
Base.adjoint
— Methodadjoint(A::mpi_SparseTensor)
Returns the adjoint of A
, i.e., A'
. Each MPI rank owns the same number of rows.
Toolchain
ADCME.change_directory
— Functionchange_directory(directory::Union{Missing, AbstractString})
Change the current working directory to directory
. If directory
does not exist, it is created.
If directory
is missing, the default is ADCME.PREFIXDIR
.
ADCME.copy_file
— Methodcopy_file(src::String, dest::String)
Copy file src
to dest
ADCME.get_conda
— Methodget_conda()
Returns the conda executable location.
ADCME.get_gfortran
— Methodget_gfortran()
Installs a gfortran compiler if one does not exist.
ADCME.get_library
— Methodget_library(filename::AbstractString)
Returns a valid library file. For example, for filename = "adcme"
, we have
- On MacOS, the function returns
libadcme.dylib
- On Linux, the function returns
libadcme.so
- On Windows, the function returns
adcme.dll
ADCME.get_library_name
— Methodget_library_name(filename::AbstractString)
Returns the OS-dependent library name
Example
get_library_name("mylibrary")
- Windows:
mylibrary.dll
- MacOS:
libmylibrary.dylib
- Linux:
libmylibrary.so
ADCME.get_pip
— Methodget_pip()
Returns the location for pip
ADCME.git_repository
— Methodgit_repository(url::AbstractString, file::AbstractString)
Clone a repository url
and rename it to file
.
ADCME.http_file
— Methodhttp_file(url::AbstractString, file::AbstractString)
Download a file from url
and rename it to file
.
ADCME.link_file
— Methodlink_file(target::AbstractString, link::AbstractString)
Make a symbolic link link
-> target
ADCME.make_directory
— Methodmake_directory(directory::AbstractString)
Make a directory if it does not exist.
ADCME.read_with_env
— Functionread_with_env(cmd::Cmd, env::Union{Missing, Dict} = missing)
Similar to run_with_env
, but returns a string containing the output.
ADCME.require_cmakecache
— Functionrequire_cmakecache(func::Function, DIR::String = ".")
Checks whether cmake
has already produced output in DIR
. If not, func
is executed.
ADCME.require_file
— Methodrequire_file(f::Function, file::Union{String, Array{String}})
If any of the files/links/directories in file
does not exist, execute f
.
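A sketch of the do-block pattern shared by the require_* helpers; the URL and file name are placeholders:
require_file("data.zip") do
    http_file("https://example.com/data.zip", "data.zip")   # runs only if data.zip is absent
end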
ADCME.require_import
— Methodrequire_import(s::Symbol)
Checks whether the package s
is imported in the Main namespace. Returns the package handle.
ADCME.require_library
— Methodrequire_library(func::Function, filename::AbstractString)
If the library file filename
does not exist, func
is executed.
ADCME.run_with_env
— Functionrun_with_env(cmd::Cmd, env::Union{Missing, Dict} = missing)
Runs the command with the default environment and the extra environment variables env
ADCME.uncompress
— Functionuncompress(zipfile::AbstractString, file::AbstractString)
Uncompress a zip file zipfile
to file
(a directory). Note this function does not check that the uncompressed content has the name file
; the name is only used as a hint to decide whether the uncompress
action can be skipped.
Users may run mv uncompress_file file
to enforce consistency.
ADCME.get_gpu
— Methodget_gpu()
Returns the compiler information for GPUs. An example output is
(NVCC = "/usr/local/cuda/bin/nvcc", LIB = "/usr/local/cuda/lib64", INC = "/usr/local/cuda/include", TOOLKIT = "/home/kailaix/.julia/adcme/pkgs/cudatoolkit-10.0.130-0/lib", CUDNN = "/home/kailaix/.julia/adcme/pkgs/cudnn-7.6.5-cuda10.0_0/lib")
ADCME.gpu_info
— Methodgpu_info()
Returns the CUDA and GPU information. An example output is
- NVCC: /usr/local/cuda/bin/nvcc
- CUDA library directories: /usr/local/cuda/lib64
- Latest supported version of CUDA: 11000
- CUDA runtime version: 10010
- CUDA include_directories: /usr/local/cuda/include
- CUDA toolkit directories: /home/kailaix/.julia/adcme/pkgs/cudatoolkit-10.0.130-0/lib:/home/kailaix/.julia/adcme/pkgs/cudnn-7.6.5-cuda10.0_0/lib
- Number of GPUs: 7
ADCME.has_gpu
— Methodhas_gpu()
Checks whether the TensorFlow backend is using CUDA GPUs. Operators that have GPU implementations will be executed on GPU devices. See also get_gpu
. ADCME will use the GPU automatically if one is available. To disable GPU usage, set the environment variable ENV["CUDA_VISIBLE_DEVICES"]=""
before importing ADCME.