Operators
!shape
!shape
(tensor &optional (nth nil))
Returns the shape of tensor when nth=nil. When nth is given, it indicates an index into the shape, and !shape returns the value at that index.
Example:
(setq a (!randn `(10 10 10)))
(!shape a) ; => (10 10 10)
(!shape a 0) ;=> 10
!dims
!dims
(tensor)
Returns the number of dimensions of the given tensor.
Example:
(!dims (!zeros '(10 10 10))) ; => 3
!size
!size
(tensor)
Returns the total number of elements in a tensor.
Example:
(!size (!zeros '(10 10 10))) ; => 1000
!zeros
!zeros
(shape)
Initializes a constant tensor of the given shape, where every element is zero.
Input: shape (cons)
Output: Tensor (which is constant)
Example:
(!zeros `(10 10))
;#Const(((0.0 0.0 ~ 0.0 0.0)
; ...
; (0.0 0.0 ~ 0.0 0.0)) :mgl t :shape (10 10))
!ones
!ones
(shape)
The same as !zeros, but each element is one.
Example:
(!ones `(10 10))
;#Const(((1.0 1.0 ~ 1.0 1.0)
; ...
; (1.0 1.0 ~ 1.0 1.0)) :mgl t :shape (10 10))
!fill
!fill
(shape element)
The same as !zeros and !ones, but each element is initialized to the given element.
Note: the argument element is coerced into mgl-mat:*default-mat-ctype*.
Example:
(!fill '(10 10) 10)
;#Const(((10.0 10.0 ~ 10.0 10.0)
; ...
; (10.0 10.0 ~ 10.0 10.0)) :mgl t :shape (10 10))
!arange
!arange
(&rest args)
Like numpy's arange, !arange can be called with a varying number of positional arguments:
(!arange stop)
(!arange 10)
;#Const((0.0 1.0 ~ 8.0 9.0) :mgl t :shape (10))
(!arange start stop)
(!arange 3 10)
;=>#Const((3.0 4.0 ~ 8.0 9.0) :mgl t :shape (7))
(!arange start stop step)
(!arange 3 10 2)
;#Const((3.0 5.0 7.0 9.0) :mgl t :shape (4))
!random
!random
(dims limit)
Initializes a tensor of shape dims (a cons).
!random can be called with several types of limit:
When limit is a fixnum
each element is initialized within the range 0<=x<limit
;#Const(((1.0 2.0 ~ 2.0 1.0)
; ...
; (2.0 2.0 ~ 2.0 2.0)) :mgl t :shape (10 10))
When limit is a single-float
each element is initialized within the range 0<=x<limit
(!random '(10 10) 3.0)
;#Const(((0.152... 2.203... ~ 2.360... 2.216...)
; ...
; (1.003... 2.257... ~ 2.305... 2.025...)) :mgl t :shape (10 10))
When limit is (cons single-float1 single-float2)
each element is a single-float initialized within the range single-float1<=x<single-float2.
(!random '(10 10) '(1.0 3.0))
;#Const(((1.982... 1.526... ~ 1.388... 1.312...)
; ...
; (1.829... 2.676... ~ 1.226... 2.980...)) :mgl t :shape (10 10))
Return: WaffeTensor
!random-with
!random-with
(dims f)
Initializes a tensor of shape dims. Each element is initialized with the value returned by f, where f is a lambda expression called with the element's index.
Warning: this uses mref and a naive algorithm, so it is slow.
Example:
(!random-with '(10 10) #'(lambda (n) n))
;#Const(((0.0 1.0 ~ 8.0 9.0)
; ...
; (90.0 91.0 ~ 98.0 99.0)) :mgl t :shape (10 10))
See also: !init-with, which is an alias for !random-with.
!init-with
!init-with
(dims f)
!normal
!normal
(dims &optional (mean 2.0) (var 1.0))
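!normal carries no description here; a minimal usage sketch, assuming it samples each element from a normal distribution with the given mean and variance (note the defaults shown above):
(setq a (!normal '(10 10) 0.0 1.0)) ; assumed: mean=0.0, var=1.0
(!shape a) ; => (10 10)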
!randn
!randn
(dims)
Initializes a tensor with samples from a normal distribution (mean=0.0, var=1.0) in a faster way.
Example:
(!randn `(10 10))
;#Const(((0.677... 0.054... ~ 0.257... 0.261...)
; ...
; (0.063... 0.607... ~ 0.460... 0.730...)) :mgl t :shape (10 10))
!uniform-random
!uniform-random
.
!beta
!beta
(dims alpha beta)
Initializes a tensor with samples from a beta distribution in a faster way.
Algorithm: https://dl.acm.org/doi/pdf/10.1145/359460.359482
x=[0,1]
a = min(alpha, beta)
b = max(alpha, beta)
PDF: f_X(x) = x^(a-1) * (1-x)^(b-1) / B(a,b)
where B(a,b) = ∫[0,1] x^(a-1) * (1-x)^(b-1) dx
(time (!beta '(200) 5.0 1.0))
;Evaluation took:
; 0.000 seconds of real time
; 0.000063 seconds of total run time (0.000063 user, 0.000000 system)
; 100.00% CPU
; 143,846 processor cycles
; 0 bytes consed
;#Const((0.813... 0.832... ~ 0.865... 0.787...) :mgl t :shape (200))
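As a quick sanity check (an illustrative sketch, not part of the original docs): the mean of a Beta(alpha, beta) distribution is alpha/(alpha+beta), so averaging many samples should land near that value.
(!mean (!beta '(10000) 5.0 1.0))
; E[Beta(5.0, 1.0)] = 5/(5+1) ≈ 0.833, so the printed value should be close to
;#Const(0.83...) ; exact digits vary from run to run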
!gamma
!gamma
(dims k &optional (theta 1.0))
Initializes a tensor with samples from a gamma distribution.
Todo: use fast algorithms and approximations depending on k.
Example:
(!gamma '(10 10) 1.0)
;#Const(((2.155... 3.374... ~ 1.274... 0.147...)
; ...
; (0.194... 0.081... ~ 0.816... 0.209...)) :mgl t :shape (10 10))
!chisquare
!chisquare
(dims df)
!bernoulli
!bernoulli
(dims rate)
Initializes a tensor of shape dims with samples from a Bernoulli distribution.
rate is a single-float in [0, 1].
See also: !binomial, an alias for it.
Example:
(!binomial '(10 10) 0.5)
;#Const(((1.0 0.0 ~ 1.0 1.0)
; ...
; (0.0 1.0 ~ 1.0 0.0)) :mgl t :shape (10 10))
!binomial
!binomial
(dims rate)
!zeros-like
!zeros-like
(tensor)
Returns a constant tensor with the same shape as tensor, but whose elements are all zero.
Example:
(setq a (!randn `(10 10)))
(!zeros-like a)
;#Const(((0.0 0.0 ~ 0.0 0.0)
; ...
; (0.0 0.0 ~ 0.0 0.0)) :mgl t :shape (10 10))
!ones-like
!ones-like
(tensor)
The same as !zeros-like, but the elements are all one.
Example:
(setq a (!randn `(10 10)))
(!ones-like a)
;#Const(((1.0 1.0 ~ 1.0 1.0)
; ...
; (1.0 1.0 ~ 1.0 1.0)) :mgl t :shape (10 10))
!full-like
!full-like
(tensor element)
The same as !zeros-like and !ones-like, but the elements are all initialized to element.
Example:
(setq a (!randn `(10 10)))
(!full-like a 3)
;#Const(((3.0 3.0 ~ 3.0 3.0)
; ...
; (3.0 3.0 ~ 3.0 3.0)) :mgl t :shape (10 10))
!add
!add
(x y)
Adds x and y.
In the case when x or y is not a tensor, automatically creates a new tensor.
Destructive mode: (!!add x y)
It supports:
- Broadcasting shapes
- JIT
Examples
(setq a (!randn `(3 3)))
(setq b (!randn `(3 3)))
(setq c (!randn `(3 1)))
(!add 1 1)
;=> Const(2)
(!add (const 1)(const 1))
;=> Const(2)
(!add a b)
;#Const(((3.418... 1.974... 0.177...)
; ...
; (-1.30... 0.987... 1.917...)) :mgl t :shape (3 3))
(!add a c)
;#Const(((1.426... 2.129... 1.050...)
; ...
; (-0.64... 0.269... 0.303...)) :mgl t :shape (3 3))
!!add
(target-x y)
Adds target-x and y in a destructive way.
target-x is always overwritten with the result.
y is not subject to side effects unless target-x is not a mat.
See also: Destructive Operations
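A minimal illustrative sketch of the destructive form, using constant inputs so the result is fully determined (output format assumed to match the examples above):
(setq x (!ones '(3 3)))
(!!add x (!ones '(3 3)))
; x itself now holds the result:
;#Const(((2.0 2.0 2.0)
; ...
; (2.0 2.0 2.0)) :mgl t :shape (3 3))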
!sub
!sub
(x y)
Subtracts y from x.
In the case when x or y is not a tensor, automatically creates a new tensor.
It supports:
- Broadcasting shapes
- JIT
Examples
(setq a (!randn `(3 3)))
(setq b (!randn `(3 3)))
(setq c (!randn `(3 1)))
(!sub 1 1)
;=> Const(0)
(!sub (const 1)(const 1))
;=> Const(0)
(!sub a b)
;#Const(((-0.86... 1.413... 1.139...)
; ...
; (0.017... -0.44... -1.31...)) :mgl t :shape (3 3))
(!sub a c)
;#Const(((1.128... 1.258... 0.267...)
; ...
; (-0.64... 0.269... 0.303...)) :mgl t :shape (3 3))
!!sub
(target-x y)
Subtracts y from target-x in a destructive way.
target-x is always overwritten with the result.
y is not subject to side effects unless target-x is not a mat.
See also: Destructive Operations
!mul
!mul
(x y)
Multiplies x and y element-wise.
In the case when x or y is not a tensor, automatically creates a new tensor.
It supports:
- Broadcasting shapes
- JIT
Examples
(setq a (!randn `(3 3)))
(setq b (!randn `(3 3)))
(setq c (!randn `(3 1)))
(!mul 1 1)
;=> Const(1)
(!mul (const 1)(const 1))
;=> Const(1)
(!mul a b)
;#Const(((2.734... 0.475... -0.31...)
; ...
; (0.426... 0.193... 0.490...)) :mgl t :shape (3 3))
(!mul a c)
;#Const(((2.734... 0.475... -0.31...)
; ...
; (0.426... 0.193... 0.490...)) :mgl t :shape (3 3))
!!mul
(target-x y)
Multiplies target-x and y in a destructive way.
target-x is always overwritten with the result.
y is not subject to side effects unless target-x is not a mat.
See also: Destructive Operations
!div
!div
(x y)
Divides x by y.
In the case when x or y is not a tensor, automatically creates a new tensor.
It supports:
- Broadcasting shapes
- JIT
Examples
(setq a (!randn `(3 3)))
(setq b (!ones `(3 3)))
(setq c (!ones `(3 1)))
(!div 2 1)
;=> Const(2)
(!div (const 2)(const 1))
;=> Const(2)
(!div a b)
;#Const(((1.734... 0.475... -0.31...)
; ...
; (0.426... 0.193... 0.490...)) :mgl t :shape (3 3))
(!div a c)
;#Const(((2.734... 0.475... -0.31...)
; ...
; (0.426... 0.193... 0.490...)) :mgl t :shape (3 3))
!!div
(target-x target-y)
Divides target-x by target-y in a destructive way.
target-x and target-y are always overwritten with the result.
See also: Destructive Operations
!dot
!dot
(x y)
Computes the dot product of x and y, where x and y are 1D tensors.
Note: Unlike Numpy's dot, !dot only supports 1D tensors with the same number of elements; tensors whose dims are larger than 1 are regarded as 1D tensors.
Example
(setq a (!randn `(10)))
(setq b (!randn `(10)))
(!dot a b)
;=> #Const(1.0842022e-19)
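If the note above holds, tensors with more than one dimension are flattened and treated as 1D; an illustrative sketch under that assumption:
(!dot (!ones '(2 3)) (!ones '(6)))
; both arguments contain 6 elements, so this is assumed to behave like a
; 1D dot product of ones and return #Const(6.0)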
!sum
!sum
(x &optional (axis nil) (keepdims nil))
Sums up x, where x is a cl-waffe tensor.
For nd tensors...
- 1D
- unsqueezes x with 1 and calls !sum again.
- 2D and more
- sums up all elements of x.
Arguments:
- axis
- the dimension to reduce
- keepdims
- when t, the returned tensor is repeated along axis so that the original shape is kept
Example
(setq a (!randn `(10)))
(!sum a)
;=>#Const(4.74653)
(setq a (!randn `(10 10)))
(!sum a)
;=>#Const(1.5428619)
(!sum a 0)
;=>#Const(((-2.07... 0.463... ~ 1.778... 1.695...)) :mgl t :shape (1 10))
(!sum a 1)
;#Const(((0.967...)
; ...
; (2.774...)) :mgl t :shape (10 1))
(!sum a 0 t)
;#Const(((-2.07... 0.463... ~ 1.778... 1.695...)
; ...
; (-2.07... 0.463... ~ 1.778... 1.695...)) :mgl t :shape (10 10))
!mean
!mean
(x &optional (axis nil) (keepdims nil))
Computes the mean of x.
Example
(setq a (!ones '(10 10)))
;#Const(((1.0 1.0 ~ 1.0 1.0)
; ...
; (1.0 1.0 ~ 1.0 1.0)) :mgl t :shape (10 10))
(!mean a)
;=>Const(1.0)
!exp
!exp
(x)
Computes the exponential (e^x) of each element of x.
Example
(setq a (!randn `(10 10)))
;#Const(((0.624... 0.807... ~ 0.500... 0.937...)
; ...
; (0.662... 0.299... ~ 0.761... 0.729...)) :mgl t :shape (10 10))
(!exp a)
;#Const(((1.866... 2.242... ~ 1.650... 2.553...)
; ...
; (1.939... 1.349... ~ 2.140... 2.073...)) :mgl t :shape (10 10))
!pow
!pow
(x n)
Raises each element of x to the power of n, returning a new sysconst.
Example
(setq a (!ones `(10 10)))
(!pow a 3)
;#Const(((1.0 1.0 ~ 1.0 1.0)
; ...
; (1.0 1.0 ~ 1.0 1.0)) :mgl t :shape (10 10))
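Since the !ones example above always prints 1.0, here is a slightly more telling sketch (the values are exact; the output formatting is assumed):
(setq b (!fill '(3) 2.0))
(!pow b 3)
; 2.0^3 = 8.0 for every element
;#Const((8.0 8.0 8.0) :mgl t :shape (3))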
!sqrt
!sqrt
(x)
Raises each element of x to the power of 1/2 (i.e., computes the square root), creating a new sysconst and nodes.
Example
(setq a (!ones `(10 10)))
(!sqrt a)
;#Const(((1.0 1.0 ~ 1.0 1.0)
; ...
; (1.0 1.0 ~ 1.0 1.0)) :mgl t :shape (10 10))
!log
!log
(x)
Returns a new tensor with the natural logarithm of the elements of the input.
y_i = log_e(x_i)
Example
(setq a (!ones '(10 10)))
(!log a)
;#Const(((0.0 0.0 ~ 0.0 0.0)
; ...
; (0.0 0.0 ~ 0.0 0.0)) :mgl t :shape (10 10))
!sin
!sin
(x)
Example
(setq a (!randn `(5)))
;=>#Const((0.638... 0.527... 0.515... 0.495... 0.912...) :mgl t :shape (5))
(!sin a)
;=>#Const((-0.44... -0.64... -0.66... -0.70... -0.09...) :mgl t :shape (5))
!cos
!cos
(x)
Example
(setq a (!randn `(5)))
;=>#Const((0.638... 0.527... 0.515... 0.495... 0.912...) :mgl t :shape (5))
(!cos a)
;=>#Const((0.803... 0.864... 0.870... 0.879... 0.611...) :mgl t :shape (5))
!tan
!tan
(x)
Example
(setq a (!randn `(5)))
;=>#Const((0.638... 0.527... 0.515... 0.495... 0.912...) :mgl t :shape (5))
(!tan a)
;=>#Const((0.741... 0.582... 0.566... 0.540... 1.293...) :mgl t :shape (5))
!asin
!asin
(x)
!acos
!acos
(x)
!atan
!atan
(x)
!sinh
!sinh
(x)
Example
(setq a (!randn `(5)))
;=>#Const((0.638... 0.527... 0.515... 0.495... 0.912...) :mgl t :shape (5))
(!sinh a)
;=>#Const((0.682... 0.551... 0.538... 0.516... 1.044...) :mgl t :shape (5))
!cosh
!cosh
(x)
Example
(setq a (!randn `(5)))
;=>#Const((0.638... 0.527... 0.515... 0.495... 0.912...) :mgl t :shape (5))
(!cosh a)
;=>#Const((1.210... 1.142... 1.135... 1.125... 1.446...) :mgl t :shape (5))
!tanh
!tanh
(x)
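No example is given for !tanh; a minimal sketch relying only on tanh(0) = 0 (output formatting assumed to match !sin above):
(setq a (!zeros `(5)))
(!tanh a)
;=>#Const((0.0 0.0 0.0 0.0 0.0) :mgl t :shape (5))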
!asinh
!asinh
(x)
!acosh
!acosh
(x)
!atanh
!atanh
(x)
!matmul
!matmul
(x y)
Multiplies the matrices x and y.
!matmul behaves differently depending on the dimensionality of the tensors, as follows:
- x and y are 1D
- The dot-product is returned.
(setq a (!randn `(10)))
(setq b (!randn `(10)))
(!matmul a b)
;=>#Const(-2.0)
- x and y are both 2D
- The matrix-matrix product is returned.
(setq a (!randn `(3 10)))
(setq b (!randn `(10 3)))
(!matmul a b)
;#Const(((2.309... 2.223... 3.630...)
; ...
; (2.334... 2.850... 3.678...)) :mgl t :shape (3 3))
- x is 2D and y is 3D.
x is multiplied with each matrix of y, and the results are returned.
(setq a (!randn `(3 10)))
(setq b (!randn `(5 10 3)))
(!matmul a b)
;(!aref b 0) ~ (!aref b 4) is multiplied with a
;#Const((((3.257... 2.731... 1.670...)
; ...
; (2.523... 2.251... 1.276...))
; ...
; ((2.610... 2.764... 2.415...)
; ...
; (2.080... 2.204... 1.751...))) :mgl t :shape (5 3 3))
- x is 3D and y is 2D.
Each matrix of x is multiplied with y, and the results are returned.
(setq a (!randn `(5 3 10)))
(setq b (!randn `(10 3)))
(!matmul a b)
;(!aref a 0) ~ (!aref a 4) is multiplied with b
;#Const((((2.309... 2.204... 1.556...)
; ...
; (3.746... 3.869... 3.091...))
; ...
; ((3.260... 3.200... 2.847...)
; ...
; (3.008... 2.186... 2.376...))) :mgl t :shape (5 3 3))
- x is 3D and y is 3D.
- The Batch Filtered Matrix-Matrix product is returned.
(setq a (!randn `(5 3 10)))
(setq b (!randn `(5 10 3)))
; The returned mat is comprised of:
; (!matmul (!aref a 0)(!aref b 0))
; (!matmul (!aref a 1)(!aref b 1))
; (!matmul (!aref a 2)(!aref b 2))
; (!matmul (!aref a 3)(!aref b 3))
(!matmul a b)
;#Const((((6.621... -5.61... 2.898...)
; ...
; (-2.96... -4.26... -3.99...))
; ...
; ((-0.02... 2.707... 5.989...)
; ...
; (-3.35... 3.561... -3.90...))) :mgl t :shape (5 3 3))
- Otherwise
Currently not implemented. Support for more cases will be added in the near future.
!concatenate
!concatenate
(axis &rest tensors)
Concatenates the given tensors in the given axis. All tensors must have the same shape.
Example
(setq a (!randn `(3 3 3)))
;#Const((((1.000... -0.00... -0.25...)
; ...
; (1.473... -0.44... 1.680...))
; ...
; ((0.569... 0.852... 0.405...)
; ...
; (0.024... 0.756... 0.383...))) :mgl t :shape (3 3 3))
(!concatenate 0 a a a)
;#Const((((1.000... -0.00... -0.25...)
; ...
; (1.473... -0.44... 1.680...))
; ...
; ((0.569... 0.852... 0.405...)
; ...
; (0.024... 0.756... 0.383...))) :mgl t :shape (9 3 3))
(mgl-mat:M= (data (!aref * '(0 3)))
(data (!aref * '(3 6))))
;T
!stack
!stack
(axis &rest tensors)
Stacks the given tensors in the specified axis.
Internally, !stack inserts a dimension of size 1 at the specified axis before calling !concatenate.
Note: Currently, when unsqueezing the given tensors, !stack creates copies every time in order to prevent side effects. To avoid this, using !concatenate is recommended. (TO FIX)
Example
(setq a (!randn `(2 2 2)))
;#Const((((-0.83... -1.74...)
; (0.119... 0.162...))
; ((-1.81... 0.907...)
; (-0.50... -0.96...))) :mgl t :shape (2 2 2))
(!stack 0 a a a)
;#Const(((((-0.83... -1.74...)
; (0.119... 0.162...))
; ((-1.81... 0.907...)
; (-0.50... -0.96...)))
; ...
; (((-0.83... -1.74...)
; (0.119... 0.162...))
; ((-1.81... 0.907...)
; (-0.50... -0.96...)))) :mgl t :shape (3 2 2 2))
(mgl-mat:M= (data (!aref * 0))(data (!aref * 1)))
; T
!split
!split
(tensor split-size &key (axis 0))
Splits the tensor into chunks along the specified axis. Each chunk is a copy of the original tensor.
split-size indicates the stride of each chunk; that is, the tensor will be split into chunks of equal size split-size.
split-size must be a fixnum.
Note: currently !split's backward signals an error; there is room to optimize the backward pass, and it remains unimplemented until that is done.
Alternatively, !aref and (setf !aref) are available.
Example
(setq a (!randn `(4 2 2)))
;#Const((((-0.48... -1.22...)
; (0.251... 0.476...))
; ...
; ((-0.66... 1.045...)
; (-0.44... 1.592...))) :mgl t :shape (4 2 2))
(!split a 2)
;(#Const((((-0.48... -1.22...)
; (0.251... 0.476...))
; ((0.864... -0.93...)
; (-0.43... 0.346...))) :mgl t :shape (2 2 2))
; #Const((((-1.91... -0.63...)
; (-0.08... 0.867...))
; ((-0.66... 1.045...)
; (-0.44... 1.592...))) :mgl t :shape (2 2 2)))
; the remaining slots are filled with 0.0
(!split a 3)
;(#Const((((-0.48... -1.22...)
; (0.251... 0.476...))
; ...
; ((-1.91... -0.63...)
; (-0.08... 0.867...))) :mgl t :shape (3 2 2))
; #Const((((-0.66... 1.045...)
; (-0.44... 1.592...))
; ...
; ((0.0 0.0)
; (0.0 0.0))) :mgl t :shape (3 2 2)))
!vstack
!vstack
(&rest tensors)
!hstack
!hstack
(&rest tensors)
!unsqueeze
!unsqueeze
(x &optional (dim 0) (count 1))
Returns a new tensor with a dimension of size one inserted at the specified position.
dim indicates the position; when dim=-1, it indicates the last dimension of x.
Example
(setq a (!randn `(10 10)))
;#Const(((0.685... 0.827... ~ 0.076... 0.102...)
; ...
; (0.802... 0.571... ~ 0.207... 0.283...)) :mgl t :shape (10 10))
(!unsqueeze a)
;#Const((((0.685... 0.827... ~ 0.076... 0.102...)
; ...
; (0.802... 0.571... ~ 0.207... 0.283...))) :mgl t :shape (1 10 10))
(!unsqueeze a -1)
;#Const((((0.685...)
; ...
; (0.102...))
; ...
; ((0.802...)
; ...
; (0.283...))) :mgl t :shape (10 10 1))
(!unsqueeze a 2)
;#Const(((0.685... 0.827... ~ 0.076... 0.102...)
; ...
; (0.802... 0.571... ~ 0.207... 0.283...)) :mgl t :shape (10 10 1 1))
!squeeze
!squeeze
(x &optional (dim nil))
Returns a new tensor with a dimension of size one removed at the specified position.
When dim is nil or -1, the last dimension will be removed.
If the size of the specified dimension isn't one, !squeeze is skipped.
Example
(setq a (!randn `(10 1 10)))
;#Const((((0.928... 0.556... ~ 0.697... 0.973...))
; ...
; ((0.368... 0.995... ~ 0.589... 0.716...))) :mgl t :shape (10 1 10))
(!squeeze a 1)
;#Const(((0.928... 0.556... ~ 0.697... 0.973...)
; ...
; (0.368... 0.995... ~ 0.589... 0.716...)) :mgl t :shape (10 10))
(!squeeze a -1)
;#Const((((0.928... 0.556... ~ 0.697... 0.973...))
; ...
; ((0.368... 0.995... ~ 0.589... 0.716...))) :mgl t :shape (10 1 10))
(setq a (!randn `(10 10 1)))
(!squeeze a)
;#Const(((0.991... 0.248... ~ 0.610... 0.289...)
; ...
; (0.593... 0.177... ~ 0.374... 0.668...)) :mgl t :shape (10 10))
!transpose
!transpose
(x &optional result)
Transposes x, where x is a 2D tensor.
The transposed x is lazily evaluated until it is called by !matmul.
Todo: implement 3D, 4D versions...
Example
(setq a (!randn `(3 5)))
(setq a (!transpose a))
;#Const(#<FUNCTION (LABELS CL-WAFFE.BACKENDS.MGL::LAZYTRANSPOSE :IN CL-WAFFE.BACKENDS.MGL::LAZY-EVAL-TRANSPOSE) {10038CBADB}>)
(!matmul a (!randn '(3 5)))
;#Const(((0.653... 0.400... 0.471... 0.705... 0.623...)
; ...
; (1.220... 0.760... 0.975... 1.360... 1.029...)) :mgl t :shape (5 5))
!transpose1
!transpose1
(x &rest result)
Transposes x but doesn't produce a lazy evaluation.
Todo: Numcl's operation couldn't be optimized well; it needs to be reimplemented.
Example
(setq a (!randn `(10 5 3)))
(!transpose1 a)
;#Const((((-0.47... -0.03... ~ -0.17... 0.328...)
; ...
; (0.210... -1.80... ~ 1.648... 0.135...))
; ...
; ((-0.52... 1.509... ~ 0.643... 0.258...)
; ...
; (-0.26... -1.14... ~ -1.08... 1.126...))) :mgl t :shape (3 5 10))
!repeats
!repeats
(x axis repeats)
Repeats x along the specified axis by repeats, creating a new sysconst.
x can be a mat or a tensor.
Example
(setq a (!randn '(1 3 3)))
;#Const((((0.333... 0.914... 0.260...)
; ...
; (0.611... 0.110... 0.113...))) :mgl t :shape (1 3 3))
(!repeats a 0 3)
;#Const((((0.333... 0.914... 0.260...)
; ...
; (0.611... 0.110... 0.113...))
; ...
; ((0.333... 0.914... 0.260...)
; ...
; (0.611... 0.110... 0.113...))) :mgl t :shape (3 3 3))
(!repeats (const 10.0) 3 10)
;#Const(((((10.0 10.0 ~ 10.0 10.0)))) :mgl t :shape (1 1 1 10))
!reshape
!reshape
(x dim)
Returns a new sysconst with the changed shape. x won't be modified.
If dim contains the element t, that dimension is automatically inferred from the remaining dimensions and the total number of elements. (count t dim) must be 1 (Todo: Fix).
The total size of the tensor must not change across the call to reshape.
Example
(setq a (!randn `(10 10 10)))
(!reshape a '(1 10 100))
;#Const((((0.454... 0.277... ~ 0.536... 0.135...)
; ...
; (0.857... 0.714... ~ 0.169... 0.279...))) :mgl t :shape (1 10 100))
(!reshape a '(1 1 t))
;#Const((((0.454... 0.277... ~ 0.169... 0.279...))) :mgl t :shape (1 1 1000))
!abs
!abs
(x)
Computes the absolute value of each element in x.
Example:
(setq a (!random `(10 10) '(-1.0 1.0)))
;#Const(((0.048... 0.805... ~ 0.769... 0.252...)
; ...
; (0.159... -0.66... ~ -0.55... -0.23...)) :mgl t :shape (10 10))
(!abs a)
;#Const(((0.048... 0.805... ~ 0.769... 0.252...)
; ...
; (0.159... 0.667... ~ 0.553... 0.239...)) :mgl t :shape (10 10))
!where
!where
(condition tensor then else)
Returns a tensor of elements selected from either then or else, depending on condition.
condition is given as a lambda expression, which is called with each value of (aref tensor index).
!where is defined as: out = if (condition(tensor[i]), then, else)
Return: a tensor whose shape is the same as tensor's.
Example
(setq a (!random `(10 10) '(-1.0 1.0)))
;#Const(((0.042... -0.36... ~ 0.250... 0.967...)
; ...
; (-0.21... 0.962... ~ -0.32... 0.215...)) :mgl t :shape (10 10))
(!where #'(lambda (x)(> x 0)) a 1.0 0.0)
;#Const(((1.0 0.0 ~ 1.0 1.0)
; ...
; (0.0 1.0 ~ 0.0 1.0)) :mgl t :shape (10 10))
; works as ReLU
(!mul a (!where #'(lambda (x)(> x 0)) a 1.0 0.0))
;#Const(((0.042... 0.0... ~ 0.250... 0.967...)
; ...
; (0.0... 0.962... ~ 0.0... 0.215...)) :mgl t :shape (10 10))
!index
!index
nil
!filter
!filter
(tensor lambda)
Calling lambda with each element of tensor, it returns a tensor comprised of lambda's returned values.
- tensor
- a tensor to be referred to
- lambda
- a function called with each element x, returning the new element at that position
(setq tensor (!randn `(10 10)))
(!filter tensor #'(lambda (x)(if (> x 0) x 1.0)))
;#Const(((0.802... 1.331... ~ 0.998... 1.994...)
; ...
; (1.0 0.005... ~ 0.296... 0.358...)) :mgl t :shape (10 10))
!argmax
!argmax
(tensor &key (dim -1) (keepdims nil) (max nil))
Returns the indices of the maximum value of all elements in the input tensor.
If max=t, returns the maximum value along dim instead of the index.
- dim
- The dimension to reduce. If nil, the argmax of the flattened input is returned.
- keepdims
- whether the output tensor has dim retained or not. Ignored if dim=-1
Example
(setq a (!randn `(5)))
;#Const((0.933... 0.158... 0.822... 0.881... 0.831...) :mgl t :shape (5))
(!argmax a)
;#Const((0.0) :mgl t :shape (1))
(setq a (!randn `(10 10 10)))
;#Const((((0.393... 0.658... ~ 0.003... 0.609...)
; ...
; (0.394... 0.252... ~ 0.688... 0.057...))
; ...
; ((0.325... 0.794... ~ 0.540... 0.381...)
; ...
; (0.310... 0.035... ~ 0.280... 0.431...))) :mgl t :shape (10 10 10))
(!argmax a :dim 2)
;#Const(((5.0 9.0 ~ 0.0 4.0)
; ...
; (2.0 0.0 ~ 2.0 5.0)) :mgl t :shape (10 10))
(!argmax a :dim 2 :keepdims t)
;#Const((((5.0 5.0 ~ 5.0 5.0)
; ...
; (4.0 4.0 ~ 4.0 4.0))
; ...
; ((2.0 2.0 ~ 2.0 2.0)
; ...
; (5.0 5.0 ~ 5.0 5.0))) :mgl t :shape (10 10 10))
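The :max keyword described above has no example; a hedged sketch, assuming :max t returns the maximum values themselves rather than their indices:
(!argmax a :dim 2 :max t)
; assumed to return, for each (i j), the maximum value along dim 2
; (a (10 10) tensor of values instead of indices)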
!argmin
!argmin
(tensor &key (dim -1) (keepdims nil) (min nil))
Returns the indices of the minimum value of all elements in the input tensor.
If min=t, returns the minimum value along dim instead of the index.
- dim
- The dimension to reduce. If nil, the argmin of the flattened input is returned.
- keepdims
- whether the output tensor has dim retained or not. Ignored if dim=-1.
Example
(setq a (!randn `(5)))
;=>#Const((0.635... 0.101... 0.864... 0.563... 0.481...) :mgl t :shape (5))
(!argmin a)
;=>#Const((1.0) :mgl t :shape (1))
(setq a (!randn `(10 10 10)))
;#Const((((0.267... 0.113... ~ 0.142... 0.208...)
; ...
; (0.174... 0.948... ~ 0.232... 0.462...))
; ...
; ((0.454... 0.361... ~ 0.605... 0.731...)
; ...
; (0.099... 0.816... ~ 0.729... 0.996...))) :mgl t :shape (10 10 10))
(!argmin a)
;#Const((415.0...) :mgl t :shape (1))
!<=
!<=
nil
!>=
!>=
nil
!einsum
!einsum
.
!ravel
!ravel
nil
!flatten
!flatten
(tensor)
Flattens input by reshaping it into a one-dimensional tensor.
The operation is the same as (!reshape tensor '(t))
Example:
(setq a (!randn `(10 10)))
;#Const(((0.688... 0.580... ~ 0.013... 0.461...)
; ...
; (0.214... 0.248... ~ 0.540... 0.416...)) :mgl t :shape (10 10))
(!flatten a)
;#Const((0.688... 0.580... ~ 0.540... 0.416...) :mgl t :shape (100))
!aref
!aref
(tensor &rest dims)
!aref creates a new tensor from the area of the given tensor specified by dims.
This function is setfable, and both forms produce computation nodes.
dims consists of a list, and each dimension is described in one of the following formats:
- t
- t means (0~max-len) in the dimension.
- fixnum
- copies the given index in the dimension.
- list
- the list must be of the form (start stop), copying the tensor from start to stop in the dimension; that is, the result in the dimension is the copy of start<=x<stop.
Using t as stop means: t is the last element in the dimension.
The fixnums used in dims may be negative as well as positive.
For example, -1 is interpreted as (+ maxlen -1), -2 as (+ maxlen -2), and so on.
Note: (setf !aref) overwrites the given tensor's mat but won't overwrite its computation node. In order to update nodes, you must write it like: (setq a (setf (!aref a ...) ...)). See the Example for details.
Tensor cut-outs act on:
- When not setf
- the given tensor.
- When setf
- the target tensor (e.g.: (setf (!aref target-tensor ...) input-tensor))
Example:
(setq a (!randn `(10 5 3)))
;#Const((((0.621... -1.15... 2.396...)
; ...
; (0.157... 0.389... 1.084...))
; ...
; ((1.123... -0.58... -0.28...)
; ...
; (0.506... -0.44... -0.26...))) :mgl t :shape (10 5 3))
(!aref a '(0 3)) ; interpreted as (!aref a '(0 3) t t)
;#Const((((0.621... -1.15... 2.396...)
; ...
; (0.157... 0.389... 1.084...))
; ...
; ((0.694... 0.954... 1.210...)
; ...
; (0.884... 0.059... 0.190...))) :mgl t :shape (3 5 3))
(!aref a '(1 3))
;#Const((((0.657... 0.834... -2.01...)
; ...
; (1.194... 0.517... 0.356...))
; ((0.694... 0.954... 1.210...)
; ...
; (0.884... 0.059... 0.190...))) :mgl t :shape (2 5 3))
(!aref a '(1 0)) ; When (cdr dims) <= 0, interpreted as (- (!shape tensor dim)(cdr dims))
; In this Example, this is the same as (!aref a '(1 10))
;#Const((((0.657... 0.834... -2.01...)
; ...
; (1.194... 0.517... 0.356...))
; ...
; ((1.123... -0.58... -0.28...)
; ...
; (0.506... -0.44... -0.26...))) :mgl t :shape (9 5 3))
(!aref a '(1 -1))
;#Const((((0.657... 0.834... -2.01...)
; ...
; (1.194... 0.517... 0.356...))
; ...
; ((-2.29... -1.12... -0.68...)
; ...
; (-1.74... 0.489... 1.519...))) :mgl t :shape (8 5 3))
(!aref a t '(0 2))
;Tensors in lower dimensions can also be clipped.
;If 0th dim isn't needed to be cut, place t.
;#Const((((0.621... -1.15... 2.396...)
; (0.642... 0.029... 1.334...))
; ...
; ((1.123... -0.58... -0.28...)
; (-2.43... -0.29... 0.882...))) :mgl t :shape (10 2 3))
(!aref a '(0 2) '(1 2) '(1 3))
;#Const((((0.029... 1.334...))
; ((-1.41... -0.32...))) :mgl t :shape (2 1 2))
; This function is setfable, but there is currently no better solution for updating the computation node.
; It is ugly, but an additional setq is required after setf.
; Also, note that (setf !aref) overwrites a.
(setq a (setf (!aref a '(0 3) '(0 3))(!zeros '(3 3))))
;#Const((((0.0 0.0 0.0)
; ...
; (0.157... 0.389... 1.084...))
; ...
; ((1.123... -0.58... -0.28...)
; ...
; (0.506... -0.44... -0.26...))) :mgl t :shape (10 5 3))
(!aref a 0 0)
;#Const((((0.0 0.0 0.0))) :mgl t :shape (1 1 3))
!dotensors
!set-batch
!set-batch
(dataset start-row-index batch-size)
Sets a batch, where dataset is a 2D mat.
Todo: Backward.
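A hedged usage sketch (the exact semantics are not documented here; assuming it picks out batch-size rows of dataset starting at start-row-index):
(setq dataset (!randn '(100 784))) ; e.g. 100 samples of flattened 28x28 inputs
(!set-batch dataset 0 10) ; assumed: rows 0..9 of dataset as the current batch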
!softmax
!softmax
(x &key (avoid-overflow t))
Applies softmax to x. !softmax behaves differently depending on the number of dimensions.
The number of dims is...
- 1
- Softmax is applied to dim=0
(setq a (!randn `(10)))
(!softmax a)
;#Const((0.910... 0.886... ~ 0.802... 0.616...) :mgl t :shape (10))
- 2
- Softmax is applied to dim=0
(setq a (!randn `(10 10)))
;#Const(((-0.29... -1.99... ~ -0.36... 1.725...)
; ...
; (0.695... -0.94... ~ 1.179... 0.655...)) :mgl t :shape (10 10))
(!softmax a)
;#Const(((0.064... 0.011... ~ 0.060... 0.489...)
; ...
; (0.129... 0.024... ~ 0.209... 0.124...)) :mgl t :shape (10 10))
- 3
- Softmax is applied to dim=0
(setq a (!randn `(10 10 10)))
;#Const((((2.585... 0.517... ~ 0.428... 0.059...)
; ...
; (-2.11... 0.308... ~ -0.91... 0.649...))
; ...
; ((-0.75... 1.030... ~ 0.656... -0.00...)
; ...
; (-0.37... -0.52... ~ 1.589... -0.10...))) :mgl t :shape (10 10 10))
(!softmax a)
;#Const((((0.374... 0.047... ~ 0.043... 0.029...)
; ...
; (0.010... 0.115... ~ 0.033... 0.162...))
; ...
; ((0.029... 0.172... ~ 0.118... 0.061...)
; ...
; (0.048... 0.041... ~ 0.345... 0.063...))) :mgl t :shape (10 10 10))
- 4
- Todo: currently, it returns error.
!sigmoid
!sigmoid
(x)
Applies sigmoid to x, returning a new sysconst and creating nodes.
Input: x, where x is a waffe-supported data type.
Output: Tensor
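A minimal sketch, relying only on sigmoid(0) = 0.5 (output formatting assumed):
(!sigmoid (!zeros '(3)))
; sigmoid(0.0) = 0.5 for every element
;#Const((0.5 0.5 0.5) :mgl t :shape (3))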
!relu
!relu
(x)
Applies ReLU to x, returning a new sysconst and creating nodes.
ReLU(x) = { 0 (x < 0), x (x >= 0) }
Input: x, where x is a waffe-supported data type.
Output: Tensor
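A minimal sketch, using constant inputs so the result follows directly from the definition above (output formatting assumed):
(!relu (!fill '(3) -2.0))
; all inputs are negative, so every output is 0.0
;#Const((0.0 0.0 0.0) :mgl t :shape (3))
(!relu (!fill '(3) 2.0))
; all inputs are positive, so they pass through unchanged
;#Const((2.0 2.0 2.0) :mgl t :shape (3))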
!gelu
!gelu
(x &key (approximate t))
Applies GeLU to x, returning a new sysconst.
Paper: https://arxiv.org/abs/1606.08415.
TODO: Improve its performance
GeLU(x) = x * s(x)
When approximate is t:
s(x) = 1/2 * [1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))]
When approximate is nil:
Not implemented (TODO)
(setq x (!randn `(10 10)))
(!gelu x)
;#Const(((0.201... 0.038... ~ 0.158... 0.040...)
; ...
; (0.300... 1.395... ~ 0.030... 0.029...)) :mgl t :shape (10 10))
!leakey-relu
!leakey-relu
(x &optional (alpha 0.01))
Applying Leakey-relu to x, returning a new sysconst.
Leakey-ReLU is defined as out = {alpha (x < 0), x (x >= 0)}
Example:
(setq x (!randn `(10 10)))
#Const(((0.635... -0.56... ~ -1.15... -1.50...)
...
(0.775... 1.258... ~ -1.29... 0.240...)) :mgl t :shape (10 10))
(!leakey-relu x)
#Const(((0.635... 0.003... ~ 0.013... 0.022...)
...
(0.775... 1.258... ~ 0.016... 0.240...)) :mgl t :shape (10 10))
!swish
!swish
(x &key (beta (const 1.0)))
Applies swish to each element of x.
Swish is defined as out = (* x (/ 1 (+ 1 (exp (* beta -1 x))))), i.e. x multiplied by the sigmoid of (* beta x).
By default beta is 1.0; if you want a trainable one, Swish is available as a waffe model.
Note that beta must be given as a waffetensor.
(setq x (!randn `(10 10)))
#Const(((0.635... -0.56... ~ -1.15... -1.50...)
...
(0.775... 1.258... ~ -1.29... 0.240...)) :mgl t :shape (10 10))
(!swish x)
;#Const(((0.415... -0.20... ~ -0.27... -0.27...)
; ...
; (0.531... 0.980... ~ -0.27... 0.134...)) :mgl t :shape (10 10))
(call (Swish :beta 1.0) x) ; its beta is trainable by backpropagating.
;#Const(((0.415... -0.20... ~ -0.27... -0.27...)
; ...
; (0.531... 0.980... ~ -0.27... 0.134...)) :mgl t :shape (10 10))