- Func
- Func Class
- [class] Func
- [generic] lower
- [generic] forward
- [generic] backward
- Differentiable Ops (built_in)
- [function] !identity
- [function] !view
- [function] !permute
- [function] !t
- [function] !transpose
- [function] !contiguous
- [function] !copy
- [function] !reshape
- [function] !flatten
- [function] !uprank
- [function] !repeat
- [function] !expand
- [function] !move
- [function] !assign
- [function] !add
- [function] !sub
- [function] !mul
- [function] !div
- [function] !mod
- [function] !idiv
- [function] !maximum
- [function] !minimum
- [function] !gcd
- [function] !lcm
- [function] !exp
- [function] !log
- [function] !sqrt
- [function] !neg
- [function] !recip
- [function] !square
- [function] !rsqrt
- [function] !signum
- [function] !gid
- [function] !normalize-axis
- [function] !abs
- [function] !>
- [function] !<
- [function] !>=
- [function] !<=
- [function] !eq
- [function] !neq
- [function] !and
- [function] !xor
- [function] !or
- [function] !where
- [function] !const
- [generic] !index-components
Func
Func Class
[class] Func
A CLOS class that represents a computation.
Func (as the base class) is syntactic sugar for generating lowered instructions defined in the caten/aasm package.
To properly lower the respective Func, you need to implement the following three methods:
- lower: Lower the Func into a list of caten/air:node. This should return caten/air:graph.
- forward: Create the type for the Tensor after computation. Be aware of its lazy evaluation nature; do not perform the actual computation. ShapeTracker might help you (use the st macro).
- backward: Create the graph for the backward computation of op given prev-grad. Return: (values input_1.grad input_2.grad ...).
[generic] lower
Lowers the Func into a list of caten/air:node. This should return caten/air:graph.
- op[Func] Func to lower.
- nodes[list] list of previous nodes (each position corresponds to the position of the variables in the Func).
[generic] forward
Create the type for the Tensor after computation. Be aware of its lazy evaluation nature; do not perform the actual computation. Use the st macro to create a new tensor.
- op[Func] Func to forward.
- tensors[list] list of input tensors.
[generic] backward
Create the graph for the backward computation of op given prev-grad. Return: (values input_1.grad input_2.grad ...).
save-for-backward is determined automatically, so you do not have to worry about in-place operations.
- op[Func] Func to backward.
- prev-grad[Tensor] previous gradient tensor.
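To make the contract above concrete, here is a minimal, heavily hedged structural sketch of a user-defined Func. Only the names lower, forward, and backward and their argument roles are taken from the descriptions above; the lambda lists, class definition, and method bodies are placeholder assumptions, not the actual Caten API.
(defclass my-func (Func) nil
  (:documentation "Illustrative Func subclass; the bodies below are placeholders."))

(defmethod forward ((op my-func) &rest tensors)
  ;; Lazily construct the output Tensor type here (e.g. with the st macro /
  ;; ShapeTracker); no actual computation is performed at this point.
  (car tensors)) ; placeholder: pretend the output type equals the first input

(defmethod backward ((op my-func) &optional prev-grad)
  ;; Return one gradient per input: (values input_1.grad input_2.grad ...)
  (values prev-grad))

(defmethod lower ((op my-func) &rest nodes)
  ;; Build and return a caten/air:graph from the previous caten/air:node list.
  (declare (ignore nodes))
  (error "graph construction is elided in this sketch"))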
Differentiable Ops (built_in)
[function] !identity
Equivalent to #'identity, but it is used to create a lazy computation node.
Result
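A minimal usage sketch (output omitted), assuming the ax+b tensor constructor used in the later examples on this page:
CATEN-USER> (proceed (!identity (ax+b `(3) 1 0)))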
[function] !view
Create a view node from the base tensor and subscripts.
We refer to VIEW as a node creating a tensor whose buffer is shared with the base tensor, but whose shape, stride, offset, or dtype may differ.
Subscripts have the following notation:
- t keeps using the base view.
- fixnum refers to the specified element, e.g. A[3].
- (a b) slices in the range of [a, b).
- (a b c) slices in the range of [a, b) with step c. c can be negative; in that case, a must be larger than b. For example, (10 0 -1) reverses the elements in the axis.
- (:~ n) broadcasts the axis with the size of n.
Composing multiple views is supported; a view can be created from an already-viewed tensor.
Result
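A minimal usage sketch (output omitted), assuming the subscripts are passed as separate arguments following the notation above; ax+b is the constructor used in the later examples:
CATEN-USER> (proceed (!view (ax+b `(10 10) 1 0) `(0 5) `(10 0 -1))) ; rows [0, 5), columns reversed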
[function] !permute
Returns a tensor that is a permutation of the original tensor. The new tensor has the same data as the original but with its dimensions permuted according to the specified order. order can be passed as a list or as separate arguments; that is, both (!permute x 0 1) and (!permute x (list 0 1)) are valid.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC104445
((0.0 10.0 20.0 30.0 40.0 50.0 60.0 70.0 80.0 90.0)
(1.0 11.0 21.0 31.0 41.0 51.0 61.0 71.0 81.0 91.0)
(2.0 12.0 22.0 32.0 42.0 52.0 62.0 72.0 82.0 92.0)
(3.0 13.0 23.0 33.0 43.0 53.0 63.0 73.0 83.0 93.0)
(4.0 14.0 24.0 34.0 44.0 54.0 64.0 74.0 84.0 94.0)
(5.0 15.0 25.0 35.0 45.0 55.0 65.0 75.0 85.0 95.0)
(6.0 16.0 26.0 36.0 46.0 56.0 66.0 76.0 86.0 96.0)
(7.0 17.0 27.0 37.0 47.0 57.0 67.0 77.0 87.0 97.0)
(8.0 18.0 28.0 38.0 48.0 58.0 68.0 78.0 88.0 98.0)
(9.0 19.0 29.0 39.0 49.0 59.0 69.0 79.0 89.0 99.0))
:op #<PROCEEDNODE {10097017C3}>
:requires-grad NIL
:variables (TID104262)
:tracker #<TRACKER :order={row(1 0)} :shape=(10 10) :contiguous-p=T>}
[function] !t
Transposes the last two axes of the tensor
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC104636
((0.0 10.0 20.0 30.0 40.0 50.0 60.0 70.0 80.0 90.0)
(1.0 11.0 21.0 31.0 41.0 51.0 61.0 71.0 81.0 91.0)
(2.0 12.0 22.0 32.0 42.0 52.0 62.0 72.0 82.0 92.0)
(3.0 13.0 23.0 33.0 43.0 53.0 63.0 73.0 83.0 93.0)
(4.0 14.0 24.0 34.0 44.0 54.0 64.0 74.0 84.0 94.0)
(5.0 15.0 25.0 35.0 45.0 55.0 65.0 75.0 85.0 95.0)
(6.0 16.0 26.0 36.0 46.0 56.0 66.0 76.0 86.0 96.0)
(7.0 17.0 27.0 37.0 47.0 57.0 67.0 77.0 87.0 97.0)
(8.0 18.0 28.0 38.0 48.0 58.0 68.0 78.0 88.0 98.0)
(9.0 19.0 29.0 39.0 49.0 59.0 69.0 79.0 89.0 99.0))
:op #<PROCEEDNODE {100153AEC3}>
:requires-grad NIL
:variables (TID104453)
:tracker #<TRACKER :order={row(1 0)} :shape=(10 10) :contiguous-p=T>}
[function] !transpose
Transposes dim0 and dim1.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC104827
((0.0 10.0 20.0 30.0 40.0 50.0 60.0 70.0 80.0 90.0)
(1.0 11.0 21.0 31.0 41.0 51.0 61.0 71.0 81.0 91.0)
(2.0 12.0 22.0 32.0 42.0 52.0 62.0 72.0 82.0 92.0)
(3.0 13.0 23.0 33.0 43.0 53.0 63.0 73.0 83.0 93.0)
(4.0 14.0 24.0 34.0 44.0 54.0 64.0 74.0 84.0 94.0)
(5.0 15.0 25.0 35.0 45.0 55.0 65.0 75.0 85.0 95.0)
(6.0 16.0 26.0 36.0 46.0 56.0 66.0 76.0 86.0 96.0)
(7.0 17.0 27.0 37.0 47.0 57.0 67.0 77.0 87.0 97.0)
(8.0 18.0 28.0 38.0 48.0 58.0 68.0 78.0 88.0 98.0)
(9.0 19.0 29.0 39.0 49.0 59.0 69.0 79.0 89.0 99.0))
:op #<PROCEEDNODE {100170AB43}>
:requires-grad NIL
:variables (TID104644)
:tracker #<TRACKER :order={row(1 0)} :shape=(10 10) :contiguous-p=T>}
[function] !contiguous
If the tensor is viewed, creates a copy of the tensor with contiguous memory; otherwise, returns the original tensor. If force is set to T, it always creates a copy.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC104926
((0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0)
(10.0 11.0 12.0 13.0 14.0 15.0 16.0 17.0 18.0 19.0)
(20.0 21.0 22.0 23.0 24.0 25.0 26.0 27.0 28.0 29.0)
(30.0 31.0 32.0 33.0 34.0 35.0 36.0 37.0 38.0 39.0)
(40.0 41.0 42.0 43.0 44.0 45.0 46.0 47.0 48.0 49.0)
(50.0 51.0 52.0 53.0 54.0 55.0 56.0 57.0 58.0 59.0)
(60.0 61.0 62.0 63.0 64.0 65.0 66.0 67.0 68.0 69.0)
(70.0 71.0 72.0 73.0 74.0 75.0 76.0 77.0 78.0 79.0)
(80.0 81.0 82.0 83.0 84.0 85.0 86.0 87.0 88.0 89.0)
(90.0 91.0 92.0 93.0 94.0 95.0 96.0 97.0 98.0 99.0))
:op #<PROCEEDNODE {1001736E43}>
:requires-grad NIL
:variables (STC104834)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !copy
Creates a copy of the tensor. In Caten, in-place operations are determined automatically, so in general you do not have to use this explicitly.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC105056
((0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0)
(10.0 11.0 12.0 13.0 14.0 15.0 16.0 17.0 18.0 19.0)
(20.0 21.0 22.0 23.0 24.0 25.0 26.0 27.0 28.0 29.0)
(30.0 31.0 32.0 33.0 34.0 35.0 36.0 37.0 38.0 39.0)
(40.0 41.0 42.0 43.0 44.0 45.0 46.0 47.0 48.0 49.0)
(50.0 51.0 52.0 53.0 54.0 55.0 56.0 57.0 58.0 59.0)
(60.0 61.0 62.0 63.0 64.0 65.0 66.0 67.0 68.0 69.0)
(70.0 71.0 72.0 73.0 74.0 75.0 76.0 77.0 78.0 79.0)
(80.0 81.0 82.0 83.0 84.0 85.0 86.0 87.0 88.0 89.0)
(90.0 91.0 92.0 93.0 94.0 95.0 96.0 97.0 98.0 99.0))
:op #<PROCEEDNODE {100176ED43}>
:requires-grad NIL
:variables (STC104938)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !reshape
Returns the same tensor but with its shape changed. shape can be passed as a list or as separate arguments; that is, both (!reshape x '(1 2 3)) and (!reshape x 1 2 3) are valid.
Shape is a list of integers, symbols, or tensors.
If x is a viewed tensor, this creates a copy of the tensor with contiguous memory (the JIT will later try to eliminate this copy).
Result
{Tensor{LISPBUFFER}[float32] :shape (5 20) :id STC105253
((0.0 1.0 2.0 3.0 4.0 ~ 15.0 16.0 17.0 18.0 19.0)
(20.0 21.0 22.0 23.0 24.0 ~ 35.0 36.0 37.0 38.0 39.0)
(40.0 41.0 42.0 43.0 44.0 ~ 55.0 56.0 57.0 58.0 59.0)
(60.0 61.0 62.0 63.0 64.0 ~ 75.0 76.0 77.0 78.0 79.0)
(80.0 81.0 82.0 83.0 84.0 ~ 95.0 96.0 97.0 98.0 99.0))
:op #<PROCEEDNODE {10018003E3}>
:requires-grad NIL
:variables (TID105064)
:tracker #<TRACKER :order={row(0 1)} :shape=(5 20) :contiguous-p=T>}
[function] !flatten
Flattens the input tensor into a 2D matrix. If the input tensor has shape (d_0, d_1, ..., d_n), the output has shape (d_0 × d_1 × ... × d_(axis-1), d_axis × d_(axis+1) × ... × d_n). For example, a (2 3 4 5) tensor flattened at axis 2 has shape (6 20).
Result
{Tensor{LISPBUFFER}[float32] :shape (3 27) :id STC105558
((0.0 0.0 0.0 0.0 0.0 ~ 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 ~ 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 ~ 0.0 0.0 0.0 0.0 0.0))
:op #<PROCEEDNODE {100189F3B3}>
:requires-grad NIL
:variables (TID105294)
:tracker #<TRACKER :order={row(0 1)} :shape=(3 27) :contiguous-p=T>}
[function] !uprank
Returns a tensor where 1 is inserted at the beginning of the shape of x n times.
Result
{Tensor{LISPBUFFER}[float32] :shape (1 1 10 10) :id STC105821
((((0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0)
(10.0 11.0 12.0 13.0 14.0 15.0 16.0 17.0 18.0 19.0)
(20.0 21.0 22.0 23.0 24.0 25.0 26.0 27.0 28.0 29.0)
(30.0 31.0 32.0 33.0 34.0 35.0 36.0 37.0 38.0 39.0)
(40.0 41.0 42.0 43.0 44.0 45.0 46.0 47.0 48.0 49.0)
(50.0 51.0 52.0 53.0 54.0 55.0 56.0 57.0 58.0 59.0)
(60.0 61.0 62.0 63.0 64.0 65.0 66.0 67.0 68.0 69.0)
(70.0 71.0 72.0 73.0 74.0 75.0 76.0 77.0 78.0 79.0)
(80.0 81.0 82.0 83.0 84.0 85.0 86.0 87.0 88.0 89.0)
(90.0 91.0 92.0 93.0 94.0 95.0 96.0 97.0 98.0 99.0))))
:op #<PROCEEDNODE {1001932503}>
:requires-grad NIL
:variables (TID105566)
:tracker #<TRACKER :order={row(0 1 2 3)} :shape=(1 1 10 10) :contiguous-p=T>}
[function] !repeat
Returns a tensor with the shape of x broadcasted by repeats.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC108308
<<Error during rendering: Invalid index 10 for (SIMPLE-ARRAY SINGLE-FLOAT (10)), should be a non...>>
:op #<PROCEEDNODE {1002DAE553}>
:requires-grad NIL
:variables (VID106191)
:tracker #<TRACKER :order={row(0 1)} :shape=(10(0 10 1) 10(0 10 1)) :contiguous-p=NIL>}
[function] !expand
Returns a tensor expanded to the specified shape. Expand can also increase the number of dimensions of a tensor.
Result
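A minimal usage sketch (output omitted), assuming the target shape is passed as a list; ax+b is the constructor used in the other examples:
CATEN-USER> (proceed (!expand (ax+b `(1 10) 1 0) `(10 10)))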
[function] !move
Moves the elements of b into a, returning a. If reduce is T, it will reduce the result. (Broadcast)
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC109114
((2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0))
:op #<PROCEEDNODE {1003084BE3}>
:requires-grad NIL
:variables (STC108961)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !assign
Equivalent to (!move a b :reduce t). Useful when you want to assign the value of lazy ops to a pre-allocated buffer, such as a KV-Cache.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC109282
((2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0)
(2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0))
:op #<PROCEEDNODE {10030D3E83}>
:requires-grad NIL
:variables (STC109129)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !add
Adds a and b. If reduce is T, it will reduce the result. (Broadcast)
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC109451
((0.0 2.0 4.0 6.0 8.0 10.0 12.0 14.0 16.0 18.0)
(20.0 22.0 24.0 26.0 28.0 30.0 32.0 34.0 36.0 38.0)
(40.0 42.0 44.0 46.0 48.0 50.0 52.0 54.0 56.0 58.0)
(60.0 62.0 64.0 66.0 68.0 70.0 72.0 74.0 76.0 78.0)
(80.0 82.0 84.0 86.0 88.0 90.0 92.0 94.0 96.0 98.0)
(100.0 102.0 104.0 106.0 108.0 110.0 112.0 114.0 116.0 118.0)
(120.0 122.0 124.0 126.0 128.0 130.0 132.0 134.0 136.0 138.0)
(140.0 142.0 144.0 146.0 148.0 150.0 152.0 154.0 156.0 158.0)
(160.0 162.0 164.0 166.0 168.0 170.0 172.0 174.0 176.0 178.0)
(180.0 182.0 184.0 186.0 188.0 190.0 192.0 194.0 196.0 198.0))
:op #<PROCEEDNODE {1003121E03}>
:requires-grad NIL
:variables (STC109297)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !sub
Subtracts b from a. If reduce is T, it will reduce the result. (Broadcast)
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC109626
((0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0)
(0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0))
:op #<PROCEEDNODE {100316B753}>
:requires-grad NIL
:variables (STC109467)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !mul
Multiplies a and b. If reduce is T, it will reduce the result. (Broadcast)
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC109801
((0.0 1.0 4.0 9.0 16.0 25.0 36.0 49.0 64.0 81.0)
(100.0 121.0 144.0 169.0 196.0 225.0 256.0 289.0 324.0 361.0)
(400.0 441.0 484.0 529.0 576.0 625.0 676.0 729.0 784.0 841.0)
(900.0 961.0 1024.0 1089.0 1156.0 1225.0 1296.0 1369.0 1444.0 1521.0)
(1600.0 1681.0 1764.0 1849.0 1936.0 2025.0 2116.0 2209.0 2304.0 2401.0)
(2500.0 2601.0 2704.0 2809.0 2916.0 3025.0 3136.0 3249.0 3364.0 3481.0)
(3600.0 3721.0 3844.0 3969.0 4096.0 4225.0 4356.0 4489.0 4624.0 4761.0)
(4900.0 5041.0 5184.0 5329.0 5476.0 5625.0 5776.0 5929.0 6084.0 6241.0)
(6400.0 6561.0 6724.0 6889.0 7056.0 7225.0 7396.0 7569.0 7744.0 7921.0)
(8100.0 8281.0 8464.0 8649.0 8836.0 9025.0 9216.0 9409.0 9604.0 9801.0))
:op #<PROCEEDNODE {10031E37D3}>
:requires-grad NIL
:variables (STC109641)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !div
Divides a by b. If reduce is T, it will reduce the result. (Broadcast)
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC109991
((1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(0.99999994 1.0 1.0 1.0 1.0 1.0 0.99999994 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 0.99999994 1.0 1.0 1.0 1.0 1.0)
(0.99999994 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 0.99999994 0.99999994 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 0.99999994 1.0 1.0 0.99999994 1.0 1.0 1.0))
:op #<PROCEEDNODE {10032B3363}>
:requires-grad NIL
:variables (STC109817)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !mod
Computes the remainder of the division of a by b. If reduce is T, it will reduce the result. (Broadcast)
CATEN-USER> (proceed (!mod (ax+b `(10 10) 1 1 :dtype :int32) (ax+b `(10 10) 1 1 :dtype :int32)))
Result
{Tensor{LISPBUFFER}[int32] :shape (10 10) :id STC110164
((0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0)
(0 0 0 0 0 0 0 0 0 0))
:op #<PROCEEDNODE {1004C46483}>
:requires-grad NIL
:variables (STC110006)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !idiv
Assuming both a and b are integers, divides a by b and returns the integer part. If reduce is T, it will reduce the result. (Broadcast)
CATEN-USER> (proceed (!idiv (ax+b `(10 10) 3 1 :dtype :uint32) (ax+b `(10 10) 0 2 :dtype :uint32)))
Result
{Tensor{LISPBUFFER}[uint32] :shape (10 10) :id STC110328
((0 2 3 5 6 8 9 11 12 14)
(15 17 18 20 21 23 24 26 27 29)
(30 32 33 35 36 38 39 41 42 44)
(45 47 48 50 51 53 54 56 57 59)
(60 62 63 65 66 68 69 71 72 74)
(75 77 78 80 81 83 84 86 87 89)
(90 92 93 95 96 98 99 101 102 104)
(105 107 108 110 111 113 114 116 117 119)
(120 122 123 125 126 128 129 131 132 134)
(135 137 138 140 141 143 144 146 147 149))
:op #<PROCEEDNODE {1004CCCC63}>
:requires-grad NIL
:variables (STC110179)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !maximum
Returns the maximum of a and b. If reduce is T, it will reduce the result. (Broadcast)
Result
{Tensor{LISPBUFFER}[float32] :shape (3 3) :id STC134985
((0.49818084 0.6545696 1.1272388)
(0.71888524 0.86670333 0.97324157)
(0.71676797 0.63761955 0.62686044))
:op #<PROCEEDNODE {100754EF03}>
:requires-grad NIL
:variables (STC134264)
:tracker #<TRACKER :order={row(0 1)} :shape=(3 3) :contiguous-p=T>}
[function] !minimum
Returns the minimum of a and b. If reduce is T, it will reduce the result. (Broadcast)
Result
{Tensor{LISPBUFFER}[float32] :shape (3 3) :id STC135737
((0.037297387 -2.037279 -0.49524468)
(0.083971575 0.3244131 0.39060554)
(-0.52718425 -1.4125385 0.8568455))
:op #<PROCEEDNODE {10081D7FA3}>
:requires-grad NIL
:variables (STC135001)
:tracker #<TRACKER :order={row(0 1)} :shape=(3 3) :contiguous-p=T>}
[function] !gcd
Returns the greatest common divisor of a and b. If reduce is T, it will reduce the result. (Broadcast)
a and b are expected to be integer scalars (this op is dedicated to view computation).
Result
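A minimal usage sketch (output omitted), building integer scalars with ax+b as in the !mod and !idiv examples:
CATEN-USER> (proceed (!gcd (ax+b `(1) 0 12 :dtype :int32) (ax+b `(1) 0 8 :dtype :int32)))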
[function] !lcm
Returns the least common multiple of a and b.
a and b are expected to be integer scalars.
Result
[function] !exp
Computes (exp x).
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC135942
((1.0 1.0100502 1.0202013 1.0304545 1.0408108 1.0512711 1.0618366 1.0725082 1.0832871 1.0941743)
(1.105171 1.116278 1.1274968 1.1388284 1.1502738 1.1618342 1.1735109 1.1853049 1.1972173 1.2092496)
(1.2214028 1.2336781 1.2460767 1.2586 1.2712492 1.2840254 1.2969301 1.3099644 1.3231298 1.3364275)
(1.3498588 1.3634251 1.3771278 1.3909681 1.4049476 1.4190675 1.4333293 1.4477346 1.4622846 1.4769808)
(1.4918246 1.5068177 1.5219616 1.5372574 1.5527072 1.5683122 1.5840739 1.5999942 1.6160744 1.6323161)
(1.6487212 1.6652912 1.6820276 1.6989322 1.7160068 1.733253 1.7506725 1.768267 1.7860384 1.8039883)
(1.8221188 1.8404315 1.8589281 1.8776106 1.8964808 1.9155407 1.9347923 1.9542372 1.9738777 1.9937155)
(2.0137527 2.033991 2.054433 2.0750806 2.0959353 2.1169999 2.138276 2.1597662 2.181472 2.2033963)
(2.2255409 2.2479079 2.2704997 2.2933185 2.316367 2.3396468 2.3631606 2.386911 2.4108996 2.4351294)
(2.4596028 2.4843223 2.5092902 2.5345092 2.5599813 2.5857096 2.6116965 2.6379445 2.664456 2.691234))
:op #<PROCEEDNODE {10084051F3}>
:requires-grad NIL
:variables (STC135828)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !log
Computes (log x).
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC136053
((-6.9077554 -4.50986 -3.8632329 -3.4737678 -3.1941833 -2.97593 -2.7968814 -2.6450753 -2.5133061 -2.396896)
(-2.2926347 -2.198225 -2.1119647 -2.032558 -1.9589953 -1.8904755 -1.8263509 -1.7660917 -1.7092583 -1.6554819)
(-1.6044505 -1.555897 -1.5095925 -1.4653376 -1.4229584 -1.3823024 -1.343235 -1.3056365 -1.2694006 -1.2344321)
(-1.2006451 -1.1679624 -1.1363143 -1.105637 -1.0758728 -1.046969 -1.0188774 -0.99155325 -0.964956 -0.93904775)
(-0.9137939 -0.8891622 -0.8651225 -0.84164727 -0.81871045 -0.796288 -0.7743574 -0.7528972 -0.73188806 -0.7113112)
(-0.6911492 -0.6713857 -0.6520053 -0.63299334 -0.6143361 -0.59602046 -0.5780344 -0.5603661 -0.5430046 -0.52593935)
(-0.5091604 -0.49265832 -0.47642422 -0.46044946 -0.44472587 -0.42924568 -0.41400146 -0.39898625 -0.38419297 -0.36961547)
(-0.35524744 -0.3410829 -0.3271162 -0.3133419 -0.29975465 -0.28634965 -0.27312195 -0.26006696 -0.24718018 -0.23445737)
(-0.22189441 -0.20948724 -0.19723219 -0.18512551 -0.17316367 -0.1613432 -0.14966084 -0.1381133 -0.12669767 -0.11541088)
(-0.10425006 -0.093212426 -0.082295306 -0.07149601 -0.060812157 -0.050241243 -0.039780907 -0.029428856 -0.019182874 -0.009040808))
:op #<PROCEEDNODE {100846D7C3}>
:requires-grad NIL
:variables (STC135950)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !sqrt
Computes (sqrt x).
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC136333
((0.0 0.1 0.14142135 0.17320508 0.2 0.2236068 0.24494897 0.26457512 0.2828427 0.29999998)
(0.31622776 0.33166248 0.34641016 0.3605551 0.37416574 0.38729832 0.4 0.41231057 0.42426407 0.4358899)
(0.4472136 0.45825756 0.4690416 0.47958314 0.48989794 0.5 0.50990194 0.51961523 0.52915025 0.53851646)
(0.5477225 0.55677646 0.5656854 0.5744563 0.5830952 0.591608 0.59999996 0.60827625 0.61644137 0.6244998)
(0.6324555 0.64031243 0.64807403 0.65574384 0.66332495 0.67082036 0.67823297 0.6855655 0.6928203 0.7)
(0.70710677 0.71414286 0.7211102 0.72801095 0.7348469 0.7416199 0.7483315 0.7549834 0.7615773 0.76811457)
(0.77459663 0.781025 0.7874008 0.7937254 0.8 0.8062258 0.8124038 0.81853527 0.82462114 0.83066237)
(0.83666 0.84261495 0.84852815 0.85440034 0.86023253 0.8660254 0.8717798 0.8774964 0.8831761 0.8888194)
(0.8944272 0.9 0.9055385 0.91104335 0.9165151 0.92195445 0.92736185 0.9327379 0.9380832 0.9433981)
(0.94868326 0.9539392 0.9591663 0.96436507 0.96953595 0.9746794 0.9797959 0.98488575 0.98994946 0.9949874))
:op #<PROCEEDNODE {1008519B53}>
:requires-grad NIL
:variables (STC136061)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !neg
!neg computes the negative value of the tensor.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC136436
((-0.0 -0.01 -0.02 -0.03 -0.04 -0.049999997 -0.06 -0.07 -0.08 -0.089999996)
(-0.099999994 -0.11 -0.12 -0.13 -0.14 -0.14999999 -0.16 -0.17 -0.17999999 -0.19)
(-0.19999999 -0.21 -0.22 -0.22999999 -0.24 -0.25 -0.26 -0.26999998 -0.28 -0.29)
(-0.29999998 -0.31 -0.32 -0.32999998 -0.34 -0.35 -0.35999998 -0.37 -0.38 -0.39)
(-0.39999998 -0.41 -0.42 -0.42999998 -0.44 -0.45 -0.45999998 -0.47 -0.48 -0.48999998)
(-0.5 -0.51 -0.52 -0.53 -0.53999996 -0.55 -0.56 -0.57 -0.58 -0.59)
(-0.59999996 -0.61 -0.62 -0.63 -0.64 -0.65 -0.65999997 -0.66999996 -0.68 -0.69)
(-0.7 -0.71 -0.71999997 -0.72999996 -0.74 -0.75 -0.76 -0.77 -0.78 -0.78999996)
(-0.79999995 -0.81 -0.82 -0.83 -0.84 -0.84999996 -0.85999995 -0.87 -0.88 -0.89)
(-0.9 -0.90999997 -0.91999996 -0.93 -0.94 -0.95 -0.96 -0.96999997 -0.97999996 -0.98999995))
:op #<PROCEEDNODE {10085519B3}>
:requires-grad NIL
:variables (STC136341)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !recip
!recip computes the reciprocal of the tensor.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC136547
((10.0 9.090909 8.333333 7.692308 7.142857 6.6666665 6.25 5.882353 5.5555553 5.263158)
(5.0000005 4.7619047 4.5454545 4.3478265 4.1666665 4.0 3.846154 3.7037036 3.5714285 3.448276)
(3.3333335 3.2258065 3.125 3.0303032 2.9411764 2.857143 2.777778 2.702703 2.631579 2.5641026)
(2.5000002 2.4390244 2.3809524 2.3255816 2.2727273 2.2222223 2.1739132 2.1276596 2.0833335 2.0408163)
(2.0000002 1.9607843 1.923077 1.8867925 1.8518518 1.8181818 1.7857143 1.754386 1.724138 1.6949153)
(1.6666666 1.6393442 1.6129032 1.5873016 1.5625 1.5384614 1.5151515 1.4925373 1.4705882 1.4492754)
(1.4285715 1.4084506 1.3888888 1.369863 1.3513514 1.3333334 1.3157895 1.2987013 1.2820512 1.2658228)
(1.25 1.2345679 1.2195122 1.2048193 1.1904762 1.1764705 1.1627907 1.1494253 1.1363636 1.1235955)
(1.1111112 1.098901 1.0869565 1.0752689 1.0638298 1.0526316 1.0416667 1.0309278 1.0204082 1.010101)
(1.0 0.990099 0.98039216 0.97087383 0.9615385 0.952381 0.9433963 0.9345795 0.92592597 0.91743124))
:op #<PROCEEDNODE {1008597313}>
:requires-grad NIL
:variables (STC136444)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !square
Computes x*x.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC136655
((0.010000001 0.0121 0.014400001 0.0169 0.0196 0.0225 0.0256 0.028900001 0.0324 0.0361)
(0.039999995 0.044100005 0.0484 0.052899994 0.057600003 0.0625 0.0676 0.072900005 0.0784 0.08409999)
(0.08999999 0.0961 0.1024 0.10889999 0.115600005 0.122499995 0.12959999 0.13689998 0.1444 0.15209998)
(0.15999998 0.1681 0.17639999 0.18489999 0.1936 0.20249999 0.21159998 0.2209 0.2304 0.24009998)
(0.24999997 0.26009998 0.2704 0.28089997 0.29160002 0.3025 0.3136 0.3249 0.33639997 0.34809998)
(0.36 0.37210003 0.3844 0.3969 0.4096 0.42250004 0.43560004 0.4489 0.46240002 0.4761)
(0.48999998 0.5041 0.5184 0.53290004 0.54760003 0.5625 0.5776 0.5929 0.60840005 0.6241)
(0.64000005 0.6561 0.6724 0.6889 0.7056001 0.7225 0.7396 0.7569 0.7744 0.79209995)
(0.80999994 0.8281 0.8464 0.8649 0.8836 0.9025 0.9216 0.9409 0.96040004 0.98010004)
(1.0 1.0201 1.0403999 1.0609 1.0816 1.1024998 1.1235999 1.1448998 1.1663998 1.1880999))
:op #<PROCEEDNODE {10085DC333}>
:requires-grad NIL
:variables (STC136555)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !rsqrt
Computes the reciprocal of (sqrt x).
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC136949
((3.1622777 3.0151134 2.8867514 2.7735012 2.6726124 2.5819888 2.5 2.4253561 2.3570225 2.2941573)
(2.236068 2.1821787 2.1320071 2.0851443 2.0412414 2.0 1.9611614 1.924501 1.8898224 1.8569535)
(1.825742 1.7960529 1.7677671 1.7407765 1.7149858 1.6903085 1.6666667 1.6439899 1.6222143 1.6012815)
(1.5811388 1.5617375 1.5430336 1.5249858 1.5075567 1.490712 1.4744196 1.4586499 1.4433757 1.4285715)
(1.4142135 1.40028 1.3867506 1.3736057 1.3608276 1.3483996 1.3363062 1.3245324 1.3130643 1.3018892)
(1.2909944 1.2803688 1.2700013 1.2598816 1.25 1.2403474 1.2309148 1.2216945 1.2126781 1.2038586)
(1.1952286 1.1867816 1.1785113 1.1704115 1.1624764 1.1547005 1.1470786 1.1396058 1.132277 1.1250879)
(1.118034 1.1111112 1.1043153 1.0976427 1.0910894 1.0846523 1.0783278 1.0721126 1.0660036 1.0599979)
(1.0540926 1.0482849 1.0425721 1.0369517 1.0314213 1.0259783 1.0206207 1.0153462 1.0101525 1.0050378)
(1.0 0.99503714 0.99014753 0.98532933 0.9805807 0.9759001 0.97128594 0.96673656 0.9622505 0.95782626))
:op #<PROCEEDNODE {1008699753}>
:requires-grad NIL
:variables (STC136664)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !signum
Returns the sign of the tensor. If the tensor is positive, it returns 1. If the tensor is negative, it returns -1. If the tensor is zero, it returns 0. Note that this function is not differentiable.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC140110
((-1.0 -1.0 -1.0 -1.0 -1.0 -1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0)
(1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0))
:op #<PROCEEDNODE {1008F01D13}>
:requires-grad NIL
:variables (STC137549)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !gid
Finds the rank-th index components of the tensor.
For example, (!gid x 1) for a (3 3) tensor is `(0 1 2). As a hint, by combining !gid with !where, you can implement pseudo random access of the tensor. For example:
Result
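A minimal sketch (output omitted) that only materializes the index components along axis 1 of a (3 3) tensor; the !where-based selection mentioned above is left to the reader:
CATEN-USER> (proceed (!gid (ax+b `(3 3) 1 0) 1))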
[function] !normalize-axis
Creates a tensor graph which normalizes the axis. If the axis is negative, it will be normalized to the corresponding positive axis.
Result
[function] !abs
Returns the absolute value of the tensor.
Result
{Tensor{LISPBUFFER}[float32] :shape (10 10) :id STC143961
((0.1 0.08 0.060000002 0.040000003 0.020000003 7.450581e-9 0.019999996 0.04 0.059999995 0.07999999)
(0.09999999 0.12 0.13999999 0.16 0.18 0.19999999 0.22 0.24000001 0.26 0.28)
(0.29999998 0.32 0.34 0.35999998 0.38 0.4 0.42 0.43999997 0.46 0.48)
(0.49999997 0.52 0.53999996 0.55999994 0.58 0.59999996 0.61999995 0.64 0.65999997 0.67999995)
(0.6999999 0.71999997 0.73999995 0.75999993 0.78 0.79999995 0.81999993 0.84 0.85999995 0.87999994)
(0.9 0.91999996 0.93999994 0.9599999 0.9799999 1.0 1.02 1.04 1.06 1.0799999)
(1.0999999 1.12 1.14 1.16 1.18 1.1999999 1.2199999 1.2399999 1.26 1.28)
(1.3 1.3199999 1.3399999 1.3599999 1.38 1.4 1.42 1.4399999 1.4599999 1.4799999)
(1.4999999 1.52 1.54 1.56 1.5799999 1.5999999 1.6199999 1.64 1.66 1.68)
(1.6999999 1.7199999 1.7399999 1.76 1.78 1.8 1.8199999 1.8399999 1.8599999 1.8799999))
:op #<PROCEEDNODE {1002F68B33}>
:requires-grad NIL
:variables (STC141392)
:tracker #<TRACKER :order={row(0 1)} :shape=(10 10) :contiguous-p=T>}
[function] !>
Compares x and y element-wise and returns the result as a boolean tensor.
Result
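A minimal usage sketch (output omitted); the same pattern applies to the other comparison ops below:
CATEN-USER> (proceed (!> (ax+b `(4) 1 0) (ax+b `(4) 0 2)))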
[function] !<
Compares x and y element-wise and returns the result as a boolean tensor.
Result
[function] !>=
Compares x and y element-wise and returns the result as a boolean tensor.
Result
[function] !<=
Compares x and y element-wise and returns the result as a boolean tensor.
Result
[function] !eq
Compares x and y element-wise and returns the result as a boolean tensor.
Result
[function] !neq
Compares x and y element-wise and returns the result as a boolean tensor.
Result
[function] !and
Computes the logical/bitwise and of the tensor.
Result
[function] !xor
Computes the logical/bitwise xor of the tensor.
Result
[function] !or
Computes the logical/bitwise or of the tensor.
Result
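A minimal usage sketch for the bitwise case on integer tensors (output omitted); the same pattern applies to !and and !xor:
CATEN-USER> (proceed (!or (ax+b `(4) 1 0 :dtype :int32) (ax+b `(4) 0 3 :dtype :int32)))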
[function] !where
Selects elements from x or y based on the condition. If the condition is true, it selects the element from x, otherwise from y.
Result
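A minimal usage sketch (output omitted) computing an element-wise maximum, assuming the condition tensor is passed first as the description suggests:
CATEN-USER> (let ((a (ax+b `(4) 1 0))
                  (b (ax+b `(4) -1 3)))
              (proceed (!where (!> a b) a b)))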
[function] !const
Creates a constant tensor with the specified value from the tensor.
Result
[generic] !index-components
Returns the index components of the tensor. object can be either a tensor or a list.
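A minimal usage sketch (output omitted), passing a tensor as the object:
CATEN-USER> (proceed (!index-components (ax+b `(3 3) 0 0)))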