cast:
- change dimension of userdata buffer for 2d operations
  buffer=userdata("u8",4096)
  -- get an 8x8 sprite out of it
  sprite=cast(buffer,"u8",8,8,1040)
note: not sure how the GC can track these chunks - ok to have them as weak references only.
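To pin down what a reinterpreting cast would mean, here is the index math sketched in Python, with a flat list standing in for the u8 buffer (names and the contiguous-view assumption are illustrative, not the Picotron API):

```python
# element (x, y) of an 8x8 view at offset 1040 maps straight back
# into the flat buffer, row-major, with no copy
buffer = list(range(4096))  # stands in for userdata("u8", 4096)

def view_get(buf, width, offset, x, y):
    # row-major addressing into the original storage
    return buf[offset + y * width + x]

# first element of the view is the element at the offset itself
assert view_get(buffer, 8, 1040, 0, 0) == 1040
# element (3, 2) is offset + 2*8 + 3
assert view_get(buffer, 8, 1040, 3, 2) == 1040 + 19
```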
-
matmul with stride
align with other userdata operations - all ops to allow for source index
use case: array of vectors referenced by index
  big_array=userdata('f64',4,500)
  -- ...
  indices = userdata('i32', 4)
  indices:set(0, 3,89,0,75)
  out=big_array:matmul(m, indices)
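The indexed form is essentially a gather followed by the usual matrix multiply. A pure-Python sketch of that semantics, with plain lists in place of userdata (names are illustrative):

```python
def matmul_indexed(rows, m, indices):
    # gather the selected row vectors, then transform each by m
    # rows: list of vectors; m: square matrix given as a list of rows
    out = []
    for i in indices:
        v = rows[i]
        out.append([sum(v[k] * m[k][j] for k in range(len(v)))
                    for j in range(len(m[0]))])
    return out

rows = [[1, 0], [0, 1], [2, 3], [5, 7]]
m = [[1, 0], [0, 1]]  # identity: output equals the gathered rows
assert matmul_indexed(rows, m, [3, 0, 3]) == [[5, 7], [1, 0], [5, 7]]
```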
-
min/max
note: can be done with sort (but overkill)
-
dot:
cannot operate on partial userdata
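For the min/max request above, the element-wise binop form would behave like this pure-Python sketch (lists in place of userdata):

```python
def elementwise_min(a, b):
    # per-component minimum of two equal-length vectors
    return [min(x, y) for x, y in zip(a, b)]

def elementwise_max(a, b):
    # per-component maximum
    return [max(x, y) for x, y in zip(a, b)]

a = [3, 1, 4, 1, 5]
b = [2, 7, 1, 8, 2]
assert elementwise_min(a, b) == [2, 1, 1, 1, 2]
assert elementwise_max(a, b) == [3, 7, 4, 8, 5]
```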
A few others:
- Index by userdata. Perhaps something like:
  indices=userdata("i32",480,270)
  indices:set(...)
  values=lut:get(indices) -- values is a userdata with the shape of indices but values from lut
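The proposed lut:get(indices) is a gather: each element of indices selects an element of lut, and the output keeps the shape of indices. In Python terms (lists standing in for userdata):

```python
lut = [10, 20, 30, 40]
# a 2x3 grid of indices; each entry picks an element of lut
indices = [[0, 2, 1],
           [3, 3, 0]]
# gather: same shape as indices, values pulled from lut
values = [[lut[i] for i in row] for row in indices]
assert values == [[10, 30, 20],
                  [40, 40, 10]]
```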
- Fast small 2D convolutions - so many interesting things need a Laplacian, but you have to hand-roll the setup right now. (Would also be interesting in combination with a 3D WxHxC userdata type)
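For reference, here is the kind of hand-rolled setup a built-in would replace: a small 2D convolution with the 4-neighbour Laplacian kernel, in Python (zero padding at the border is my assumption, not specified above):

```python
KERNEL = [[0,  1, 0],
          [1, -4, 1],
          [0,  1, 0]]  # discrete Laplacian

def convolve2d(img, kernel):
    # 3x3 convolution over a 2D grid, zero-padded at the border
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    sy, sx = y + ky - 1, x + kx - 1
                    if 0 <= sy < h and 0 <= sx < w:  # zero padding
                        acc += img[sy][sx] * kernel[ky][kx]
            out[y][x] = acc
    return out

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
# Laplacian of a single bright pixel: -4 at the pixel, +1 at its neighbours
assert convolve2d(img, KERNEL) == [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
```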
- :convert() to/from f64
- Fast shift operations. Either this or f64 conversion could fill somewhat the same role: fast math with fast conversion to blit-able buffers
- abs, sqrt, trig - in about that priority order. abs is emulatable for the integer types with arithmetic shifts plus existing ops, I believe. For sqrt, axis-wise magnitude or distance could also work.
- For min/max, all of the following are interesting: element-wise binop, axis-wise unary op, and unary op producing a scalar. But the first is mostly what I've been looking for. One possible alternative for the binary form would be comparison binops (:gt() etc.) along with something like numpy.where(). And for the unary forms, returning both values and their indices would be useful. The binop version would be useful for clamping indices for sampling/interpolation/LUT-type use cases.
- Support three-operand form of userdata ops where u0 != u2 (unclear whether the current state is a bug)
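The :gt() plus numpy.where() alternative mentioned above composes like this Python sketch, where where(cond, a, b) picks from a where cond holds and from b elsewhere:

```python
def gt(a, b):
    # comparison binop: 1 where a > b, else 0
    return [1 if x > y else 0 for x, y in zip(a, b)]

def where(cond, a, b):
    # select per element from a or b based on cond
    return [x if c else y for c, x, y in zip(cond, a, b)]

a = [3, 1, 4]
b = [2, 7, 4]
mask = gt(a, b)
assert mask == [1, 0, 0]
# element-wise max built from gt + where
assert where(mask, a, b) == [3, 7, 4]

# clamping an index array to [0, 2] the same way
idx = [5, -1, 2]
hi = where(gt(idx, [2, 2, 2]), [2, 2, 2], idx)  # upper clamp
lo = where(gt([0, 0, 0], hi), [0, 0, 0], hi)    # lower clamp
assert lo == [2, 0, 2]
```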
- Allow matmul2d and matmul3d to treat the LHS as a stack of vectors, where extra elements are ignored but all vectors are transformed. (In contrast to the current behavior of only using the upper-left block.)
- matmul for integer types
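The stack-of-vectors behaviour requested for matmul2d/matmul3d would look like this Python sketch: every row is treated as a vector and transformed; components beyond the matrix size pass through untouched (that pass-through reading of "ignored" is my assumption):

```python
def matmul_rows(rows, m):
    # treat every row of `rows` as a vector; transform its first
    # len(m) components by the square matrix m and keep the rest
    n = len(m)
    out = []
    for v in rows:
        head = [sum(v[k] * m[k][j] for k in range(n)) for j in range(n)]
        out.append(head + list(v[n:]))  # extra components pass through
    return out

swap = [[0, 1],
        [1, 0]]  # 2x2 matrix that swaps x and y
rows = [[1, 2, 9],
        [3, 4, 8],
        [5, 6, 7]]  # three vec2s, each with one extra component
assert matmul_rows(rows, swap) == [[2, 1, 9], [4, 3, 8], [6, 5, 7]]
```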