Updated Mar 18, 2022 - C++
# mlir
Here are 33 public repositories matching this topic...
ncnn is a high-performance neural network inference framework optimized for the mobile platform
android, ios, caffe, deep-learning, neural-network, mxnet, tensorflow, vulkan, keras, inference, pytorch, artificial-intelligence, simd, riscv, darknet, arm-neon, high-performance, ncnn, onnx, mlir
silvasean commented Mar 15, 2022
In the following IR, %optional could be replaced by %none, because the op torch.aten.arange.start implements the AllowsTypeRefinement trait. We could add a canonicalization pattern that, for every use by an op that allows type refinement, replaces the value with its operand (i.e., the more refined value).
func @aten.arange.start$int64_dtype(%start: !torch.int, %end: !torch.int) -> !torch.vtensor {
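The use in question typically comes from a torch.derefine of %none. A minimal before/after sketch of the proposed canonicalization (the value names and the elided trailing operands, written "...", are assumptions for illustration, not the exact IR from the test case):

```mlir
// Before (assumed shape of the IR): %none is widened to
// !torch.optional<int> before being passed as the dtype operand.
%none = torch.constant.none
%optional = torch.derefine %none : !torch.none to !torch.optional<int>
%t = torch.aten.arange.start %start, %end, %optional, ...

// After: because torch.aten.arange.start allows type refinement,
// the use of %optional can be replaced with the more refined %none.
%none = torch.constant.none
%t = torch.aten.arange.start %start, %end, %none, ...
```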
lgeiger commented May 20, 2021
hanchenye commented Oct 17, 2021
In test/create-cores/test_dma1.mlir, -aie-lower-memcpy converts
AIE.memcpy @token0(1, 2) (%t11 : <%buf0, 0, 256>, %t22 : <%buf1, 0, 256>) : (memref<256xi32>, memref<256xi32>)
AIE.memcpy @token1(1, 2) (%t11 : <%buf0, 0, 256>, %t33 : <%buf2, 0, 256>) : (memref<256xi32>, memref<256xi32>)
to (showing only the %t11 side):
%2 = AIE.mem(%0) {
%15 = AIE.dmaStart(MM2S0, ^bb1
C++ compiler for heterogeneous quantum-classical computing built on Clang and XACC
Updated Jan 5, 2022 - C++
A Halide-to-MLIR compiler.
Updated Aug 30, 2021 - C++
A tree-walker, virtual-machine, and JIT interpreter for the Lox language
c, interpreter, bytecode, compiler, cpp, virtual-machine, llvm, jit, lox-language, lox, tree-walker, lox-interpreter, mlir
Updated Feb 20, 2022 - C++
Guide for converting a TensorFlow model to ncnn
Updated Dec 23, 2021 - Python
@mikeurbach and I think it would be useful to have an import/export format for scheduling problem instances, e.g. for writing test cases and benchmarking independently of the concrete (and potentially proprietary!) synthesis flow.
At its core, the problem model in CIRCT consists of a bunch of maps, indexed by MLIR operations and attributes. To that end, it seems appropriate to define a new dialect.
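As an illustration of what such a format could capture, an instance might serialize the per-operation maps roughly like this (the dialect name, op names, and attribute names below are purely hypothetical, not an existing CIRCT dialect):

```mlir
// Hypothetical syntax for a scheduling-problem instance:
// each operation names its operator type; per-operator properties
// such as latency live in attribute maps on the operator types.
sched.instance @example { cycle_time = 10 } {
  sched.operator_type @add { latency = 1 }
  sched.operator_type @mul { latency = 3 }
  %0 = sched.operation() { opr = @add }
  %1 = sched.operation() { opr = @mul }
  %2 = sched.operation(%0, %1) { opr = @add }
}
```

Dependences between operations fall out of ordinary SSA use-def edges, so only the attribute maps need explicit encoding.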