
Propagate type annotations in the exact matcher #22

Status: Open. Wants to merge 133 commits into base branch `3la-pldi-push-main`.

Commits (133):
- `0191bd4` [ add ] barebone hook for dense ops (AD1024, Oct 14, 2020)
- `e71bc60` [ add ] placeholder for vta codegen test (AD1024, Oct 14, 2020)
- `37634dd` Add line about config.cmake to readme (slyubomirsky, Oct 14, 2020)
- `7aa916c` Add vta_matmul to __init__.py, which fixes the automatic partitioning… (slyubomirsky, Oct 15, 2020)
- `7a287af` Add support for the JSON dumping facility (slyubomirsky, Oct 23, 2020)
- `1f04c0f` Add in an example of the dumping mode and some description of the cha… (slyubomirsky, Oct 23, 2020)
- `1c20a7d` Make VTA dump target path configurable (slyubomirsky, Oct 28, 2020)
- `b762aae` Add ILA program fragment conversion (slyubomirsky, Oct 30, 2020)
- `6c3bd03` Minor format compatiblity tweaks to ILA converter (slyubomirsky, Nov 26, 2020)
- `954a175` move runtime code to codegen.cc (AD1024, Dec 2, 2020)
- `d253df7` [ add ] vta matmul test (AD1024, Dec 2, 2020)
- `86a0d52` [ refine ] rm comment & move import to top (AD1024, Dec 2, 2020)
- `02346a9` Add note about test to the readme (slyubomirsky, Dec 3, 2020)
- `90202fa` Add additional dependencies into README (slyubomirsky, Dec 8, 2020)
- `68752d4` [FIX] BYOC compilation error due to missing files (#1) (AD1024, Dec 9, 2020)
- `7d9593b` Update ILA conversion indexing scheme (slyubomirsky, Dec 16, 2020)
- `f95084e` Use VTA simulator with output dumping (slyubomirsky, Dec 17, 2020)
- `0903879` [hotfix] Typo in ila_converter (slyubomirsky, Dec 21, 2020)
- `9038420` Rebase fixes (slyubomirsky, Jan 5, 2021)
- `ee4c57c` end-to-end codegen for ILA-VTA (AD1024, Feb 14, 2021)
- `d2ac372` [ add ] tests (AD1024, Feb 14, 2021)
- `95e384b` ignore instr_log (AD1024, Feb 14, 2021)
- `7079760` tweak PR (AD1024, Feb 19, 2021)
- `2075057` get rid of warnings (AD1024, Feb 19, 2021)
- `90892de` remove logging in pattern matching (AD1024, Feb 19, 2021)
- `8572ba2` [ add ] instruction for running the end-to-end test script (AD1024, Feb 19, 2021)
- `49ac15f` [ init ] bias add codegen (AD1024, Feb 22, 2021)
- `800ff28` add data loading (AD1024, Feb 22, 2021)
- `6537f7b` [ add ] bias add test case (AD1024, Feb 22, 2021)
- `ef4eda2` [ fix ] datatype conversion (AD1024, Feb 22, 2021)
- `b94a642` change to int8 inputs (AD1024, Feb 22, 2021)
- `be9be8c` [ init ] relu runtime (AD1024, Feb 26, 2021)
- `4e65ebc` [ add ] relu runtime code (AD1024, Feb 26, 2021)
- `d21f880` Add exact matcher (slyubomirsky, Feb 26, 2021)
- `1925cb4` fix data loading (AD1024, Feb 26, 2021)
- `afc12bc` [ update ] tests (AD1024, Feb 26, 2021)
- `d2a07db` [ refactor ] code (AD1024, Mar 2, 2021)
- `4bbb0cf` [ add ] comments (AD1024, Mar 2, 2021)
- `9220796` Simplify implementation, correct pattern bugs, and add more tests (slyubomirsky, Mar 4, 2021)
- `000010e` Correct inaccurate comment (slyubomirsky, Mar 5, 2021)
- `704390e` Reformat comment (slyubomirsky, Mar 5, 2021)
- `c6d5970` Throw in refs because why not (slyubomirsky, Mar 5, 2021)
- `f25e21a` Need to visit matched args to find all matches (slyubomirsky, Mar 5, 2021)
- `5f530ef` Move utility checkers to exact_matcher file (they will come in handy … (slyubomirsky, Mar 5, 2021)
- `a38d112` Add test scaling the pattern matching to the speech-to-text model (slyubomirsky, Mar 5, 2021)
- `32e822b` Unused function (slyubomirsky, Mar 5, 2021)
- `9d61fb3` Add test case of not matching free var in match block (slyubomirsky, Mar 5, 2021)
- `7abb9fd` Incorrect dimension (slyubomirsky, Mar 8, 2021)
- `d297250` Correct attribute names and also include primitive (slyubomirsky, Mar 22, 2021)
- `7b9edac` [REFACTOR] Compile to ILA Asm (#11) (AD1024, Apr 2, 2021)
- `78a58f3` [ hotfix ] add quantized model (AD1024, Apr 2, 2021)
- `8c7e334` [ add ] placeholder for flexnlp codegen (Bo-Yuan-Huang, Jan 27, 2021)
- `0b4aac6` LSTM layer smoke test (slyubomirsky, Feb 22, 2021)
- `aa28529` TODO: fill in ILA assembly translation pipeline (Bo-Yuan-Huang, Jan 27, 2021)
- `2bfe49e` Change pattern to dense (Bo-Yuan-Huang, Jan 28, 2021)
- `f78a531` flexnlp linear layer prototype done (LeeOHzzZ, Jan 29, 2021)
- `e067ac4` LSTM layer smoke test with manual annotation (slyubomirsky, Feb 22, 2021)
- `522a1df` flexnlp lstm backend driver is completed, end-to-end testflow passed … (LeeOHzzZ, Mar 5, 2021)
- `aa22497` Set ila python driver as external source (LeeOHzzZ, Mar 9, 2021)
- `57f8ab7` fixing driver_dir path (Mar 18, 2021)
- `f7976a3` relay exact matcher; speech-to-text end-to-end supported. (LeeOHzzZ, Mar 30, 2021)
- `2c9463b` Op name passed to python driver (#8) (LeeOHzzZ, Apr 1, 2021)
- `983a491` merged from steve's hlscnn (LeeOHzzZ, Apr 5, 2021)
- `bbd5920` ilacnn runtime update, wait for result returned (LeeOHzzZ, Apr 6, 2021)
- `ba5d9c1` ilacnn runtime passed match_conv2d test (LeeOHzzZ, Apr 7, 2021)
- `b297b4c` [ fix ] match new version (ctx is not needed) (AD1024, May 24, 2021)
- `eee75b5` [ add ] compile time wall clock (AD1024, Apr 7, 2021)
- `2e36f84` [ add ] runtime wallclock (AD1024, Apr 7, 2021)
- `ada43c5` fix (AD1024, Apr 7, 2021)
- `cd67280` save changes to api calls (AD1024, May 18, 2021)
- `d8fe84c` uncomment sim call (AD1024, May 24, 2021)
- `a26d764` fix submodules (AD1024, May 24, 2021)
- `572af71` [ add ] record time (AD1024, Jun 1, 2021)
- `52f1366` [ init ] conv1d (AD1024, Jun 24, 2021)
- `e0c61cc` [ fix ] data layout (AD1024, Jun 24, 2021)
- `1b0318c` [ finish ] conv1d codegen (AD1024, Jun 25, 2021)
- `ca1d6c9` [ fix ] shape mismatch after matcher rewrite (AD1024, Jul 10, 2021)
- `117df96` [ add ] attention on flexnlp (AD1024, Jul 13, 2021)
- `2989767` save changes (AD1024, Jul 23, 2021)
- `344c9a1` disable printing the command (AD1024, Jul 23, 2021)
- `bdd9c6c` [ add ] env setup guide (AD1024, Aug 9, 2021)
- `370b8f7` Merge branch 'conv1d-codegen' of github.com:uwsampl/3la-tvm into conv… (AD1024, Aug 9, 2021)
- `cb0e2ad` [ modified ] readme change to latest dir (AD1024, Aug 9, 2021)
- `b8e65d2` [ add ] accelerator call operator (AD1024, Sep 10, 2021)
- `e7b6a97` added conv1d operator to rust bindings (vcanumalla, Sep 30, 2021)
- `9aa3530` [ fix ] conflict (acutally not) (AD1024, Aug 24, 2021)
- `040b7d5` remove dep (AD1024, Sep 30, 2021)
- `765b161` Point to Max's tvm-build which fixes hanging build (gussmith23, Oct 4, 2021)
- `42c20ca` Add dilation to maxpool bindings (gussmith23, Oct 6, 2021)
- `bb6c378` [ sync ] rust binding (AD1024, Oct 12, 2021)
- `b9563ec` Restore VTA dependency (slyubomirsky, Oct 26, 2021)
- `2d2cf70` Add TOpPattern flag to accelerator_call op (#17) (slyubomirsky, Oct 27, 2021)
- `71b4854` Fix bug (gussmith23, Oct 28, 2021)
- `3af6173` Use a callback in the exact matcher (#18) (slyubomirsky, Oct 29, 2021)
- `6b05e67` Initial work adding windows operator (gussmith23, Nov 2, 2021)
- `25a4bab` Finish windows operator (gussmith23, Nov 2, 2021)
- `3ebc927` [ add ] PaddAttrs and format (AD1024, Nov 3, 2021)
- `278494d` Merge branch '3la-pldi-push-main' of github.com:uwsampl/3la-tvm into … (AD1024, Nov 3, 2021)
- `2c733bd` [ attempt ] intimm for integer (AD1024, Nov 3, 2021)
- `f336b15` attempt to fix (AD1024, Nov 3, 2021)
- `75c62ad` [ add ] output data type annotation for accelerator calls (AD1024, Nov 10, 2021)
- `1de5fad` fix attrs (AD1024, Nov 14, 2021)
- `b9ca10b` revert changes (AD1024, Nov 17, 2021)
- `9b043fd` fix code use alu op number directly (AD1024, Nov 17, 2021)
- `15f0314` [ add ] vta quantization (AD1024, Nov 17, 2021)
- `0de800e` Rust binding updates (gussmith23, Nov 17, 2021)
- `3fde3a6` StridedSliceAttrs rust bindings (gussmith23, Nov 17, 2021)
- `3e70c88` Add batch matmul attrs (gussmith23, Nov 17, 2021)
- `12bb7d9` Layer norm bindings (gussmith23, Nov 18, 2021)
- `ffe822e` Fix (gussmith23, Nov 18, 2021)
- `ec7313b` Add specific revision of tvm-build (gussmith23, Nov 18, 2021)
- `0698a7f` Add specific branch for tvm-build (gussmith23, Nov 18, 2021)
- `8185bdf` revert (gussmith23, Nov 18, 2021)
- `4eb3abd` fix (AD1024, Nov 18, 2021)
- `8d22947` Merge branch '3la-pldi-push-main' of github.com:uwsampl/3la-tvm into … (AD1024, Nov 18, 2021)
- `8d913ca` use 16-bit imm (AD1024, Nov 19, 2021)
- `57e400c` try to use test_lib.cc code (AD1024, Nov 19, 2021)
- `e08a20f` init block matmul (AD1024, Nov 19, 2021)
- `a915604` fix (AD1024, Nov 19, 2021)
- `c3a44eb` add ref print (AD1024, Nov 19, 2021)
- `4f2d41b` try output (AD1024, Nov 19, 2021)
- `20ab7bb` add vta calls (AD1024, Nov 19, 2021)
- `185f86f` fix calls (AD1024, Nov 19, 2021)
- `2c3a719` use new (AD1024, Nov 19, 2021)
- `f585984` try fix (AD1024, Nov 19, 2021)
- `b1918e0` try fix (AD1024, Nov 19, 2021)
- `91e7505` use previous codegen (AD1024, Nov 19, 2021)
- `813f2a9` fix (AD1024, Nov 19, 2021)
- `feaf6fc` fix (AD1024, Nov 19, 2021)
- `8c21875` try fix segfault (AD1024, Nov 19, 2021)
- `b0845a4` Propagate type annotations in the exact matcher (slyubomirsky, Dec 7, 2021)
- `bca20c9` Add structural hashes of arguments to annotated regions as annotations (slyubomirsky, Dec 7, 2021)
- `0d63906` Merge branch '3la-pldi-push-main' into exact-matcher-annotations (gussmith23, Dec 15, 2021)
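The PR's headline change (commit `b0845a4`) propagates type annotations when the exact matcher replaces a matched region. As a toy illustration of the idea (this is not TVM's API; the node shapes, type strings, and the `accelerator_call` naming are made up for the sketch), an exact matcher over a small expression tree that copies the matched node's inferred type onto the replacement so downstream passes still see a typed node:

```python
from dataclasses import dataclass
from typing import Optional

# Toy expression node: an operator, child expressions, and an optional
# inferred type annotation (e.g. "int8[1,16]"). Purely illustrative.
@dataclass
class Expr:
    op: str
    args: tuple = ()
    ty: Optional[str] = None

def rewrite(expr, pattern_op, accel_name):
    """Replace every node whose op exactly matches pattern_op with an
    opaque accelerator call, propagating the matched node's type."""
    args = tuple(rewrite(a, pattern_op, accel_name) for a in expr.args)
    expr = Expr(expr.op, args, expr.ty)
    if expr.op == pattern_op:
        # The key step: the replacement inherits the matched node's type,
        # so type information is not lost by the rewrite.
        return Expr(f"accelerator_call[{accel_name}]", args, expr.ty)
    return expr

prog = Expr("add",
            (Expr("dense", (Expr("x", ty="int8[1,32]"),), ty="int8[1,16]"),
             Expr("bias", ty="int8[1,16]")),
            ty="int8[1,16]")
out = rewrite(prog, "dense", "ilavta.dense")
print(out.args[0].op, out.args[0].ty)  # accelerator_call[ilavta.dense] int8[1,16]
```

Without the propagation step the replacement node would have `ty=None`, which is the failure mode this PR addresses in the real matcher.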
10 changes: 5 additions & 5 deletions .gitmodules

```diff
@@ -6,10 +6,10 @@
 	url = https://github.com/dmlc/dlpack
 [submodule "3rdparty/rang"]
 	path = 3rdparty/rang
-	url = https://github.com/agauniyal/rang
-[submodule "3rdparty/vta-hw"]
-	path = 3rdparty/vta-hw
-	url = https://github.com/apache/incubator-tvm-vta
+	url = https://github.com/agauniyal/rang
 [submodule "3rdparty/libbacktrace"]
 	path = 3rdparty/libbacktrace
-	url = https://github.com/tlc-pack/libbacktrace.git
+	url = https://github.com/tlc-pack/libbacktrace.git
+[submodule "3rdparty/vta-hw"]
+	path = 3rdparty/vta-hw
+	url = git@github.com:uwsampl/3la-vta.git
```
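Since this hunk repoints the `vta-hw` submodule at a different remote, existing checkouts must re-sync the URL recorded in `.git/config`. A minimal sketch of the usual sequence; it is demonstrated in a throwaway repo so it is self-contained (in a real checkout you would run the `sync`/`update` pair at the repo root instead):

```shell
# After a submodule URL changes in .gitmodules, `git submodule sync` copies the
# new URL into .git/config, and `git submodule update` then fetches from it.
# Demonstrated here in a fresh throwaway repo (no network access needed).
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git submodule sync --recursive   # no-op in an empty repo; re-reads .gitmodules
echo "synced"
```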
2 changes: 1 addition & 1 deletion 3rdparty/vta-hw
5 changes: 5 additions & 0 deletions CMakeLists.txt

```diff
@@ -64,6 +64,8 @@ tvm_option(USE_BLAS "The blas library to be linked" none)
 tvm_option(USE_MKL "MKL root path when use MKL blas" OFF)
 tvm_option(USE_MKLDNN "Build with MKLDNN" OFF)
 tvm_option(USE_DNNL_CODEGEN "Enable MKLDNN (DNNL) codegen" OFF)
+tvm_option(USE_ILAVTA_CODEGEN "Enable ILA codegen for VTA" OFF)
+tvm_option(USE_ILAFLEX_CODEGEN "Enable ILA codegen for FlexNLP" OFF)
 tvm_option(USE_CUDNN "Build with cuDNN" OFF)
 tvm_option(USE_CUBLAS "Build with cuBLAS" OFF)
 tvm_option(USE_THRUST "Build with Thrust" OFF)
@@ -378,6 +380,9 @@ include(cmake/modules/contrib/EthosN.cmake)
 include(cmake/modules/contrib/BLAS.cmake)
 include(cmake/modules/contrib/CODEGENC.cmake)
 include(cmake/modules/contrib/DNNL.cmake)
+include(cmake/modules/contrib/ILAVTA.cmake)
+include(cmake/modules/contrib/ILAFlex.cmake)
+include(cmake/modules/contrib/ILACNN.cmake)
 include(cmake/modules/contrib/Random.cmake)
 include(cmake/modules/contrib/Posit.cmake)
 include(cmake/modules/contrib/MicroStandaloneRuntime.cmake)
```
37 changes: 35 additions & 2 deletions README.md

````diff
@@ -15,8 +15,41 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-<img src=https://raw.githubusercontent.com/apache/tvm-site/main/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
-==============================================
+This is a fork of TVM that adds BYOC integrations for the 3LA project.
+
+Right now we have a VTA integration in `src/relay/backend/contrib/ilavta`. Note that you must include the line `SET(USE_ILAVTA_CODEGEN ON)` in `build/config.cmake` before building TVM to support this (other flags that should be on: `USE_LLVM`, `USE_VTA_FSIM`). We have a test of this backend in `tests/python/relay/test_external_codegen.py` (see `test_extern_vta()`).
+
+This version also uses a fork of the VTA repo that can dump logs.
+Try `vta/python/integration/matmul_tutorial.py` to use the dumping facility.
+VTA can be put into dumping mode by calling `vta.testing.simulator.dump_mode(True)`.
+You can specify the location at which the dump will be deposited using `vta.testing.simulator.dump_target(path)`; the default is `./vta_sim_dump.json`.
+See the readme at [the VTA fork](https://github.com/uwsampl/3la-vta) for a description of the dumping mode and the dump format.
+
+You can use `vta.testing.ila_converter.convert(dump_file, dest_file)` to convert a VTA simulator dump into an ILA program fragment.
+
+# 3LA environment setup
+
+## Docker setup
+Please follow the instructions [here](https://github.com/PrincetonUniversity/3la-integrate) to set up the 3LA integrated docker container.
+
+To attach to the container, run `sudo docker exec -it <name of the container> /bin/bash`.
+
+Before running any 3LA-related test, first run `source init.sh` under `/root`.
+
+## 3LA TVM setup
+Please follow the steps [here](https://tvm.apache.org/docs/install/from_source.html#developers-get-source-from-github), substituting this repo's URL for the upstream GitHub link. Then switch to the `conv1d-codegen` or `3la-rebase-complete` branch.
+
+Before running `cmake`, please add the following lines to `config.cmake`:
+```cmake
+set(USE_ILAVTA_CODEGEN ON)
+set(USE_ILACNN_CODEGEN ON)
+set(USE_ILAFLEX_CODEGEN ON)
+```
+and then set `USE_LLVM` to `ON`.
+
+Before installing the Python interface of this variant of TVM, you will probably need to uninstall the TVM that was installed when building the docker image (to do so, run `pip uninstall <package>`).
+
+<img src=https://raw.githubusercontent.com/apache/incubator-tvm-site/main/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
 [Documentation](https://tvm.apache.org/docs) |
 [Contributors](CONTRIBUTORS.md) |
 [Community](https://tvm.apache.org/community) |
````
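The README above says the simulator dump is JSON (default path `./vta_sim_dump.json`), with the actual schema documented in the uwsampl/3la-vta fork's README. As a self-contained sketch of consuming such a dump, here is a reader that counts instructions by opcode; the field names (`"insns"`, `"opcode"`) are hypothetical placeholders, not the real schema:

```python
import json
from collections import Counter

# Stand-in for the contents of ./vta_sim_dump.json; the real schema is in the
# 3la-vta README, and the fields below are illustrative assumptions only.
sample = '{"insns": [{"opcode": "GEMM"}, {"opcode": "LOAD"}, {"opcode": "GEMM"}]}'

dump = json.loads(sample)  # in practice: json.load(open("./vta_sim_dump.json"))
counts = Counter(insn["opcode"] for insn in dump["insns"])
print(dict(counts))  # {'GEMM': 2, 'LOAD': 1}
```

This kind of opcode histogram is a quick sanity check that the dump mode actually captured the instructions a test was expected to issue.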
9 changes: 9 additions & 0 deletions cmake/modules/contrib/ILACNN.cmake

```diff
@@ -0,0 +1,9 @@
+if(USE_ILACNN_CODEGEN STREQUAL "ON")
+  add_definitions(-DUSE_ILACNN_RUNTIME=1)
+  file(GLOB ILACNN_RELAY_CONTRIB_SRC src/relay/backend/contrib/ilacnn/*.cc)
+  list(APPEND COMPILER_SRCS ${ILACNN_RELAY_CONTRIB_SRC})
+  list(APPEND COMPILER_SRCS ${JSON_RELAY_CONTRIB_SRC})
+
+  file(GLOB ILACNN_CONTRIB_SRC src/runtime/contrib/ilacnn/ilacnn_runtime.cc)
+  list(APPEND RUNTIME_SRCS ${ILACNN_CONTRIB_SRC})
+endif()
```
9 changes: 9 additions & 0 deletions cmake/modules/contrib/ILAFlex.cmake

```diff
@@ -0,0 +1,9 @@
+if(USE_ILAFLEX_CODEGEN STREQUAL "ON")
+  add_definitions(-DUSE_ILAFLEX_RUNTIME=1)
+  file(GLOB ILAFLEX_RELAY_CONTRIB_SRC src/relay/backend/contrib/ilaflex/*.cc)
+  list(APPEND COMPILER_SRCS ${ILAFLEX_RELAY_CONTRIB_SRC})
+  list(APPEND COMPILER_SRCS ${JSON_RELAY_CONTRIB_SRC})
+
+  file(GLOB ILAFLEX_CONTRIB_SRC src/runtime/contrib/ilaflex/ilaflex_runtime.cc)
+  list(APPEND RUNTIME_SRCS ${ILAFLEX_CONTRIB_SRC})
+endif()
```
34 changes: 34 additions & 0 deletions cmake/modules/contrib/ILAVTA.cmake

```diff
@@ -0,0 +1,34 @@
+if(USE_ILAVTA_CODEGEN STREQUAL "ON")
+  include_directories(BEFORE SYSTEM ${VTA_HW_PATH}/include)
+  add_definitions(-DUSE_ILAVTA_RUNTIME=1)
+  file(GLOB ILAVTA_RELAY_CONTRIB_SRC src/relay/backend/contrib/ilavta/*.cc)
+  list(APPEND COMPILER_SRCS ${ILAVTA_RELAY_CONTRIB_SRC})
+  list(APPEND COMPILER_SRCS ${JSON_RELAY_CONTRIB_SRC})
+
+  file(GLOB ILAVTA_CONTRIB_SRC src/runtime/contrib/ilavta/ilavta_runtime.cc)
+  list(APPEND ILAVTA_CONTRIB_SRC src/runtime/contrib/ilavta/ilavta_helpers.cc)
+  file(GLOB VTA_RUNTIME_SRCS ${VTA_HW_PATH}/src/*.cc)
+  list(APPEND VTA_RUNTIME_SRCS ${VTA_HW_PATH}/src/sim/sim_driver.cc)
+  list(APPEND VTA_RUNTIME_SRCS ${VTA_HW_PATH}/src/sim/sim_tlpp.cc)
+  list(APPEND VTA_RUNTIME_SRCS ${VTA_HW_PATH}/src/vmem/virtual_memory.cc)
+
+  list(APPEND RUNTIME_SRCS ${ILAVTA_CONTRIB_SRC})
+  list(APPEND RUNTIME_SRCS ${VTA_RUNTIME_SRCS})
+
+  set(VTA_CONFIG ${PYTHON} ${VTA_HW_PATH}/config/vta_config.py)
+
+  if(EXISTS ${CMAKE_CURRENT_BINARY_DIR}/vta_config.json)
+    message(STATUS "Use VTA config " ${CMAKE_CURRENT_BINARY_DIR}/vta_config.json)
+    set(VTA_CONFIG ${PYTHON} ${VTA_HW_PATH}/config/vta_config.py
+                   --use-cfg=${CMAKE_CURRENT_BINARY_DIR}/vta_config.json)
+  endif()
+  execute_process(COMMAND ${VTA_CONFIG} --target OUTPUT_VARIABLE VTA_TARGET OUTPUT_STRIP_TRAILING_WHITESPACE)
+  message(STATUS "Build VTA runtime with target: " ${VTA_TARGET})
+  execute_process(COMMAND ${VTA_CONFIG} --defs OUTPUT_VARIABLE __vta_defs)
+  string(REGEX MATCHALL "(^| )-D[A-Za-z0-9_=.]*" VTA_DEFINITIONS "${__vta_defs}")
+
+  foreach(__def ${VTA_DEFINITIONS})
+    string(SUBSTRING ${__def} 3 -1 __strip_def)
+    add_definitions(-D${__strip_def})
+  endforeach()
+endif()
```
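The tail of this module shells out to `vta_config.py` and converts its `-D` flags into compile definitions via `string(REGEX MATCHALL ...)`. A rough Python analogue of that regex step, useful for seeing what the pattern extracts (the example string is made up; real values come from `vta_config.py --defs`):

```python
import re

# Illustrative stand-in for the output of `vta_config.py --defs`
vta_defs = " -DVTA_TARGET=sim -DVTA_HW_VER=0.0.2 -DVTA_LOG_INP_WIDTH=3"

# Same idea as the CMake MATCHALL: a -D flag at the start or after a space
matches = re.findall(r"(?:^| )(-D[A-Za-z0-9_=.]*)", vta_defs)

# The CMake SUBSTRING(3) strips the leading " -D"; here we strip "-D" directly
definitions = [m[len("-D"):] for m in matches]
print(definitions)  # ['VTA_TARGET=sim', 'VTA_HW_VER=0.0.2', 'VTA_LOG_INP_WIDTH=3']
```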
2 changes: 1 addition & 1 deletion include/tvm/relay/attrs/nn.h

```diff
@@ -1097,7 +1097,7 @@ struct UpSampling3DAttrs : public tvm::AttrsNode<UpSampling3DAttrs> {
 /*! \brief Attributes used for the padding operator */
 struct PadAttrs : public tvm::AttrsNode<PadAttrs> {
   Array<Array<Integer>> pad_width;
-  std::string pad_mode;
+  tvm::String pad_mode;
 
   TVM_DECLARE_ATTRS(PadAttrs, "relay.attrs.PadAttrs") {
     TVM_ATTR_FIELD(pad_width).describe(
```
31 changes: 29 additions & 2 deletions include/tvm/relay/attrs/transform.h

```diff
@@ -33,6 +33,20 @@
 namespace tvm {
 namespace relay {
 
+struct WindowsAttrs : public tvm::AttrsNode<WindowsAttrs> {
+  int axis;
+  Array<Integer> window_shape;
+  Array<Integer> strides;
+  TVM_DECLARE_ATTRS(WindowsAttrs, "relay.attrs.WindowsAttrs") {
+    TVM_ATTR_FIELD(axis).describe(
+        "What axis the windows begin forming over.");
+    TVM_ATTR_FIELD(window_shape).describe(
+        "The window shape to form over the input.");
+    TVM_ATTR_FIELD(strides).describe(
+        "How to stride the windows.");
+  }
+};
+
 /*! \brief data type cast */
 struct CastAttrs : public tvm::AttrsNode<CastAttrs> {
   DataType dtype;
@@ -154,7 +168,7 @@ struct GatherNDAttrs : public tvm::AttrsNode<GatherNDAttrs> {
 struct TakeAttrs : public tvm::AttrsNode<TakeAttrs> {
   Integer batch_dims;
   Integer axis;
-  std::string mode;
+  tvm::String mode;
 
   TVM_DECLARE_ATTRS(TakeAttrs, "relay.attrs.TakeAttrs") {
     TVM_ATTR_FIELD(batch_dims)
@@ -302,7 +316,7 @@ struct StridedSliceAttrs : public tvm::AttrsNode<StridedSliceAttrs> {
   Optional<Array<Integer>> begin;
   Optional<Array<Integer>> end;
   Optional<Array<Integer>> strides;
-  std::string slice_mode;
+  tvm::String slice_mode;
 
   TVM_DECLARE_ATTRS(StridedSliceAttrs, "relay.attrs.StridedSliceAttrs") {
     TVM_ATTR_FIELD(begin).describe("Indices for begin of slice, begin index is also inclusive");
@@ -478,6 +492,19 @@ struct UniqueAttrs : public tvm::AttrsNode<UniqueAttrs> {
   }
 };  // struct UniqueAttrs
 
+/*! \brief Attributes for calling accelerators */
+struct AcceleratorCallAttrs : public tvm::AttrsNode<AcceleratorCallAttrs> {
+  std::string func_name;
+  Array<Integer> output_shape;
+  DataType output_dtype;
+  TVM_DECLARE_ATTRS(AcceleratorCallAttrs, "relay.attrs.AcceleratorCallAttrs") {
+    TVM_ATTR_FIELD(func_name).describe("The name of the accelerator function").set_default("unknown");
+    TVM_ATTR_FIELD(output_shape).describe("Inferred output shape for the accelerator call");
+    TVM_ATTR_FIELD(output_dtype).describe("Inferred / annotated output data type of the accelerator call")
+        .set_default(NullValue<DataType>());
+  }
+};  // struct AcceleratorCallAttrs
+
 }  // namespace relay
 }  // namespace tvm
 #endif  // TVM_RELAY_ATTRS_TRANSFORM_H_
```
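The new `WindowsAttrs` describes a windows operator parameterized by `axis`, `window_shape`, and `strides`. My reading of those attrs is standard sliding-window semantics; a minimal 1-D sketch in pure Python, under that assumption (the actual op is n-dimensional and operates on tensors):

```python
def windows_1d(data, window, stride):
    """Sliding windows over a 1-D sequence: each window has `window` elements
    and consecutive windows start `stride` positions apart. Illustrative only;
    the real windows op generalizes this per-axis via window_shape/strides."""
    return [data[i:i + window] for i in range(0, len(data) - window + 1, stride)]

print(windows_1d([1, 2, 3, 4, 5], window=3, stride=1))
# [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

With `stride=2` the windows no longer overlap end to end: `windows_1d([1, 2, 3, 4, 5], window=2, stride=2)` yields `[[1, 2], [3, 4]]`, dropping the trailing element that cannot fill a window.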