PLDI work #24 (Draft)

Wants to merge 2,228 commits into base: main.

This pull request is big! We’re only showing the most recent 250 commits.

Commits on Jan 13, 2022

  1. Commit 8959d68
  2. Commit f9d8c2b
  3. [Fix Bug] fix the bug of pool_impl_nd when computing avgpool_nd with ceil_mode and count_include_pad set to True (#9835)
    
    * Added the offset[i] to get the correct boundary
    * Added corresponding test case
    xiaolong18 authored Jan 13, 2022
    Commit cc9d2f4
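    A small numeric sketch of the boundary issue (my own illustration, not the
    TVM code): with ceil_mode=True the last pooling window can run past the
    padded input, so its extent must be clamped, and with
    count_include_pad=True that clamped extent decides how many elements are
    averaged.

    ```python
    import math

    # Hypothetical 1D pooling configuration chosen so floor and ceil modes differ.
    in_size, kernel, stride, pad = 7, 2, 2, 0

    out_floor = (in_size + 2 * pad - kernel) // stride + 1           # 3 windows
    out_ceil = math.ceil((in_size + 2 * pad - kernel) / stride) + 1  # 4 windows

    # The extra window from ceil_mode starts at 6 and would end at 8, one past
    # the padded input of size 7, so its end has to be clamped.
    last_start = (out_ceil - 1) * stride - pad               # 6
    last_end = min(last_start + kernel, in_size + 2 * pad)   # 7, not 8

    print(out_floor, out_ceil, last_start, last_end)  # 3 4 6 7
    ```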
  4. [skip ci] Fix missing pack_lib in Jenkinsfile (#9924)

    This was inadvertently removed by #9554
    
    Co-authored-by: driazati <driazati@users.noreply.github.com>
    driazati and driazati authored Jan 13, 2022
    Commit 2c1ed59
  5. Commit 5f828a6
  6. std::string -> tvm::String for Conv1DAttrs (#9921)

    This is necessary to make the Rust bindings work.
    gussmith23 authored Jan 13, 2022
    Commit 424821a
  7. [TIR][Schedule] Annotate allows array as annotation value (#9920)

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    
    7 people authored Jan 13, 2022
    Commit 7485413

Commits on Jan 14, 2022

  1. fix icelake target for avx512 and vnni (#9928)

    Matthew Brookhart authored Jan 14, 2022
    Commit 46da676
  2. Commit 220b122
  3. Commit 0a159c4
  4. [CI] Fix pip cache config bug (#9933)

    * Update Dockerfile.ci_arm
    
    * Update Dockerfile.ci_cpu
    
    * Update Dockerfile.ci_gpu
    
    * Update Dockerfile.ci_i386
    
    * Update Dockerfile.ci_lint
    
    * Update Dockerfile.ci_qemu
    
    * Update Dockerfile.ci_wasm
    mshr-h authored Jan 14, 2022
    Commit 670de9b
  5. Commit 79c59fe
  6. [USMP] Hill Climb allocator (#9704)

    * [USMP] Hill Climb allocator
    
    This PR adds HillClimb allocator "tir.usmp.algo.hill_climb"
    to the memory allocation algorithm set.
    
    Change-Id: Ib7485df93757eb512da040528ec86c920db8d03b
    
    * requested changes
    
    Change-Id: I6700a24c1608d92f87be7dde33cc24f5de1f7063
    
    * Conda-related linter small fixes
    
    Change-Id: I0dac5c6d75ade8f813b077c8708aad59d2722933
    
    * Moved implementation from greedy.h to greedy.cc
    
    Change-Id: If8ed159eceef32d3f22b51e0252161d09222eb1e
    
    * Integrated into test_tir_usmp_algo.py unit test
    
    Added "hill_climb" into test_tir_usmp_algo.py
    Amended sorting to be consistent with "greedy" family
    
    Change-Id: I8e9f5282f15baaab71d6d129aeb9643376b14763
    d-smirnov authored Jan 14, 2022
    Commit 89ae603
  7. Commit 4419241
  8. [Relay/Frontend][TFLite] Change the output shape calculation based on keep_dim option in fully connected (#9840)
    
    * Support -> Change the output shape calculation based on keep_dim option
    
    * Support -> Change the output shape calculation based on keep_dim option
    
    * Support -> Change the output shape calculation based on keep_dim option
    
    * Support -> Change the output shape calculation based on keep_dim option
    
    * Change the output shape calculation based on keep_dim option in fully connected
    
    * TODO : Need to construct a fc op with (keep_num_dims == True)
    
    * TODO : Need to construct a fc op with (keep_num_dims == True)
    blackkker authored Jan 14, 2022
    Commit e7c8141

Commits on Jan 15, 2022

  1. [TIR] Encode conditional accesses info into block read/write regions (#9880)
    
    * encode conditional accesses info into block read/write regions
    
    * compare ir after simplify
    wrongtest-intellif authored Jan 15, 2022
    Commit 6f6fc68
  2. [Int8] Support cublas on e2e int8 models (also tried cudnn but doesn't work) (#9898)
    
    * fixed int8 dense offload for cublas
    
    * support OHWI kernel layout in qnn.conv2d
    
    * fixed reduction axis
    
    * add cublas int8 qnn test
    
    * lint
    masahi authored Jan 15, 2022
    Commit b3c6625
  3. Commit 1b1cfb3
  4. [ONNX] Fix onnx convtranspose error (#9938)

    * fix mix up of channels with conv2d-transpose
    
    * add grouped convtranspose tests
    
    * turn off groups for non-llvm test
    AndrewZhaoLuo authored Jan 15, 2022
    Commit 84ee90c
  5. [Fix] relay onnx frontend bug when [A, B, M, N] * [1, B, N, K] (#9911)

    * [Fix] relay onnx frontend bug when [A, B, M, N] * [1, B, N, K]
    
    * fix line
    
    Co-authored-by: tomoyazhang <tomoyazhang@tencent.com>
    willzhang4a58 and tomoyazhang authored Jan 15, 2022
    Commit 6eb4ed8
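    As a quick reference for the shape pattern in the title, a NumPy check (my
    own illustration, not the frontend code): the size-1 batch dimension
    broadcasts, so [A, B, M, N] x [1, B, N, K] yields [A, B, M, K].

    ```python
    import numpy as np

    # Hypothetical sizes standing in for A, B, M, N, K.
    lhs = np.ones((2, 3, 4, 5))  # [A, B, M, N]
    rhs = np.ones((1, 3, 5, 6))  # [1, B, N, K]; the leading 1 broadcasts to A

    out = np.matmul(lhs, rhs)
    print(out.shape)  # (2, 3, 4, 6), i.e. [A, B, M, K]
    ```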

Commits on Jan 17, 2022

  1. [Caffe Frontend] supporting group > 1 cases for Deconv op (#8260)

    * [Caffe Frontend] supporting group > 1 cases for Deconv op
    
    - Handling group > 1 cases, assuming group == output channels
    - Simply decomposed into Relay split, conv2d_transposed, and multi-leveled concatenate ops
    - Added some test cases
    
    Signed-off-by: zotanika <zotanika@gmail.com>
    
    * [Caffe Frontend] amending a test case for Deconv op
    
    Signed-off-by: zotanika <zotanika@gmail.com>
    
    * explicit importing tvm.testing
    
    * changing split axis to 0, according to PR #9336
    zotanika authored Jan 17, 2022
    Commit be0677d
  2. [Caffe Frontend] extending Eltwise to handle multiple inputs (#8136)

    * [Caffe Frontend] adding Reduction op
    
    * reformatting Reduction op test script
    
    * reformatting Reduction test script
    
    * [Caffe frontend] Reduction op
    - adding more test cases; handling '0 < axis < num_axes - 1' case to give the result equivalent to Caffe framework
    - skipping Relay multiplication if coeff is 1
    
    Signed-off-by: zotanika <zotanika@gmail.com>
    
    * linting test script
    
    * linting
    
    * [Caffe Frontend] Supporting multiple grouped(channel-wise) Deconv op
    
    * Handling group > 1 cases, assuming group == output channels
    * Decomposed into Relay split, transposed conv, and multi-leveled concatenation.
    * Added some test cases.
    
    Signed-off-by: zotanika <zotanika@gmail.com>
    
    * [Caffe Frontend] supporting variable number of inputs for Eltwise
    
    * extra handling of rest inputs for PROD, SUM, MAX operations
    * extra testcases
    
    Signed-off-by: zotanika <zotanika@gmail.com>
    
    * formatting fix
    
    * [Caffe Frontend] reverting codes related Reduction for splitting PR
    
    * Revert "[Caffe Frontend] Supporting multiple grouped(channel-wise) Deconv op"
    
    This reverts commit 43e25e552b790ce9a38fdbcfb3ddf2075c253e20.
    
    * instant fix against docker format error
    
    * instant fix against docker format error
    
    * instant fix against docker format error
    zotanika authored Jan 17, 2022
    Commit 3c8de42
  3. [MetaSchedule] Schedule Rule: Auto Inline (#9943)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    7 people authored Jan 17, 2022
    Commit 596333b
  4. [microNPU] Remove remaining UnsupportedLayout checks (#9791)

    * [microNPU] Remove remaining UnsupportedLayout checks
    
    In #9508 the decision was made to remove the UnsupportedLayout exception
    and the checks that throw it; this PR cleans up some that remained.
    
    Change-Id: I83bfe233381b83af886343c9569db753e33f9059
    
    * fix lint
    
    Change-Id: I67c1a5371f0b2e51b6cd39435ef4073d8d17af51
    lhutton1 authored Jan 17, 2022
    Commit 24bccd2
  5. [microNPU][2c] Add performance modelling to cascader (#9778)

    * [microNPU][2c] Initial Performance Model
    
    * Added the pre-computed performance modelling per block.
    * Added the aggregation of cycles given a stripe config.
    * Implemented the op-specific performance code for conv2d.
    * Created a DeviceConfig class to hold constant performance related data
    that is dependent on the accelerator configuration
    * Added generation of all valid block configs. This is pre-computed and
    given as an argument when constructing EthosuParts.
    * Implemented selection of the block config that gives the least amount
    of data read given a StripeConfig.
    
    * Add test guards
    
    * Extended block config testing
    jacobbohlin authored Jan 17, 2022
    Commit 133bb9c
  6. [MetaSchedule] random compute location (#9940)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    7 people authored Jan 17, 2022
    Commit 77c66f0
  7. Commit 1e5373f

Commits on Jan 18, 2022

  1. [CUDNN] Refactor descriptor initialization, remove `cudnn.conv.output_shape_from_cudnn` (#9948)
    
    * Introduce SetConvdescriptors to refactor cudnn/conv_forward.cc
    
    * more refactor
    
    * remove cudnn get output
    
    * cpplint
    masahi authored Jan 18, 2022
    Commit 211291f
  2. [microNPU] Add support for scalar values (#9794)

    * [microNPU] Add support for scalar values
    
    PR #9515 enabled support for scalar constants, but didn't consider the
    case of a scalar value where the underlying constant data does not have
    a shape, i.e. `constant.shape == []`. See the test case for a visual
    difference when the scalar value is 1.
    
    Change-Id: Id7a238cb5bf999dd5a8428c097202f9fb940a5f0
    
    * Fix failing test by removing constant
    
    Before this PR scalar constants were handled differently so this test
    was able to pass. Now that scalar constants are handled in the same
    manner as tensor constants, the test fails since unexpected tir is
    produced in the compilation pipeline. Since the relay used in this test
    case is not expected to be produced by higher levels of the compiler,
    removing this constant for now.
    
    Change-Id: I4ea5155778809041339e6faac05af3f72c3e3ea5
    
    * clean up finding tensor from inputs
    
    Change-Id: Ideccf84f8c9149148ff23e2406229cf637c982a3
    lhutton1 authored Jan 18, 2022
    Commit 364e2db
  3. Commit 31de5bc
  4. Enable NPU and CMSIS in ci_qemu (#9957)

    These are required for running the demos under ci_qemu in combination with Zephyr
    Mousius authored Jan 18, 2022
    Commit 4f29562
  5. [Runtime][Pipeline executor] Global parameters group name and runtime modules parameters map (#9846)
    
    * [Runtime][Pipeline executor] Global parameters group name and runtime
    modules parameters map.
    
    Solution:
    To support on-the-fly parameter setting for each runtime module in the
    pipeline executor, we add a feature that uses a global parameters group
    name to map onto the runtime module parameters; once that mapping is
    created, the user can set parameters on the fly by referring to the
    parameters group name.
    
    trigger build.
    
    fix ut issue.
    
    polish comments.
    
    Update python/tvm/contrib/pipeline_executor.py
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    Update python/tvm/contrib/pipeline_executor.py
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    Update python/tvm/contrib/pipeline_executor.py
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    Update python/tvm/contrib/pipeline_executor.py
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    Update src/runtime/pipeline/pipeline_executor.h
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    Update src/runtime/pipeline/pipeline_struct.h
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    Update python/tvm/contrib/pipeline_executor.py
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    address review comments.
    
    * Update python/tvm/contrib/pipeline_executor.py
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    * fix plint issue.
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    huajsj and comaniac authored Jan 18, 2022
    Commit b1bd18e

Commits on Jan 19, 2022

  1. [CI] Upgrade ONNX (#9965)

    * jenkinsfile and one test
    
    * formatting
    
    * switch to proper repo for docker
    
    * fix missing - with _
    
    * jostle
    
    * upgrade to latest images
    
    * jenkinsfile and one test
    
    * formatting
    
    * switch to proper repo for docker
    
    * fix missing - with _
    
    * upgrade to latest images
    
    * jostle ci
    
    * update with official images
    
    * jostle ci
    AndrewZhaoLuo authored Jan 19, 2022
    Commit 904b4ae
  2. [TOPI,x86] Improve performance on int8 conv2d on x86 (#9966)

    Appended fused operations in conv2d for int8 were computed in a separate
    loop from the main conv2d computation:
    ```
    for i in ... parallel
      for j in ...
        accumulator = 0
        for k in ..
          vectorized_multiply_add(accumulator, data, kernel)
        out = accumulator
      for k in ..
        out = out + fused subsequent ops
    ```
    This patch moves the fused ops one more loop nesting inwards to get
    ```
    for i in ... parallel
      for j in ...
        accumulator = 0
        for k in ..
          vectorized_multiply_add(accumulator, data, kernel)
        out = accumulator + fused subsequent ops
    ```
    On quantized mobilenetv2, this results in approximately a 30% speedup.
    Tristan Konolige authored Jan 19, 2022
    Commit 19717aa
  3. [Hexagon] Return pathlib.Path from get_hexagon_rpc_path() (#9969)

    Type annotations alone don't do anything; the type conversion needs to be
    explicit.
    Krzysztof Parzyszek authored Jan 19, 2022
    Commit da3b63e
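    A minimal sketch of the point (hypothetical helper names, not the actual
    get_hexagon_rpc_path): a `-> pathlib.Path` annotation is not enforced at
    runtime, so the conversion has to be written out explicitly.

    ```python
    import pathlib

    def rpc_path_annotated(base: str) -> pathlib.Path:
        # The annotation alone changes nothing at runtime; callers get a str.
        return base + "/rpc"

    def rpc_path_explicit(base: str) -> pathlib.Path:
        # Explicit conversion actually hands callers a pathlib.Path.
        return pathlib.Path(base) / "rpc"

    print(type(rpc_path_annotated("/tmp")))  # <class 'str'>
    print(type(rpc_path_explicit("/tmp")))   # a pathlib.Path subclass
    ```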
  4. [Hexagon] Add missing #include <iterator> (#9968)

    This fixes compilation error with libstdc++.
    Krzysztof Parzyszek authored Jan 19, 2022
    Commit 928be03
  5. [Doc][Fix] Fix qnn op parameters hint order (#9622)

    As the following parameters are both Expr.
    
    ```
    zero_point : tvm.relay.Expr
    scale : tvm.relay.Expr
    ```
    
    It really got me confused when I followed the Python argument hints to create a relay op without success.
    mhyang-pllab authored Jan 19, 2022
    Commit 6d68184
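    A usage sketch of the corrected hint order (assuming the standard
    relay.qnn.op.quantize signature, not code from this PR): both the scale
    and the zero point are passed as relay expressions.

    ```python
    from tvm import relay

    data = relay.var("data", shape=(1, 8), dtype="float32")
    scale = relay.const(0.25, "float32")  # scale : tvm.relay.Expr
    zero_point = relay.const(3, "int32")  # zero_point : tvm.relay.Expr

    q = relay.qnn.op.quantize(
        data, output_scale=scale, output_zero_point=zero_point, out_dtype="int8"
    )
    print(q)
    ```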
  6. Propagate ssh-agent authentication socket (#9926)

    In case SSH_AUTH_SOCK is defined, two items will be added to the
    docker run command:
    1) A propagated ssh authentication socket value, to support
       underlying ssh calls inside the running container.
    2) A mounted volume for the ssh channel.
    
    Co-authored-by: Samuel Panijel <samuel.panijel@arm.com>
    spanijel and spanijelatarm authored Jan 19, 2022
    Commit ac1a43b
  7. [TIR][USMP] Integrating USMP to AoT Executor (#9565)

    This commit integrates USMP with the AoT executor codegen. Additionally, this commit introduces two PassContext options to disable_usmp and disable_storage_rewrite.
    
    Moved PrintType from codegen_c.cc to codegen_source_base.cc to be accessible by source_module.cc
    
    Moved runtime::metadata to be ExecutorCodegenMetadata as it contains metadata produced by ExecutorCodegen for actual code generation (not a runtime component).
    manupak authored Jan 19, 2022
    Commit c3ace20
  8. Disallow copy to/from external HexagonBuffer (#9930)

    * Disallow copy to/from external HexagonBuffer
    
    * change nbytes -> allocation_nbytes for clarity
    
    * retrigger ci
    adstraw authored Jan 19, 2022
    Commit f1501d0
  9. [Fix] Fix flaky test of #9952 (#9958)

    * fix to stabilize the var orders when solving bounds in region analysis
    
    * change to std::find_if since num of vars is generally small
    wrongtest-intellif authored Jan 19, 2022
    Commit 611b430
  10. Add contribute page about CI (#9906)

    * Add contribute page about CI
    
    This adds some docs with a description of the TVM CI and some usage instructions to both make contributing more friendly and educate existing developers about how CI runs. Bikeshedding on content is welcome
    
    Note: The TODOs are blocked on some other PRs and will be done before landing
    
    * docker instructions
    
    * Comments
    
    * Rebase
    
    Co-authored-by: driazati <driazati@users.noreply.github.com>
    driazati and driazati authored Jan 19, 2022
    Commit 1351ede
  11. [bugfix] Fix the behavior of TVMScript printer (#9974)

    * upd
    
    * lint
    yzh119 authored Jan 19, 2022
    Commit 6286ac1
  12. Commit a9b1d5b
  13. [Relay] Add conv2d_backward_weight op (without topi) (#9954)

    * python plumbing
    
    * add cpp def
    
    * legalize worked
    
    * clean up
    
    * layout conversion doesn't work
    
    * extract wgrad body
    
    * fix convert layout
    
    * black
    
    * fix kernel size
    
    * revert irrelevant change
    
    * add doc, clarify the meanings of parameters
    
    * update layout convert
    
    * test passed
    
    * fixed layout conversion
    
    * update convert layout
    
    * remove print
    
    * remove layout convert for now
    
    * minor fix
    
    * removed unused import
    
    * add wgrad python reference
    
    * add test stub
    
    * add doc
    
    * test other stride and pad
    
    * tweak
    
    * more pylint filter
    
    * fix typo in doc
    
    * swap arg order (data, grad) to be consistent with conv2d_transpose(dgrad)
    masahi authored Jan 19, 2022
    Commit fd5915a
  14. [MetaSchedule] Schedule Rule: Add RFactor (#9975)

    * add rfactor
    
    * format
    
    * fix ci
    jinhongyii authored Jan 19, 2022
    Commit f9171f1
  15. Add Action to add cc'ed people as reviewers (#9934)

    * Add action to label mergeable PRs
    
    Developers often have to ping a committer once their PRs are both passing in CI and are approved. This helps facilitate this process by marking such PRs with a label `ready-for-merge` so committers can easily filter for outstanding PRs that need attention.
    
    * Fix lint and add tests
    
    * Add Action to add cc'ed people as reviewers
    
    This provides a mechanism for non-triager/reviewer/committer PR authors to request reviews through GitHub. Anyone that is referenced by `cc @username` in a PR body will be added as a reviewer (GitHub will limit the reviewers to those with actual permissions to leave reviews so the script to add can be simple).
    
    * remove merge bot stuff
    
    * Fix target triggers
    
    Co-authored-by: driazati <driazati@users.noreply.github.com>
    driazati and driazati authored Jan 19, 2022
    Commit bae144c
  16. Add runtime.ModuleGetFormat method enabling export of BYOC generated sources which require a .cpp/.cc file extension (#9243)
    
    * Allow export of C++ kernels using correct file extension
    
    * [WIP] Set module_key=c for CSourceCrtMetadataModuleNode to temporarily fix failing tests
    
    I realized that the module format `cc` is currently already used by the `CSourceCrtMetadataModuleNode` declared in `src/target/source/source_module.cc`.
    This needs to be discussed first, to decide whether the module_key should be changed or whether the test cases expecting the systemlib kernel (e.g. `default_lib0.c`) to have a `.c` extension should be updated instead.
    
    * Update Makefiles used by tests/python/relay/aot/ to support C++ file extensions
    
    AOT: Add c++ support to aot_test.mk
    AOT: Add c++ support to corstone300.mk
    
    * Add missing definition of GetFormat to cmsisnn and ethosn codegens (WIP)
    
    * Resolve PR comments
    
    * lint python/tvm/runtime/module.py
    
    * fix EthosUModuleNode for CI
    
    * Fix: detect empty module.format
    
    * Add error message to assertion
    
    * Lint python/tvm/runtime/module.py
    PhilippvK authored Jan 19, 2022
    Commit cc5382e
  17. Tvmc python tutorial (#9633)

    * finish rpc and shape_dict in tut
    
    * added more to rpc
    
    * tutorial edits
    
    * added tutorial to docs in howto
    
    * accidentally had two copies of tutorial
    
    * Update gallery/how_to/use_tvms_python_api/tvmc_python.py
    
    Co-authored-by: Leandro Nunes <leandro.nunes@arm.com>
    
    * Update gallery/how_to/use_tvms_python_api/tvmc_python.py
    
    Co-authored-by: Leandro Nunes <leandro.nunes@arm.com>
    
    * Update gallery/how_to/use_tvms_python_api/tvmc_python.py
    
    Co-authored-by: Leandro Nunes <leandro.nunes@arm.com>
    
    * Update gallery/how_to/use_tvms_python_api/tvmc_python.py
    
    Co-authored-by: Leandro Nunes <leandro.nunes@arm.com>
    
    * Apply suggestions from code review
    
    Co-authored-by: Leandro Nunes <leandro.nunes@arm.com>
    
    * Update gallery/how_to/use_tvms_python_api/tvmc_python.py
    
    Co-authored-by: Leandro Nunes <leandro.nunes@arm.com>
    
    * Update gallery/how_to/use_tvms_python_api/tvmc_python.py
    
    Co-authored-by: Leandro Nunes <leandro.nunes@arm.com>
    
    * added Leandro's suggestions
    
    * added example model at top
    
    * added example model, blacked it
    
    * trying to get docs to build
    
    * underline too short for title
    
    * forgot Jetson info, added Chris H comments
    
    * reformatting text
    
    * black
    
    * hitting code block issue, trying to debug
    
    * added spaces after the python codeblock
    
    * black
    
    * changing formatting
    
    * touching up more edits'
    
    * more touchups
    
    * changed location of file to tutorial section
    
    * changing doc location
    
    * broke the order of the docs somehow
    
    * fixed it yayy
    
    * added additional indentation
    
    * black'd
    
    Co-authored-by: CircleSpin <jocelyn@pop-os.localdomain>
    Co-authored-by: Leandro Nunes <leandro.nunes@arm.com>
    3 people authored Jan 19, 2022
    Commit 14d0187

Commits on Jan 20, 2022

  1. Commit 73aa415
  2. [microNPU][2d] Add more Part matchers to cascader (#9785)

    * [microNPU][2d] Add more Part matchers for the cascader
    
    Adds Part matchers for ethosu_depthwise_conv2d,
    ethosu_pooling and ethosu_binary_elementwise. Also
    adds additional testing for the CascaderGraph
    creation.
    
    Co-authored-by: Jacob Bohlin <jacob.bohlin@arm.com>
    
    * Extended testing for block config
    
    * Add test guards
    
    Co-authored-by: Matthew Barrett <matthew.barrett@arm.com>
    jacobbohlin and mbaret authored Jan 20, 2022
    Commit bcdc345
  3. [CI] hot fix Sphinx (#9998)

    Hzfengsy authored Jan 20, 2022
    Commit 589fc01
  4. [microNPU] Move optimization passes to be a module pass and ensure they are running (#9831)
    
    Moves LayoutOptimizer and LUTOptimizer passes to be a module pass,
    rather than a function pass. This is because it was found that these
    passes were not running in the NPU compilation flow. In addition, a
    test for both LayoutOptimizer and LUTOptimizer has been added to check
    that the passes are running in the compilation pipeline of the NPU.
    
    Change-Id: I5145c6f02eeb0daea3cdba56198e0804ec32f351
    lhutton1 authored Jan 20, 2022
    Commit e390d9e
  5. [CI] Fix Rust path and remove wasmtime from ci_qemu (#10001)

    This fixes the Rust path to ensure `cargo` is accessible in the
    container.
    
    As ci_qemu is targeted at microTVM, it likely won't make use of wasmtime
    as a dependency - it's used instead for the JS and WASM standalone applications as
    far as I can see.
    Mousius authored Jan 20, 2022
    Commit 099ebaa

Commits on Jan 21, 2022

  1. [Relay] Fix a bug in tensor_array_scatter (#6890)

    * [Relay] Fix a bug in tensor_array_scatter
    
      tensor_array_scatter constructs helper functions according to the dtype
      and shape of the element. When there are multiple scatter operations with
      the same dtype and element shape but different indices_shape, there will
      be a name conflict in the prelude.
    
    * Refine get_name
    lixiaoquan authored Jan 21, 2022
    Commit a427efb
  2. [Minor] Typo Fixes (#10000)

    * Fix typos.
    
    * Missed funtion -> function.
    zxybazh authored Jan 21, 2022
    Commit e05a62b
  3. Make cc bot skip errors (#9988)

    * [skip ci] Make cc bot skip errors
    
    This adds some better logging and ignores errors (this job shouldn't ever show up as a PR failure) so we can diagnose things like https://github.com/apache/tvm/runs/4873810315?check_suite_focus=true
    
    * Submit cc'ed reviewers one at a time
    
    Co-authored-by: driazati <driazati@users.noreply.github.com>
    driazati and driazati authored Jan 21, 2022
    Commit 2426749
  4. Don't use std::move in WithFields (#10009)

    * Don't use std::move in WithFields
    
    * lint
    electriclilies authored Jan 21, 2022
    Commit 11a84c1
  5. [LLVM][Hexagon] Revert LLVM header change for version 14 (#10006)

    * Revert LLVM header change
    
    * Trigger
    mehrdadh authored Jan 21, 2022
    Commit 2a91f0d
  6. [MetaSchedule] Post Processor: Rewrite Reduction Block (#10013)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    7 people authored Jan 21, 2022
    Commit d97274c
  7. [frontend][keras] Add support for TimeDistributed (#7006)

    * First pass on modifying Keras importer to handle TimeDistributed
    
    * Use squeeze inside TimeDistributed, add tests
    
    * linter fixes
    
    * More linting
    
    * Even more linting
    
    * Fix unused argument annotations
    
    * Forgot one pylint annotation
    
    * Forgot to set up data layout in _convert_activation
    
    * Decouple data_layout from etab
    
    * Linting fix
    
    * Forgot to set data_layout argument
    
    * Missed an etab.data_format, also test_conv1d was not in the test file's main
    
    * Rebase fixes
    
    * Linting fix
    
    * _convert_lambda needs a data layout argument too
    
    * linting fix too
    
    * Lint the test file too
    
    * Redundant variables
    
    * Simplify further
    
    * Another simplification
    
    Co-authored-by: Steven Lyubomirsky <slyubomirsky@octoml.ai>
    slyubomirsky and slyubomirsky authored Jan 21, 2022
    Commit 25c8f4c
  8. Auto-discover C/C++ compiler instead of hardcoding g++ (#10007)

    Some platforms (e.g. FreeBSD) use clang as the default OS compiler,
    and there is no g++.
    Krzysztof Parzyszek authored Jan 21, 2022
    Commit 751f83b
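    A minimal sketch of the idea (not the actual TVM change): probe PATH for a
    working C++ compiler instead of assuming g++ exists.

    ```python
    import os
    import shutil

    def find_cxx_compiler():
        # Prefer an explicit CXX override, then common compiler names.
        for name in (os.environ.get("CXX"), "g++", "clang++", "c++"):
            if name and shutil.which(name):
                return shutil.which(name)
        raise RuntimeError("no C++ compiler found on PATH")

    print(find_cxx_compiler())
    ```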
  9. [Docker] Relax name check (#10011)

    Fix an issue where a user name like aaa.bb can't be added to the docker container
    lixiaoquan authored Jan 21, 2022
    Commit 81b66e6
  10. [MetaSchedule] Schedule Rule: Cross Thread Reduction (#9994)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    7 people authored Jan 21, 2022
    Commit 1ac01b4

Commits on Jan 22, 2022

  1. [TOPI,CUDA] Don't enable cudnn conv2d kernel if it is not supported (#10021)
    
    * [TOPI,CUDA] Don't enable cudnn conv2d kernel if is not supported
    
    Specifically, check that layout is not NCHW if datatype is int8.
    
    * remove all conv2d_cudnn int8 support
    Tristan Konolige authored Jan 22, 2022
    Commit e9ee73f
  2. Commit 89fa241
  3. Add user-configurable backtrace limit (#10025)

    A spin off of #9872, this adds an env variable `TVM_BACKTRACE_LIMIT` which can be set to an integer to limit the frames printed out on errors. This can make it easier to run interactive TVM scripts with errors since the stack traces are often long (70+ frames).
    
    ```bash
    export TVM_BACKTRACE_LIMIT=5
    python some_code_with_an_error.py
    ```
    
    cc @tkonolige
    
    Co-authored-by: driazati <driazati@users.noreply.github.com>
    driazati and driazati authored Jan 22, 2022
    Commit 9a6423c
  4. [MetaSchedule] disallow_dynamic_loop (#9997)

    * [MetaSchedule] disallow_dynamic_loop
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    * Update src/meta_schedule/postproc/disallow_dynamic_loop.cc
    
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    7 people authored Jan 22, 2022
    Commit 64f2939
  5. [CUDNN] Support gradient kernels (#9986)

    * Dgrad nchw, nhwc, fp16 working
    
    commit 426e5dca446a27da49270f45171b58f1bfa21fa9
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 11:48:53 2022 +0900
    
        black
    
    commit 211a58b80f4d0f0b5b0230720e41f35e50cb1eaf
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 11:43:52 2022 +0900
    
        fp16 also works
    
    commit c2a34d473b063873628bff00e51a44cd8e4d0e4f
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 11:36:36 2022 +0900
    
        nhwc test also worked
    
    commit c0609ab147fef30c230a94d16b6c1ba35f7dd9c0
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 11:21:23 2022 +0900
    
        nchw test worked
    
    commit 2bf68c72763708151e9f49f09916a210b2547be8
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 10:41:35 2022 +0900
    
        add test stub
    
    commit c86b1288d5e371f12cba4e1b1866966cb9264401
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 10:32:09 2022 +0900
    
        add python definition stub
    
    commit 3166952f9673376801bf4b5b39eeb6f89452f30a
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 06:57:18 2022 +0900
    
        bwd filter compiled
    
    commit e311ba3d05c5f9424ecb952cb5a520ce81a0828a
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 06:27:55 2022 +0900
    
        dgrad compiled
    
    commit 47f35beb5eeeb7cbf9f6ec7cf8f5c80c65e8da46
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Tue Jan 18 06:16:43 2022 +0900
    
        add dgrad stub
    
    commit ebed032d15b1c3895f541c46ce5d80b6dd769034
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Mon Jan 17 17:01:56 2022 +0900
    
        cpplint
    
    commit 834f54a8c13512130e7d91ca0f54268dc06c5481
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Mon Jan 17 16:55:58 2022 +0900
    
        remove cudnn get output
    
    commit dcbd9c95fdb8ffef9db9c2350430b270461a31c3
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Mon Jan 17 16:28:07 2022 +0900
    
        more refactor
    
    commit 146464e8496fff972bdb1687c4e9d432fe3278d5
    Author: Masahiro Masuda <masahi129@gmail.com>
    Date:   Mon Jan 17 15:57:35 2022 +0900
    
        Introduce SetConvdescriptors to refactor cudnn/conv_forward.cc
    
    * add python function for cudnn wgrad
    
    * adding wgrad test
    
    * black
    
    * wgrad nchw and nhwc worked
    
    * remove bwd algo name stuff
    
    * compute output shape properly
    
    * swap arg order in wgrad
    
    * add kernel size arg in test
    
    * black
    
    * cleanup
    
    * more fix
    
    * fix dgrad test
    
    * support running relay conv2d_backward_weight directly with cudnn
    
    * black
    
    * refactor reference function to support nhwc
    
    * removed unused function
    
    * lint
    
    * enable offloading conv2d_transpose to cudnn dgrad
    
    * relax tol
    
    * name fix, remove print
    masahi authored Jan 22, 2022
    Commit d35b858

Commits on Jan 23, 2022

  1. Commit 7bfb11b
  2. [MetaSchedule] Mutator: Mutate compute location (#10028)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    7 people authored Jan 23, 2022
    Commit fc1814e
  3. [MetaSchedule] Post Processor: Rewrite Unbound Block (#10027)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    7 people authored Jan 23, 2022
    Commit cc67040
  4. [MetaSchedule] Schedule Rule: Parallelize-Vectorize-Unroll (#10033)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    7 people authored Jan 23, 2022
    Commit de01c3e

Commits on Jan 24, 2022

  1. [microNPU] Add support for requantize (#9910)

    * [microNPU] Add support for requantize
    
    Adds support for stand-alone requantize operation which is legalized to
    an identity operation on the NPU.
    
    Change-Id: Ie2450c5fc72f405eddf517593236074aa4716c3b
    
    * fix concatenate tests failing due to not being bit exact
    
    Since requantize is now offloaded, concatenate tests were failing
    due to a reference not being used.
    
    Change-Id: I44b26b5daecfefb776ca19e6646f3690f5570f52
    
    * test multiple requantize offload
    
    Change-Id: I60a3283461a7a7083c05289e84f570698388077b
    
    * address comments
    
    Change-Id: I7196a0fa468eb7c6a96f2b8a68f3a2dcf5a5693c
    lhutton1 authored Jan 24, 2022
    Commit 74a2fa8
  2. Commit d066441
  3. [CMSIS-NN] Update microNPU demo to include offloading to CMSIS-NN (#9979)
    
    * [CMSIS-NN] Update microNPU demo to include offloading to CMSIS-NN
    
    Change-Id: I6a3ba9db3e3cb2bd7c10383ebd52f9a1cdad74d0
    
    * [CMSIS-NN] Update microNPU demo to include offloading to CMSIS-NN
    
    * Addressing comments
    
    Change-Id: I98fcdf95bf408700968827e1abd084a916b3b21c
    
    * [CMSIS-NN] Update microNPU demo to include offloading to CMSIS-NN
    
        * Addressing comments
        * Remove build folder before running demo to address #10020
    
    Change-Id: Ifa7ad3ff431f427f8afb8b3c9f06711b3b59ad62
    
    * Correctly filter tvmc Targets
    
    Fixed logic to check for >2 TVM Targets to be based on non-hybrid
    Targets only
    
    Co-authored-by: Chris Sidebottom <chris.sidebottom@arm.com>
    grant-arm and Mousius authored Jan 24, 2022
    Commit 65b4b09
  4. [QNN] Add qnn.rsqrt op (#9982)

    * Add qnn.rsqrt op
    
    * Add comment
    sfvaroglu authored Jan 24, 2022
    Commit 6f2b35f
  5. [Hexagon] Do not auto-build apps when building TVM (#9970)

    * [Hexagon] Do not auto-build apps when building TVM
    
    The Hexagon cmakes have recently become unwieldy due to a complex
    network of dependencies between various automatically built components.
    This was in large part because of trying to automatically build some
    apps, which then tried to build TVM runtimes again, but with their
    own configurations.
    
    This patch removes the ability to automatically build any Hexagon-
    -related apps from the main TVM build. The following cmake options
    are now deprecated:
      - `USE_HEXAGON_LAUNCHER`
      - `USE_HEXAGON_PROXY_RPC`
    
    In order to build the binaries needed for HexagonLauncher from
    tvm.contrib.hexagon:
      - Build TVM+runtime for x86, with codegen for Hexagon enabled.
        This can be done via `USE_HEXAGON_DEVICE=sim` or `target`.
      - Build Android runtime and tvm_rpc with `-DUSE_RPC=ON`,
        `-DUSE_CPP_RPC=ON`, and `-DUSE_HEXAGON_RPC=ON`.
      - Build Hexagon runtime with `-DUSE_HEXAGON_RPC=ON`, and
        `-DBUILD_STATIC_RUNTIME=ON`.
    
    * Add README.md
    
    * Restart CI
    
    * Add optional variable to set output directory
    Krzysztof Parzyszek authored Jan 24, 2022
    Commit 73bbfbb

Commits on Jan 25, 2022

  1. [Runtime][PipelineExecutor] Add Pipeline Executor Interface (#10010)

    Adding interfaces into Pipeline Executor to "run", "stop", "set input",
    and "get input" from the pipeline executor.
    
    In this patch, we also implemented the "BackendRuntime" structure to
    wrap the graph runtime interface in order to support  pipeline executor
    interface and implement data copy method. This method is used to
    transfer data between two backend runtimes.
    huajsj authored Jan 25, 2022
    Commit 6720d35

Commits on Jan 26, 2022

  1. Commit 2830c96
  2. [CUTLASS] Profile only the largest-possible alignment by default (#10036)
    
    * introduce profile_all_alignments option
    
    * add profile_all_alignment option to API
    
    * wip
    
    * fixed dynamic case
    
    * black
    
    * update gen_gemm too
    
    * minor improvement
    
    * fix
    
    * all tests work
    
    * add doc
    
    * fixed for sm = 75 case
    
    * fix typo
    
    * remove unused import
    
    * profile_all -> find_first_valid
    
    * fix
    masahi authored Jan 26, 2022
    Commit 1b9b05e
  3. [Meta Schedule] Add ApplyHistoryBest Meta Schedule Context (#10049)

    * Add ApplyHistoryBest.
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    
    * Retrigger CI.
    
    * Update integration.py
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    7 people authored Jan 26, 2022
    Commit 94c4e0e
  4. [MetaSchedule] Mutator Rule: Mutate Unroll (#10045)

    * mutate-unroll
    
    * mutate-unroll
    spectrometerHBH authored Jan 26, 2022
    Commit 88cbf1b
  5. [TIR][Schedule] Blockize and Tensorize (#9871)

    * WIP
    
    * WIP
    
    * WIP
    
    * test cases
    
    * add examples
    
    * lint
    
    * Amend co-authors information
    
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    
    * WIP
    
    * address comments and changed tensorized comparator
    
    * update
    
    * nit
    
    * fix example
    
    * lint
    
    * lint
    
    * lint
    
    * remove unused
    
    * trigger ci
    
    * clang-format
    
    * fix
    
    * rebase
    
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    7 people authored Jan 26, 2022
    Commit 5e7438f
  6. [microTVM][tutorial] Add ENV variable to enable testing on physical hardware (#9993)
    
    * Add env variable to micro tflite tutorial
    
    * Address @gromero comments
    
    * address @areusch comment
    
    * fix scope
    
    * trigger
    
    * trigger
    mehrdadh authored Jan 26, 2022
    Commit 884fee4
  7. [microNPU] Refactor base address determination to codegen (#9929)

    This commit introduces BaseAddress ObjectRef to determine
    base addresses in the codegen for microNPU. This is
    required when multiple memory pools become available. Thus,
    base addresses could not be statically determined in the
    source module.
    manupak authored Jan 26, 2022
    Commit b972877
  8. Add FP requantize flow. Set float32 flow by default for llvm x86 targets with sse4.1 support (#9637)
    Icemist authored Jan 26, 2022
    Commit ffff8dd
  9. [Relay][DefuseOps pass] bug fix: To support function body types other than call node (#10069)
    
    Co-authored-by: pranav jonnalagadda-SJ1 Eng_ML <pjonnalagadd@sj1mach1.caveonetworks.com>
    pranavjon and pranav jonnalagadda-SJ1 Eng_ML authored Jan 26, 2022
    Commit abdccf9
  10. [Fix Bug] fix the bug of tensorflow frontend when parsing Range layer (#9999)
    
    Co-authored-by: wangjiuyang <wang.jiuyang@intellif.com>
    ninesheep and wangjiuyang authored Jan 26, 2022
    Commit 92cd754
  11. [MetaSchedule][M4a] Schedule Rule: Multi-Level-Tiling (#10043)

    * multi level tiling
    
    * remove tensor core related code
    
    * pylint
    
    * fix
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    jinhongyii and junrushao authored Jan 26, 2022
    Commit ffbe491
  12. Revert "[Frontend] Add Span filling for frontends to Relay (#9723)" (#10072)
    
    Because of the failure of LSTM conversion from Pytorch
    chunit-quic authored Jan 26, 2022
    Commit 095b639
  13. Improve the tensorflow frontend _test_spop_resource_variables to support tensorflow 2.6 (#9978)
    
    On tensorflow 2.4 the test is expected to fail, as the generated graph is not frozen.
    On tensorflow 2.6 the generated graph is identified as frozen, so the test is not needed.
    ophirfrish authored Jan 26, 2022
    Commit cc8a7a2
  14. [MetaSchedule] postproc: rewrite_parallel_vectorize_unroll (#10071)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    7 people authored Jan 26, 2022
    1935341
  15. Clear warnings when building with MSVC. (#10059)

    * Fix warning "unsafe mix of type 'const int64_t' and type 'bool' in
    operation" occurring in tvm::tir::HasAnn
    
    * Suppress warning "destructor never returns, potential memory leak"
    occurring in tvm::runtime::detail::LogFatal::~LogFatal
    Icemist authored Jan 26, 2022
    d29c801
  16. [Makefile] Fixed error in "make clean" (#10048)

    The top-level makefile should delegate `make clean` to the cmake
    folder of each enabled build, similar to the existing delegation of
    `make all` and `make runtime`.
    Lunderberg authored Jan 26, 2022
    622a03c
  17. [Relay] QLinearMatMul allows 1D weight_scale, weight_zero_point inputs (#10047)
    
    * fix after cr
    
    * fix after cr 2
    
    * emptycommit
    
    * emptycommit 2nd try
    yuanfz98 authored Jan 26, 2022
    90e454a
  18. Don't explicitly link libgcc.a into libtvm_runtime.so on Android (#10052)
    
    Setting the Android toolchain via CMAKE_TOOLCHAIN_FILE already causes the
    necessary flags to be added. In addition, newer versions of the Android NDK
    no longer ship libgcc.a, so this change takes care of that as well.
    Krzysztof Parzyszek authored Jan 26, 2022
    09daa88
  19. Change function constructors to WithFields (#9690)

    * Change function constructors to WithFields
    
    Get rid of std::moves, they were causing problems
    
    * Fix bad rebase
    
    * flaky
    
    * try to trigger ci
    
    * try again
    electriclilies authored Jan 26, 2022
    248ad45

Commits on Jan 27, 2022

  1. Document missing qnn operators (#10077)

    The following qnn operators were missing from the relay documentation.
    Ramana Radhakrishnan authored Jan 27, 2022
    a40de47
  2. Add temp git dir to test_cc_reviewers test case (#10058)

    This decouples the test_cc_reviewers test case from the user's git configuration. 
    The implementation reuses the TempGit structure from test_skip_ci to always 
    use a fresh git environment.
    jacobbohlin authored Jan 27, 2022
    e42b9a3
  3. [CI] Fix Rust permissions for wasmtime and sccache (#10015)

    Previously this was run as part of `ubuntu_install_rust.sh`; now that we have multiple scripts that write as the container build user, we have to fix up the permissions each time to ensure future users don't break.
    Mousius authored Jan 27, 2022
    f161bc2
  4. 4b0558c
  5. f93f2a6
  6. [CI][Fix] Remove additional qnn.op.transpose_conv2d from docs (#10083)

    Fixes CI after #10077, and replaces misuse elsewhere.
    
    Change-Id: I095fc8ea2b8d268b09538832cba1f5482a73a9d9
    lhutton1 authored Jan 27, 2022
    fa317ed

Commits on Jan 28, 2022

  1. [PyTorch] Fix rsub type (#10090)

    * [PyTorch] Fix rsub type
    
    * fix
    comaniac authored Jan 28, 2022
    e6af874
  2. [microNPU] Removing constant args from PrimFunc (#9951)

    Before this commit, the microNPU codegen created the PrimFunc as if it
    accepted constants from the callee. This commit removes the constants from
    the PrimFunc's arguments, since they are not provided by the main function.
    manupak authored Jan 28, 2022
    7b9fd1e
  3. [Relay] fix incorrect binding of Lets in ANF conversion (#10078)

    * fix incorrect binding of lets in ANF conversion
    
    * add test case
    
    * remove really weird auto-import from debugging
    
    * address comments
    altanh authored Jan 28, 2022
    82d4d0f
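    For reference, a minimal sketch of running the pass this fix touches (`ToANormalForm`) on a small Relay function; the expression here is illustrative, not the regression case from the commit:

    ```python
    import tvm
    from tvm import relay

    # A small function with a reused sub-expression; ANF conversion must bind
    # it with a let in the scope where it is actually used.
    x = relay.var("x", shape=(4,), dtype="float32")
    y = relay.add(x, relay.const(1.0))
    f = relay.Function([x], relay.multiply(y, y))

    mod = tvm.IRModule.from_expr(f)
    mod = relay.transform.ToANormalForm()(mod)
    print(mod)
    ```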
  4. [microTVM] Update Zephyr to 2.7 (#10094)

    This supports the reference system added in #9853
    Mousius authored Jan 28, 2022
    3af9c30
  5. [Runtime][PipelineExecutor] Pipeline Executor Sequential execution (#10082)
    
    * [Runtime][PipelineExecutor] Pipeline Executor Sequential execution
    
    First, add the "get output" logic. Second, add the sequential execution
    logic of the pipeline executor. Finally, test the pipeline executor interface
    and check the output data.
    
    * Address review comments.
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    
    * trigger build.
    
    Co-authored-by: Cody Yu <comaniac0422@gmail.com>
    huajsj and comaniac authored Jan 28, 2022
    80d4d05
  6. 85d42f8
  7. [Hexagon] Update hexagon API build instruction and cleanup hexagon_proxy_rpc (#10068)
    
    * Fix hexagon api build and Update Readme
    
    * Cleanup hexagon_proxy_rpc
    
    * Target Hack
    
    * Remove hack
    
    * address @cconvey comments
    
    * remove the rest of proxy rpc
    mehrdadh authored Jan 28, 2022
    6a274af

Commits on Jan 29, 2022

  1. [MetaSchedule][M4a] Mutator: Mutate-Tile-Size (#10092)

    * [MetaSchedule][M4a] Mutator: Mutate-Tile-Size
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    * Python 3.8 has no `math.prod`
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    7 people authored Jan 29, 2022
    4d0dac3
  2. [TE] Fix Const Int bound analysis to handle uints for division (#10102)

    * case to handle uints
    
    * add unit test
    AndrewZhaoLuo authored Jan 29, 2022
    21154c2
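    A rough sketch of exercising the const-int-bound analyzer on the unsigned-division case (exact API details may vary by TVM version):

    ```python
    import tvm
    from tvm import te

    analyzer = tvm.arith.Analyzer()
    x = te.var("x", dtype="uint32")
    analyzer.update(x, tvm.arith.ConstIntBound(0, 100))

    # Division of an unsigned value must yield an unsigned (non-negative) bound.
    bound = analyzer.const_int_bound(tvm.tir.div(x, tvm.tir.const(4, "uint32")))
    print(bound.min_value, bound.max_value)  # expected roughly 0 .. 25
    ```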
  3. [Op][Topi] Gather, GatherND, Take can accept unsigned integers as indices (#10080)
    
    * take rel
    
    * gather and more tests
    
    * gathernd case
    
    * lint
    
    * remove test which invalidates take preconditions
    
    * re-add test
    
    * fix dumb test failure oopsie
    AndrewZhaoLuo authored Jan 29, 2022
    0fb5ae2
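    A minimal sketch of the newly supported case, using `relay.take` with `uint32` indices:

    ```python
    import numpy as np
    import tvm
    from tvm import relay

    data = relay.var("data", shape=(8,), dtype="float32")
    indices = relay.var("indices", shape=(3,), dtype="uint32")  # unsigned indices
    func = relay.Function([data, indices], relay.take(data, indices, axis=0))

    result = relay.create_executor(mod=tvm.IRModule.from_expr(func)).evaluate()(
        np.arange(8, dtype="float32"), np.array([0, 3, 7], dtype="uint32")
    )
    print(result)
    ```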
  4. [MetaSchedule][M4b] Testcases for TensorRT builder/runner (#10055)

    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    7 people authored Jan 29, 2022
    ba65197
  5. [MetaSchedule] postproc: rewrite_cooperative_fetch (#10081)

    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    7 people authored Jan 29, 2022
    538347e

Commits on Jan 30, 2022

  1. [Op][Topi] 5 ops can accept unsigned integers as indices (#10098)

    * tests passed
    
    * reformat
    
    * add uint test for unravel_index
    yuanfz98 authored Jan 30, 2022
    1f9c76b
  2. [MetaSchedule][M4a] User-API: Tune-TE/TIR/Relay (#10079)

    * Add tuning scripts for tir, te & relay.
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
    Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
    Co-authored-by: Hongyi Jin <3231950289@qq.com>
    Co-authored-by: Wuwei Lin <wuwei@apache.org>
    Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
    
    Minor fix.
    
    Nits.
    
    Add back tests.
    
    * slightly improve tune.py
    
    Co-authored-by: Junru Shao <junrushao1994@gmail.com>
    zxybazh and junrushao authored Jan 30, 2022
    779dc51

Commits on Jan 31, 2022

  1. Update nn.rs (#10063)

    gussmith23 authored Jan 31, 2022
    5029c9d
  2. [Fix Bug] fix the bugs of keras frontend when parsing LSTM, GRU, RNN layers. (#9850)
    
    * [Fix Bug]fix the bugs of keras frontend when parsing LSTM, GRU, RNN layers.
    
    * Reformat files with black formatter.
    
    Co-authored-by: AndrewZhaoLuo <andrew.zhao.luo@gmail.com>
    new-TonyWang and AndrewZhaoLuo authored Jan 31, 2022
    d8d0053
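    For context, a minimal sketch of importing a Keras model with an LSTM layer through the Relay Keras frontend (input naming and layout details can vary across Keras/TVM versions):

    ```python
    import tensorflow as tf
    from tvm import relay

    # Tiny Keras model containing an LSTM layer.
    model = tf.keras.Sequential([tf.keras.layers.LSTM(16, input_shape=(10, 8))])

    # Map the input layer name to its (batch, time, feature) shape.
    shape_dict = {model.input_names[0]: (1, 10, 8)}
    mod, params = relay.frontend.from_keras(model, shape_dict)
    print(mod)
    ```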
  3. [Caffe Frontend] Add support for Power layer (#9655)

    Co-authored-by: tangkun <kun.tang@hexintek.com>
    KennyTang1988 and tangkun authored Jan 31, 2022
    f2d60fe
  4. Use ci.py explicitly in docs building instructions (#9971)

    This adds `ci.py` to the docs to make it clearer how to build the docs locally. It also re-arranges CI, following the merging of all CI steps to run concurrently, since there is no need to run the Sphinx precheck during GPU unit tests. The precheck is still kept in the docs step as a way to bail out quickly on formatting errors, so the full tutorials don't get built unnecessarily.
    
    Co-authored-by: driazati <driazati@users.noreply.github.com>
    driazati and driazati authored Jan 31, 2022
    b365586
  5. [microTVM] Include standalone_crt dependencies in MLF (#10095)

     * Adds runtime to AOTExecutorFactoryModule
     * Standalone CRT files are added to MLF tarball if runtime is crt
     * external_dependencies info added to metadata.json for crt runtime
     * microNPU demo Makefile references standalone crt files from MLF tarball
    grant-arm authored Jan 31, 2022
    3b20c21
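    A sketch of producing a Model Library Format archive with the CRT runtime and checking that the standalone_crt sources are now included; the module here is a toy example, not the test from the commit:

    ```python
    import tarfile
    import tvm
    from tvm import relay
    from tvm.relay.backend import Runtime

    x = relay.var("x", shape=(4,), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], relay.add(x, relay.const(1.0))))

    # Build for the C runtime and export Model Library Format.
    lowered = relay.build(mod, target="c", runtime=Runtime("crt", {"system-lib": True}))
    tvm.micro.export_model_library_format(lowered, "module.tar")

    with tarfile.open("module.tar") as tar:
        crt_files = [n for n in tar.getnames() if "standalone_crt" in n]
    print(crt_files[:5])
    ```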
  6. [ETHOSN] Per-tensor support for int8 operations (#10018)

    * Per-axis quantization to follow
    Leo-arm authored Jan 31, 2022
    3de25b8
  7. [CI] Update DGL in gpu image (#10111)

    * validating ci_gpu:20220128-070420-fa317edf7
    * remove gcn tutorial workaround
    * update ci-gpu image to v0.81
    masahi authored Jan 31, 2022
    1d01e28
  8. [microNPU] Add support for nearest neighbor and bilinear upsampling (#9841)
    
    * [microNPU] Add support for nearest neighbor and bilinear upsampling
    
    Adds support for 2x2 nearest neighbor and bilinear upsampling. In the
    case of bilinear upsampling with align_corners set to true, the
    upsampling size must be `2*input_size - 1` (as opposed to `2*input_size`).
    
    Change-Id: I95d215eabfaac983629dcdedcda2b90efb8e0ddf
    
    * rebase and add support for no-upsampling case.
    
    Change-Id: I840d8ee3671a40c5c99f22119442c349dbed39cf
    lhutton1 authored Jan 31, 2022
    02a7a41
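    Illustrating the size constraint described above with Relay's resize op (a sketch, not the NPU test itself):

    ```python
    from tvm import relay

    in_h = in_w = 8
    data = relay.var("data", shape=(1, in_h, in_w, 3), dtype="int8")

    # Plain 2x nearest-neighbor upsampling: output size is 2*input_size.
    nearest = relay.image.resize2d(
        data, size=(2 * in_h, 2 * in_w), layout="NHWC", method="nearest_neighbor"
    )

    # Bilinear with align_corners: output size must be 2*input_size - 1.
    bilinear = relay.image.resize2d(
        data,
        size=(2 * in_h - 1, 2 * in_w - 1),
        layout="NHWC",
        method="linear",
        coordinate_transformation_mode="align_corners",
    )
    print(relay.Function([data], relay.Tuple([nearest, bilinear])))
    ```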
  9. [Bugfix][Op] Fix shape inference of adv_index (#9717)

    * init
    
    * test
    
    * lint
    hgt312 authored Jan 31, 2022
    dad8f62
  10. [ add ] barebone hook for dense ops

    AD1024 authored and gussmith23 committed Jan 31, 2022
    62e161f
  11. 6b4abb9
  12. b6e357b
  13. f25d3fc
  14. 829634b
  15. Add in an example of the dumping mode and some description of the changes in the readme
    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    acf14a9
  16. 751df45
  17. c517626
  18. aab7be4
  19. move runtime code to codegen.cc

    AD1024 authored and gussmith23 committed Jan 31, 2022
    a693b9f
  20. [ add ] vta matmul test

    AD1024 authored and gussmith23 committed Jan 31, 2022
    1616d2a
  21. 79c9238
  22. Add note about test to the readme

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    5a2b5b0
  23. 74a3605
  24. [FIX] BYOC compilation error due to missing files (#1)

    * [ add ] vat_matmul runtime files
    
    * [ update ] get rid of pushing 0s to ACC
    
    * add comment about the change in codegen.cc
    
    * restore ACC buffer initialization
    
    * [ add ] compute missing fields of VTA config
    AD1024 authored and gussmith23 committed Jan 31, 2022
    e5a97a2
  25. c8e7170
  26. 23499c4
  27. [hotfix] Typo in ila_converter

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    393717a
  28. Rebase fixes

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    07d1b13
  29. end-to-end codegen for ILA-VTA

    AD1024 authored and gussmith23 committed Jan 31, 2022
    2fbdc3f
  30. [ add ] tests

    AD1024 authored and gussmith23 committed Jan 31, 2022
    4988b6b
  31. ignore instr_log

    AD1024 authored and gussmith23 committed Jan 31, 2022
    b91106b
  32. tweak PR

    AD1024 authored and gussmith23 committed Jan 31, 2022
    b5c5113
  33. get rid of warnings

    AD1024 authored and gussmith23 committed Jan 31, 2022
    beb7ef9
  34. remove logging in pattern matching

    AD1024 authored and gussmith23 committed Jan 31, 2022
    3ffed80
  35. e664064
  36. [ init ] bias add codegen

    AD1024 authored and gussmith23 committed Jan 31, 2022
    eb07d99
  37. add data loading

    AD1024 authored and gussmith23 committed Jan 31, 2022
    0bc1ca2
  38. [ add ] bias add test case

    AD1024 authored and gussmith23 committed Jan 31, 2022
    74ae7e1
  39. [ fix ] datatype conversion

    AD1024 authored and gussmith23 committed Jan 31, 2022
    2d13366
  40. change to int8 inputs

    AD1024 authored and gussmith23 committed Jan 31, 2022
    1e2e4f1
  41. [ init ] relu runtime

    AD1024 authored and gussmith23 committed Jan 31, 2022
    1485122
  42. [ add ] relu runtime code

    AD1024 authored and gussmith23 committed Jan 31, 2022
    156cdf4
  43. Add exact matcher

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    f8d8cbd
  44. fix data loading

    AD1024 authored and gussmith23 committed Jan 31, 2022
    6ccaa55
  45. [ update ] tests

    AD1024 authored and gussmith23 committed Jan 31, 2022
    c722868
  46. [ refactor ] code

    AD1024 authored and gussmith23 committed Jan 31, 2022
    0de8947
  47. [ add ] comments

    AD1024 authored and gussmith23 committed Jan 31, 2022
    d74598f
  48. fc67b74
  49. Correct inaccurate comment

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    67903a1
  50. Reformat comment

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    180e8a9
  51. Throw in refs because why not

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    9a3300c
  52. 71f7eb0
  53. 2267c03
  54. 3f838dc
  55. Unused function

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    9fd41db
  56. 94eec1a
  57. Incorrect dimension

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    6155d28
  58. 502f09c
  59. [REFACTOR] Compile to ILA Asm (#11)

    * change to uint8_t in ila runtime
    
    * [ refactor ] pre-compile to ILA asm
    
    * [ impl ] jit
    
    * [ add ] mlp model
    
    * [ add ] quantized model in PT
    
    * [ fix ] run quantized
    
    * [ refactor ] AoT compiler
    
    * hmm? I dont remember I touched this file
    
    * [ fix ] naming issue and resolve some warnings
    
    * [ fix ] turn off size check
    
    * [ refactor ] cast according to annotation
    
    * [ fix ] dtype
    AD1024 authored and gussmith23 committed Jan 31, 2022
    80774b4
  60. [ hotfix ] add quantized model

    AD1024 authored and gussmith23 committed Jan 31, 2022
    1573922
  61. 15b8d2a
  62. LSTM layer smoke test

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    39cbab4
  63. 4074b14
  64. Change pattern to dense

    Bo-Yuan-Huang authored and gussmith23 committed Jan 31, 2022
    d7b5d11
  65. flexnlp linear layer prototype done

    LeeOHzzZ authored and gussmith23 committed Jan 31, 2022
    a64fdfd
  66. 2375d32
  67. flexnlp lstm backend driver is completed, end-to-end testflow passed (#3)
    
    * integrate 3la driver into tvm framework
    
    * lstm flexnlp backend driver added
    
    * remove redundant submodule in the python test folder; redirect python driver path inside TVM in the ilaflex_runtime
    
    * flexnlp lstm end-to-end flow complete
    LeeOHzzZ authored and gussmith23 committed Jan 31, 2022
    67ae67c
  68. ddb4582
  69. fixing driver_dir path

    Thierry Tambe authored and gussmith23 committed Jan 31, 2022
    caf8765
  70. 70d130d
  71. Op name passed to python driver (#8)

    * end-to-end codegen for ILA-VTA
    
    * [ add ] tests
    
    * ignore instr_log
    
    * tweak PR
    
    * get rid of warnings
    
    * remove logging in pattern matching
    
    * [ add ] instruction for running the end-to-end test script
    
    * [ init ] bias add codegen
    
    * add data loading
    
    * [ add ] bias add test case
    
    * [ fix ] datatype conversion
    
    * change to int8 inputs
    
    * integrate 3la driver into tvm framework
    
    * lstm flexnlp backend driver added
    
    * [ init ] relu runtime
    
    * [ add ] relu runtime code
    
    * Add exact matcher
    
    * fix data loading
    
    * [ update ] tests
    
    * [ refactor ] code
    
    * [ add ] comments
    
    * remove redundant submodule in the python test folder; redirect python driver path inside TVM in the ilaflex_runtime
    
    * Simplify implementation, correct pattern bugs, and add more tests
    
    * flexnlp lstm end-to-end flow complete
    
    * Correct inaccurate comment
    
    * Reformat comment
    
    * Throw in refs because why not
    
    * Need to visit matched args to find all matches
    
    * Move utility checkers to exact_matcher file (they will come in handy again)
    
    * Add test scaling the pattern matching to the speech-to-text model
    
    * Unused function
    
    * Add test case of not matching free var in match block
    
    * Incorrect dimension
    
    * set ila driver as external python scripts located through /home/leeoh/3la/pattern_matching/3la_ILA_tensor_op/ path
    
    * 3la flexnlp python driver as standalone path; small fixed in include module in flex_linear.py
    
    * Correct attribute names and also include primitive
    
    * pass op_name when offloading calculation
    
    Co-authored-by: AD1024 <dh63@cs.washington.edu>
    Co-authored-by: Steven S. Lyubomirsky <sslyu@cs.washington.edu>
    Co-authored-by: Bo-Yuan Huang <Bo-Yuan-Huang@users.noreply.github.com>
    4 people authored and gussmith23 committed Jan 31, 2022
    99cff9b
  72. merged from steve's hlscnn

    LeeOHzzZ authored and gussmith23 committed Jan 31, 2022
    aa5feac
  73. 8897d39
  74. bbd8a4f
  75. 40d029f
  76. [ add ] compile time wall clock

    AD1024 authored and gussmith23 committed Jan 31, 2022
    c2b969f
  77. [ add ] runtime wallclock

    AD1024 authored and gussmith23 committed Jan 31, 2022
    c03a201
  78. fix

    AD1024 authored and gussmith23 committed Jan 31, 2022
    d650ea8
  79. save changes to api calls

    AD1024 authored and gussmith23 committed Jan 31, 2022
    b682022
  80. uncomment sim call

    AD1024 authored and gussmith23 committed Jan 31, 2022
    0527549
  81. fix submodules

    AD1024 authored and gussmith23 committed Jan 31, 2022
    aba60cd
  82. [ add ] record time

    AD1024 authored and gussmith23 committed Jan 31, 2022
    e3dadec
  83. [ init ] conv1d

    AD1024 authored and gussmith23 committed Jan 31, 2022
    8bd8387
  84. [ fix ] data layout

    AD1024 authored and gussmith23 committed Jan 31, 2022
    f1ec7de
  85. [ finish ] conv1d codegen

    AD1024 authored and gussmith23 committed Jan 31, 2022
    0d5151f
  86. 88eba3b
  87. [ add ] attention on flexnlp

    AD1024 authored and gussmith23 committed Jan 31, 2022
    04d23b7
  88. [ add ] env setup guide

    AD1024 authored and gussmith23 committed Jan 31, 2022
    c0c6cb3
  89. save changes

    AD1024 authored and gussmith23 committed Jan 31, 2022
    3da68f3
  90. disable printing the command

    AD1024 authored and gussmith23 committed Jan 31, 2022
    8fa62a7
  91. 06a5e4a
  92. [ add ] accelerator call operator

    AD1024 authored and gussmith23 committed Jan 31, 2022
    3f0092c
  93. 1287c63
  94. [ fix ] conflict (actually not)

    AD1024 authored and gussmith23 committed Jan 31, 2022
    5a2610b
  95. remove dep

    AD1024 authored and gussmith23 committed Jan 31, 2022
    b421a01
  96. [ add ] PaddAttrs and format

    AD1024 authored and gussmith23 committed Jan 31, 2022
    7abff9e
  97. Restore VTA dependency

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    c9dbc53
  98. d202cfa
  99. Use a callback in the exact matcher (#18)

    * Don't use checked_type for annotating types in the matcher (breaks tests and relies on type checking)
    
    * Add callback feature to exact matcher, expand tests, simplify recent additions
    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    733eafb
  100. [ attempt ] intimm for integer

    AD1024 authored and gussmith23 committed Jan 31, 2022
    ce73b9c
  101. attempt to fix

    AD1024 authored and gussmith23 committed Jan 31, 2022
    eb93b16
  102. 6de2c30
  103. revert changes

    AD1024 authored and gussmith23 committed Jan 31, 2022
    9a7184c
  104. fix code use alu op number directly

    AD1024 authored and gussmith23 committed Jan 31, 2022
    4658a43
  105. [ add ] vta quantization

    AD1024 authored and gussmith23 committed Jan 31, 2022
    82e77ff
  106. fix

    AD1024 authored and gussmith23 committed Jan 31, 2022
    7a56788
  107. Rust binding updates

    gussmith23 committed Jan 31, 2022
    fe77da1
  108. 76b3deb
  109. use 16-bit imm

    AD1024 authored and gussmith23 committed Jan 31, 2022
    d947453
  110. try to use test_lib.cc code

    AD1024 authored and gussmith23 committed Jan 31, 2022
    3fe4e0a
  111. init block matmul

    AD1024 authored and gussmith23 committed Jan 31, 2022
    51e8f58
  112. fix

    AD1024 authored and gussmith23 committed Jan 31, 2022
    c61affc
  113. add ref print

    AD1024 authored and gussmith23 committed Jan 31, 2022
    8270655
  114. try output

    AD1024 authored and gussmith23 committed Jan 31, 2022
    cd13d9c
  115. add vta calls

    AD1024 authored and gussmith23 committed Jan 31, 2022
    7e131f8
  116. fix calls

    AD1024 authored and gussmith23 committed Jan 31, 2022
    ce097b4
  117. use new

    AD1024 authored and gussmith23 committed Jan 31, 2022
    9ef9519
  118. try fix

    AD1024 authored and gussmith23 committed Jan 31, 2022
    b366b79
  119. try fix

    AD1024 authored and gussmith23 committed Jan 31, 2022
    beabf4b
  120. use previous codegen

    AD1024 authored and gussmith23 committed Jan 31, 2022
    0b5b021
  121. fix

    AD1024 authored and gussmith23 committed Jan 31, 2022
    080c058
  122. fix

    AD1024 authored and gussmith23 committed Jan 31, 2022
    187c9a7
  123. try fix segfault

    AD1024 authored and gussmith23 committed Jan 31, 2022
    3541210
  124. 0eb349b
  125. fce3048
  126. Temporarily remove VTA submodule

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    0c09d2a
  127. Add back VTA submodule

    slyubomirsky authored and gussmith23 committed Jan 31, 2022
    d5b63fb
  128. 1973194
  129. 7647138
  130. Remove nonexistent import

    gussmith23 committed Jan 31, 2022
    f0365ee
  131. debce2d
  132. remove import

    gussmith23 committed Jan 31, 2022
    82699db