torch_tensorrt version not supported? #1103
pip list (excerpt): absl-py 1.4.0
My CUDA version:
Thu Mar 30 07:08:20 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 Off | N/A |
| 0% 44C P8 17W / 170W | 10MiB / 12288MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
On docker image:
bladedisc/bladedisc:latest-runtime-torch1.12.0-cu113
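Since the question is whether the installed Torch-TensorRT build matches the torch in this image, here is a quick check of what is actually installed inside the container (a minimal sketch; it only assumes torch_tensorrt is importable and exposes __version__, and none of the printed values come from this report):

```python
# Quick version check inside the bladedisc runtime container.
import torch
import torch_tensorrt  # assumed to be importable in this image

print("torch:", torch.__version__)                # expected 1.12.x+cu113 for this image
print("torch_tensorrt:", torch_tensorrt.__version__)
print("torch built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```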
Run command:
bash benchmark/torch-tensorrt/test_trt_benchmark.sh 2>&1 | tee test_trt.log
Error:
======begine================>
Torch-Blade =================================
batch size=1, num iterations=100
Median FPS: 213.5, mean: 213.2
Median latency: 0.004683, mean: 0.004691, 99th_p: 0.004742, std_dev: 0.000027
Running Torch-TensorRT 11111111
Running Torch-TensorRT 22222222
torch_tensorrt failed
compile_graph(): incompatible function arguments. The following argument types are supported:
1. (arg0: torch::jit::Module, arg1: torch_tensorrt._C.ts.CompileSpec) -> torch::jit::Module
Invoked with: <torch.ScriptModule object at 0x7fc2debfd1b0>, <torch_tensorrt._C.ts.CompileSpec object at 0x7fc2d7c86570>
torch_tensorrt failed
Converting method to TensorRT engine...
tensorrt failed
Running Torch for precision: fp16
torch failed
Running Torch-Blade
<=================end================
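For context, the step that fails is the Torch-TensorRT TorchScript compile path. A minimal sketch of that kind of call is below, assuming the benchmark compiles a scripted module roughly like this (the toy model, input shape, and precision settings are placeholders, not taken from the benchmark script; only the torch_tensorrt API names are real):

```python
# Minimal sketch of the Torch-TensorRT compile call that appears to fail above.
import torch
import torch_tensorrt

# Placeholder model; the benchmark uses its own scripted model.
model = torch.jit.script(torch.nn.Linear(8, 8).eval().cuda())
inputs = [torch_tensorrt.Input((1, 8))]

# The TorchScript frontend eventually hands the scripted module to the C++
# binding compile_graph() shown in the traceback. If torch_tensorrt was built
# against a different libtorch than the torch installed in this image, pybind11
# does not recognize the module object and raises
# "incompatible function arguments", matching the log above.
trt_module = torch_tensorrt.compile(
    model,
    inputs=inputs,
    enabled_precisions={torch.half},
)
```

If the torch and torch_tensorrt builds do not line up (for example, a torch_tensorrt wheel built for a different torch minor version inside this torch1.12.0-cu113 image), that mismatch alone seems enough to produce the error above.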