
[Bug]: INT8_PTQ & INT8_ACQ Performance and Accuracy for WinCLIP #2298

Open
1 task done
junxnone opened this issue Sep 5, 2024 · 0 comments
junxnone commented Sep 5, 2024

Describe the bug

INT8 Performance

As the NNCF documentation says, when quantizing a Transformer model the `model_type` parameter should be set to `TRANSFORMER`.
In my tests, leaving it unset degrades the INT8 model's performance; setting it gives roughly a 10% performance improvement.

INT8 Accuracy

  • Image-level accuracy shows only a small drop.
  • Pixel-level accuracy drops so much that the model is unusable.
  • Setting the preset parameter to MIXED or PERFORMANCE makes no difference.

Dataset

MVTec

Model

Other (please specify in the field below)

Steps to reproduce the behavior

  • Export the WinCLIP OpenVINO INT8 model (PTQ & ACQ)
  • Run the benchmark with the OpenVINO model

OS information


  • OS: [e.g. Ubuntu 22.04]
  • Python version: [e.g. 3.10.14]
  • Anomalib version: [e.g. 1.2.0.dev0]
  • PyTorch version: [e.g. 2.4.0+cpu]

Expected behavior

The INT8 model works, with accuracy comparable to the FP32 model.

Screenshots

No response

Pip/GitHub

GitHub

What version/branch did you use?

No response

Configuration YAML

None

Logs

None

Code of Conduct

  • I agree to follow this project's Code of Conduct