
Update imports after QAT was moved out of prototype #1883

Open · wants to merge 1 commit into base: main
Conversation

andrewor14 (Contributor):
Summary: pytorch/ao#1091 moved QAT out of prototype in torchao. This is a BC-breaking change, so torchtune also needs to update its QAT imports. Additionally, after pytorch/ao#987 we decided that QAT in torchao will use module swaps to insert fake quantizes, so there is no longer a need for a separate module-swap quantizer; this commit removes the *ModuleSwapQuantizer option.
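For illustration, the import change on the torchtune side looks roughly like this (the module paths follow pytorch/ao#1091 and the docs diff below; this is a sketch, not the exact torchtune diff):

# Before pytorch/ao#1091: QAT lived under the prototype namespace
# from torchao.quantization.prototype.qat import Int8DynActInt4WeightQATQuantizer

# After: QAT is a stable, non-prototype API
from torchao.quantization.qat import Int8DynActInt4WeightQATQuantizer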

Test Plan:
Running pytest -m integration_test tests/recipes/test_qat_distributed.py should pass.

pytorch-bot (bot) commented on Oct 22, 2024:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1883

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 1d05e1a with merge base e030626:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Oct 22, 2024 (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed).
@ebsmothers (Contributor) left a comment:

Thanks for the update! A couple of small comments, but otherwise this looks good.

@@ -20,7 +20,7 @@
 _USE_NEW_TENSOR_CORE_TILED_LAYOUT_API = _is_fbcode() or (
     not _is_fbcode()
     and (
-        ("dev" not in torchao_version and torchao_version >= "0.6.0")
+        ("dev" not in torchao_version and torchao_version >= "0.7.0")
Contributor:

So the new API is not in 0.6.1? I thought it landed before that release.

andrewor14 (Contributor, Author):

Yeah, it's not in 0.6.x.
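For illustration, the updated gate is a plain string comparison; a quick sketch of how it evaluates for a few made-up version strings:

# Sketch of the version gate above (illustrative version strings).
for v in ["0.6.1", "0.7.0.dev20241022", "0.7.0", "0.8.0"]:
    enabled = "dev" not in v and v >= "0.7.0"
    print(v, enabled)
# 0.6.1 -> False; 0.7.0.dev20241022 -> False (dev build); 0.7.0 and 0.8.0 -> True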

Comment on lines 155 to 156:

 - :class:`~torchao.quantization.quant_api.Int8DynActInt4WeightQuantizer`: "8da4w" (requires ``torch>=2.3.0``)
-- :class:`~torchao.quantization.prototype.qat.Int8DynActInt4WeightQATQuantizer`: "8da4w-qat" (requires ``torch>=2.4.0``)
+- :class:`~torchao.quantization.qat.Int8DynActInt4WeightQATQuantizer`: "8da4w-qat" (requires ``torch>=2.4.0``)
Contributor:

I think we can fully remove both of these torch requirements, since we assume stable PyTorch (i.e. >= 2.5).
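For context, a minimal sketch of how the quantizer named in these docs is used under its new import path; the prepare/convert flow is torchao's QAT API, while the toy model and groupsize here are illustrative:

import torch
from torchao.quantization.qat import Int8DynActInt4WeightQATQuantizer

model = torch.nn.Sequential(torch.nn.Linear(512, 512))
quantizer = Int8DynActInt4WeightQATQuantizer(groupsize=256)

# Swap in fake-quantized modules for QAT fine-tuning.
model = quantizer.prepare(model)
# ... fine-tune the model as usual ...
# Convert to the real quantized representation after training.
model = quantizer.convert(model)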

mode = _quantizer_to_mode.get(type(quantizer), None)
if mode is not None and "module-swap" in mode:
    warn(
        "*QuantizerModuleSwap is deprecated. Please use the version without 'ModuleSwap' instead"
    )
Contributor:

nit: we should be able to just give the exact classes in the warn message using e.g. f"{quantizer.__class__.__name__}" (or something like that).

andrewor14 (Contributor, Author):

Hmm, I don't think that's possible, given we just assign the classes today, e.g.:

>>> class A: pass
... 
>>> B = A
>>> A().__class__.__name__
'A'
>>> B().__class__.__name__
'A'
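In other words, the deprecated names are plain assignments, so any instance reports the canonical class name. A sketch of the pattern (the alias name below is assumed, following the *ModuleSwap naming in the warning above):

from torchao.quantization.qat import Int8DynActInt4WeightQATQuantizer

# Deprecated name kept for backward compatibility (assumed alias, not a subclass).
Int8DynActInt4WeightQATQuantizerModuleSwap = Int8DynActInt4WeightQATQuantizer

q = Int8DynActInt4WeightQATQuantizerModuleSwap(groupsize=256)
print(type(q).__name__)  # prints 'Int8DynActInt4WeightQATQuantizer', not the alias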
