
chore: remove PyTorch 2.5.0 checks #1877

Open · wants to merge 1 commit into main

Conversation

@JP-sDEV commented Oct 21, 2024

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation

Addresses #1861.

Changelog

What are the changes made in this PR?

  • remove version checks for PyTorch 2.5.0 and update the related documentation

Note: regarding line 69 in torchtune/torchtune/training/_activation_offloading.py, should this `if` block be removed, or modified to check for PyTorch 2.5.0? Modifying this block was not included in the issue.

```python
if use_streams is None:
    # Default to True if an acceptable torch is installed (later nightly/version or from source)
    self.use_streams = torch.__version__ >= "2.5.0.dev20240907"
```
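For illustration, if torchtune drops support for torch < 2.5.0 outright, one possible simplification of this block is the sketch below; this is an assumption about the direction, not this PR's actual diff:

```python
# Hypothetical simplification, assuming torchtune's minimum supported
# PyTorch is 2.5.0 (so streams are always available):
if use_streams is None:
    self.use_streams = True
```

A side note in favor of removing the probe: the existing check compares version strings lexicographically, so a hypothetical `"2.10.0"` would sort before `"2.5.0"` and wrongly disable streams.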

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings

pytorch-bot commented Oct 21, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1877

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Oct 21, 2024
@RdoubleA (Contributor) left a comment:

Thanks again for doing these - looks great. A few minor nits and one comment we should decide on.

```diff
@@ -108,7 +108,7 @@ checkpointing, where all activations will either be recomputed later in the back
 To enable activation offloading, use the ``enable_activation_offloading`` config entry or flag
 in our lora finetuning single device recipe, e.g. ``enable_activation_offloading=True``. To allow
-usage of streams, make sure you are on a torch version later than PyTorch 2.5.0.dev20240907.
+usage of streams, make sure you are on a torch version equal to or later than PyTorch.
```
Suggested change:

```diff
-usage of streams, make sure you are on a torch version equal to or later than PyTorch.
+usage of streams, make sure you are on a torch version equal to or later than PyTorch 2.5.0.
```
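(Usage aside: in the recipes this flag is typically passed as a config override on the command line, e.g. `tune run lora_finetune_single_device --config <your_config> enable_activation_offloading=True`; the invocation here is illustrative.)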

```diff
@@ -33,7 +33,7 @@ class OffloadActivations(saved_tensors_hooks):
     use_streams (Optional[bool]): Whether or not to use streams for performance optimization where
         the communications get overlapped with the computation. Requires a torch build
-        after torch-2.5.0.dev20240907. Default: True if a later torch build is found, else False.
+        after torch-2.5.0.]. Default: True.
```
Suggested change:

```diff
-        after torch-2.5.0.]. Default: True.
+        after torch-2.5.0. Default: True.
```
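For context, a minimal sketch of how this class is used, assuming `OffloadActivations` is exported from `torchtune.training` and works as a `saved_tensors_hooks` context manager (the model and shapes below are placeholders):

```python
import torch
from torchtune.training import OffloadActivations  # assumed import path

model = torch.nn.Linear(1024, 1024, device="cuda")
x = torch.randn(8, 1024, device="cuda", requires_grad=True)

# use_streams=True requires torch >= 2.5.0 per the docstring above;
# activations saved for backward are offloaded to CPU inside the context.
with OffloadActivations(use_streams=True):
    loss = model(x).sum()
loss.backward()
```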

The comment below refers to this part of the diff (the fallback warning for older PyTorch):

```python
Compiling full model with torch.compile...
For faster compile times via per-layer compile, please run on PyTorch nightlies.
"""
log.warning(
```

I'm not sure if we want to retain the fallback logic for older pytorch versions. If so, then the if-else should remain the same and only the warning message should be updated. any thoughts? @ebsmothers @felipemello1

A contributor replied:

Sorry, just seeing this now. I think if we claim not to support PyTorch < 2.5, then we shouldn't leave in the full-model compile option at all. For the same reason I'm ambivalent about leaving in the log warning: really, if we want to check that someone is at least on the latest stable PyTorch, we should just do it in a single consolidated place. So it's not the end of the world to keep the warning in, but personally I'd just take it out.
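To make the "single consolidated place" idea concrete, here is a sketch of one possible gate; the helper name, placement, and use of `packaging` are assumptions, not torchtune's actual implementation:

```python
import torch
from packaging.version import Version  # hypothetical dependency choice

MIN_TORCH = Version("2.5.0")

def check_torch_version() -> None:  # hypothetical helper name
    # Version() parses suffixes like "2.5.0+cu121" and treats dev builds
    # (e.g. "2.5.0.dev20240907") as pre-releases of 2.5.0, unlike the
    # lexicographic string comparison used previously.
    if Version(torch.__version__) < MIN_TORCH:
        raise RuntimeError(
            f"torchtune requires PyTorch >= {MIN_TORCH}, found {torch.__version__}"
        )
```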

@JP-sDEV (Author) commented Oct 22, 2024

> Thanks again for doing these - looks great. A few minor nits and one comment we should decide on.

Thanks, I'll keep up with the updates! Once things are finalized, let me know what changes are needed. Cheers! :)
