
[Feature] dense_stack_tds #506

Merged: 4 commits into pytorch:main on Aug 3, 2023

Conversation

matteobettini (Contributor, Author):

No description provided.

vmoens (Contributor) left a comment:

Thanks a lot for this!
I think we can clarify it a bit; it isn't super transparent at the moment.

tensordict/tensordict.py (review thread, resolved)
Comment on lines 8655 to 8656
This must be used when some of the :class:`tensordict.TensorDictBase` objects that need to be stacked
can have :class:`tensordict.LazyStackedTensorDict` among entries (or nested entries).
vmoens (Contributor):

why "can have"? why not just "have"?

vmoens (Contributor):

Suggested wording: "among their entries".

matteobettini (Contributor, Author):

because this method can be used even when no lazy stacks are involved
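
A minimal sketch of that point (assuming dense_stack_tds is importable from the tensordict top level, as this PR adds it; values are illustrative):

import torch
from tensordict import TensorDict, dense_stack_tds

# Two regular tds with identical structure: no lazy stacks anywhere.
td0 = TensorDict({"a": torch.zeros(3)}, [3])
td1 = TensorDict({"a": torch.ones(3)}, [3])

# With plain inputs, dense_stack_tds should behave like an ordinary
# dense stack along the given dim.
stacked = dense_stack_tds([td0, td1], dim=0)
assert (stacked["a"] == torch.stack([td0["a"], td1["a"]])).all()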

tensordict/tensordict.py (review thread, resolved)
Comment on lines 8690 to 8695
1 ->
b: Tensor(shape=torch.Size([2, 2]), device=cpu, dtype=torch.float32, is_shared=False)},
batch_size=torch.Size([2, 2]),
device=None,
is_shared=False,
stack_dim=1)
vmoens (Contributor):

The appearance of b seems to be the only difference, but it's still a lazy key.
Why do we need 2 levels of nesting? This example isn't super clear to me.

matteobettini (Contributor, Author):

We need 2 levels of nesting because the function is used to stack tds that contain lazy stacks, so it is only useful when there are at least two levels of nesting. The change of the stack_dim from 0 to 1 is the core difference here; "b" is just there to highlight that.
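
A sketch of the two-level setup being discussed (hypothetical shapes; assumes LazyStackedTensorDict.lazy_stack is available to build the inner lazy stack):

import torch
from tensordict import TensorDict, LazyStackedTensorDict, dense_stack_tds

# Level 1: a heterogeneous lazy stack ("a" has a different shape in each td).
td0 = TensorDict({"a": torch.zeros(3)}, [])
td1 = TensorDict({"a": torch.zeros(4)}, [])
inner = LazyStackedTensorDict.lazy_stack([td0, td1])  # stack_dim=0

# Level 2: tds carrying that lazy stack among their entries.
outer0 = TensorDict({"lazy": inner}, [2])
outer1 = outer0.clone()

# A lazy stack of the outer tds would stay lazy at every level;
# dense_stack_tds densifies the outer level instead, and the "lazy"
# entry's stack_dim moves from 0 to 1 (the change described above).
result = dense_stack_tds([outer0, outer1], dim=0)
print(result["lazy"])  # a lazy stack with stack_dim=1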

test/test_tensordict.py (review threads, resolved)
"nested_stack_dim",
[0, 1, 2],
)
def test_dense_stack_tds(stack_dim, nested_stack_dim):
vmoens (Contributor):

I don't see what we're testing here that is specific to this method. All the assertions would hold with a regular tensordict, no?
Why aren't we testing anything specific to the keys we provide?

matteobettini (Contributor, Author) commented on Aug 3, 2023:

The idea is that this function should behave exactly the same with regular tds or lazy stacks, so the fact that a regular td passes is OK to me. The main thing we are testing is that, when stacking tds that contain lazy stacks, we can call this function to remove one level of laziness and densify one stack_dim.
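
An illustrative sketch of that property (not the PR's actual test; the helper, shapes, and assertions are made up for the example):

import torch
from tensordict import TensorDict, LazyStackedTensorDict, dense_stack_tds

def make_container() -> TensorDict:
    # a td holding a heterogeneous lazy stack among its entries
    td0 = TensorDict({"a": torch.zeros(3)}, [])
    td1 = TensorDict({"a": torch.zeros(4)}, [])
    lazy = LazyStackedTensorDict.lazy_stack([td0, td1])
    return TensorDict({"lazy": lazy}, [2])

result = dense_stack_tds([make_container(), make_container()], dim=0)

# One level of laziness has been removed: the outer dim is dense and
# the remaining lazy stack now sits one dim deeper (stack_dim 0 -> 1).
assert result.batch_size == torch.Size([2, 2])
assert isinstance(result.get("lazy"), LazyStackedTensorDict)
assert result.get("lazy").stack_dim == 1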

matteobettini (Contributor, Author):

I have updated the docstring, example, and test.
Hopefully it is now clearer what this function does.

vmoens (Contributor) left a comment:

Got it, sorry, I'm a bit slow.

vmoens merged commit 37e66d1 into pytorch:main on Aug 3, 2023 (19 of 21 checks passed).
matteobettini deleted the dense-stack-tds branch on August 4, 2023.