[Feature] dense_stack_tds
#506
Conversation
Thanks a lot for this!
I think we can clarify it a bit; it isn't super transparent at the moment.
tensordict/tensordict.py
Outdated
This must be used when some of the :class:`tensordict.TensorDictBase` objects that need to be stacked
can have :class:`tensordict.LazyStackedTensorDict` among entries (or nested entries).
why "can have"? why not just "have"?
among their entries
because this method can be used even when no lazy stacks are involved
tensordict/tensordict.py
Outdated
1 ->
        b: Tensor(shape=torch.Size([2, 2]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([2, 2]),
    device=None,
    is_shared=False,
    stack_dim=1)
The appearance of `b` seems to be the only difference, but it's still a lazy key.
Why do we need 2 levels of nesting? This example isn't super clear to me.
We need 2 levels of nesting because the function is used to stack tds that contain lazy stacks, so it is only useful when there are at least two levels of nesting. The change of the stack dim from 0 to 1 is the core difference here; "b" is just there to highlight that.
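The two-levels-of-nesting point can be sketched with plain Python, using lists as a stand-in for lazy stacks and dicts as a stand-in for tensordicts (the helper below is illustrative only, not the real `dense_stack_tds` API):

```python
# Toy model: a "lazy" stack is just a list of dicts; a "dense" stack merges
# matching keys. None of these names are the real tensordict API.

def lazy_stack(tds):
    # A lazy stack keeps the items side by side; nothing is merged.
    return list(tds)

def dense_stack(tds):
    # A dense stack merges matching keys across the stacked items,
    # carrying any nested lazy (list) stacks through unchanged.
    keys = tds[0].keys()
    return {k: [td[k] for td in tds] for k in keys}

# Two levels of nesting: each outer td holds a lazy stack under "nested".
td0 = {"a": 0, "nested": lazy_stack([{"b": 1}, {"b": 2}])}
td1 = {"a": 3, "nested": lazy_stack([{"b": 4}, {"b": 5}])}

# Densifying the outer level removes one level of laziness: "a" becomes a
# dense entry, while each element under "nested" is still a lazy stack.
out = dense_stack([td0, td1])
print(out["a"])       # [0, 3]
print(out["nested"])  # [[{'b': 1}, {'b': 2}], [{'b': 4}, {'b': 5}]]
```

With only one level of nesting there is no inner lazy stack left to carry through, which is why the example needs two levels to show anything interesting.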
"nested_stack_dim", | ||
[0, 1, 2], | ||
) | ||
def test_dense_stack_tds(stack_dim, nested_stack_dim): |
I don't see what we're testing specific to this method here. All the assertions would hold with a regular tensordict, no?
Why aren't we testing anything specific to the keys we provide?
The idea is that this function should behave exactly the same with regular tds or lazy stacks, so the fact that a regular td passes is ok to me. The main thing we are testing is that, when stacking tds that contain lazy stacks, we can call this function to remove one level of laziness and densify one stack_dim.
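The "behaves the same with regular tds" property can be checked in a toy setting, again with plain Python dicts standing in for tensordicts (illustrative only, not the real tensordict API):

```python
# Toy check: a densifying stack gives the same kind of result whether its
# inputs are "regular" dicts or dicts containing lazy (list) stacks.

def dense_stack(tds):
    # Merge matching keys across the stacked items.
    keys = tds[0].keys()
    return {k: [td[k] for td in tds] for k in keys}

# Regular inputs: nothing lazy involved, the function still works.
regular = [{"a": 1}, {"a": 2}]
assert dense_stack(regular) == {"a": [1, 2]}

# Inputs with one lazy (list) level inside: the outer level is densified,
# and the inner stacks are carried through unchanged.
with_lazy = [{"a": [1, 2]}, {"a": [3, 4]}]
assert dense_stack(with_lazy) == {"a": [[1, 2], [3, 4]]}
```

So a test that passes on a regular td is not vacuous: it checks the invariant that densification is a no-op in spirit when there is nothing lazy to densify.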
I have updated the docstring, example, and test.
Got it, sorry, I'm a bit slow.