Parallel Dataloader failing when using num_workers > 0 #1161
When torch creates a parallel dataloader (num_workers > 0), it spawns new R processes using callr and then copies the dataset you passed into each of those processes. Problems can arise when copying the dataset into those processes, for example:
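One such failure mode can be sketched as follows. This is a hypothetical illustration (the dataset and connection here are not from the original issue): a dataset that holds a non-exportable resource, such as an open file connection, cannot be meaningfully copied into a worker process.

```r
library(torch)

# Hypothetical dataset holding an open connection. Connections are tied to
# the process that created them and cannot be serialized, so copies of
# `self$con` inside callr worker processes refer to an invalid handle.
conn_dataset <- dataset(
  initialize = function() {
    self$con <- file(tempfile(), open = "w+")
  },
  .getitem = function(i) torch_tensor(i),
  .length = function() 10
)

dl <- dataloader(conn_dataset(), batch_size = 2, num_workers = 2)
# Iterating this dataloader can fail inside the workers, because the copied
# dataset no longer has a usable connection there.
```

The same applies to other external pointers (database handles, GPU tensors, R6 objects wrapping C resources): anything that only makes sense in the process that created it.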
Here's a small example running the mnist dataset in parallel (the original code block appears to have been lost from this page):
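A minimal sketch of what that example likely showed, assuming torchvision's `mnist_dataset()` and the `num_workers` argument of `dataloader()` (the batch size and worker count here are illustrative, not from the original):

```r
library(torch)
library(torchvision)

# mnist_dataset holds plain R arrays, so it copies cleanly into workers.
ds <- mnist_dataset(
  root = tempdir(),
  download = TRUE,
  transform = transform_to_tensor
)

# num_workers > 0 makes the dataloader load batches in parallel
# callr-spawned R processes.
dl <- dataloader(ds, batch_size = 32, num_workers = 2)

coro::loop(for (batch in dl) {
  # batch$x is a [32, 1, 28, 28] image tensor, batch$y the labels
})
```

Because the dataset is copied into each worker at startup, keeping it small and self-contained (arrays, file paths) is what makes parallel loading work smoothly.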
Hi,
I am trying to increase the number of workers used by the dataloader but have been encountering issues. I saw issues 625 and 626, which included the warning message, but I cannot find an example vignette showing how to properly implement a parallel dataloader. Would it be possible to have a brief example of this?