
RuntimeError: The size of tensor a (65) must match the size of tensor b (10) at non-singleton dimension 1 #11

Open
niasalva opened this issue May 30, 2022 · 0 comments

Comments

@niasalva

Hello,
I want to train the model using the `python train.py --dataset VCTK` command, but I get the following error:

Number of Daft-Exprt Parameters: 20603604
Removing weight norm...
Training:   0%|                                                                                                                | 0/900000 [00:00<?, ?it/s
Traceback (most recent call last):                                                                                                 | 0/674 [00:00<?, ?it/s]
  File "train.py", line 190, in <module>
    main(args, configs)
  File "train.py", line 85, in main
    output = model(*(batch[2:]))
  File "/home/prosody_control/daftenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/prosody_control/daftenv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/prosody_control/daftenv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/prosody_control/daftenv/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/home/prosody_control/daftenv/lib/python3.6/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/prosody_control/daftenv/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/home/prosody_control/daftenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/prosody_control/Daft-Exprt-main/model/DaftExprt.py", line 98, in forward
    p_control, e_control, d_control, src_masks, ref_mel_lens, ref_max_mel_len, ref_mel_masks, src_lens
  File "/home/prosody_control/daftenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/prosody_control/Daft-Exprt-main/model/modules.py", line 539, in forward
    s_input = p_embed + e_embed + d_embed + encoder_outputs
RuntimeError: The size of tensor a (65) must match the size of tensor b (10) at non-singleton dimension 1

Training:   0%|                                                                                                   | 1/900000 [00:04<1071:57:08,  4.29s/it]
Epoch 1:   0%|                                                                                                                    | 0/674 [00:04<?, ?it/s]
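For context, the failing line in `model/modules.py` sums the pitch, energy, and duration embeddings with the encoder outputs, which only works when all operands have the same length along dimension 1. A minimal sketch (tensor names and the hidden size are illustrative assumptions, not values taken from the repository) that reproduces the same error:

```python
import torch

batch_size, hidden_dim = 4, 256  # hidden_dim is an arbitrary illustrative value

# One operand padded to one sequence length (65 in the error above)
p_embed = torch.randn(batch_size, 65, hidden_dim)

# The other operand padded to a different sequence length (10 in the error above)
encoder_outputs = torch.randn(batch_size, 10, hidden_dim)

# Element-wise addition requires matching sizes at every non-singleton dimension,
# so this raises:
#   RuntimeError: The size of tensor a (65) must match the size of tensor b (10)
#   at non-singleton dimension 1
s_input = p_embed + encoder_outputs
```

In other words, the prosody embeddings and the encoder outputs appear to be padded to different sequence lengths for this batch.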