RuntimeError: Sizes of tensors must match except in dimension 1 #1320
Comments
I didn't run the code, but I know their code is funny.
So do you know what I can change to make it work?
Just make the length of the training data an integer multiple of the batch size. For example, if your batch size is 64 and the training length is 6420, drop the last 20 samples.
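As a sketch of that suggestion (assuming the dataloaders come from `TimeSeriesDataSet.to_dataloader` as in the TFT tutorial, and that `data` is the pandas DataFrame the datasets were built from; all names here are placeholders), you can either trim the data or let the DataLoader drop the incomplete final batch:

```python
batch_size = 64

# Option 1: trim the raw data so its length is an integer multiple of the batch size.
n = (len(data) // batch_size) * batch_size
data = data.iloc[:n]

# Option 2: discard the short final batch at loading time instead.
# to_dataloader forwards extra keyword arguments to torch.utils.data.DataLoader,
# so drop_last=True should drop the trailing samples automatically.
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, drop_last=True)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, drop_last=True)
```

Note that dropping the last batch from the validation loader means those samples are simply never predicted, which may not be acceptable; the `concat_sequences` change discussed further down avoids that.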
It's the validation data that fails, so I assume I should drop samples based on the validation set? Although I tried both and neither works.
I am currently facing a similar issue when trying to evaluate the performance of the TFT model: `predictions = best_tft.predict(val_dataloader, return_y=True, trainer_kwargs=dict(accelerator="cpu"))`. Please, if you find a way around yours, let me know how.
I'm having the same issue with pretty much the same code :/
Yes, the code in question (which produces this error) is in the TFT demand example in the documentation. |
I've found a fix: modifying the concat_sequences function in utils.py.
I've been struggling with a similar problem for a long time now. What worked for me (I don't know if it makes mathematical sense) was to lower the batch size to the size the error reports, in your case 42. Hope this helps.
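As a sketch (assuming `validation` is the `TimeSeriesDataSet` and 42 is whatever size the error message reports), this amounts to choosing a batch size that leaves no undersized final batch:

```python
# Hypothetical: use the size reported in the error message as the batch size, as the
# commenter suggests; any divisor of the number of validation samples also works,
# since it leaves no undersized final batch to concatenate.
val_dataloader = validation.to_dataloader(train=False, batch_size=42)
```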
Please see my comment here: #449 (comment). If you don't need the ys (it's easy to format them yourself), then setting return_y=False avoids the error.
@hippotilt thanks! I tracked down the problem to this function. It would be nice if something similar were merged upstream so that we don't need to hack it in our own code.
I encountered the same error and narrowed down the issue, as mentioned by many above, to the concat_sequences function in utils.py. The following fix worked for me:
Just changing the concat dimension to 0 (the axis containing the batches) fixes the error. I am not sure how this function is used elsewhere in the package and hope it does not break things in those places. |
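For reference, here is a minimal sketch of that kind of change applied as a runtime monkeypatch instead of editing the installed package. The real `pytorch_forecasting.utils.concat_sequences` handles more cases and differs between versions, so treat this purely as an illustration of switching the concatenation to the batch axis:

```python
import torch
from pytorch_forecasting import utils as pf_utils

_original_concat_sequences = pf_utils.concat_sequences

def concat_sequences_batchwise(sequences):
    # For plain tensors, concatenate along dim 0 (the batch axis), which tolerates
    # a smaller final batch (e.g. 42 instead of 64) as long as the time axis matches.
    if isinstance(sequences[0], torch.Tensor):
        return torch.cat(sequences, dim=0)
    # Delegate everything else (e.g. packed sequences) to the original implementation.
    return _original_concat_sequences(sequences)

pf_utils.concat_sequences = concat_sequences_batchwise
```

Depending on the version, other modules may have imported `concat_sequences` directly, in which case the patch has to be applied to those references as well (or the function edited in utils.py, as described above); that may explain why the change works for some people and not others.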
Same issue here, I can't predict on all my examples because their number isn't a multiple of the batch size.
I also encountered this problem, so how should I fix it? |
Have you solved the problem? |
Hi, I modified dim=0 following your method, but the error is still reported. Did you run it successfully?
Thank you! I solved the problem your way!
Expected behavior
I executed the code
Baseline().predict(val_dataloader, return_y=True)
and did not expect any errors.
Actual behavior
Received the RuntimeError quoted in the issue title (Sizes of tensors must match except in dimension 1).
Code to reproduce the problem
I am running the following code on an internal dataset.
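The dataset code itself was not included in the post. A hypothetical sketch of the kind of setup that triggers the error (all column names, sizes, and parameters below are made up; the key ingredient is a validation set whose number of samples is not a multiple of the batch size):

```python
import numpy as np
import pandas as pd
from pytorch_forecasting import Baseline, TimeSeriesDataSet

# Hypothetical data: 10 series of 107 time steps each (names and sizes are made up).
data = pd.DataFrame(
    {
        "series": np.repeat([str(i) for i in range(10)], 107),
        "time_idx": np.tile(np.arange(107), 10),
        "value": np.random.randn(10 * 107),
    }
)

validation = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["value"],
)

# Unless the number of samples happens to be a multiple of 64, the final batch is
# smaller, and in affected versions predict(..., return_y=True) fails with the
# size-mismatch RuntimeError described above.
val_dataloader = validation.to_dataloader(train=False, batch_size=64)
baseline_predictions = Baseline().predict(val_dataloader, return_y=True)
```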