[BUG] concat_sequences Stacks Batches as Time Steps for Single-Step Predictions #1808
Labels: bug (Something isn't working)
Describe the bug

We're predicting 1 step ahead (`max_prediction_length=1`) with `TemporalFusionTransformer.predict(return_y=True)` on 128 rows (2 batches of 64). We expect `y` to be `(128, 1)`, i.e. one actual per row. Instead it is `(64, 2)`: the batches are stacked as time steps.

To Reproduce
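A full reproduction needs a trained `TemporalFusionTransformer`, but the shape bug can be isolated to the concatenation step. The following is a minimal stand-in for what happens to the two prediction batches (the tensor names and sizes are illustrative, matching the report above):

```python
import torch

# Two prediction batches of 64 rows each, max_prediction_length=1
batch_1 = torch.zeros(64, 1)
batch_2 = torch.zeros(64, 1)

# What the concat step currently does: join along dim=1 (the time axis)
y_buggy = torch.cat([batch_1, batch_2], dim=1)
print(y_buggy.shape)  # torch.Size([64, 2]) -- batches stacked as time steps

# What single-step predictions need: join along dim=0 (the batch axis)
y_expected = torch.cat([batch_1, batch_2], dim=0)
print(y_expected.shape)  # torch.Size([128, 1]) -- one actual per row
```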
Expected behavior

`y` of shape `(128, 1)`: one actual per row, with batches stacked along the batch dimension.
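One way to get the expected stacking is to pick the concatenation dimension from the shape of the incoming batches. This is only a sketch of the idea under the assumptions in this report, not the library's actual `concat_sequences` (which handles more input types); `concat_single_or_multi_step` is a hypothetical helper:

```python
import torch

def concat_single_or_multi_step(batches):
    """Hypothetical dim-aware concat: stack along the batch axis when each
    batch carries a single prediction step, along the time axis otherwise."""
    # Single-step predictions have a trailing time dimension of size 1,
    # so the batches should be stacked vertically (dim=0) into (n_rows, 1).
    dim = 0 if batches[0].shape[1] == 1 else 1
    return torch.cat(batches, dim=dim)

batches = [torch.zeros(64, 1), torch.zeros(64, 1)]
print(concat_single_or_multi_step(batches).shape)  # torch.Size([128, 1])
```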
Additional context
`PredictCallback.on_predict_epoch_end` calls `concat_sequences` in `utils/_utils.py`, which uses `torch.cat(..., dim=1)`. For 2 `(64, 1)` batches, `dim=1` produces `(64, 2)`: the batches are stacked horizontally, along the time axis. Single-step predictions should use `dim=0`, stacking vertically to `(128, 1)` (rows). Multi-step output (`max_prediction_length > 1`) needs `dim=1`, but not here.

Related: #1752, #1509, #1320 report similar `dim=1` issues.

Versions
System:
python: 3.12.9 (main, Feb 12 2025, 14:50:50) [Clang 19.1.6 ]
executable: /home/username/code/rich/.venv/bin/python
machine: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python dependencies:
pip: None
pytorch-forecasting: 1.3.0
torch: 2.6.0
lightning: 2.5.1
numpy: 2.2.4
scipy: 1.15.2
pandas: 2.2.3
cpflows: None
matplotlib: None
optuna: None
optuna-integration: None
pytorch_optimizer: None
scikit-learn: 1.6.1
scikit-base: None
statsmodels: None