
Commit aa8423b

time weighted loss
1 parent 37ffcd6 commit aa8423b

2 files changed: +6 -0 lines

case-study.tex

+1
@@ -354,6 +354,7 @@ \subsection{Offline evaluation results and discussion}
 
 
 \subsection{Training data requirements}
+\label{train-req}
 So far the models have been trained with 4.5 years of data.
 Table \ref{train-reg} shows how the MAPE changes on the hourly task when the amount of training data is decreased.
 The training data used was always the data closest in time to the test set.
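
For context, MAPE here presumably follows the standard definition of the mean absolute percentage error (an assumption; the paper may define it elsewhere): over $N$ evaluation points with observed load $y_t$ and forecast $\hat{y}_t$,

\begin{equation}
  \mathrm{MAPE} = \frac{100\%}{N} \sum_{t=1}^{N} \left| \frac{y_t - \hat{y}_t}{y_t} \right| .
\end{equation}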

conclusion.tex

+5
@@ -26,6 +26,11 @@ \subsubsection{Transfer learning}
 Transfer learning could be applied to load forecasting, whereby a single neural network model is first trained to forecast many different feeders, and then this pre-trained model is finally trained to forecast a single feeder.
 This could potentially expand the generalization ability of the model, while also likely reducing the amount of training time required when starting from a pre-trained model.
 
+\subsubsection{Time-weighted loss function}
+Section \ref{train-req} indicated that the most recent training data may be the most important.
+The loss function could be modified so that more recent samples are weighted more heavily.
+Whether this would be superior to simply restricting the training set to recent data is unknown.
+
 \subsubsection{Monthly and hourly models}
 Similar to \citet{Ceperic2013}, different models could be trained for every month and every hour.
 This introduces 288 models (12 months $\times$ 24 hours), making the training time potentially quite large, so pre-training or transfer learning could be used to make this more approachable.
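
As a sketch of the proposed time-weighted loss (an illustrative form, not taken from the paper): assuming a squared-error base loss, per-sample timestamps $t_i$ in some fixed unit such as days, and an exponential decay rate $\lambda$, the weighted loss could be written as

\begin{equation}
  \mathcal{L}_{\mathrm{tw}}
  = \frac{\sum_{i=1}^{N} w_i \, (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} w_i},
  \qquad
  w_i = \lambda^{\, t_{\max} - t_i},
  \quad 0 < \lambda \le 1,
\end{equation}

where $t_{\max}$ is the timestamp of the most recent training sample. Setting $\lambda = 1$ recovers the ordinary mean squared error, while smaller values down-weight older samples; restricting the training set to recent data corresponds to a hard step in the weights, which this form relaxes into a smooth decay.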
