
Commit 3317084

fixed typo in gradient of cost function

1 parent 78e8c60

File tree

1 file changed

+3
-3
lines changed


random_signals_LTI_systems/linear_prediction.ipynb

+3-3
@@ -66,7 +66,7 @@
     "Above equation is referred to as [*cost function*](https://en.wikipedia.org/wiki/Loss_function) $J$ of the optimization problem. We aim at minimizing the cost function, hence minimizing the MSE between the signal $x[k]$ and its prediction $\\hat{x}[k]$. The solution of this [convex optimization](https://en.wikipedia.org/wiki/Convex_optimization) problem is referred to as [minimum mean squared error](https://en.wikipedia.org/wiki/Minimum_mean_square_error) (MMSE) solution. Minimizing the cost function is achieved by calculating its gradient with respect to the filter coefficients [[Haykin](../index.ipynb#Literature)] using results from [matrix calculus](https://en.wikipedia.org/wiki/Matrix_calculus)\n",
     "\n",
     "\\begin{align}\n",
-    "\\nabla_\\mathbf{h} J &= -2 E \\left\\{ x[k-1] (x[k] - \\mathbf{h}^T[k] \\mathbf{x}[k-1]) \\right\\} \\\\\n",
+    "\\nabla_\\mathbf{h} J &= -2 E \\left\\{ \\mathbf{x}[k-1] (x[k] - \\mathbf{h}^T[k] \\mathbf{x}[k-1]) \\right\\} \\\\\n",
     "&= - 2 \\mathbf{r}[k] + 2 \\mathbf{R}[k-1] \\mathbf{h}[k]\n",
     "\\end{align}\n",
     "\n",
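The corrected line makes the expectation vector-valued: the gradient is taken over the vector $\mathbf{x}[k-1]$ of past samples, which is what lets it collapse to $-2\mathbf{r} + 2\mathbf{R}\mathbf{h}$. This can be checked numerically. The sketch below is illustrative only (signal, filter order, and variable names are not from the notebook); it compares a sample estimate of the expectation form against the closed form and verifies that the Wiener solution $\mathbf{h} = \mathbf{R}^{-1}\mathbf{r}$ zeros the gradient, assuming a wide-sense stationary signal so the time indices on $\mathbf{r}$ and $\mathbf{R}$ can be dropped.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3        # length of the prediction filter h
K = 10000    # number of samples

# correlated random signal x[k] (white noise through a moving-average filter)
x = np.convolve(rng.standard_normal(K + N), np.ones(4) / 4, mode="valid")

# data matrix: row i holds the vector x[k-1] = (x[k-1], ..., x[k-N])^T
X = np.column_stack([x[N - 1 - n : len(x) - 1 - n] for n in range(N)])
xk = x[N:]                      # target samples x[k]
M = len(xk)

h = rng.standard_normal(N)      # arbitrary filter coefficients

# sample estimates of r = E{x[k-1] x[k]} and R = E{x[k-1] x^T[k-1]}
r = X.T @ xk / M
R = X.T @ X / M

# gradient as the (corrected) expectation vs. the closed form
grad_expect = -2 * X.T @ (xk - X @ h) / M
grad_closed = -2 * r + 2 * R @ h
assert np.allclose(grad_expect, grad_closed)

# setting the gradient to zero yields the MMSE (Wiener) solution
h_opt = np.linalg.solve(R, r)
assert np.allclose(-2 * r + 2 * R @ h_opt, 0)
```

The two gradient expressions agree term by term because $X^\mathsf{T}(x_k - Xh)/M$ expands to exactly the sample estimates of $\mathbf{r}$ and $\mathbf{R}\mathbf{h}$; the original typo (scalar $x[k-1]$ instead of the vector $\mathbf{x}[k-1]$) would make the expectation a scalar and the identity dimensionally inconsistent.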
@@ -10436,9 +10436,9 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.9.13"
+    "version": "3.12.6"
    }
   },
  "nbformat": 4,
- "nbformat_minor": 1
+ "nbformat_minor": 4
 }
