"Above equation is referred to as [*cost function*](https://en.wikipedia.org/wiki/Loss_function) $J$ of the optimization problem. We aim at minimizing the cost function, hence minimizing the MSE between the signal $x[k]$ and its prediction $\\hat{x}[k]$. The solution of this [convex optimization](https://en.wikipedia.org/wiki/Convex_optimization) problem is referred to as [minimum mean squared error](https://en.wikipedia.org/wiki/Minimum_mean_square_error) (MMSE) solution. Minimizing the cost function is achieved by calculating its gradient with respect to the filter coefficients [[Haykin](../index.ipynb#Literature)] using results from [matrix calculus](https://en.wikipedia.org/wiki/Matrix_calculus)\n",