\( \newcommand{\matr}[1] {\mathbf{#1}} \newcommand{\vertbar} {\rule[-1ex]{0.5pt}{2.5ex}} \newcommand{\horzbar} {\rule[.5ex]{2.5ex}{0.5pt}} \newcommand{\E} {\mathrm{E}} \)
\( \newcommand{\cat}[1] {\mathrm{#1}} \newcommand{\catobj}[1] {\operatorname{Obj}(\mathrm{#1})} \newcommand{\cathom}[1] {\operatorname{Hom}_{\cat{#1}}} \newcommand{\multiBetaReduction}[0] {\twoheadrightarrow_{\beta}} \newcommand{\betaReduction}[0] {\rightarrow_{\beta}} \newcommand{\betaEq}[0] {=_{\beta}} \newcommand{\string}[1] {\texttt{"}\mathtt{#1}\texttt{"}} \newcommand{\symbolq}[1] {\texttt{`}\mathtt{#1}\texttt{'}} \newcommand{\groupMul}[1] { \cdot_{\small{#1}}} \newcommand{\groupAdd}[1] { +_{\small{#1}}} \newcommand{\inv}[1] {#1^{-1} } \newcommand{\bm}[1] { \boldsymbol{#1} } \newcommand{\qed} { {\scriptstyle \Box} } \require{physics} \require{ams} \require{mathtools} \)
Math and science::INF ML AI

x times derivative of ln of x

The following expression simplifies to 1:

\[ x \dv{x} \ln(x) = 1 \]
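As a quick sanity check (a sketch, not part of the card), the identity can be verified numerically by approximating the derivative of \( \ln \) with a central finite difference; the sample points are arbitrary:

```python
import math

# Numerically check x * d/dx ln(x) = 1 at a few arbitrary points,
# using a central finite difference for the derivative of ln.
def dlog(x, h=1e-6):
    return (math.log(x + h) - math.log(x - h)) / (2 * h)

for x in [0.5, 1.0, 3.0, 10.0]:
    # d/dx ln(x) = 1/x, so x * dlog(x) should be close to 1.
    assert abs(x * dlog(x) - 1.0) < 1e-5
print("identity holds numerically")
```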

The same pattern appears in the rewrite of the density-weighted score function, which follows from the chain rule applied to \( \ln \):

\[ p(x | \theta) \cdot \pdv{\theta} \ln(p(x | \theta)) \; = \; \pdv{\theta} p(x | \theta) \]
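This rewrite can also be checked numerically. The sketch below (my own choice, not from the card) takes \( p(x \mid \theta) \) to be the \( \mathcal{N}(\theta, 1) \) density and compares both sides via finite differences at an arbitrary point:

```python
import math

# Hedged sketch: let p(x | theta) be the N(theta, 1) density and check
#   p * d/dtheta ln(p)  ==  d/dtheta p
# using central finite differences in theta.
def p(x, theta):
    return math.exp(-(x - theta) ** 2 / 2) / math.sqrt(2 * math.pi)

def d_theta(f, x, theta, h=1e-6):
    return (f(x, theta + h) - f(x, theta - h)) / (2 * h)

x, theta = 0.7, 0.2  # arbitrary test point
lhs = p(x, theta) * d_theta(lambda x, t: math.log(p(x, t)), x, theta)
rhs = d_theta(p, x, theta)
assert abs(lhs - rhs) < 1e-8
print("weighted score rewrite holds:", lhs, rhs)
```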

This rewrite is used to prove that the expectation of the score function over \( x \) is zero (assuming regularity conditions that allow differentiation and integration to be interchanged):

\[ \begin{align*} \mathbb{E}_{x \sim p(x | \theta)} [s(x, \theta)] &= \int_{-\infty}^{\infty} \pdv{\theta} \ln(p(x | \theta)) \cdot p(x | \theta) \, \dd x \\ &= \int_{-\infty}^{\infty} \pdv{\theta} p(x | \theta) \, \dd x \\ &= \pdv{\theta} \int_{-\infty}^{\infty} p(x | \theta) \, \dd x \\ &= \pdv{\theta} 1 \\ &= 0 \end{align*} \]

This result, in turn, lets the Fisher information be described as the variance of the score function: since \( \E[s(x, \theta)] = 0 \), the variance reduces to \( \E[s(x, \theta)^2] \).
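A Monte Carlo sketch makes both facts concrete. For \( x \sim \mathcal{N}(\theta, 1) \) the score is \( s(x, \theta) = x - \theta \) and the Fisher information is exactly 1, so the sample mean of the score should be near 0 and its sample variance near 1 (the value of \( \theta \) and the sample size are arbitrary choices):

```python
import random

# Monte Carlo check: for x ~ N(theta, 1), the score is s(x, theta) = x - theta
# and the Fisher information is 1. The empirical mean of the score should be
# near 0 and its empirical variance near 1.
random.seed(0)
theta = 2.0  # arbitrary
scores = [random.gauss(theta, 1.0) - theta for _ in range(200_000)]
mean_score = sum(scores) / len(scores)
var_score = sum(s * s for s in scores) / len(scores)
print(mean_score, var_score)  # mean near 0, variance near 1
```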