Math and science::INF ML AI
Negative log likelihood loss. A perspective.
Negative log likelihood loss is normally calculated as the mean log likelihood with its sign flipped, taken over the N observed samples. This is:
\[
\text{loss} = -\frac{1}{N} \sum_{i=1}^{N} \log \mathcal{P}(\text{data}_i \mid \text{model\_out}_i)
\]
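As a sketch under those definitions, the loss can be computed directly from the probability the model assigns to each observed sample; the function and array names below are illustrative, not from any particular library.

```python
import numpy as np

def negative_log_likelihood(probs_of_observed: np.ndarray) -> float:
    """Mean negative log likelihood, given the probability the model
    assigned to each observed data point (shape: [n_samples])."""
    return float(-np.mean(np.log(probs_of_observed)))

# A model that concentrates probability on what was actually observed
# gets a low loss; a vaguer model gets a higher one.
confident = np.array([0.9, 0.8, 0.95])
vague = np.array([0.3, 0.2, 0.25])
print(negative_log_likelihood(confident))  # ~0.13
print(negative_log_likelihood(vague))      # ~1.40
```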
As this mean is taken over many samples, it approximates an expectation: the expected negative log probability the model assigns to the data. Sound familiar? This is a Monte Carlo approximation to the cross-entropy between the data distribution and the model, which reduces to the entropy of the data when the model matches it:
\[
\text{entropy} = -\sum_{i=1}^{N} p(x_i) \log p(x_i)
\]
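A minimal sketch of that relationship, using a toy categorical data distribution and model (both invented for illustration): averaging the model's negative log probability over samples drawn from the data converges to the cross-entropy, which coincides with the entropy above only when the model equals the data distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.7, 0.2, 0.1])  # "data" distribution
q = np.array([0.5, 0.3, 0.2])  # model distribution

# Exact quantities for reference.
entropy_p = -np.sum(p * np.log(p))      # H(p)
cross_entropy = -np.sum(p * np.log(q))  # H(p, q)

# Monte Carlo estimate: mean NLL of the model on samples drawn from the data.
samples = rng.choice(len(p), size=100_000, p=p)
nll_estimate = -np.mean(np.log(q[samples]))

print(entropy_p, cross_entropy, nll_estimate)
# nll_estimate converges to cross_entropy, which equals entropy_p only when q == p.
```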
Energy based
By minimizing the negative log likelihood, we push the model's distribution to be as peaked as possible on the data, giving probability mass to the data points actually observed and little to everything else. This can be viewed as the energy-based notion of "pushing down" at the data, except that here high compatibility is expressed as a high value (high probability) rather than a low energy.
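A short derivation, assuming the usual Gibbs parameterization of an energy-based model (an assumption, not spelled out above), makes the "pushing down" picture explicit:
\[
p(x) = \frac{e^{-E(x)}}{Z}, \qquad Z = \sum_{x'} e^{-E(x')}, \qquad -\log p(x) = E(x) + \log Z
\]
Minimizing the negative log likelihood of an observed \(x\) therefore pushes its energy \(E(x)\) down, while the \(\log Z\) term pushes energy up everywhere else; low energy plays the role that high probability plays in the likelihood view.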