\(
\newcommand{\cat}[1] {\mathrm{#1}}
\newcommand{\catobj}[1] {\operatorname{Obj}(\mathrm{#1})}
\newcommand{\cathom}[1] {\operatorname{Hom}_{\cat{#1}}}
\newcommand{\multiBetaReduction}[0] {\twoheadrightarrow_{\beta}}
\newcommand{\betaReduction}[0] {\rightarrow_{\beta}}
\newcommand{\betaEq}[0] {=_{\beta}}
\newcommand{\string}[1] {\texttt{"}\mathtt{#1}\texttt{"}}
\newcommand{\symbolq}[1] {\texttt{`}\mathtt{#1}\texttt{'}}
\newcommand{\groupMul}[1] { \cdot_{\small{#1}}}
\newcommand{\inv}[1] {#1^{-1} }
\)
Math and science::INF ML AI
Latent variables in autoencoders
The word 'latent' in 'latent values', 'latent variables', or 'latent vector', when describing the bottleneck of an autoencoder, conveys the idea that
these values were present in the data all along; we just didn't know how to find them.
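A minimal sketch of where the latent vector sits, assuming PyTorch (the `Autoencoder` class, layer sizes, and `latent_dim` are illustrative, not from the source): the encoder compresses each input into the bottleneck vector `z`, and the decoder must reconstruct the input from `z` alone.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Illustrative autoencoder; dimensions are placeholders."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the input down to the latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),   # the bottleneck
        )
        # Decoder: reconstruct the input from the latent vector alone.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)       # z: the latent vector, implicit in x all along
        x_hat = self.decoder(z)   # reconstruction from z alone
        return x_hat, z

x = torch.rand(16, 784)           # a batch of flattened 28x28 inputs
model = Autoencoder()
x_hat, z = model(x)
print(z.shape)                    # torch.Size([16, 32])
```

The reconstruction objective is what forces `z` to capture the structure already in the data: the decoder can only succeed if the encoder has found it.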
Source
Andrew Glassner's section of the SIGGRAPH 2020 course "Making Machine Learning Work: From Ideas to Production Tools"