
Visualizing a Perceptron

A lot of machine learning techniques can be viewed as attempts to represent high-dimensional data in fewer dimensions without losing the important information. In a sense, it is lossy compression: the data is squeezed into a small, manageable form before being passed to the next stage of processing. If our data consists of elements of \( \mathbb{R}^D \), we are trying to find interesting functions of the form: \[ f : \mathbb{R}^D \to \mathbb{R}^d \] where \( D \) is fixed, but \( d \) can be chosen freely. Read more...
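
For concreteness, here is a minimal sketch of one such \( f \): a linear map \( f(x) = Wx \) with \( W \in \mathbb{R}^{d \times D} \) whose rows are PCA directions. The data, the sizes, and the choice of PCA are illustrative assumptions on my part, not something fixed by the post.

```python
import numpy as np

# Illustrative setup: 100 points in R^5, to be mapped down to R^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

X_centered = X - X.mean(axis=0)        # PCA assumes centered data
# Rows of Vt are the principal directions, ordered by singular value.
_, _, Vt = np.linalg.svd(X_centered, full_matrices=False)

D, d = 5, 2
W = Vt[:d]                             # shape (d, D): a map R^D -> R^d

def f(x):
    """A linear f : R^D -> R^d built from the top d principal directions."""
    return W @ x

print(f(X_centered[0]))                # a 2-dimensional representation
```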

Matrix Mnemonics

Reading math notation for matrices is burdened by having to recall trivial things like whether rows or columns are indexed first. An author can be trying to communicate something quite simple, yet the cognitive load on the reader can be high as they work to unpack the matrix notation. The symmetry between rows and columns makes this especially frustrating: there is no satisfying reason for conventions like indexing rows before columns; a choice simply had to be made. Read more...
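
As a concrete reminder of the convention (a toy example of my own; the particular matrix is arbitrary), numpy mirrors the rows-first rule:

```python
import numpy as np

# Rows are indexed first, then columns; shape is reported as (rows, columns).
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)    # (2, 3): 2 rows, 3 columns
print(A[0, 2])    # 3: row 0, column 2 -- the entry written a_{13} in math
print(A[1])       # [4 5 6]: a single subscript selects a whole row
```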

Visualizing Matrix Multiplication

Whenever I come across a matrix multiplication, my first attempt at visualizing it is to view the multiplication as: multiple objects, combined together, many times. Matrices usually carry a list of objects, with each object represented by a row or column of the matrix. Inspecting how matrices behave by looking at these objects can be an effective way to understand what an author is trying to communicate when they use matrices. Read more...
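
A small sketch of that reading (the matrices here are made up for illustration): each row of \( AB \) is a weighted combination of the rows of \( B \), i.e. multiple objects, combined together, once per row of \( A \).

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[10.0, 20.0],    # object 1 (a row of B)
              [30.0, 40.0]])   # object 2

C = A @ B

# Row 0 of C combines B's rows using the weights in A's row 0, i.e. [1, 2]:
row0 = 1.0 * B[0] + 2.0 * B[1]
assert np.allclose(C[0], row0)
print(C)
```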