
Sinusoidal positional embedding

  1. Can you remember how to generate an embedding for an array of shape [n_time, n_channel]?
  2. How do you add this array to a PyTorch module?

1. Create sinusoidal embedding


import math

import torch


@torch.no_grad()
def get_sinusoidal_embeddings(length: int, n_ch: int) -> torch.Tensor:
    """
    Create a tensor of shape [time_steps, n_channels//2] in which every
    column holds the sequence [0, 1, 2, ..., length-1], then scale each
    column by a different value. The scaling values are chosen to span from
    1 down to 1/10000: the 0th channel is scaled by 1, and the scale is
    gradually reduced until the last channel, (n_ch//2 - 1), is scaled by
    1/10000. The steps are exponentially spaced, so the scaling initially
    decreases rapidly, then slows down as the minimum is approached.
    The resulting array is passed through two functions, sine and cosine,
    and the two results are concatenated along the channel dimension to get
    a tensor of shape [time_steps, n_channels].
    """
    half_nch = n_ch // 2
    # From 1 to 1/10000, exponentially spaced over half_nch steps.
    scale = (
        torch.arange(half_nch, dtype=torch.float)
        * -math.log(10000)
        / (half_nch - 1)
    ).exp()
    # Outer product: [length, 1] * [1, half_nch] -> [length, half_nch].
    t = torch.arange(length, dtype=torch.float)[:, None] * scale[None, :]
    # Concatenate sin and cos along the channel dim: [length, n_ch].
    res = torch.cat([t.sin(), t.cos()], dim=1)
    return res
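
A minimal sanity check (a sketch of my own, not part of the original card): at t=0 the sine half should be all zeros and the cosine half all ones, and the output should have shape [length, n_ch].


emb = get_sinusoidal_embeddings(length=50, n_ch=128)
assert emb.shape == (50, 128)
# At t=0: sin(0) = 0 for the first half, cos(0) = 1 for the second half.
assert torch.allclose(emb[0, :64], torch.zeros(64))
assert torch.allclose(emb[0, 64:], torch.ones(64))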

2. Use in a PyTorch module


import torch
from torch import nn


class SinusoidalPosEmbed(nn.Module):
    def __init__(self, n_time: int, n_ch: int, is_trainable: bool):
        super().__init__()

        if is_trainable:
            # Initialize a trainable embedding with the sinusoidal encoding;
            # it will be updated along with the other parameters in training.
            self.pos_embed = nn.Parameter(
                get_sinusoidal_embeddings(n_time, n_ch)
            )
        else:
            # Fixed encoding: register as a buffer, not a Parameter, so it is
            # never updated when training but still follows the module in
            # .to()/.cuda() calls. As the tensor is deterministic, it doesn't
            # need to be saved in the state dict, hence persistent=False.
            # Note: self.register_buffer returns None; it assigns the tensor
            # to self.pos_embed as a side effect.
            self.register_buffer(
                'pos_embed',
                get_sinusoidal_embeddings(n_time, n_ch),
                persistent=False,
            )

    def forward(self, x):
        # x has shape [..., n_time, n_ch]; broadcasting handles any leading
        # batch dimensions.
        x = x + self.pos_embed
        return x
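
A minimal usage sketch (the shape values are illustrative, and it assumes the definitions above): the embedding of shape [n_time, n_ch] broadcasts over the batch dimension.


pos_embed = SinusoidalPosEmbed(n_time=50, n_ch=128, is_trainable=False)
x = torch.zeros(8, 50, 128)  # [batch, n_time, n_ch]
out = pos_embed(x)           # pos_embed broadcasts over the batch dim
assert out.shape == (8, 50, 128)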