- Meant to mimic cognitive attention
- Picks out relevant bits of information
- Weights learned via gradient descent
- Used in the 1990s under names such as:
- Multiplicative modules
- Sigma pi units
- Hyper-networks
- Can draw from a relevant state at any preceding point along the sequence
- Addresses RNNs' vanishing-gradient problem
- LSTMs tend to preserve information from far back in the sequence poorly
- An attention layer can access all previous states and weighs them according to a learned measure of relevance
- Allows referring arbitrarily far back to relevant tokens
- Can be added to RNNs (see the first sketch after this list)
- In 2016, a new type of highly parallelisable decomposable attention was successfully combined with a feedforward network
- Attention is useful in and of itself, not just in combination with RNNs
- Transformers use attention without recurrent connections
- Process all tokens simultaneously
- Calculate attention weights in successive layers
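The bullets about adding attention to RNNs can be made concrete with a minimal sketch. The setup here is assumed for illustration (PyTorch, a GRU encoder, dot-product scoring, and using the last hidden state as the query are all illustrative choices, not a specific published model): an attention step re-reads every encoder state and weighs it by relevance to the current query.

```python
import torch
import torch.nn as nn

# Minimal sketch: attention added on top of an RNN. A GRU encodes the
# sequence, then an attention step looks back over ALL hidden states,
# weighing each one by its relevance to a query, instead of relying
# only on the final hidden state. Sizes are illustrative.

d_model = 32
encoder = nn.GRU(input_size=16, hidden_size=d_model, batch_first=True)

x = torch.randn(1, 20, 16)                 # 20 input token vectors
states, _ = encoder(x)                     # all hidden states: (1, 20, 32)

query = states[:, -1:, :]                  # e.g. the current decoding state
scores = query @ states.transpose(1, 2)    # relevance of every past state
weights = torch.softmax(scores / d_model ** 0.5, dim=-1)
context = weights @ states                 # weighted sum of all past states

print(weights.shape, context.shape)        # (1, 1, 20) (1, 1, 32)
```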
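The transformer bullets (attention without recurrent connections, all tokens processed simultaneously, attention weights recomputed in successive layers) can be sketched as a plain stack of self-attention plus small feedforward blocks. The class name, layer count, feedforward sizes, and residual arrangement below are assumptions for illustration, not the exact original architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of an attention-only stack (no recurrence). Every token
# attends to every other token in each layer, so the whole sequence is
# processed simultaneously and new attention weights are computed in
# each successive layer. All sizes are illustrative.

class TinyAttentionStack(nn.Module):
    def __init__(self, d_model=64, n_layers=2):
        super().__init__()
        self.attn_layers = nn.ModuleList(
            [nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
             for _ in range(n_layers)]
        )
        self.ff_layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model),
                           nn.ReLU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(n_layers)]
        )

    def forward(self, x):                    # x: (batch, seq_len, d_model)
        for attn, ff in zip(self.attn_layers, self.ff_layers):
            # Self-attention: queries, keys and values all come from x.
            attended, _ = attn(x, x, x)
            x = x + attended                 # residual connection
            x = x + ff(x)                    # position-wise feedforward
        return x

tokens = torch.randn(1, 5, 64)               # 5 token embeddings at once
print(TinyAttentionStack()(tokens).shape)    # torch.Size([1, 5, 64])
```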
Scaled Dot-Product Attention
- Calculate attention weights between all tokens at once
- Learn 3 weight matrices
- Word vectors
- For each token i, the input word embedding xi
- Multiplied with each of the above matrices to produce a vector of each type:
- Query Vector
- qi=xiWQ
- Key Vector
- ki=xiWK
- Value Vector
- vi=xiWV
- Attention weights
- Computed from the query vector of token i and the key vector of token j
- aij=qi⋅kj
- Divided by the square root of the dimensionality of the key vectors, √dk
- Passed through a softmax to normalise the weights
- WQ and WK are different matrices
- Attention can be non-symmetric
- Token i attends to j (qi⋅kj is large)
- Doesn’t imply that j attends to i (qj⋅ki can be small)
- The output for token i is the weighted sum of the value vectors of all tokens, weighted by aij
- aij: the attention from token i to each other token j
- Q, K, V are the matrices whose ith rows are the vectors qi, ki, vi respectively
Attention(Q, K, V) = softmax(QKᵀ / √dk) V
- The softmax is taken over the horizontal (row) axis, so each row of weights sums to 1 (see the NumPy sketch below)
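Putting the list above together, here is a minimal NumPy sketch of scaled dot-product attention. The sequence length, dimensions, and random placeholder weight matrices are assumptions for illustration; in a real model WQ, WK, and WV are learned by gradient descent, and the helper softmax_rows is just a hand-rolled row-wise softmax.

```python
import numpy as np

# Minimal sketch of scaled dot-product attention. The weight matrices
# W_Q, W_K, W_V and all dimensions are random placeholders; in practice
# they are learned parameters.

rng = np.random.default_rng(0)
seq_len, d_model, d_k, d_v = 4, 8, 8, 8

X = rng.standard_normal((seq_len, d_model))   # row i is word embedding x_i
W_Q = rng.standard_normal((d_model, d_k))     # query weights
W_K = rng.standard_normal((d_model, d_k))     # key weights
W_V = rng.standard_normal((d_model, d_v))     # value weights

Q = X @ W_Q          # row i is q_i = x_i W_Q
K = X @ W_K          # row i is k_i = x_i W_K
V = X @ W_V          # row i is v_i = x_i W_V

def softmax_rows(a):
    # Softmax over the horizontal (row) axis, so each row sums to 1.
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = Q @ K.T / np.sqrt(d_k)       # a_ij = q_i . k_j / sqrt(d_k)
A = softmax_rows(scores)              # attention weights, rows sum to 1
output = A @ V                        # output_i = sum_j a_ij v_j

# W_Q != W_K, so attention is not symmetric: A[i, j] != A[j, i] in general.
print(A[0, 1], A[1, 0])
print(output.shape)                   # (4, 8)
```

Dividing by √dk keeps the dot products from growing with the key dimensionality, which would otherwise push the softmax toward saturation; and because WQ and WK are different matrices, A[i, j] and A[j, i] generally differ, which is the non-symmetry noted above.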