
Hey GPU, What's Up with My Matrix?
By Thushan Ganegedara | Jun 2023


Matrix multiplication; the holy grail of deep neural networks and modern language understanding behemoths. As MLEs or data scientists, our fingers are too quick to type tf.matmul or torch.matmul and we never look back. But don't tell me you've never had the millisecond infatuation of wondering what might be happening to that matrix when it enters the GPU! If you have, you're in the right place. Join me on a journey through the fascinating intricacies inside a GPU.

I'll explain how these compute powerhouses crunch the numbers. You'll learn three little-known, impressive things GPUs do when they come face-to-face with matrices. By the end of this blog post, you'll have a good understanding of how matrix multiplication works inside GPUs.

GEMM, or general matrix multiplication, is the kernel that gets executed when GPUs perform matrix multiplication.

C = α (A · B) + β C

Here, α and β are scalars, A is an M×K matrix, B is a K×N matrix, and thus C is an M×N matrix. It's as simple as that! You might wonder why that trailing addition exists. It turns out this is a fairly common pattern in neural networks (e.g. adding a bias, applying ReLU, adding residual connections).
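To pin down those semantics before we go anywhere near the hardware, here is a minimal reference sketch in plain C++ (the row-major, flat-array storage is my own assumption; this is of course nothing like how a GPU actually executes it).

#include <vector>

// Reference GEMM: C = alpha * (A . B) + beta * C
// A is MxK, B is KxN, C is MxN; all stored row-major in flat vectors.
void gemm(int M, int N, int K, float alpha, float beta,
          const std::vector<float>& A, const std::vector<float>& B,
          std::vector<float>& C) {
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            // The trailing "+ beta * C" is where things like a bias or a
            // residual term can be folded in.
            C[i * N + j] = alpha * acc + beta * C[i * N + j];
        }
}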

If you were asked to write a matrix multiplication algorithm from first principles, here's what you'd do (unless you're gifted with a GPU in lieu of a brain; wouldn't that save money for an MLE!).

// Naive triple loop: one inner product per output element C[i][j]
for (int i = 0; i < M; ++i)
    for (int j = 0; j < N; ++j)
        for (int k = 0; k < K; ++k)
            C[i][j] += A[i][k] * B[k][j];

Here's an animated visual that shows you what this does.

Inner-product based multiplication of two matrices (recreated by the author; source of inspiration: https://www.adityaagrawal.net/blog/architecture/matrix_multiplication)

But did you know that GPUs despise this implementation 🤔? To understand why that's the case, you need to understand the GPU memory architecture.

For all comparisons and specifications, I'll be using the Nvidia A100 GPU specs.

A GPU has three main memory levels:

  • Global memory or HBM (what you typically refer to as GPU memory and what you see when you run nvidia-smi)
  • Shared memory (a local memory that's dedicated to a single streaming multiprocessor [or SM] and shared between the threads running on that SM)
  • Registers (individually allocated to threads to carry out their workload)

This is what it looks like:

The typical memory hierarchy of a GPU (L0/L1/L2 caches ignored for simplicity)

The first thing to note is that shared memory (referred to as SRAM from now on) is far smaller than the HBM, and the registers are smaller still. So your matrix is not going to fit in there (in most cases). If we go back to our animation, for a single row of A all the columns of B need to be retrieved, and the process is repeated for every row of A. This means the GPU has to do many, many reads to compute the output. And the HBM (~1.5 TB/s) is more than an order of magnitude slower than SRAM (~19 TB/s).

To put that in numbers: say you want to multiply a 10x20 and a 20x30 matrix; you need to read the columns of B 10x30 = 300 times. Is there a better way to do this?

Turns out a simple trick can go a long way here! Simply flip the order of the loops so that k becomes the outermost loop. And you're done! 😮

// Same computation, but with k as the outermost loop (outer-product order)
for (int k = 0; k < K; ++k)
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j)
            C[i][j] += A[i][k] * B[k][j];

We didn't touch the actual computation, just the order of the loops, so we should get the same result as before. Here's what the matrix multiplication looks like now!

Outer-product based multiplication of two matrices (recreated by the author; source of inspiration: https://www.adityaagrawal.net/blog/architecture/matrix_multiplication)

You see, we only bring in one column of A and one row of B at a time and never look back. This requires far fewer reads than the original implementation. The only difference is that before we were computing the inner product between two vectors; now we're computing the outer product.
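To make "far fewer reads" concrete, here is a rough counting sketch under a deliberately simple model (my assumptions: C stays resident in fast memory, nothing is cached, and every element of A or B fetched from HBM counts as one read).

#include <cstdio>

int main() {
    // The earlier example: A is 10x20, B is 20x30.
    long long M = 10, K = 20, N = 30;

    // Inner-product order (i, j, k): every output element streams a row of A
    // and a column of B, so B's columns get re-read for every row of A.
    long long inner_reads = M * N * 2 * K;   // 12,000 element reads

    // Outer-product order (k, i, j): each k reads one column of A and one row
    // of B exactly once, while C accumulates in fast memory.
    long long outer_reads = K * (M + N);     // 800 element reads

    printf("inner-product order: %lld reads\n", inner_reads);
    printf("outer-product order: %lld reads\n", outer_reads);
    return 0;
}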

The difference between the inner product and the outer product, shown in green, for two vectors (blue and yellow).

But still, we need the entire C in SRAM, and that might be too big to fit. What does CUDA do then? That brings us to the second trick.

Not to worry! I'm not going to blast you with any complex mathematics or Leetcode algorithms. The main thing to keep in mind is that a matrix is a 2D layout of individual tiles. The following animation does justice to what I'm trying to explain.

You can iterate over the blocks of A and B and still compute the exact answer for C's corresponding block

The result of the green block 💚 comes from the light blue strip of A 💙 and the light yellow strip of B 💛. Taking this a step further, to compute the output you can bring in one block of A's strip and one block of B's strip at a time, compute the partial output and accumulate the result in the green box.

This gives us a flexible framework where we can load arbitrarily sized blocks (or tiles) of A and B and still compute the final answer. We don't have to stop there; we can keep dividing the problem recursively into smaller and smaller problems, i.e. the matrix is broken into tiles, tiles are broken into fragments, and fragments into individual values.
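Here is what that breakdown looks like as plain loop tiling in C++, just to make the idea concrete (the tile sizes and the row-major flat storage are my own assumptions, and a real GPU kernel would map the inner loops onto thread blocks, warps and threads instead of running them sequentially).

#include <algorithm>
#include <vector>

// Tiled matmul sketch: C (MxN) += A (MxK) * B (KxN), row-major flat arrays.
// Tile sizes are placeholders; a real kernel picks them to fit SRAM/registers.
constexpr int Mtile = 64, Ntile = 64, Ktile = 8;

void matmul_tiled(int M, int N, int K,
                  const std::vector<float>& A, const std::vector<float>& B,
                  std::vector<float>& C) {
    for (int m = 0; m < M; m += Mtile)
        for (int n = 0; n < N; n += Ntile)
            // (m, n) selects one output tile: the "green block".
            for (int k = 0; k < K; k += Ktile)
                // Bring in one block of A's strip and one block of B's strip,
                // multiply them, and accumulate into the output tile.
                for (int i = m; i < std::min(m + Mtile, M); ++i)
                    for (int j = n; j < std::min(n + Ntile, N); ++j)
                        for (int kk = k; kk < std::min(k + Ktile, K); ++kk)
                            C[i * N + j] += A[i * K + kk] * B[kk * N + j];
}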

Using the tiling approach, the problem can be broken down recursively

And this lends itself nicely to the way a GPU executes work. There are three layers to a kernel execution in a GPU. For simplicity, we'll say an SM runs a single thread block (although in practice an SM executes several concurrently, to reduce something known as the tail effect).

  • Threads
  • Warps (a collection of 32 threads)
  • Thread blocks (a collection of several warps)

The exact number of threads in a thread block depends on the specific architecture. For example, an A100 has the following specifications.

  • Maximum of 2048 threads per SM
  • Maximum of 1024 threads per block
  • Maximum of 32 thread blocks per SM
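As a toy illustration of how those limits interact, here is a quick back-of-the-envelope in C++ (the 256-thread block size is an assumed example, not something the A100 mandates).

#include <algorithm>
#include <cstdio>

int main() {
    // A100 limits quoted above.
    const int max_threads_per_sm    = 2048;
    const int max_threads_per_block = 1024;
    const int max_blocks_per_sm     = 32;

    // Assumed block size, well under the per-block limit.
    const int threads_per_block = 256;
    const int warps_per_block   = threads_per_block / 32;              // 8 warps

    // How many such blocks can be resident on one SM: whichever limit bites first.
    const int resident_blocks = std::min(max_threads_per_sm / threads_per_block,
                                         max_blocks_per_sm);           // 8 blocks

    printf("%d threads/block (max %d) = %d warps; %d such blocks fit on one SM\n",
           threads_per_block, max_threads_per_block, warps_per_block, resident_blocks);
    return 0;
}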

Sidebar #2: The magic of powers of two

Going back to tiling, it has been found (heuristically) that a matrix tile of size 256x128 per thread block gives reasonable efficiency for most problems. Therefore, it's a common tile size used by CUDA.

You might have heard about the best practice of keeping batch sizes and hidden dimension sizes as powers of two. This is where it comes from! When your matrix dimensions are powers of two, the matrix divides evenly into a set of tiles with no remainder. If not, your code becomes less efficient.

GPU computations are more efficient when your matrix dimensions are powers of two

What happens when a dimension is not a power of two?

Sidebar #3: Tile quantization

What happens is an effect known as tile quantization. In other words, if your tiles are 128 elements wide along the row dimension but your matrix has 257 elements in a row, you'll need not two but three tiles per row (i.e. 256+1). This is illustrated below.

Just because we had one extra element per row, we now have to dedicate two whole thread blocks

The problem with this is that a thread block does the same amount of computation regardless of how much useful data resides in it. So you're taking away the opportunity for your GPU to do useful computation, leading to inefficiencies.

A similar effect is known as wave quantization, where the matrix is over-sized and the SMs collectively can't fit it at once. Then the GPU needs to do the computation in two "waves". However, this is less of a concern for modern GPUs, as they leverage concurrency to reduce wave quantization.

Tile quantization happens when a thread block is only partially filled with useful data; wave quantization happens when the last wave of thread blocks only partially fills the SMs.
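Here is a back-of-the-envelope sketch of both effects (the 108-SM count is the A100's, the one-block-per-SM-per-wave model is a deliberate simplification, and the matrix shapes are picked purely for illustration).

#include <cstdio>

// Ceiling division: how many tiles are needed to cover `size` elements.
long long ceil_div(long long size, long long tile) { return (size + tile - 1) / tile; }

// Fraction of the launched tile work that lands on real matrix elements.
double useful_fraction(long long rows, long long cols, long long trows, long long tcols) {
    long long tiles = ceil_div(rows, trows) * ceil_div(cols, tcols);
    return double(rows * cols) / double(tiles * trows * tcols);
}

int main() {
    // The thread-block tile mentioned above: 256 x 128.
    const long long trows = 256, tcols = 128;

    // Tile quantization: one extra column drags in a whole extra column of tiles.
    printf("512 x 256 output: %.0f%% of the tile work is useful\n",
           100 * useful_fraction(512, 256, trows, tcols));   // 100%
    printf("512 x 257 output: %.0f%% of the tile work is useful\n",
           100 * useful_fraction(512, 257, trows, tcols));   // ~67%

    // Wave quantization: more output tiles than SMs means an extra "wave".
    const long long num_sms = 108;                                      // A100
    long long tiles = ceil_div(28672, trows) * ceil_div(128, tcols);    // 112 tiles
    printf("%lld tiles over %lld SMs -> %lld waves\n",
           tiles, num_sms, ceil_div(tiles, num_sms));                   // 2 waves
    return 0;
}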

The final trick is kernel fusion. More often than not, it's faster to do all the computation in a single kernel than to have two kernels called one after the other. Why? Because one kernel needs to write its data to HBM and the other needs to read it back. We've already talked about how slow that is. A better approach is to simply combine the two operations into one kernel.

As seen here (I'm sure PyTorch has a similar glossary), there are many fused kernels offered through TensorFlow that combine commonly co-occurring operations into a single kernel. In code, it means something like this,

// Tiled matmul with a fused epilogue: the accumulator stays in a register
// until the output element is fully computed.
for (int m = 0; m < M; m += Mtile)
    for (int n = 0; n < N; n += Ntile)
        for (int i = 0; i < Mtile; ++i)
            for (int j = 0; j < Ntile; ++j) {
                int row = m + i;
                int col = n + j;
                float tmp = 0;
                for (int k = 0; k < K; ++k)
                    tmp += A[row][k] * B[k][col];
                // Do other things (the fused part) while tmp is still in a register
                C[row][col] = tmp;
            }

In other words, we hold on dearly to our tmp variable until we've finished all our computations. Only then do we write the result back to C.
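For instance, the "// Do other things" slot is where a fused bias-add and ReLU could live. Here is a sketch under that assumption (the bias vector, the ReLU, and the row-major flat layout are illustrative choices of mine, not something prescribed by a particular framework).

// Fused matmul + bias + ReLU sketch: each C[row][col] is written to HBM exactly once.
// A: MxK, B: KxN, C: MxN, bias: one value per column of C; all row-major.
void matmul_bias_relu(int M, int N, int K,
                      const float* A, const float* B, const float* bias, float* C) {
    for (int row = 0; row < M; ++row)
        for (int col = 0; col < N; ++col) {
            float tmp = 0.0f;
            for (int k = 0; k < K; ++k)
                tmp += A[row * K + k] * B[k * N + col];   // accumulate in a register
            tmp += bias[col];                             // fused bias-add
            C[row * N + col] = tmp > 0.0f ? tmp : 0.0f;   // fused ReLU, single write-back
        }
}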

That's it, folks. I hope this was an enjoyable tour through the weeds of a GPU. If you prefer the audio-visual version, here's the link to my YouTube video.

To recap, we discussed three things that make GPUs really fast at matrix multiplication.

  • GPUs abandon the friendlier inner-product implementation of matmul and embrace the more read-efficient outer-product implementation of matmul
  • GPUs split the matrices into smaller blocks (and blocks into fragments) and spread the compute load across thread blocks, warps and threads.
  • GPUs employ kernel fusion to bring commonly co-occurring functionality together, improving GPU efficiency.

If you enjoyed this story, feel free to subscribe to Medium; you'll get notifications for fresh content from me, as well as unlock full access to thousands of quality stories from other authors.

Unless otherwise noted, all images are by the author.

