Matrix multiplication SGEMM: Wider data-types (mat2x4)

"WebGL 2 compute" NxN matrix multiplication C = A x B (SGEMM) v.4_4 demo.
See Kernel 4: Wider data-types by Cedric Nugteren.
All A, B elements are random in (0, 1). The error er1 is calculated as the sum of |C_CPU - C_GPU| over all matrix elements, divided by N*N; er2 = max(|C_CPU - C_GPU|). See also the Shader 4_8 benchmark.
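
A minimal sketch of how er1 and er2 can be computed on the CPU side, assuming cCPU and cGPU are Float32Array copies of the reference and GPU results (function and variable names here are illustrative, not from the demo source):

function compareResults(cCPU: Float32Array, cGPU: Float32Array, N: number) {
  let sum = 0, max = 0;
  for (let i = 0; i < N * N; i++) {
    const d = Math.abs(cCPU[i] - cGPU[i]);   // per-element deviation
    sum += d;                                // accumulate total absolute error
    if (d > max) max = d;                    // track the largest single deviation
  }
  return { er1: sum / (N * N), er2: max };
}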

Compute Shader 4_8

See also the page source
#version 310 es
#define TS 32u
#define WIDTH 8u
#define TSW 4u  // TS/WIDTH
layout (local_size_x = TSW, local_size_y = TS, local_size_z = 1) in;
layout (std430, binding = 0) readonly buffer ssbA {
  mat2x4 A[];
};
layout (std430, binding = 1) readonly buffer ssbB {
  mat2x4 B[];
};
layout (std430, binding = 2) writeonly buffer ssbC {
  mat2x4 C[];
};
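// Each mat2x4 packs WIDTH = 8 consecutive floats of one matrix column
// (column-major storage, as in the original OpenCL kernel), so A and C
// are indexed with M/WIDTH packed rows per column and B with K/WIDTH.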
uniform uvec3 MNK;
shared mat2x4 Asub[TS][TS/WIDTH];  // Local memory to fit a tile of
shared mat2x4 Bsub[TS][TS/WIDTH];  // TS*TS elements of A and B
void main() {
    uint M = MNK.x, N = MNK.y, K = MNK.z;

    // Thread identifiers
    uint row = gl_LocalInvocationID.x; // Local row ID (max: TS/WIDTH)
    uint col = gl_LocalInvocationID.y; // Local col ID (max: TS)
    uint globalRow = (TS/WIDTH)*gl_WorkGroupID.x + row; // Row ID of C (0..M/WIDTH)
    uint globalCol = TS*gl_WorkGroupID.y + col; // Col ID of C (0..N)

    // Initialise the accumulation register
    mat2x4 acc = mat2x4(vec4(0.0),vec4(0.0));

    // Loop over all tiles
    uint numTiles = K/TS;
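    // (assumes M, N and K are multiples of TS, so no remainder handling is needed)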
    for (uint t=0u; t < numTiles; t++) {

        // Load one tile of A and B into local memory
        uint tiledRow = (TS/WIDTH)*t + row;
        uint tiledCol = TS*t + col;
        Asub[col][row] = A[tiledCol*(M/WIDTH) + globalRow];
        Bsub[col][row] = B[globalCol*(K/WIDTH) + tiledRow];

        // Synchronise to make sure the tile is loaded
        memoryBarrierShared();
        barrier();

        // Perform the computation for a single tile
        mat2x4 vecB;
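        // Each line below scales a whole mat2x4 of Asub (8 floats) by one scalar
        // component of vecB, so one k-iteration accumulates WIDTH*WIDTH = 64 multiply-adds.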
        for (uint k=0u; k < TS/WIDTH; k++) {
            vecB = Bsub[col][k];
            acc += Asub[WIDTH*k][row] * vecB[0][0];
            acc += Asub[WIDTH*k + 1u][row] * vecB[0][1];
            acc += Asub[WIDTH*k + 2u][row] * vecB[0][2];
            acc += Asub[WIDTH*k + 3u][row] * vecB[0][3];
            acc += Asub[WIDTH*k + 4u][row] * vecB[1][0];
            acc += Asub[WIDTH*k + 5u][row] * vecB[1][1];
            acc += Asub[WIDTH*k + 6u][row] * vecB[1][2];
            acc += Asub[WIDTH*k + 7u][row] * vecB[1][3];
        }

        // Synchronise before loading the next tile
        barrier();
    }
    // Store the final result in C
    C[globalCol*(M/WIDTH) + globalRow] = acc;
}
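
A rough host-side sketch of how this shader could be dispatched, assuming an experimental "webgl2-compute" context gl, a linked program, and the A, B, C storage buffers already bound to bindings 0-2 (the names below are illustrative, not from the demo source). Each work group produces one TS x TS tile of C, so a square N x N multiplication needs N/TS x N/TS groups:

function dispatchSGEMM(gl: any, program: WebGLProgram, N: number): void {
  const TS = 32;                                    // must match TS in the shader
  gl.useProgram(program);
  gl.uniform3ui(gl.getUniformLocation(program, "MNK"), N, N, N);  // square case: M = N = K
  gl.dispatchCompute(N / TS, N / TS, 1);            // one work group per TS x TS tile of C
  gl.memoryBarrier(gl.SHADER_STORAGE_BARRIER_BIT);  // make C visible before reading it back
}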

Comment:

Nvidia GPUs have a 256-bit register file, therefore 8 x 32-bit words of data (float8) give the best performance in OpenCL. Unfortunately, in WebGL the mat2x4-based shader is a bit slower than the vec4-based one (26.5 vs. 30 GFLOPS).

What is the optimal data (GPU register) width for mobile GPUs?


GEMM in WebGL2-compute     updated 4 Mar 2019