TensorFlow.js HGEMM matrix multiplication benchmark

TensorFlow.js (WebGL) based N×N matrix multiplication C = A × B benchmark. Random matrices A and B are generated for the calculation. FLOPS = 2 N³ / Tmin, where Tmin is the fastest iteration time. You can set a new N value (note that the execution time grows as ~N³). The first run initializes A and B and is the slowest.
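For example (illustrative numbers, not a measurement): for N = 1024 one multiplication performs 2·1024³ ≈ 2.15·10⁹ floating point operations, so a fastest time of Tmin = 20 ms would correspond to 2.15·10⁹ / 0.020 s ≈ 107 GFLOPS.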


tfjs-core: Make it possible to force usage of F16 textures. #1902
This is not in the 1.2.8 release yet, therefore a locally compiled tf-core.js is used.
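Aside: once #1902 landed in a release, the same switch is also exposed as a WebGL environment flag. A sketch, assuming a tfjs-core version new enough to provide tf.env() and the WEBGL_FORCE_F16_TEXTURES flag:

  // set before the backend allocates any textures; has the same effect
  // as the tf.webgl.forceHalfFloat() call used in the script below
  tf.env().set('WEBGL_FORCE_F16_TEXTURES', true);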

The test results are strange (note that FP16 is 2 times faster than FP32 in OpenCL CLBlast tests).
As you can see in the console, tfjs-core fragment shaders set "highp" precision for floats, therefore drivers use FP32 math. My fast HGEMM script with RGBA16F textures uses "mediump" precision for floats!
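A quick way to check what "mediump" actually maps to on a given GPU (a diagnostic sketch, independent of tfjs):

  var gl = document.createElement('canvas').getContext('webgl2');
  var fmt = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.MEDIUM_FLOAT);
  // fmt.precision is the mantissa bit count: 10 means true FP16 math,
  // 23 means the driver silently promotes mediump to FP32
  console.log('mediump float mantissa bits:', fmt.precision);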

<script src="tf-core.js"></script>

var N = 1024, it = 10, A, B;   // it = iterations per run (the page's input fields set N and it)
function init() {
  tf.webgl.forceHalfFloat()    // request RGBA16F textures (see PR #1902)

  A = tf.randomUniform([N, N]);
  B = tf.randomUniform([N, N]);
  run()
}
function run() {
 tf.tidy(() => {
  var ti0 = performance.now(), ti, dt, Tmin = 100000, str = "\n T(ms)= "
  for(var i = 0; i < it; i++){
    const C = A.matMul(B)
    var t = C.dataSync()[0];   // dummy readback forces GPU synchronisation
    ti = performance.now()
    dt = ti - ti0
    if(Tmin > dt) Tmin = dt    // keep the fastest iteration
    str += Math.round(10*dt)/10 + "\u2003"
    ti0 = ti
  }
  document.getElementById('output').innerText = "N = " + N +
    " GFLOPS = " + Math.round(2*N*N*N/Tmin/10000)/100 + str
 })
}

Comments

This script uses a dummy "C.dataSync()[0]" read every run for synchronisation. See also the "Asynchronous" script.
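A minimal sketch of such an asynchronous run loop (assuming the same N, it, A, B globals as above): C.data() returns a Promise, so the readback does not block the main thread.

  async function runAsync() {
    var ti0 = performance.now(), Tmin = 1e9;
    for (var i = 0; i < it; i++) {
      const C = tf.tidy(() => A.matMul(B));
      await C.data();                 // asynchronous GPU -> CPU readback
      C.dispose();
      var ti = performance.now(), dt = ti - ti0;
      if (dt < Tmin) Tmin = dt;
      ti0 = ti;
    }
    console.log("GFLOPS = " + Math.round(2*N*N*N/Tmin/10000)/100);
  }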
SGEMM in WebGL2-compute     updated 4 Sep 2019