
Int8 Matmul backward for all GPUs

@TimDettmers released this 02 Feb 14:51

This release makes memory-efficient backward the default for the bitsandbytes matrix multiplication (bnb.matmul). Additionally, matrix multiplication with 8-bit weights is now supported on all GPUs.
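
As a usage illustration, here is a minimal sketch using the 8-bit linear layer, whose forward pass routes through bnb.matmul. It assumes a CUDA GPU and this release or newer; constructor arguments may differ slightly between bitsandbytes versions:

```python
# Minimal usage sketch (assumes a CUDA GPU and this release or newer).
import torch
import bitsandbytes as bnb

# 8-bit linear layer; its forward pass routes through bnb.matmul.
layer = bnb.nn.Linear8bitLt(1024, 1024, has_fp16_weights=False).cuda()

x = torch.randn(8, 1024, dtype=torch.float16, device="cuda", requires_grad=True)
out = layer(x)           # Int8 weight matmul in the forward pass
out.sum().backward()     # memory-efficient backward is now the default
print(x.grad.shape)      # gradients flow back through the 8-bit weights
```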

During backprop, the Int8 weights are converted back to a row-major layout through an inverse index. The general matmul with Int8 weights on all GPUs works by casting the weights from Int8 to the input's data type (TF32/FP32/BF16/FP16) and then performing a standard matrix multiplication. As such, the matrix multiplication during backprop and on non-tensor-core devices is memory efficient, but slow.
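
Conceptually, the all-GPU fallback path looks like the following rough sketch. The function and parameter names are illustrative assumptions, not the actual bitsandbytes internals, which track quantization state differently:

```python
# Illustrative only: the conceptual all-GPU fallback path. The names
# here are assumptions, not the actual bitsandbytes internals.
import torch

def int8_matmul_fallback(x: torch.Tensor,
                         weight_int8: torch.Tensor,
                         scale: torch.Tensor) -> torch.Tensor:
    # Cast the Int8 weights up to the input's dtype and rescale, then
    # run a plain matmul. No Int8 tensor-core kernels are required, so
    # this works on any GPU: memory efficient, but slow.
    weight = weight_int8.to(x.dtype) * scale
    return x @ weight.t()
```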

These contributions were the work of Alexander Borzunov and Yozh, thank you!

Features:

  • Int8 MatmulLt now supports backward through inversion of the ColTuring/ColAmpere format (see the sketch after this list). Slow, but memory efficient. Big thanks to @borzunov
  • Int8 matmul is now supported on all GPUs. On devices with compute capability < 7.5, the Int8 weights are cast to 16/32-bit for the matrix multiplication. Contributed by @borzunov
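
For intuition on the layout inversion in the first item, a hypothetical sketch follows. The function name and index convention are assumptions for illustration; the actual ColTuring/ColAmpere transforms are CUDA kernels inside bitsandbytes:

```python
# Hypothetical sketch of undoing a tiled weight layout through an
# inverse index; the real ColTuring/ColAmpere inversion is a CUDA kernel.
import torch

def undo_layout(permuted: torch.Tensor, perm: torch.Tensor) -> torch.Tensor:
    # perm[j] holds the row-major position of the element stored at
    # permuted position j; scattering through perm restores row-major order.
    flat = permuted.reshape(-1)
    out = torch.empty_like(flat)
    out[perm] = flat
    return out.reshape(permuted.shape)
```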

Improvements:

  • Improved logging for the CUDA detection mechanism.