Webinar 2019 Using reduced numerical precision on Pascal, Volta and Turing GPUs

From SHARCNETHelp
Revision as of 10:46, 26 September 2019 by Syam

Deep learning algorithms often do not require high numerical precision to produce satisfactory results, and can therefore benefit from the performance gains that reduced precision provides. To take advantage of this, recent generations of NVIDIA GPUs have added increasing support for reduced-precision operations on FP16, INT8, INT4, and BOOL data types. This seminar will describe how to use such operations in CUDA code.
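As a minimal illustration of the kind of reduced-precision CUDA code the seminar covers, the sketch below adds two FP16 vectors using the packed `half2` type from `cuda_fp16.h`. The kernel name and launch configuration are hypothetical; on Pascal and later GPUs, the `__hadd2` intrinsic performs two FP16 additions in a single instruction.

```cuda
// Hypothetical sketch: FP16 vector addition with CUDA's half2 type.
#include <cuda_fp16.h>

// Each thread processes one half2 element, i.e. two packed FP16 values,
// so a single __hadd2 instruction performs two additions at once.
__global__ void add_fp16(const __half2 *a, const __half2 *b,
                         __half2 *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = __hadd2(a[i], b[i]);  // two FP16 adds per instruction
}

// Example launch (n is the number of half2 elements, i.e. half the
// number of FP16 values):
//   add_fp16<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

Because each `half2` holds two values, arrays should be allocated and indexed in units of `half2`, which also keeps memory accesses naturally aligned.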