Webinar 2019 Using reduced numerical precision on Pascal, Volta and Turing GPUs


Deep learning algorithms often do not require high numerical precision to produce satisfactory results, and thus benefit from the performance gains that reduced precision can provide. To take advantage of this, recent generations of NVIDIA GPUs (Pascal, Volta, and Turing) have added increasing hardware support for reduced-precision operations on FP16, INT8, INT4, and BOOL data. This webinar will describe how to use such operations in CUDA code.
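
To give a flavour of what such code looks like, below is a minimal FP16 vector-addition sketch; it is not taken from the webinar itself, and the kernel name add_fp16, the array names, and the launch parameters are illustrative assumptions. The half-precision intrinsics __hadd, __float2half, and __half2float come from cuda_fp16.h. Device-side half arithmetic requires compute capability 5.3 or later, so compile with, for example, nvcc -arch=sm_60 for a Pascal GPU; host-side half conversions also require a reasonably recent CUDA toolkit.

// Minimal FP16 vector-add sketch (illustrative, not the webinar's code).
// Compile with e.g.: nvcc -arch=sm_60 fp16_add.cu
#include <cuda_fp16.h>
#include <cstdio>

__global__ void add_fp16(const __half *a, const __half *b, __half *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = __hadd(a[i], b[i]);   // half-precision add intrinsic (needs sm_53+)
}

int main()
{
    const int n = 1024;
    __half *a, *b, *c;

    // Managed memory keeps the example short; plain cudaMalloc/cudaMemcpy also works.
    cudaMallocManaged(&a, n * sizeof(__half));
    cudaMallocManaged(&b, n * sizeof(__half));
    cudaMallocManaged(&c, n * sizeof(__half));

    // Fill the inputs by converting float values to half on the host.
    for (int i = 0; i < n; ++i) {
        a[i] = __float2half(1.5f);
        b[i] = __float2half(2.25f);
    }

    add_fp16<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    // Convert back to float for printing; expect 3.75.
    printf("c[0] = %f\n", __half2float(c[0]));

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}

The same pattern extends to FP16 storage with FP32 accumulation (e.g. via Tensor Core APIs) and to the INT8/INT4 paths; the webinar covers those in more detail.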