<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://helpwiki.sharcnet.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=Porting_CUDA_to_OpenCL</id>
	<title>Porting CUDA to OpenCL - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://helpwiki.sharcnet.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=Porting_CUDA_to_OpenCL"/>
	<link rel="alternate" type="text/html" href="https://helpwiki.sharcnet.ca/wiki/index.php?title=Porting_CUDA_to_OpenCL&amp;action=history"/>
	<updated>2026-04-04T06:31:10Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.36.1</generator>
	<entry>
		<id>https://helpwiki.sharcnet.ca/wiki/index.php?title=Porting_CUDA_to_OpenCL&amp;diff=349&amp;oldid=prev</id>
		<title>Syam: Created page with &quot;&lt;!--Originally by Fei Mao, 2014--&gt; The data-parallel programming model in OpenCL shares some commonalities with CUDA programming model, making it relatively straightforward to...&quot;</title>
		<link rel="alternate" type="text/html" href="https://helpwiki.sharcnet.ca/wiki/index.php?title=Porting_CUDA_to_OpenCL&amp;diff=349&amp;oldid=prev"/>
		<updated>2021-03-18T17:45:52Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;&amp;lt;!--Originally by Fei Mao, 2014--&amp;gt; The data-parallel programming model in OpenCL shares some commonalities with CUDA programming model, making it relatively straightforward to...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;lt;!--Originally by Fei Mao, 2014--&amp;gt;&lt;br /&gt;
The data-parallel programming model in OpenCL shares some commonalities with the CUDA programming model, making it relatively straightforward to convert programs from CUDA to OpenCL.&lt;br /&gt;
&lt;br /&gt;
=Hardware Terminology=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
! style=&amp;quot;background:#8AA8E5;&amp;quot; | CUDA&lt;br /&gt;
! style=&amp;quot;background:#ECCF98;&amp;quot; | OpenCL&lt;br /&gt;
|-&lt;br /&gt;
| SM (Streaming Multiprocessor)&lt;br /&gt;
| CU (Compute Unit)&lt;br /&gt;
|-&lt;br /&gt;
| Thread&lt;br /&gt;
| Work-item&lt;br /&gt;
|-&lt;br /&gt;
| Block&lt;br /&gt;
| Work-group&lt;br /&gt;
|-&lt;br /&gt;
| Global memory&lt;br /&gt;
| Global memory&lt;br /&gt;
|-&lt;br /&gt;
| Constant memory&lt;br /&gt;
| Constant memory&lt;br /&gt;
|-&lt;br /&gt;
| Shared memory&lt;br /&gt;
| Local memory&lt;br /&gt;
|-&lt;br /&gt;
| Local memory&lt;br /&gt;
| Private memory&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Private memory (called local memory in CUDA) is used within a work-item and is similar to registers in a GPU multiprocessor or CPU core. Variables inside a kernel function that are not declared with an address space qualifier, all variables inside non-kernel functions, and all function arguments are in the __private (or private) address space. Application performance can plummet when too much private memory is used, because on some devices, such as GPUs, it spills to slower memory. Depending on the device, private memory may spill to cache memory; GPUs that have no cache memory spill to global memory instead, causing significant performance drops.&lt;br /&gt;
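&lt;br /&gt;
As a minimal illustration (not from the original article, with hypothetical names), the kernel below marks which variables land in the private address space:&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
__kernel void scale(__global float* data, float factor)&lt;br /&gt;
{&lt;br /&gt;
    //factor (a function argument), i and tmp all live in __private memory;&lt;br /&gt;
    //only the buffer that data points to is in __global memory&lt;br /&gt;
    int i = get_global_id(0);&lt;br /&gt;
    float tmp = data[i];&lt;br /&gt;
    data[i] = tmp * factor;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;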
&lt;br /&gt;
=Qualifiers for Kernel Functions=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
! style=&amp;quot;background:#8AA8E5;&amp;quot; | CUDA&lt;br /&gt;
! style=&amp;quot;background:#ECCF98;&amp;quot; | OpenCL&lt;br /&gt;
|-&lt;br /&gt;
| __global__ function&lt;br /&gt;
| __kernel function&lt;br /&gt;
|-&lt;br /&gt;
| __device__ function&lt;br /&gt;
| No annotation necessary&lt;br /&gt;
|-&lt;br /&gt;
| __constant__ variable declaration&lt;br /&gt;
| __constant variable declaration&lt;br /&gt;
|-&lt;br /&gt;
| __device__ variable declaration&lt;br /&gt;
| __global variable declaration&lt;br /&gt;
|-&lt;br /&gt;
| __shared__ variable declaration&lt;br /&gt;
| __local variable declaration&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
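For illustration (not part of the original page, with hypothetical names), the same kernel declared in each dialect might look like this:&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
//CUDA version&lt;br /&gt;
__device__ float axpy(float a, float x, float y) { return a * x + y; }&lt;br /&gt;
__global__ void saxpy(float a, float *x, float *y)&lt;br /&gt;
{&lt;br /&gt;
    int i = blockIdx.x * blockDim.x + threadIdx.x;&lt;br /&gt;
    y[i] = axpy(a, x[i], y[i]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//OpenCL version: __kernel replaces __global__, buffer arguments gain __global,&lt;br /&gt;
//and the helper device function needs no annotation at all&lt;br /&gt;
float axpy(float a, float x, float y) { return a * x + y; }&lt;br /&gt;
__kernel void saxpy(float a, __global float *x, __global float *y)&lt;br /&gt;
{&lt;br /&gt;
    int i = get_global_id(0);&lt;br /&gt;
    y[i] = axpy(a, x[i], y[i]);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;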
&lt;br /&gt;
=Kernel Indexing=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
! style=&amp;quot;background:#8AA8E5;&amp;quot; | CUDA&lt;br /&gt;
! style=&amp;quot;background:#ECCF98;&amp;quot; | OpenCL&lt;br /&gt;
|-&lt;br /&gt;
| gridDim&lt;br /&gt;
| get_num_groups()&lt;br /&gt;
|-&lt;br /&gt;
| blockDim&lt;br /&gt;
| get_local_size()&lt;br /&gt;
|-&lt;br /&gt;
| blockIdx&lt;br /&gt;
| get_group_id()&lt;br /&gt;
|-&lt;br /&gt;
| threadIdx&lt;br /&gt;
| get_local_id()&lt;br /&gt;
|-&lt;br /&gt;
| blockIdx * blockDim + threadIdx&lt;br /&gt;
| get_global_id()&lt;br /&gt;
|-&lt;br /&gt;
| gridDim * blockDim&lt;br /&gt;
| get_global_size()&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
CUDA uses threadIdx.x to get the id in the first dimension, while OpenCL uses get_local_id(0).&lt;br /&gt;
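&lt;br /&gt;
A short sketch of the translation (illustrative, not from the original):&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
//CUDA:   int i = blockIdx.x * blockDim.x + threadIdx.x;&lt;br /&gt;
//OpenCL computes the same global index for you:&lt;br /&gt;
int i = get_global_id(0);&lt;br /&gt;
&lt;br /&gt;
//total number of work-items in dimension 0&lt;br /&gt;
//CUDA:   gridDim.x * blockDim.x&lt;br /&gt;
int n = get_global_size(0);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;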
&lt;br /&gt;
=Kernel Synchronization=&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
! style=&amp;quot;background:#8AA8E5;&amp;quot; | CUDA&lt;br /&gt;
! style=&amp;quot;background:#ECCF98;&amp;quot; | OpenCL&lt;br /&gt;
|-&lt;br /&gt;
| __syncthreads()&lt;br /&gt;
| barrier()&lt;br /&gt;
|-&lt;br /&gt;
| __threadfence()&lt;br /&gt;
| No direct equivalent&lt;br /&gt;
|-&lt;br /&gt;
| __threadfence_block()&lt;br /&gt;
| mem_fence()&lt;br /&gt;
|-&lt;br /&gt;
| No direct equivalent&lt;br /&gt;
| read_mem_fence()&lt;br /&gt;
|-&lt;br /&gt;
| No direct equivalent&lt;br /&gt;
| write_mem_fence()&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
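Note that the OpenCL barrier takes a flags argument, CLK_LOCAL_MEM_FENCE and/or CLK_GLOBAL_MEM_FENCE. A minimal sketch (a hypothetical kernel, not from the original) that reverses the chunk owned by each work-group through local memory:&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
__kernel void reverse_chunks(__global float* d, __local float* tmp)&lt;br /&gt;
{&lt;br /&gt;
    int lid = get_local_id(0);&lt;br /&gt;
    int n   = get_local_size(0);&lt;br /&gt;
    tmp[lid] = d[get_global_id(0)];&lt;br /&gt;
    //like __syncthreads(): every work-item in the group waits here,&lt;br /&gt;
    //so all of tmp[] is written before any work-item reads it&lt;br /&gt;
    barrier(CLK_LOCAL_MEM_FENCE);&lt;br /&gt;
    d[get_global_id(0)] = tmp[n - 1 - lid];&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;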
&lt;br /&gt;
=API Calls=&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
! style=&amp;quot;background:#8AA8E5;&amp;quot; | CUDA&lt;br /&gt;
! style=&amp;quot;background:#ECCF98;&amp;quot; | OpenCL&lt;br /&gt;
|-&lt;br /&gt;
| cudaGetDeviceProperties()&lt;br /&gt;
| clGetDeviceInfo()&lt;br /&gt;
|-&lt;br /&gt;
| cudaMalloc()&lt;br /&gt;
| clCreateBuffer()&lt;br /&gt;
|-&lt;br /&gt;
| cudaMemcpy()&lt;br /&gt;
| clEnqueueReadBuffer() / clEnqueueWriteBuffer()&lt;br /&gt;
|-&lt;br /&gt;
| cudaFree()&lt;br /&gt;
| clReleaseMemObject()&lt;br /&gt;
|-&lt;br /&gt;
| kernel&amp;lt;&amp;lt;&amp;lt;...&amp;gt;&amp;gt;&amp;gt;()&lt;br /&gt;
| clEnqueueNDRangeKernel()&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=Example Code=&lt;br /&gt;
A simple vector-add code is given here to introduce the basic workflow of an OpenCL program. A minimal OpenCL program contains a host source file &amp;#039;&amp;#039;main.c&amp;#039;&amp;#039; and a kernel file &amp;#039;&amp;#039;kernel.cl&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
main.c&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#ifdef __APPLE__ //Mac OSX has a different name for the header file&lt;br /&gt;
#include &amp;lt;OpenCL/opencl.h&amp;gt;&lt;br /&gt;
#else&lt;br /&gt;
#include &amp;lt;CL/cl.h&amp;gt;&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#define MEM_SIZE (128)//suppose we have a vector with 128 elements&lt;br /&gt;
#define MAX_SOURCE_SIZE (0x100000)&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
    //In general, Intel CPUs and NVIDIA/AMD GPUs live on different platforms,&lt;br /&gt;
    //but on Mac OS X all OpenCL devices are in the single platform &amp;quot;Apple&amp;quot;&lt;br /&gt;
    cl_platform_id platform_id = NULL;&lt;br /&gt;
    cl_device_id device_id = NULL;&lt;br /&gt;
    cl_context context = NULL;&lt;br /&gt;
    cl_command_queue command_queue = NULL; //&amp;quot;stream&amp;quot; in CUDA&lt;br /&gt;
    cl_mem memobj = NULL;//device memory&lt;br /&gt;
    cl_program program = NULL; //cl_program is a program executable created from source or binary&lt;br /&gt;
    cl_kernel kernel = NULL; //kernel function&lt;br /&gt;
    cl_uint ret_num_devices;&lt;br /&gt;
    cl_uint ret_num_platforms;&lt;br /&gt;
    cl_int ret; //accepts return values for APIs&lt;br /&gt;
    &lt;br /&gt;
    float mem[MEM_SIZE]; //allocate memory in host (CPU) RAM&lt;br /&gt;
    &lt;br /&gt;
    //OpenCL source can be placed in the source code as text strings or read from another file.&lt;br /&gt;
    FILE *fp;&lt;br /&gt;
    const char fileName[] = &amp;quot;./kernel.cl&amp;quot;;&lt;br /&gt;
    size_t source_size;&lt;br /&gt;
    char *source_str;&lt;br /&gt;
    cl_int i;&lt;br /&gt;
    &lt;br /&gt;
    // read the kernel file into ram&lt;br /&gt;
    fp = fopen(fileName, &amp;quot;r&amp;quot;);&lt;br /&gt;
    if (!fp) {&lt;br /&gt;
        fprintf(stderr, &amp;quot;Failed to load kernel.\n&amp;quot;);&lt;br /&gt;
        exit(1);&lt;br /&gt;
    }&lt;br /&gt;
    source_str = (char *)malloc(MAX_SOURCE_SIZE);&lt;br /&gt;
    source_size = fread( source_str, 1, MAX_SOURCE_SIZE, fp );&lt;br /&gt;
    fclose( fp );&lt;br /&gt;
    &lt;br /&gt;
    //initialize the mem with 1,2,3...,n&lt;br /&gt;
    for( i = 0; i &amp;lt; MEM_SIZE; i++ ) {&lt;br /&gt;
        mem[i] = i;&lt;br /&gt;
    }&lt;br /&gt;
    &lt;br /&gt;
    //get the device info&lt;br /&gt;
    ret = clGetPlatformIDs(1, &amp;amp;platform_id, &amp;amp;ret_num_platforms);&lt;br /&gt;
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &amp;amp;device_id, &amp;amp;ret_num_devices);&lt;br /&gt;
    &lt;br /&gt;
    //create context on the specified device&lt;br /&gt;
    context = clCreateContext( NULL, 1, &amp;amp;device_id, NULL, NULL, &amp;amp;ret);&lt;br /&gt;
    &lt;br /&gt;
    //create the command_queue (stream)&lt;br /&gt;
    command_queue = clCreateCommandQueue(context, device_id, 0, &amp;amp;ret);&lt;br /&gt;
    &lt;br /&gt;
    //alloc mem on the device with the read/write flag&lt;br /&gt;
    memobj = clCreateBuffer(context, CL_MEM_READ_WRITE, MEM_SIZE * sizeof(float), NULL, &amp;amp;ret);&lt;br /&gt;
    &lt;br /&gt;
    //copy the memory from host to device, CL_TRUE means blocking write/read&lt;br /&gt;
    ret = clEnqueueWriteBuffer(command_queue, memobj, CL_TRUE, 0, MEM_SIZE * sizeof(float), mem, 0, NULL, NULL);&lt;br /&gt;
    &lt;br /&gt;
    //create a program object for a context&lt;br /&gt;
    //load the source code specified by the text strings into the program object&lt;br /&gt;
    program = clCreateProgramWithSource(context, 1, (const char **)&amp;amp;source_str, (const size_t *)&amp;amp;source_size, &amp;amp;ret);&lt;br /&gt;
    &lt;br /&gt;
    //build (compiles and links) a program executable from the program source or binary&lt;br /&gt;
    ret = clBuildProgram(program, 1, &amp;amp;device_id, NULL, NULL, NULL);&lt;br /&gt;
    &lt;br /&gt;
    //create a kernel object with specified name&lt;br /&gt;
    kernel = clCreateKernel(program, &amp;quot;vecAdd&amp;quot;, &amp;amp;ret);&lt;br /&gt;
    &lt;br /&gt;
    //set the argument value for a specific argument of a kernel&lt;br /&gt;
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&amp;amp;memobj);&lt;br /&gt;
    &lt;br /&gt;
    //define the global size and local size; in CUDA terms, the local size is the block size and the global size is grid size * block size (the total thread count)&lt;br /&gt;
    size_t global_work_size[3] = {MEM_SIZE, 0, 0};&lt;br /&gt;
    size_t local_work_size[3]  = {MEM_SIZE, 0, 0};&lt;br /&gt;
    &lt;br /&gt;
    //Enqueue a command to execute a kernel on a device (&amp;quot;1&amp;quot; indicates 1-dim work)&lt;br /&gt;
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, global_work_size, local_work_size, 0, NULL, NULL);&lt;br /&gt;
    &lt;br /&gt;
    //copy memory from device to host&lt;br /&gt;
    ret = clEnqueueReadBuffer(command_queue, memobj, CL_TRUE, 0, MEM_SIZE * sizeof(float), mem, 0, NULL, NULL);&lt;br /&gt;
    &lt;br /&gt;
    //print out the result&lt;br /&gt;
    for(i=0; i&amp;lt;MEM_SIZE; i++) {&lt;br /&gt;
        printf(&amp;quot;mem[%d] : %.2f\n&amp;quot;, i, mem[i]);&lt;br /&gt;
    }&lt;br /&gt;
    &lt;br /&gt;
    //clFlush only guarantees that all queued commands to command_queue get issued to the appropriate device&lt;br /&gt;
    //There is no guarantee that they will be complete after clFlush returns&lt;br /&gt;
    ret = clFlush(command_queue);&lt;br /&gt;
    //clFinish blocks until all previously queued OpenCL commands in command_queue are issued to the associated device and have completed.&lt;br /&gt;
    ret = clFinish(command_queue);&lt;br /&gt;
    ret = clReleaseKernel(kernel);&lt;br /&gt;
    ret = clReleaseProgram(program);&lt;br /&gt;
    ret = clReleaseMemObject(memobj);//free memory on device&lt;br /&gt;
    ret = clReleaseCommandQueue(command_queue);&lt;br /&gt;
    ret = clReleaseContext(context);&lt;br /&gt;
    &lt;br /&gt;
    free(source_str);//free memory on host&lt;br /&gt;
    &lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
kernel.cl&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
__kernel void vecAdd(__global float* a)&lt;br /&gt;
{&lt;br /&gt;
    int gid = get_global_id(0);// in CUDA = blockIdx.x * blockDim.x + threadIdx.x&lt;br /&gt;
    &lt;br /&gt;
    a[gid] += a[gid];&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Atomic operations on floating point numbers=&lt;br /&gt;
&lt;br /&gt;
CUDA has atomicAdd() for floating-point numbers, but OpenCL doesn&amp;#039;t. The only atomic function that can be made to work on floating-point numbers is atomic_cmpxchg(), by reinterpreting the bits of the float as an unsigned integer. Following [http://simpleopencl.blogspot.ca/2013/05/atomic-operations-and-floats-in-opencl.html Atomic operations and floating point numbers in OpenCL], you can serialize the memory access as is done in the following code:&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
void atomic_add_global(volatile global float *source, const float operand) {&lt;br /&gt;
    union {&lt;br /&gt;
        unsigned int intVal;&lt;br /&gt;
        float floatVal;&lt;br /&gt;
    } newVal;&lt;br /&gt;
    union {&lt;br /&gt;
        unsigned int intVal;&lt;br /&gt;
        float floatVal;&lt;br /&gt;
    } prevVal;&lt;br /&gt;
 &lt;br /&gt;
    do {&lt;br /&gt;
        prevVal.floatVal = *source;&lt;br /&gt;
        newVal.floatVal = prevVal.floatVal + operand;&lt;br /&gt;
    } while (atomic_cmpxchg((volatile global unsigned int *)source, prevVal.intVal, newVal.intVal) != prevVal.intVal);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The first function above works on global memory; the second one below works on local memory.&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
void atomic_add_local(volatile local float *source, const float operand) {&lt;br /&gt;
    union {&lt;br /&gt;
        unsigned int intVal;&lt;br /&gt;
        float floatVal;&lt;br /&gt;
    } newVal;&lt;br /&gt;
 &lt;br /&gt;
    union {&lt;br /&gt;
        unsigned int intVal;&lt;br /&gt;
        float floatVal;&lt;br /&gt;
    } prevVal;&lt;br /&gt;
 &lt;br /&gt;
    do {&lt;br /&gt;
        prevVal.floatVal = *source;&lt;br /&gt;
        newVal.floatVal = prevVal.floatVal + operand;&lt;br /&gt;
    } while (atomic_cmpxchg((volatile local unsigned int *)source, prevVal.intVal, newVal.intVal) != prevVal.intVal);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
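A hypothetical usage sketch (not from the cited post), where each work-item atomically accumulates one element into a single global total:&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
//assumes *sum was initialized to 0.0f on the host before the launch&lt;br /&gt;
__kernel void reduce_sum(__global const float* x, __global float* sum)&lt;br /&gt;
{&lt;br /&gt;
    atomic_add_global(sum, x[get_global_id(0)]);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;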
A faster approach is based on a discussion in the CUDA developer forums [https://devtalk.nvidia.com/default/topic/458062/atomicadd-float-float-atomicmul-float-float-/ ]:&lt;br /&gt;
&amp;lt;source lang=C&amp;gt;&lt;br /&gt;
inline void atomicAdd_f(__global float* address, float value)&lt;br /&gt;
{&lt;br /&gt;
    float old = value;&lt;br /&gt;
    &lt;br /&gt;
    while ((old = atomic_xchg(address, atomic_xchg(address, 0.0f)+old))!=0.0f);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Syam</name></author>
	</entry>
</feed>