
On when to use cudaDeviceSynchronize


When to call cudaDeviceSynchronize

Why do we need cudaDeviceSynchronize() in kernels that use device-side printf?

Although CUDA kernel launches are asynchronous, all GPU-related tasks placed in one stream (which is the default behaviour) are executed sequentially.

So, for example,

kernel1<<<X,Y>>>(...); // kernel starts executing, CPU continues to next statement
kernel2<<<X,Y>>>(...); // kernel is placed in queue and will start after kernel1 finishes, CPU continues to next statement
cudaMemcpy(...);       // CPU blocks until memory is copied, memory copy starts only after kernel2 finishes
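A minimal sketch of the printf case from the question (the kernel name hello and the launch configuration are made up for illustration): because the launch is asynchronous, without a synchronizing call the host thread can reach the end of main and exit before the device-side printf buffer is flushed, so no output appears.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel that only prints from each thread.
__global__ void hello()
{
    printf("hello from thread %d\n", threadIdx.x);
}

int main()
{
    hello<<<1, 4>>>();       // asynchronous launch; host continues immediately
    // Without this call the process may exit before the device printf
    // buffer is flushed, and nothing shows up on stdout.
    cudaDeviceSynchronize();
    return 0;
}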

The explanation ranked second in Google's Chinese results is not quite complete:

These are all barriers. Barriers prevent code execution beyond the barrier until some condition is met.

  1. cudaDeviceSynchronize() halts execution in the CPU/host thread (that the cudaDeviceSynchronize was issued in) until the GPU has finished processing all previously requested cuda tasks (kernels, data copies, etc.)
  2. cudaThreadSynchronize(), as you've discovered, is just a deprecated version of cudaDeviceSynchronize. Deprecated just means that it still works for now, but it's recommended not to use it (use cudaDeviceSynchronize instead) and in the future it may become unsupported. But cudaThreadSynchronize() and cudaDeviceSynchronize() are basically identical.
  3. cudaStreamSynchronize() is similar to the above two functions, but it prevents further execution in the CPU host thread only until the GPU has finished processing all previously requested cuda tasks that were issued in the referenced stream. So cudaStreamSynchronize() takes a stream id as its only parameter. cuda tasks issued in other streams may or may not be complete when CPU code execution continues beyond this barrier (see the sketch after this list).
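A short sketch contrasting the per-stream barrier with the device-wide one. The kernel names work1/work2 and the two streams s1/s2 are assumptions made up for illustration:

#include <cuda_runtime.h>

// Stand-ins for "previously requested cuda tasks".
__global__ void work1() { }
__global__ void work2() { }

int main()
{
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    work1<<<1, 32, 0, s1>>>();   // issued in stream s1
    work2<<<1, 32, 0, s2>>>();   // issued in stream s2

    cudaStreamSynchronize(s1);   // blocks the host only until s1's work is done;
                                 // work2 in s2 may still be running at this point

    cudaDeviceSynchronize();     // blocks until all streams (s1 and s2) have finished

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    return 0;
}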


Original article: http://blog.csdn.net/mathgeophysics/article/details/19905935
