Vulkan API Overview: Pipeline barriers

Last time on Vulkan API Overview you got an overview of how the pieces of Vulkan fit together when rendering primitives.

I'm going to take a break from Vulkan overviews because I need to write some more complete demos and samples in Vulkan to fuel better articles.

But before that, I hope to demystify what pipeline barriers are and when you need to insert them in Vulkan.

Pipeline write hazards

Many GPUs have several caches in different sections of the pipeline. These caches are demand-driven and fill themselves from GPU memory whenever the pipeline needs to read from or write to memory.

A write into one of the GPU caches isn't going to be visible elsewhere until a pipeline barrier is inserted.

vkCmdPipelineBarrier inserts a barrier. Roughly, in some order, it stalls the later pipeline stages until the earlier ones have finished, flushes the caches the earlier stages wrote into, and invalidates the caches the later stages will read from.

When you insert a pipeline barrier, you are telling the driver that some region of the pipeline has to wait for results from another region of the pipeline.

You can also be unspecific when you describe what your pipeline barrier should flush. If you do this, the driver interprets it as a statement that you're not going to need the previous contents of the memory.
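As a concrete illustration, here's a minimal sketch of such a barrier in C: a compute-shader write made visible to a later fragment-shader read. The function name and the assumption that `cmd` is a command buffer in the recording state are mine, not from the text above.

```c
#include <vulkan/vulkan.h>

/* Sketch: make a compute-shader write visible to a subsequent
 * fragment-shader read. Assumes `cmd` is in the recording state. */
void insert_compute_to_fragment_barrier(VkCommandBuffer cmd)
{
    VkMemoryBarrier barrier = {
        .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT, /* flush these writes */
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,  /* invalidate for these reads */
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* wait for this stage...   */
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, /* ...before this stage runs */
        0,        /* no dependency flags        */
        1, &barrier, /* one global memory barrier */
        0, NULL,  /* no buffer memory barriers  */
        0, NULL); /* no image memory barriers   */
}
```

The two stage masks express the "wait" part; the two access masks in the VkMemoryBarrier express the "flush and invalidate" part.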

What happens if a pipeline barrier isn't there

On software rasterizers the pipeline barrier might be a no-op, and nothing would happen from a missing pipeline barrier. On other systems a missing barrier could cause stale reads, rendering artifacts, or seemingly random corruption that differs between GPU architectures and driver versions.

Specificity of barriers

You have the option of inserting a barrier that stalls the whole pipeline if you want to. But doing so could affect not just your program but everything else running on the same GPU.

You have a lot of choices and options in barriers because the details of caching differ across GPUs. By being specific about which region of the pipeline has to invalidate, flush, and wait for another region, you hopefully make your application perform well across architectures.
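For contrast, a barrier at the unspecific end of the spectrum looks like the sketch below: it waits on all earlier commands and stalls all later ones. It can be handy when hunting a suspected missing barrier, at the performance cost described above. The function name is an assumption for the example.

```c
#include <vulkan/vulkan.h>

/* Sketch: a heavyweight "stall everything" barrier. Useful for
 * debugging, but it serializes the whole pipeline. Assumes `cmd`
 * is a command buffer in the recording state. */
void insert_full_barrier(VkCommandBuffer cmd)
{
    VkMemoryBarrier full = {
        .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_MEMORY_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_MEMORY_READ_BIT
                       | VK_ACCESS_MEMORY_WRITE_BIT,
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, /* everything before... */
        VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, /* ...blocks everything after */
        0,
        1, &full,
        0, NULL,
        0, NULL);
}
```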

Gotchas in specifying image barriers

Note that you need a barrier when you transition the layout of something or want to read something you wrote. Since a lot of rendering has to do with piecing images together, image barriers are the ones you use most often.

There are situations when you might like to read from one part of an image and write into another part of it. For that reason you've got the subresourceRange field in your image barrier. If you forget to fill this field, your image barrier will do nothing.

If you just want the whole image synchronized, you can copy the subresourceRange from the same record you passed to the image view. But don't forget to fill this field.
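Here's a sketch of an image barrier that also transitions the layout, with the subresourceRange filled in for a simple one-mip, one-layer color image. The scenario (render into an attachment, then sample it) and the function name are assumptions for the example.

```c
#include <vulkan/vulkan.h>

/* Sketch: transition a just-rendered color attachment so a
 * fragment shader can sample it. Assumes `cmd` is recording and
 * `image` is a single-mip, single-layer color image. */
void transition_for_sampling(VkCommandBuffer cmd, VkImage image)
{
    VkImageMemoryBarrier barrier = {
        .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask       = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask       = VK_ACCESS_SHADER_READ_BIT,
        .oldLayout           = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image               = image,
        /* The easy-to-forget field: left zeroed, levelCount and
         * layerCount are 0 and the barrier covers nothing. */
        .subresourceRange = {
            .aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT,
            .baseMipLevel   = 0,
            .levelCount     = 1,
            .baseArrayLayer = 0,
            .layerCount     = 1,
        },
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
        0, 0, NULL, 0, NULL, 1, &barrier);
}
```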

When do you need a memory buffer barrier

Buffer memory barriers are there for when you use one part of the GPU to write something into a buffer while another part is reading from it.
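A minimal sketch of that situation: one compute dispatch fills a buffer, a later dispatch reads it. The function name and parameters are assumptions.

```c
#include <vulkan/vulkan.h>

/* Sketch: an earlier compute dispatch wrote `buffer`; a later
 * compute dispatch will read it. Assumes `cmd` is recording. */
void compute_to_compute_buffer_barrier(VkCommandBuffer cmd, VkBuffer buffer)
{
    VkBufferMemoryBarrier barrier = {
        .sType               = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
        .srcAccessMask       = VK_ACCESS_SHADER_WRITE_BIT,
        .dstAccessMask       = VK_ACCESS_SHADER_READ_BIT,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .buffer              = buffer,
        .offset              = 0,
        .size                = VK_WHOLE_SIZE, /* cover the whole buffer */
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
        0, 0, NULL, 1, &barrier, 0, NULL);
}
```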

You also want barriers when the host accesses memory that was written by the GPU. But there's often no need to add a barrier when the host writes into memory, because every command buffer submission performs an implicit host-write barrier.

If the HOST_COHERENT_BIT property isn't set on the memory type, you need to use vkFlushMappedMemoryRanges and vkInvalidateMappedMemoryRanges on mapped memory ranges to make writes visible.

Note that flushing mapped memory ranges means host writes are pushed out to the device. Invalidation means that the mapped ranges are invalidated so that they are fetched from the device upon the next host access.
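The flush direction can be sketched like this; the invalidate direction is the mirror-image call. The function name is an assumption, and note that on real hardware the range's offset and size must respect the device's nonCoherentAtomSize alignment.

```c
#include <vulkan/vulkan.h>

/* Sketch: make a host write on non-coherent memory visible to
 * the device. Assumes `memory` is currently mapped. */
void flush_host_write(VkDevice device, VkDeviceMemory memory)
{
    VkMappedMemoryRange range = {
        .sType  = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
        .memory = memory,
        .offset = 0,
        .size   = VK_WHOLE_SIZE,
    };
    /* Host -> device: push host writes out to the device. */
    vkFlushMappedMemoryRanges(device, 1, &range);
    /* Device -> host would be the mirror call:
     * vkInvalidateMappedMemoryRanges(device, 1, &range); */
}
```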

There's a little bit of a gotcha in reading results from the GPU. Remember that command buffer execution doesn't happen immediately after you submit. Use a fence to wait for the execution to finish before you attempt your Invalidate+Read.
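Putting the pieces together, the readback pattern looks roughly like this. It assumes `fence` was handed to the vkQueueSubmit that submitted the writing command buffer, and that `mapped` points at the mapped range; the function name is an assumption.

```c
#include <stdint.h>
#include <vulkan/vulkan.h>

/* Sketch: wait, invalidate, then read results written by the GPU
 * into non-coherent mapped memory. */
void read_back_results(VkDevice device, VkFence fence,
                       VkDeviceMemory memory, const void *mapped)
{
    /* 1. Wait until the GPU has actually finished executing. */
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);

    /* 2. On non-coherent memory, invalidate so the next host
     *    read fetches fresh data from the device. */
    VkMappedMemoryRange range = {
        .sType  = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
        .memory = memory,
        .offset = 0,
        .size   = VK_WHOLE_SIZE,
    };
    vkInvalidateMappedMemoryRanges(device, 1, &range);

    /* 3. Only now is it safe to read through `mapped`. */
    (void)mapped;
}
```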


I hope this post manages to reduce both the superfluous use and the omission of pipeline barriers. To me, they were the most arcane part of Vulkan before I studied them.