The GCD Handbook
Calling dispatch_semaphore_wait will block the thread until dispatch_semaphore_signal is called. This means that signal must be called from a different thread, since the current thread is totally blocked. Further, you should never call wait from the main thread, only from background threads. […] One notable caveat is that each time you call enqueueWork, if you have hit the semaphore’s limit, it will spin up a new thread. If you have a low limit and lots of work to enqueue, you can create hundreds of threads. […]
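A minimal sketch of the pattern being described, using the Swift Dispatch API rather than the C-level dispatch_* functions the excerpt names; the queue label and the limit of 4 are illustrative, and this enqueueWork body is a reconstruction, not the handbook’s exact code:

```swift
import Dispatch

let workQueue = DispatchQueue(label: "com.example.work", attributes: .concurrent)
let semaphore = DispatchSemaphore(value: 4) // at most 4 work items run at once

func enqueueWork(_ work: @escaping () -> Void) {
    workQueue.async {
        // The caveat above: every call lands here immediately, so GCD may spin up
        // a new thread for each block, and the excess threads just sit blocked in
        // wait() until earlier work signals.
        semaphore.wait()
        defer { semaphore.signal() }
        work()
    }
}
```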
If you have many blocks of work to execute, and you need to be notified about their collective completion, you can use a group.
dispatch_group_async lets you add work onto a queue (the work in the block should be synchronous), and it keeps track of how many items have been added. Note that the same dispatch group can add work to multiple different queues and can keep track of them all. When all of the tracked work is complete, the block passed to dispatch_group_notify is fired, kind of like a completion block. […]
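A short sketch of the group pattern, again using the Swift spellings of dispatch_group_async and dispatch_group_notify; the queue labels and work items are placeholders:

```swift
import Dispatch

let group = DispatchGroup()
let queueA = DispatchQueue(label: "com.example.a", attributes: .concurrent)
let queueB = DispatchQueue(label: "com.example.b")

// The same group can track work submitted to multiple queues.
queueA.async(group: group) { /* synchronous work item 1 */ }
queueA.async(group: group) { /* synchronous work item 2 */ }
queueB.async(group: group) { /* synchronous work item 3 */ }

// Fired once every tracked block has finished, like a completion block.
group.notify(queue: .main) {
    print("All work complete")
}
```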
GCD’s barrier set of APIs do something special: they will wait until the queue is totally empty before executing the block. Using the barrier APIs for our writes will limit access to the dictionary and make sure that we can never have any writes happening at the same time as a read or another write.
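A sketch of that reader/writer arrangement with a concurrent queue and barrier writes; the type and labels here are illustrative, not the handbook’s:

```swift
import Dispatch

final class SynchronizedDictionary<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let queue = DispatchQueue(label: "com.example.dictionary", attributes: .concurrent)

    subscript(key: Key) -> Value? {
        get {
            // Reads run concurrently with other reads.
            queue.sync { storage[key] }
        }
        set {
            // The barrier waits for in-flight blocks to drain, then runs alone,
            // so no write ever overlaps a read or another write.
            queue.async(flags: .barrier) {
                self.storage[key] = newValue
            }
        }
    }
}
```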
Update (2016-06-06): Michael Rhodes:
The problem I saw here, which Soroush also notes, is that this approach starts a potentially unbounded number of threads, which are immediately blocked by waiting on a semaphore. Obviously GCD will limit you at some point, but that’s still a lot of work and a decent chunk of memory. While this code is necessarily simplified to introduce this use of semaphores, the bunch of waiting threads needled at me.
To achieve effects like this with queue-based systems, I often find I need to combine more than one queue. Here, in the solution Soroush and I got to, we need two queues to get to a more efficient solution which only requires a single blocked thread.
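The post doesn’t include that final code, but the shape of the two-queue approach is roughly this: a serial “gatekeeper” queue does the semaphore wait, so at most one thread is ever blocked, and the actual work runs on a separate concurrent queue. A sketch under those assumptions, with illustrative names and limit:

```swift
import Dispatch

let serialGate = DispatchQueue(label: "com.example.gate")  // at most one thread ever blocks here
let workQueue = DispatchQueue(label: "com.example.work", attributes: .concurrent)
let semaphore = DispatchSemaphore(value: 4)                // cap on concurrent work items

func enqueueWork(_ work: @escaping () -> Void) {
    serialGate.async {
        semaphore.wait()            // only the single gate thread waits
        workQueue.async {
            work()
            semaphore.signal()      // lets the gate release the next item
        }
    }
}
```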