Preventing data races with Swift’s Actors

Published on: June 14, 2021

We all know that async / await was one of this year’s big announcements at WWDC. It completely changes the way we interact with concurrent code. Instead of using completion handlers, we can await results in a non-blocking way. More importantly, with the new Swift Concurrency features, our Swift code is much safer and more consistent than ever before.
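
For instance, here’s a minimal, hypothetical illustration, not from this post, of the difference between the two styles:

import Foundation

// Completion handler style: the result arrives in a callback.
func fetchGreeting(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        completion("Hello")
    }
}

// async / await style: the caller suspends until the result is ready, without blocking a thread.
func fetchGreeting() async -> String {
    return "Hello"
}

Task {
    let greeting = await fetchGreeting()
    print(greeting)
}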

For example, the Swift team built an all-new threading model that ensures your program doesn’t spawn more threads than there are CPU cores, which avoids thread explosion. This is a huge difference from GCD, where a call to async could cause yet another thread to be spawned if all existing threads were busy or blocked, and the CPU had to give each of your threads some time to run, which caused significant overhead due to a lot of context switching.
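
To make that concrete, here’s a rough sketch, not from this post, of the kind of GCD code that can lead to thread explosion when work items block their threads:

import Foundation

for i in 0..<50 {
    DispatchQueue.global().async {
        // A blocked work item occupies an entire thread, so GCD is likely to
        // bring up additional threads to keep the CPU busy.
        Thread.sleep(forTimeInterval: 2)
        print("work item \(i) ran on \(Thread.current)")
    }
}

// Keep a command line tool alive long enough to observe the output.
Thread.sleep(forTimeInterval: 3)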

While all this is interesting, and makes our concurrent code much better, this post is not about Swift concurrency as a whole. Instead, I want to focus on a smaller feature called Actors.

Understanding the problem that actors solve

An actor in Swift 5.5 is an object that isolates access to its mutable state. This means that anybody who wants to call a method on an actor, where that method relies on mutable state, has to do so asynchronously, regardless of whether the method reads or writes that state.

But what does this mean? And why is this the case?

To answer that, let’s consider an example of code that you might write today.

class DateFormatterCache {
    static let shared = DateFormatterCache()

    private var formatters = [String: DateFormatter]()

    func formatter(with format: String) -> DateFormatter {
        if let formatter = formatters[format] {
            return formatter
        }

        let formatter = DateFormatter()
        formatter.locale = Locale(identifier: "en_US_POSIX")
        formatter.dateFormat = format
        formatters[format] = formatter
        return formatter
    }
}

This code is quite straightforward, and something that I have actually included in a project once. However, it got rejected in a PR. To find out why, let’s see how this code would be used.

Let’s emulate a situation where this code is used in a multithreaded environment.

Let’s add a few print statements first:

class DateFormatterCache {
    static let shared = DateFormatterCache()

    private var formatters = [String: DateFormatter]()

    func formatter(with format: String) -> DateFormatter {
        if let formatter = formatters[format] {
            print("returning cached formatter for \(format)")
            return formatter
        }

        print("creating new formatter for \(format)")
        let formatter = DateFormatter()
        formatter.locale = Locale(identifier: "en_US_POSIX")
        formatter.dateFormat = format
        formatters[format] = formatter
        return formatter
    }
}

This will tell us whether we’re reusing an existing formatter or creating a new one. These print statements will make it easier to follow what this code does exactly.

Here’s how I’ll emulate the multithreaded environment:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
}

I know these date formats might not be the best; the point isn’t to show you clever date formats, but to demonstrate a problem.

Running this code crashes most of the time for me. I get an EXC_BAD_ACCESS error on the formatters dictionary after a couple of iterations.

When looking at the console, the output looks a little like this:

creating new formatter for DD-mm-yyyy
creating new formatter for DD-mm
creating new formatter for DD-mm-yyyy
creating new formatter for yyyy
creating new formatter for DD-mm-yyyy
creating new formatter for DD-MM
creating new formatter for DD-MM
creating new formatter for DD-mm-yyyy
creating new formatter for DD-mm-yyyy
creating new formatter for DD/MM/YYYY

This makes it look like the cache is not doing anything. Clearly, we're creating a new formatter for every iteration.

Let’s run this code in a normal for loop to see if that’s any better.

for _ in 0..<10 {
    let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
}

The first thing to note is that this code wouldn’t crash. There’s no bad access on formatters inside of the cache anymore.

Let’s look at the console:

creating new formatter for DD/MM/YYYY
creating new formatter for DD-mm-yyyy
returning cached formatter for DD/MM/YYYY
returning cached formatter for DD/MM/YYYY
creating new formatter for yyyy
returning cached formatter for DD/MM/YYYY
creating new formatter for DD-mm
returning cached formatter for yyyy
returning cached formatter for DD-mm
returning cached formatter for DD-mm

This looks much better. Apparently, the caching logic works as intended. But not when we introduce concurrency…

The reason the formatter cache crashed in the concurrent example is a data race. Multiple threads attempt to read and modify the formatters dictionary at the same time. The program can’t handle these concurrent reads and writes, which puts it in an inconsistent state and eventually leads to a crash.

Another interesting aspect of this is the broken cache. This is of course related to the data race, but let’s see what actually happens when the code runs.

I have explained issues with concurrency, mutable state, and dictionaries before in an earlier post.

Because we’re running code concurrently, we call the formatter(with:) method ten times at roughly the same time. When this function starts, it reads the formatters dictionary, which will be empty, so no formatters are cached. And because we have ten concurrent reads, the dictionary will be empty for each of the ten calls.

Dictionaries in Swift are value types with copy-on-write behavior. This means that a dictionary’s underlying storage is not copied until you attempt to modify it. This is important to remember.
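
Here’s a small standalone sketch, not from the original post, of what that copy-on-write behavior looks like in isolation:

let original = ["a": 1]
var copy = original          // no copy happens here; both values share the same storage

copy["b"] = 2                // this write triggers the copy, so original stays untouched

print(original)              // ["a": 1]
print(copy)                  // contains both "a" and "b"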

When each of the ten calls to formatter(with:) attempts to add its newly created formatter to the cache, the dictionary it read earlier is copied and the new formatter is added to that copy. In other words, each call takes the empty dictionary it read at the start, adds one entry to it, and makes that the new value of formatters. This means that after each of these concurrent function calls, we end up with a different dictionary that holds just one value.

Usually.

Because some of our concurrent calls might also run slightly later than others, we could sometimes end up with a dictionary that holds two, three, or more items. And that dictionary could still be overwritten by a later iteration if our code happens to run that way.

There’s a ton of ambiguity here. We don’t control exactly how our formatter cache is accessed, by which thread, or how often. This means that my initial, simple implementation can never work reliably in a multithreaded environment.

Solving data races without Actors

We can fix this without Swift’s new concurrency features by synchronizing access to the formatters dictionary. Synchronizing means that we ensure the formatter(with:) function is executed serially, even if it’s called in parallel. This ensures that the formatters dictionary is read and updated atomically, or in other words, without interruption. To gain a better understanding of what atomicity is, you can refer to this post I wrote earlier. By synchronizing this code, we know that once the formatter(with:) function has done its work, we’re ready to handle another call to formatter(with:). Basically, callers of formatter(with:) will have to wait for their turn.

Synchronizing code like that can be done with a dispatch queue:

class DateFormatterCache {
    static let shared = DateFormatterCache()

    private var formatters = [String: DateFormatter]()
    private let queue = DispatchQueue(label: "com.dw.DateFormatterCache.\(UUID().uuidString)")

    func formatter(with format: String) -> DateFormatter {
        return queue.sync {
            if let formatter = formatters[format] {
                print("returning cached formatter for \(format)")
                return formatter
            }

            print("creating new formatter for \(format)")
            let formatter = DateFormatter()
            formatter.locale = Locale(identifier: "en_US_POSIX")
            formatter.dateFormat = format
            formatters[format] = formatter
            return formatter
        }
    }
}

By creating a private queue and calling sync on it inside of formatter(with:), we make sure the queue only runs one of these closures at a time. Because everything happens synchronously, we can return the result of our operation from the closure by returning the result of queue.sync from the function.

While this code runs, we block the calling thread. This means that nothing else can run on that thread until the sync closure has finished.

When we run the concurrent example code again with this private queue in place:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
}

It doesn’t crash and produces the following output:

creating new formatter for DD/MM/YYYY
returning cached formatter for DD/MM/YYYY
creating new formatter for yyyy
creating new formatter for DD-mm
returning cached formatter for DD-mm
creating new formatter for DD-mm-yyyy
returning cached formatter for DD/MM/YYYY
returning cached formatter for yyyy
returning cached formatter for DD-mm
creating new formatter for DD-MM

Clearly, the code works well! Awesome.

But there are a few problems here:

  1. We block the thread. When threads are blocked, GCD will spawn new threads to make sure the CPU stays busy instead of sitting completely idle. This means we could potentially end up with tons of threads, which can be expensive if the CPU has to context switch between them a lot.
  2. It’s not clear to the caller of formatter(with:) that it’s a blocking function. A caller of this function might have to wait for many other calls to complete, which might be unexpected.
  3. It’s easy to forget synchronization, especially if the formatters property should be readable from outside of the class (see the sketch after this list). The compiler can’t help us, so we have to rely on our own judgement and hope that any mistakes get caught in a PR review, just like mine was.
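
To illustrate the third point, here’s a hypothetical sketch; the UnsafeDateFormatterCache name and setup are mine, not from the original post:

import Foundation

class UnsafeDateFormatterCache {
    static let shared = UnsafeDateFormatterCache()

    var formatters = [String: DateFormatter]()   // no longer private
    private let queue = DispatchQueue(label: "com.dw.UnsafeDateFormatterCache")

    // Imagine formatter(with:) still exists here and synchronizes through `queue`, just like before.
}

// This compiles fine, but bypasses the queue entirely and reintroduces the data race:
let unsynchronizedCount = UnsafeDateFormatterCache.shared.formatters.count
print(unsynchronizedCount)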

In Swift 5.5, we can leverage actors to achieve proper mutable state isolation with compiler support.

Solving data races with Actors

As I mentioned earlier, actors isolate access to their mutable state. This means that an object like the DateFormatterCache can be written as an actor instead of a class, and we’ll get synchronization for free:

actor DateFormatterCache {
    static let shared = DateFormatterCache()

    private var formatters = [String: DateFormatter]()

    func formatter(with format: String) -> DateFormatter {
        if let formatter = formatters[format] {
            print("returning cached formatter for \(format)")
            return formatter
        }

        print("creating new formatter for \(format)")
        let formatter = DateFormatter()
        formatter.locale = Locale(identifier: "en_US_POSIX")
        formatter.dateFormat = format
        formatters[format] = formatter
        return formatter
    }
}

Note how the object is virtually unchanged from the initial version. All I did was change class to actor; the queue we added in the previous section is no longer needed. Also note that actors are reference types, just like classes are.

Now that DateFormatterCache is an actor, Swift will know that formatters is mutable state and that any access to it needs to be synchronized. This also means that Swift knows that formatter(with:) might not return immediately, even if the function isn’t marked async. This is very similar to what we had earlier with the private queue.

If I were to make formatters an internal or public property instead of private, accessing formatters directly from the outside would also be synchronized, and therefore be done asynchronously from the caller’s point of view.

Within the actor, we know that we’re already synchronized, so I don’t have to await the value of formatters. I can just read it directly without doing any manual synchronization. I get all of this for free; there’s no work for me to do to ensure correct synchronization.
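
Here’s a hypothetical sketch of those two points; the ExposedDateFormatterCache name is mine and this variation is not part of the original example:

import Foundation

actor ExposedDateFormatterCache {
    static let shared = ExposedDateFormatterCache()

    var formatters = [String: DateFormatter]()   // no longer private

    func cachedFormatCount() -> Int {
        formatters.count                         // inside the actor: plain, synchronous access
    }
}

func printCacheSize() async {
    // Outside the actor: both of these reads have to be awaited.
    let formatters = await ExposedDateFormatterCache.shared.formatters
    let count = await ExposedDateFormatterCache.shared.cachedFormatCount()
    print(formatters.count, count)
}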

Running the following test code from earlier produces an error though:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
}

Here’s the error:

Actor-isolated instance method ‘formatter(with:)’ can only be referenced from inside the actor

This error seems to suggest that we cannot access formatter(with:) at all. That isn’t entirely correct, but we’ll need to access it asynchronously rather than synchronously like we do now. The easiest way to do this is to either already be in an async context, or to enter one:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    Task {
        let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
    }
}

Doing this provides us with a more useful compiler error:

Expression is ‘async’ but is not marked with ‘await’

Remember how I explained that formatter(with:) might not return immediately because it will be synchronized by the actor just like how the queue.sync version in the class didn’t return immediately?

In the old version of this code, the blocking nature of formatter(with:) was hidden.

With an actor, the compiler will tell us that formatter(with:) might not return immediately, so it forces us to use an await so that our asynchronous work can be suspended until formatter(with:) is run.

Not only is this much nicer due to the more expressive nature of the code, it’s also much better because we’re not blocking our thread. Instead, we’re suspending our function so its execution context can be set aside while the existing thread does other work. We don't create a new thread like we did with GCD. Eventually the actor runs formatter(with:) and our execution context is picked back up where it left off.

Here's what the corrected code looks like:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    Task {
        let formatter = await DateFormatterCache.shared.formatter(with: formats.randomElement()!)
    }
}

What’s interesting is that because Swift’s new concurrency model does not spawn more threads than there are CPU cores, simply wrapping the class-based version of the cache in a Task.init or Task.detached block would already mask our bug most of the time. The reason for this is that it’s very likely that all of the tasks you create run on the same thread. This means that they won’t actually run concurrently like the closures passed to DispatchQueue.concurrentPerform do.

You can try this out by making DateFormatterCache a class again and removing the await from the last code snippet. Keep the Task though, since that’s what leverages Swift’s new concurrency features.
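
Here’s what that experiment could look like, assuming DateFormatterCache is the plain class-based version from the start of this post:

let formats = ["DD/MM/YYYY", "DD-mm-yyyy", "yyyy", "DD-MM", "DD-mm"]
DispatchQueue.concurrentPerform(iterations: 10) { iteration in
    Task {
        // No await here: formatter(with:) is a plain synchronous method on the class version.
        let formatter = DateFormatterCache.shared.formatter(with: formats.randomElement()!)
    }
}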

However, you should not assume that the bug would actually be fixed by using a class, not synchronizing, and using Task. There is no guarantee that your closures will run on the same thread. And more importantly, in the real world you might have many tasks spawned from many different threads. This makes it far more likely for data races to occur than it is in my simple example.

Conclusion

In this post, I explained a little bit about what Swift’s new actors are, and what their role is in this new async / await world that we can start exploring. You also learned when data races occur, and how you can solve them. First, you saw an approach without actors. After that, I showed you an approach that’s much more expressive and without any of the hidden implications that the earlier version had.

Swift’s actors are an extremely useful tool to ensure you don’t run into data races, by isolating mutable state and synchronizing access to it. What’s even better is that the Swift language and compiler enforce all of this, and potential errors are surfaced as compiler errors rather than bugs and runtime crashes. I’m extremely excited for concurrency in Swift 5.5, and can’t wait to explore this feature more over the coming weeks.
