Solving actor-isolated protocol conformance related errors in Swift 6.2

Swift 6.2 comes with several quality-of-life improvements for concurrency. One of these features is the ability to have actor-isolated conformances to protocols. Another change is that new projects will run your code on the main actor by default.

This does mean that sometimes, you’ll run into compiler errors. In this blog post, I’ll explore these errors, and how you can fix them when you do.

Before we do, let’s briefly talk about actor-isolated protocol conformance to understand what this feature is about.

Understanding actor-isolated protocol conformance

Protocols in Swift can require certain functions or properties to be nonisolated. For example, we can define a protocol that requires a nonisolated var name like this:

protocol MyProtocol {
  nonisolated var name: String { get }
}

class MyModelType: MyProtocol {
  var name: String

  init(name: String) {
    self.name = name
  }
}

At the moment, our code will not compile, and we get the following error:

Conformance of 'MyModelType' to protocol 'MyProtocol' crosses into main actor-isolated code and can cause data races

In other words, our MyModelType is isolated to the main actor while our name protocol conformance isn’t. This means that using MyProtocol and its name in a nonisolated way can lead to data races, because name isn’t actually nonisolated.

When you encounter an error like this you have two options:

  1. Embrace the nonisolated nature of name
  2. Isolate your conformance to the main actor

The first solution usually means that you don’t just make your property nonisolated, but you apply this to your entire type:

nonisolated class MyModelType: MyProtocol {
  // ...
}

This might work, but you’re now breaking out of main actor isolation and potentially opening yourself up to new data races and compiler errors.

When your code runs on the main actor by default, going nonisolated is often not what you want; everything else is still on the main actor, so it makes sense for MyModelType to stay there too.

In this case, we can mark our MyProtocol conformance as @MainActor:

class MyModelType: @MainActor MyProtocol {
  // ...
}

By doing this, MyModelType conforms to MyProtocol, but only when we’re on the main actor. This makes the nonisolated requirement for name a non-issue, because we’re always on the main actor when we use MyModelType as a MyProtocol.

This is incredibly useful in apps that are main actor by default, because you usually don’t want your main actor types to have nonisolated properties or functions. Conforming to protocols on the main actor makes a lot of sense in that case.
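To make this concrete, here’s a minimal sketch of how a main actor-isolated conformance gets used; printName(of:) is a hypothetical helper, and because it’s main actor-isolated it’s allowed to rely on MyModelType’s @MainActor conformance to MyProtocol:

@MainActor
func printName(of value: some MyProtocol) {
  print(value.name)
}

@MainActor
func example() {
  // Fine: the conformance is used from main actor-isolated code.
  printName(of: MyModelType(name: "Test"))
}

Using the conformance from code that isn’t on the main actor is what produces the errors we’re about to look at.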

Now let’s look at some errors related to this feature, shall we? I initially encountered an error around my SwiftData code, so let’s start there.

Fixing Main actor-isolated conformance to 'PersistentModel' cannot be used in actor-isolated context

Let’s dig right into an example of what can happen when you’re using SwiftData and a custom model actor. The following model and model actor produce a compiler error that reads “Main actor-isolated conformance of 'Exercise' to 'PersistentModel' cannot be used in actor-isolated context”:

@Model
class Exercise {
  var name: String
  var date: Date

  init(name: String, date: Date) {
    self.name = name
    self.date = date
  }
}

@ModelActor
actor BackgroundActor {
  func example() {
    // Call to main actor-isolated initializer 'init(name:date:)' in a synchronous actor-isolated context
    let exercise = Exercise(name: "Running", date: Date())
    // Main actor-isolated conformance of 'Exercise' to 'PersistentModel' cannot be used in actor-isolated context
    modelContext.insert(exercise)
  }
}

There’s actually a second error here too: we’re calling the Exercise initializer from our BackgroundActor, and that init is isolated to the main actor by default.

Fixing our problem in this case means that we need to allow Exercise to be created and used from non-main actor contexts. To do this, we can mark the SwiftData model as nonisolated:

@Model
nonisolated class Exercise {
  var name: String
  var date: Date

  init(name: String, date: Date) {
    self.name = name
    self.date = date
  }
}

Doing this will make both the init and our conformance to PersistentModel nonisolated which means we’re free to use Exercise from non-main actor contexts.
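With this change in place, the BackgroundActor from before compiles; here’s a minimal sketch of the now-working example:

@ModelActor
actor BackgroundActor {
  func example() {
    // Both the init and the PersistentModel conformance are nonisolated now,
    // so creating and inserting an Exercise off the main actor is allowed.
    let exercise = Exercise(name: "Running", date: Date())
    modelContext.insert(exercise)
  }
}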

Note that this does not mean that Exercise can safely be passed from one actor or isolation context to another. It just means that we’re free to create and use Exercise instances away from the main actor.

Not every app will need this or encounter this, especially when you’re running code on the main actor by default. If you do encounter this problem for SwiftData models, you should probably isolate the problematic code to the main actor unless you specifically created a model actor to do work in the background.

Let’s take a look at a second error that, as far as I’ve seen, is pretty common right now in the Xcode 26 beta: using Codable objects with default actor isolation.

Fixing Conformance of protocol 'Encodable' crosses into main actor-isolated code and can cause data races

This error is quite interesting and I wonder whether it’s something Apple can and should fix during the beta cycle. That said, as of Beta 2 you might run into this error for models that conform to Codable. Let’s look at a simple model:

struct Sample: Codable {
  var name: String
}

This model has two compiler errors:

  1. Circular reference
  2. Conformance of 'Sample' to protocol 'Encodable' crosses into main actor-isolated code and can cause data races

I’m not exactly sure why we’re seeing the first error. I think this is a bug because it makes no sense to me at the moment.

The second error says that our Encodable conformance “crossed into main actor-isolated code”. If you dig a bit deeper, you’ll see the following error as an explanation for this: “Main actor-isolated instance method 'encode(to:)' cannot satisfy nonisolated requirement”.

In other words, our protocol conformance adds a main actor-isolated implementation of encode(to:) while the protocol requires this method to be nonisolated.

The reason we’re seeing this error is not entirely clear to me, but there seems to be a mismatch between our protocol conformance’s isolation and our Sample type.

We can do one of two things here: we can either make our model nonisolated or constrain our Codable conformance to the main actor.

nonisolated struct Sample: Codable {
  var name: String
}

// or
struct Sample: @MainActor Codable {
  var name: String
}

The former will make it so that everything on our Sample is nonisolated and can be used from any isolation context. The second option makes it so that our Sample conforms to Codable but only on the main actor:

func createSampleOnMain() {
  // this is fine
  let sample = Sample(name: "Sample Instance")
  let data = try? JSONEncoder().encode(sample)
  let decoded = try? JSONDecoder().decode(Sample.self, from: data ?? Data())
  print(decoded)
}

nonisolated func createSampleFromNonIsolated() {
  // this is not fine
  let sample = Sample(name: "Sample Instance")
  // Main actor-isolated conformance of 'Sample' to 'Encodable' cannot be used in nonisolated context
  let data = try? JSONEncoder().encode(sample)
  // Main actor-isolated conformance of 'Sample' to 'Decodable' cannot be used in nonisolated context
  let decoded = try? JSONDecoder().decode(Sample.self, from: data ?? Data())
  print(decoded)
}

So generally speaking, you don’t want your protocol conformance to be isolated to the main actor for your Codable models if you’re decoding them on a background thread. If your models are relatively small, it’s likely perfectly acceptable for you to be decoding and encoding on the main actor. These operations should be fast enough in most cases, and sticking with main actor code makes your program easier to reason about.

The best solution will depend on your app, your constraints, and your requirements. Always measure your assumptions when possible and stick with solutions that work for you; don’t introduce concurrency “just to be sure”. If you find that your app benefits from decoding data on a background thread, the solution for you is to mark your type as nonisolated; if you find no direct benefits from background decoding and encoding in your app you should constrain your conformance to @MainActor.

If you’ve implemented a custom encoding or decoding strategy, you might be running into a different error…

Conformance of 'CodingKeys' to protocol 'CodingKey' crosses into main actor-isolated code and can cause data races

Now, this one is a little trickier. When we have a custom encoder or decoder, we might also want to provide a CodingKeys enum:

struct Sample: @MainActor Decodable {
  var name: String

  // Conformance of 'Sample.CodingKeys' to protocol 'CodingKey' crosses into main actor-isolated code and can cause data races
  enum CodingKeys: CodingKey {
    case name
  }

  init(from decoder: any Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    self.name = try container.decode(String.self, forKey: .name)
  }
}

Unfortunately, this code produces an error. Our conformance to CodingKey crosses into main actor-isolated code, and that might cause data races. Usually this would mean that we can constrain our conformance to the main actor and that would solve our issue:

// Main actor-isolated conformance of 'Sample.CodingKeys' to 'CustomDebugStringConvertible' cannot satisfy conformance requirement for a 'Sendable' type parameter 'Self'
enum CodingKeys: @MainActor CodingKey {
  case name
}

This unfortunately doesn’t work because CodingKey requires conformance to CustomDebugStringConvertible, which in turn requires a Sendable Self.

Constraining our conformance to the main actor should mean that both CodingKeys and CodingKey are effectively Sendable, but because the CustomDebugStringConvertible requirement is defined on CodingKey, I think our @MainActor isolation doesn’t carry over.

This might also be a rough edge or bug in the beta; I’m not sure.

That said, we can fix this error by making our CodingKeys nonisolated:

struct Sample: @MainActor Decodable {
  var name: String

  nonisolated enum CodingKeys: CodingKey {
    case name
  }

  init(from decoder: any Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    self.name = try container.decode(String.self, forKey: .name)
  }
}

This code works perfectly fine both when Sample is nonisolated and when Decodable is isolated to the main actor.

Both this issue and the previous one feel like compiler bugs, so if they get resolved during Xcode 26’s beta cycle I will make sure to come back and update this article.

If you’ve encountered errors related to actor-isolated protocol conformance yourself, I’d love to hear about them. It’s an interesting feature and I’m trying to figure out how exactly it fits into the way I write code.

What is @concurrent in Swift 6.2?

Swift 6.2 is available and it comes with several improvements to Swift Concurrency. One of these features is the @concurrent declaration that we can apply to nonisolated functions. In this post, you will learn a bit more about what @concurrent is, why it was added to the language, and when you should be using @concurrent.

Before we dig into @concurrent itself, I’d like to provide a little bit of context by exploring another Swift 6.2 feature called nonisolated(nonsending) because without that, @concurrent wouldn’t exist at all.

And to make sense of nonisolated(nonsending) we’ll go back to nonisolated functions.

Exploring nonisolated functions

A nonisolated function is a function that’s not isolated to any specific actor. If you’re on Swift 6.1, or you’re using Swift 6.2 with its default settings, that means that a nonisolated async function will always run on the global executor.

In more practical terms, a nonisolated function would run its work on a background thread.

For example, the following function would run away from the main actor at all times:

nonisolated 
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

While it’s a convenient way to run code on the global executor, this behavior can be confusing. If we remove the async from that function, it will always run on the caller’s actor:

nonisolated 
func decode<T: Decodable>(_ data: Data) throws -> T {
  // ...
}

So if we call this version of decode(_:) from the main actor, it will run on the main actor.

Since that difference in behavior can be unexpected and confusing, the Swift team has added nonisolated(nonsending). So let’s see what that does next.

Exploring nonisolated(nonsending) functions

Any function that’s marked as nonisolated(nonsending) will always run on the caller’s executor. This unifies behavior for async and non-async functions and can be applied as follows:

nonisolated(nonsending) 
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Whenever you mark a function like this, it no longer automatically offloads to the global executor. Instead, it will run on the caller’s actor.

This doesn’t just unify behavior for async and non-async functions; it also makes our code less concurrent and easier to reason about.

When we offload work to the global executor, this means that we’re essentially creating new isolation domains. The result of that is that any state that’s passed to or accessed inside of our function is potentially accessed concurrently if we have concurrent calls to that function.

This means that we must make the accessed or passed-in state Sendable, and that can become quite a burden over time. For that reason, making functions nonisolated(nonsending) makes a lot of sense. It runs the function on the caller’s actor (if any) so if we pass state from our call-site into a nonisolated(nonsending) function, that state doesn’t get passed into a new isolation context; we stay in the same context we started out from. This means less concurrency, and less complexity in our code.
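Here’s a minimal sketch of what that looks like in practice, assuming a hypothetical Feed model and view model; because decode(_:) is nonisolated(nonsending), calling it from the main actor keeps everything on the main actor and nothing needs to be Sendable:

import Foundation

struct Feed: Decodable {
  var items: [String]
}

nonisolated(nonsending)
func decode<T: Decodable>(_ data: Data) async throws -> T {
  try JSONDecoder().decode(T.self, from: data)
}

@MainActor
final class FeedViewModel {
  var feed: Feed?

  func refresh(with data: Data) async throws {
    // decode(_:) inherits the main actor here; no new isolation domain is created.
    let decoded: Feed = try await decode(data)
    feed = decoded
  }
}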

The benefits of nonisolated(nonsending) can really add up, which is why you can make it the default for your nonisolated functions by opting in to Swift 6.2’s NonisolatedNonsendingByDefault feature flag.

When your code is nonisolated(nonsending) by default, every function that’s either explicitly or implicitly nonisolated will be considered nonisolated(nonsending). This means that we need a new way to offload work to the global executor.

Enter @concurrent.

Offloading work with @concurrent in Swift 6.2

Now that you know a bit more about nonisolated and nonisolated(nonsending), we can finally understand @concurrent.

Using @concurrent makes the most sense when you’re also using the NonisolatedNonsendingByDefault feature flag. Without that feature flag, you can continue using nonisolated to achieve the same “offload to the global executor” behavior. That said, marking functions as @concurrent can future-proof your code and make your intent explicit.

With @concurrent we can ensure that a nonisolated function runs on the global executor:

@concurrent
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Marking a function as @concurrent will automatically mark that function as nonisolated so you don’t have to write @concurrent nonisolated. We can apply @concurrent to any function that doesn’t have its isolation explicitly set. For example, you can apply @concurrent to a function that’s defined on a main actor isolated type:

@MainActor
class DataViewModel {
  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    // ...
  }
}

Or even to a function that’s defined on an actor:

actor DataViewModel {
  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    // ...
  }
}

You’re not allowed to apply @concurrent to functions that have their isolation defined explicitly. Both examples below are incorrect since the function would have conflicting isolation settings.

@concurrent @MainActor
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

@concurrent nonisolated(nonsending)
func decode<T: Decodable>(_ data: Data) async throws -> T {
  // ...
}

Knowing when to use @concurrent

Using @concurrent is an explicit declaration to offload work to a background thread. Note that doing so introduces a new isolation domain and will require any state involved to be Sendable. That’s not always an easy thing to pull off.

In most apps, you only want to introduce @concurrent when you have a real issue to solve where more concurrency helps you.

An example of a case where @concurrent should not be applied is the following:

class Networking {
  func loadData(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
  }
}

The loadData function makes a network call that it awaits with the await keyword. That means that while the network call is active, we suspend loadData. This allows the calling actor to perform other work until loadData is resumed and data is available.

So when we call loadData from the main actor, the main actor would be free to handle user input while we wait for the network call to complete.

Now let’s imagine that you’re fetching a large amount of data that you need to decode. You started off using default code for everything:

class Networking {
  func getFeed() async throws -> Feed {
    let data = try await loadData(from: Feed.endpoint)
    let feed: Feed = try await decode(data)
    return feed
  }

  func loadData(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
  }

  func decode<T: Decodable>(_ data: Data) async throws -> T {
    let decoder = JSONDecoder()
    return try decoder.decode(T.self, from: data)
  }
}

In this example, all of our functions would run on the caller’s actor; for example, the main actor. When we find that decode takes a lot of time because we fetched a whole bunch of data, we can decide that our code would benefit from some concurrency in the decoding department.

To do this, we can mark decode as @concurrent:

class Networking {
  // ...

  @concurrent
  func decode<T: Decodable>(_ data: Data) async throws -> T {
    let decoder = JSONDecoder()
    return try decoder.decode(T.self, from: data)
  }
}

All of our other code will continue behaving like it did before by running on the caller’s actor. Only decode will run on the global executor, ensuring we’re not blocking the main actor during our JSON decoding.

We made the smallest unit of work possible @concurrent to avoid introducing loads of concurrency where we don’t need it. Introducing concurrency with @concurrent is not a bad thing but we do want to limit the amount of concurrency in our app. That’s because concurrency comes with a pretty high complexity cost, and less complexity in our code typically means that we write code that’s less buggy, and easier to maintain in the long run.

Exploring tab bars on iOS 26 with Liquid Glass

When your app has a tab bar and you recompile it using Xcode 26, you will automatically see that your tab bar has a new look and feel based on Liquid Glass. In this blog post, we’ll explore the new tab bar, and which new capabilities we’ve gained with the Liquid Glass redesign. I’ll also spend a little bit of time on providing some tips around how you can conditionally apply iOS 26 specific view modifiers to your tab bar using Dave DeLong’s “Backport” approach.

By the end of this post you’ll have a much better sense of how Liquid Glass changes your app’s tab bar, and how you can configure the tab bar to really lean into iOS 26’s Liquid Glass design philosophy.

Tab Bar basics in iOS 26

If you’ve adopted iOS 18’s tab bar updates, you’re already in a really good spot for adopting the new features that we get with Liquid Glass. If you haven’t, here’s what a very simple tab bar looks like using TabView and Tab:

TabView {
  Tab("Workouts", systemImage: "dumbbell.fill") {
    WorkoutsView()
  }

  Tab("Exercises", systemImage: "figure.strengthtraining.traditional") {
    ExercisesView()
  }
}

When you compile your app with Xcode 26, and you run it on a device with iOS 18 installed, your tab bar would look a bit like this:

When running the exact same code on iOS 26, you’ll find that the tab bar gets a new Liquid Glass-based design:

ios-26-plain.png

Liquid Glass encourages a more layered approach to designing your app, so having a large button sitting above the tab bar and obscuring content isn’t very iOS 26-like.

Here’s what the full screen that this tab bar is on looks like:

ios-26-plain-full.png

To make this app feel more at home on iOS 26, I think we should extend the list’s contents so that they end up underneath the tab bar with a blurry overlay, similar to what Apple does in their own apps:

ios-26-health.png

Notice that this app has a left-aligned tab bar and that there’s a search button at the bottom as well. Before we talk about how to achieve that layout, I’d like to first explore the setup where content extends underneath the tab bar. After that, we’ll look at more advanced tab bar features like the search button and more.

Understanding the tab bar’s blur effect

If you’ve spent time with the tab bar already, you’ll know that the blur effect that we see in the health app is actually the default effect for a tab bar that sits on top of a scrollable container.

The app we’re looking at in this post has a view layout that looks a bit like this:

VStack {
  ScrollView(.horizontal) { /* filter options */ }
  List { /* The exercises */ }
  Button { /* The purple button + action */ }
}

The resulting effect is that the tab bar doesn’t overlay a scrolling container, and we end up with a solid-colored background.

If we remove the button for now, we actually get the blurred background behavior that we want:

ios26-blur.png
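For reference, the adjusted layout is simply the earlier sketch without the trailing button, so the List is the bottom-most scrollable view and extends underneath the tab bar:

VStack {
  ScrollView(.horizontal) { /* filter options */ }
  List { /* The exercises, now extending underneath the tab bar */ }
}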

The next objective now is to add that “Add Exercise” button again in a way that blends nicely with Liquid Glass, so let’s explore some other cool tab view behaviors on iOS 26, and how we can enable those.

Minimizing a Liquid Glass tab view

Let’s start with a cool effect that we can apply to a tab bar to make it less prominent while the user scrolls.

ios-26-minimized.png

While this effect doesn’t bring our “Add Exercise” button back, it does opt in to a feature from iOS 26 that I like a lot. We can have our tab bar minimize when the user scrolls up or down by applying a new view modifier to our TabView:

TabView {
  /* ... */
}.tabBarMinimizeBehavior(.onScrollDown)

When this view modifier is applied to your tab view, it will automatically minimize itself when the content that’s overlaid by the tab bar gets scrolled. So in our case, the tab bar minimizes when the list of exercises gets scrolled.

Note that the tab bar doesn’t minimize if we apply this view modifier to the old design. That’s because the tab bar didn’t overlay any scrolling content there. This makes it even more clear that the old design really doesn’t fit well in a Liquid Glass world.

Let’s see how we can add our button on top of the Liquid Glass TabView in a way that fits nicely with the new design.

Adding a view above your tab bar on iOS 26

On iOS 26 we’ve gained the ability to add an accessory view to our tab bars. This view is placed above your tab bar on iOS, and when the tab bar minimizes, the accessory view is placed next to the minimized tab bar:

ios-26-acc.png

Note that the button seems a little cut off in the minimized example. This seems to be a bug in the beta as far as I can tell right now; if later in the beta cycle it turns out that I’m doing something wrong here, I will update the article as needed.

To place an accessory view on a tab bar, you apply the tabViewBottomAccessory view modifier to your TabView:

TabView {
  /* ... */
}
.tabBarMinimizeBehavior(.onScrollDown)
.tabViewBottomAccessory {
  Button("Add exercise") {
    // Action to add an exercise
  }.purpleButton()
}

Note that the accessory will be visible on every tab in your app, so our usage here might not be the best approach, but it works. It’s possible to check the active tab inside of your view modifier and return different buttons or views depending on the active tab:

.tabViewBottomAccessory {
  if activeTab == .workouts {
    Button("Start workout") {
      // Action to add an exercise
    }.purpleButton()
  } else {
    Button("Add exercise") {
      // Action to add an exercise
    }.purpleButton()
  }
}
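The activeTab value in that snippet isn’t something you get for free. Here’s a minimal sketch of the selection wiring this assumes; the Tabs enum, the @State property, and RootView are hypothetical names, and WorkoutsView and ExercisesView are the same views used earlier:

import SwiftUI

enum Tabs: Hashable {
  case workouts, exercises
}

struct RootView: View {
  @State private var activeTab: Tabs = .workouts

  var body: some View {
    TabView(selection: $activeTab) {
      Tab("Workouts", systemImage: "dumbbell.fill", value: Tabs.workouts) {
        WorkoutsView()
      }

      Tab("Exercises", systemImage: "figure.strengthtraining.traditional", value: Tabs.exercises) {
        ExercisesView()
      }
    }
    .tabBarMinimizeBehavior(.onScrollDown)
    .tabViewBottomAccessory {
      if activeTab == .workouts {
        Button("Start workout") { /* ... */ }
      } else {
        Button("Add exercise") { /* ... */ }
      }
    }
  }
}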

Again, this works, but I’m not sure this is the intended use case for a bottom accessory. Apple’s own usage seems limited to views that are relevant for every tab in the app, like the Music app, where the player controls are the tab view’s accessory.

So, while this approach lets us add the “Add exercise” button again, it seems like this isn’t the way to go.

Adding a floating button to our view

In the Health app example from before, there was a search button in the bottom-right corner of the screen. We can add a button of our own to that location by adding a Tab with the .search role to our TabView:

Tab("Add", systemImage: "plus", value: Tabs.exercises, role: .search) {
  /* Your view */
}

While this adds a button to the bottom right of our view, it’s far from a replacement for our view-specific “Add exercise” button. A Tab with the search role is visually separated from your other tabs, but you’re expected to present a full screen search experience from it. So a search tab really only makes sense when your app actually contains a search page.

That said, I do think that a floating button is what we need in this Liquid Glass world so let’s add one to our exercises view.

It won’t use the TabView APIs, but I do think it’s important to cover a solution that, in my opinion, works well.

Given that Liquid Glass enforces a more layered design, this pattern of having a large button at the bottom of our list just doesn’t work as well as it used to.

Instead, we can leverage a ZStack and add a button on top of it so we can have our scrolling content look the way that we like while also having an “Add Exercise” button:

ZStack(alignment: .bottomTrailing) {
  // view contents

  Button(action: {
    // ...
  }) {
    Label("Add Exercise", systemImage: "plus")
      .bold()
      .labelStyle(.iconOnly)
      .padding()
  }
  .glassEffect(.regular.interactive())
  .padding([.bottom, .trailing], 12)
}

The key to making our floating button look at home is applying the glassEffect view modifier. I won’t cover that modifier in depth but you can probably guess what it does; it makes our button have that Liquid Glass design that we’re looking for:

ios-26-float.png

I’m not 100% sold on this approach because I felt like there was something nice about having that large purple button in my old design. But this is a new design era, and a floating button like this fits nicely in the iOS 26 design language.

In Summary

Knowing which options you have for customizing iOS 26’s TabView will greatly help with adopting Liquid Glass. Knowing how you can minimize your tab bar, or when to assign an accessory view can really help you build better experiences for your users. Adding a search tab with the search role will help SwiftUI position your search feature properly and consistently across platforms.

While Liquid Glass is a huge change in terms of design language, I like these new TabView APIs a lot and I’m excited to spend more time with them.

Opting your app out of the Liquid Glass redesign with Xcode 26

On iOS 26, iPadOS 26 and more, your apps will take on a whole new look based on Apple's Liquid Glass redesign. All you need to do to adopt this new style in your apps is recompile. Once recompiled, your app will have all-new UI components which means your app will look fresh and right at home in Apple's latest OS.

That said, there are many reasons why you might not want to adopt Liquid Glass just yet.

It's a big redesign and for lots of apps there will be work to do to properly adapt your designs to fit in with Liquid Glass.

For these apps, Apple allows developers to opt out of the redesign using a specific property list key that you can add to your app's Info.plist. When you add UIDesignRequiresCompatibility to your Info.plist and set it to YES, your app will run using the old OS design instead of the new Liquid Glass design.
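In your Info.plist's XML source, that entry looks roughly like this:

<key>UIDesignRequiresCompatibility</key>
<true/>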

According to Apple, this flag should mainly be used for debugging and testing, but it can also be used to stay on the old design for a while longer. A word of warning though: Apple intends to remove this option in the next major Xcode release. This means that even though you can opt out in Xcode 26, Xcode 27 will probably make adopting Liquid Glass mandatory.

That said, for now you can keep the old look and feel for your app while you figure out how Liquid Glass impacts your design choices.

Setting default actor isolation in Xcode 26

With Swift 6.2, Apple has made several improvements to Swift Concurrency and its approachability. One of the biggest changes is that new Xcode projects will now, by default, apply an implicit main actor annotation to all your code. This essentially makes your apps single-threaded by default.

I really like this change because without this change it was far too easy to accidentally introduce loads of concurrency in your apps.

In this post I'd like to take a quick look at how you can control this setting as well as the setting for nonisolated(nonsending) from Xcode 26's build settings menu.

Setting your default actor isolation

Open your build settings and look for "Default Actor Isolation". You can use the search feature to make it easier to find the setting.

New projects will have this set to MainActor while existing projects will have this set to nonisolated. I highly recommend trying to set this to MainActor instead. You will need to refactor some of your code and apply explicit nonisolated declarations where you intended to use concurrency, so you'll want to allocate some time for this.

MainActor and nonisolated are the only two valid values for this setting.

Enabling nonisolated(nonsending)

Another feature that's introduced through Swift 6.2 is nonisolated(nonsending). This feature makes it so that your nonisolated async functions automatically inherit the calling actor's isolation instead of always running on the global executor without being isolated to any actor. To get the old behavior back, you can annotate your functions with @concurrent. You can learn more about this in my post about Swift 6.2's changes.

You can turn on nonisolated(nonsending) in one of two ways. You can either enable the feature flag for this feature or you can turn on "Approachable Concurrency".

With Approachable Concurrency you will get nonisolated(nonsending) along with a couple of other changes that should make the compiler smarter and more sensible when it comes to how concurrent your code will really be.

If you're not sure which one to use, I recommend that you go for Approachable Concurrency.

Exploring concurrency changes in Swift 6.2

It's no secret that Swift concurrency can be pretty difficult to learn. There are a lot of concepts that are different from what you're used to when you were writing code in GCD. Apple recognized this in one of their vision documents and they set out to make changes to how concurrency works in Swift 6.2. They're not going to change the fundamentals of how things work. What they will mainly change is where code will run by default.

In this blog post, I would like to take a look at the two main features that will change how your Swift concurrency code works:

  1. The new nonisolated(nonsending) default feature flag
  2. Running code on the main actor by default with the defaultIsolation setting

By the end of this post you should have a pretty good sense of the impact that Swift 6.2 will have on your code, and how you should be moving forward until Swift 6.2 is officially available in a future Xcode release.

Understanding nonisolated(nonsending)

The nonisolated(nonsending) feature is introduced by SE-0461 and it’s a pretty big overhaul in terms of how your code will work moving forward. At the time of writing this, it’s gated behind an upcoming feature compiler flag called NonisolatedNonsendingByDefault. To enable this flag on your project, see this post on leveraging upcoming features in an SPM package, or if you’re looking to enable the feature in Xcode, take a look at enabling upcoming features in Xcode.

For this post, I’m using an SPM package so my Package.swift contains the following:

.executableTarget(
    name: "SwiftChanges",
    swiftSettings: [
        .enableExperimentalFeature("NonisolatedNonsendingByDefault")
    ]
)

I’m getting ahead of myself though; let’s talk about what nonisolated(nonsending) is, what problem it solves, and how it will change the way your code runs significantly.

Exploring the problem with nonisolated in Swift 6.1 and earlier

When you write async functions in Swift 6.1 and earlier, you might do so on a class or struct as follows:

class NetworkingClient {
  func loadUserPhotos() async throws -> [Photo] {
    // ...
  }
}

When loadUserPhotos is called, we know that it will not run on any actor. Or, in more practical terms, we know it’ll run away from the main thread. The reason for this is that loadUserPhotos is a nonisolated and async function.

This means that when you have code as follows, the compiler will complain about sending a non-sendable instance of NetworkingClient across actor boundaries:

struct SomeView: View {
  let network = NetworkingClient()

  var body: some View {
    Text("Hello, world")
      .task { await getData() }
  }

  func getData() async {
    do {
      // sending 'self.network' risks causing data races
      let photos = try await network.loadUserPhotos()
    } catch {
      // ...
    }
  }
}

When you take a closer look at the error, the compiler will explain:

sending main actor-isolated 'self.network' to nonisolated instance method 'loadUserPhotos()' risks causing data races between nonisolated and main actor-isolated uses

This error is very similar to one that you’d get when sending a main actor-isolated value into a Sendable closure.

The problem with this code is that loadUserPhotos runs in its own isolation context. This means that it will run concurrently with whatever the main actor is doing.

Since our instance of NetworkingClient is created and owned by the main actor, we can access and mutate our networking instance while loadUserPhotos is running in its own isolation context. And since that function has access to self, we can have two isolation contexts access the same instance of NetworkingClient at the exact same time.

And as we know, multiple isolation contexts having access to the same object can lead to data races if the object isn’t sendable.

The difference between an async function and a non-async function that are both nonisolated is that the non-async function will always run on the caller’s actor, while the async function never will. Whether we call a nonisolated async function from the main actor or from a place that’s not on the main actor, the called function will not run on the main actor.

The code below is commented to show this through some examples:

// this function will _always_ run on the caller's actor
nonisolated func nonIsolatedSync() {}

// this function is isolated to an actor so it always runs on that actor (main in this case)
@MainActor func isolatedSync() {}

// this function will _never_ run on any actor (it runs on a bg thread)
nonisolated func nonIsolatedAsync() async {}

// this function is isolated to an actor so it always runs on that actor (main in this case)
@MainActor func isolatedAsync() async {}

As you can see, there’s quite a difference in behavior between functions that are async and functions that are not, specifically between nonisolated async and nonisolated non-async functions.

Swift 6.2 aims to fix this with a new default for nonisolated functions that's intended to make sure that async and non-async functions can behave in the exact same way.

Understanding nonisolated(nonsending)

The behavior in Swift 6.1 and earlier is inconsistent and confusing for folks, so in Swift 6.2, async functions adopt a new default for nonisolated functions called nonisolated(nonsending). You don’t have to write this manually; it’s the default, so every nonisolated async function will be nonsending unless you specify otherwise.

When a function is nonisolated(nonsending) it means that the function won’t cross actor boundaries. Or, in a more practical sense, a nonisolated(nonsending) function will run on the caller’s actor.

So when we opt in to this feature by enabling the NonisolatedNonsendingByDefault upcoming feature, the code we wrote earlier is completely fine.

The reason for that is that loadUserPhotos() would now be nonisolated(nonsending) by default, and it would run its function body on the main actor instead of running it on the cooperative thread pool.

Let’s take a look at some examples, shall we? We saw the following example earlier:

class NetworkingClient {
  func loadUserPhotos() async throws -> [Photo] {
    // ...
  }
}

In this case, loadUserPhotos is both nonisolated and async. This means that the function will receive a nonisolated(nonsending) treatment by default, and it runs on the caller’s actor (if any). In other words, if you call this function on the main actor it will run on the main actor. Call it from a place that’s not isolated to an actor; it will run away from the main thread.

Alternatively, we might have added a @MainActor declaration to NetworkingClient:

@MainActor
class NetworkingClient {
  func loadUserPhotos() async throws -> [Photo] {
    return [Photo()]
  }
}

This makes loadUserPhotos isolated to the main actor so it will always run on the main actor, no matter where it’s called from.

Then we might also have the main actor annotation along with nonisolated on loadUserPhotos:

@MainActor
class NetworkingClient {
  nonisolated func loadUserPhotos() async throws -> [Photo] {
    return [Photo()]
  }
}

In this case, the new default kicks in even though we didn’t write nonisolated(nonsending) ourselves. So, NetworkingClient is main actor isolated but loadUserPhotos is not. It will inherit the caller’s actor. So, once again if we call loadUserPhotos from the main actor, that’s where we’ll run. If we call it from some other place, it will run there.

So what if we want to make sure that our function never runs on the main actor? So far, we’ve only seen options that either isolate loadUserPhotos to the main actor, or options that inherit the caller’s actor.

Running code away from any actors with @concurrent

Alongside nonisolated(nonsending), Swift 6.2 introduces the @concurrent keyword. This keyword will allow you to write functions that behave in the same way that your code in Swift 6.1 would have behaved:

@MainActor
class NetworkingClient {
  @concurrent
  nonisolated func loadUserPhotos() async throws -> [Photo] {
    return [Photo()]
  }
}

By marking our function as @concurrent, we make sure that we always leave the caller’s actor and create our own isolation context.

The @concurrent attribute should only be applied to functions that are nonisolated. So for example, adding it to a method on an actor won’t work unless the method is nonisolated:

actor SomeGenerator {
  // not allowed
  @concurrent
  func randomID() async throws -> UUID {
    return UUID()
  }

  // allowed
  @concurrent
  nonisolated func randomID() async throws -> UUID {
    return UUID()
  }
}

Note that at the time of writing both cases are allowed, and the @concurrent function that’s not nonisolated acts like it’s not isolated at runtime. I expect that this is a bug in the Swift 6.2 toolchain and that this will change since the proposal is pretty clear about this.

How and when should you use NonisolatedNonsendingByDefault

In my opinion, opting in to this upcoming feature is a good idea. It does open you up to a new way of working where your nonisolated async functions inherit the caller’s actor instead of always running in their own isolation context, but it makes for fewer compiler errors in practice, and it actually helps you get rid of a whole bunch of main actor annotations based on what I’ve been able to try so far.

I’m a big fan of reducing the amount of concurrency in my apps and only introducing it when I want to explicitly do so. Adopting this feature helps a lot with that. Before you go and mark everything in your app as @concurrent just to be sure, ask yourself whether you really have to. There’s probably no need, and not running everything concurrently makes your code and its execution a lot easier to reason about in the big picture.

That’s especially true when you also adopt Swift 6.2’s second major feature: defaultIsolation.

Exploring Swift 6.2’s defaultIsolation options

In Swift 6.1 your code only runs on the main actor when you tell it to. This could be due to a protocol being @MainActor annotated or you explicitly marking your views, view models, and other objects as @MainActor.

Marking something as @MainActor is a pretty common solution for fixing compiler errors and it’s more often than not the right thing to do.

Your code really doesn’t need to do everything asynchronously on a background thread.

Doing so is relatively expensive, often doesn’t improve performance, and it makes your code a lot harder to reason about. You wouldn’t have written DispatchQueue.global() everywhere before you adopted Swift Concurrency, right? So why do the equivalent now?

Anyway, in Swift 6.2 we can make running on the main actor the default on a package level. This is a feature introduced by SE-0466.

This means that you can have UI packages, app targets, model packages, and so on automatically run code on the main actor unless you explicitly opt out of running on the main actor with @concurrent or through your own actors.

Enable this feature by setting defaultIsolation in your swiftSettings or by passing it as a compiler argument:

swiftSettings: [
    .defaultIsolation(MainActor.self),
    .enableExperimentalFeature("NonisolatedNonsendingByDefault")
]

You don’t have to use defaultIsolation alongside NonisolatedNonsendingByDefault, but I liked using both options in my experiments.

Currently you can either pass MainActor.self as your default isolation to run everything on main by default, or you can use nil to keep the existing behavior (or don’t pass the setting at all to keep the existing behavior).

Once you enable this feature, Swift will infer every object to have an @MainActor annotation unless you explicitly specify something else:

@Observable
class Person {
  var myValue: Int = 0
  let obj = TestClass()

  // This function will _always_ run on main 
  // if defaultIsolation is set to main actor
  func runMeSomewhere() async {
    MainActor.assertIsolated()
    // do some work, call async functions etc
  }
}

Without default isolation, this code contains a nonisolated async function. This means that it would inherit the actor that we call runMeSomewhere from. If we call it from the main actor, that’s where it runs. If we call it from another actor or from no actor, it runs away from the main actor.

This probably wasn’t intended at all.

Maybe we just wrote an async function so that we could call other functions that needed to be awaited. If runMeSomewhere doesn’t do any heavy processing, we probably want Person to be on the main actor. It’s an observable class so it probably drives our UI which means that pretty much all access to this object should be on the main actor anyway.

With defaultIsolation set to MainActor.self, our Person gets an implicit @MainActor annotation so our Person runs all its work on the main actor.

Let’s say we want to add a function to Person that’s not going to run on the main actor. We can use nonisolated just like we would otherwise:

// This function will run on the caller's actor
nonisolated func runMeSomewhere() async {
  MainActor.assertIsolated()
  // do some work, call async functions etc
}

And if we want to make sure we’re never on the main actor:

// This function will never run on the main actor
@concurrent
nonisolated func runMeSomewhere() async {
  // do some work, call async functions etc
}

We need to opt out of this main actor inference for every function or property that we want to make nonisolated, or apply nonisolated to an entire type like we did earlier.

Of course, your own actors will not suddenly start running on the main actor and types that you’ve annotated with your own global actors aren’t impacted by this change either.

Should you opt-in to defaultIsolation?

This is a tough question to answer. My initial thought is “yes”. For app targets, UI packages, and packages that mainly hold view models I definitely think that going main actor by default is the right choice.

You can still introduce concurrency where needed and it will be much more intentional than it would have been otherwise.

The fact that entire objects will be made main actor by default seems like something that might cause friction down the line but I feel like adding dedicated async packages would be the way to go here.

The motivation for this option existing makes a lot of sense to me and I think I’ll want to try it out for a bit before making up my mind fully.

Enabling upcoming feature flags in an SPM package

As Swift evolves, a lot of new evolution proposals get merged into the language. Eventually these new language versions get shipped with Xcode, but sometimes you might want to try out Swift toolchains before they're available inside of Xcode.

For example, I'm currently experimenting with Swift 6.2's upcoming features to see how they will impact certain coding patterns once 6.2 becomes available for everybody.

This means that I'm trying out proposals like SE-0461 that can change where nonisolated async functions run. This specific proposal requires me to turn on an upcoming feature flag. To do this in SPM, we need to configure the Package.swift file as follows:

let package = Package(
    name: "SwiftChanges",
    platforms: [
        .macOS("15.0")
    ],
    targets: [
        // Targets are the basic building blocks of a package, defining a module or a test suite.
        // Targets can depend on other targets in this package and products from dependencies.
        .executableTarget(
            name: "SwiftChanges",
            swiftSettings: [
                .enableExperimentalFeature("NonisolatedNonsendingByDefault")
            ]
        ),
    ]
)

The section to pay attention to is the swiftSettings argument that I pass to my executableTarget:

swiftSettings: [
    .enableExperimentalFeature("NonisolatedNonsendingByDefault")
]

You can pass an array of features to swiftSettings to enable multiple feature flags.

Happy experimenting!

Should you use network connectivity checks in Swift?

A lot of modern apps have a networking component to them. This could be because your app relies on a server entirely for all of its data, or because you’re just sending a couple of requests as a backup or to kick off some server-side processing. When implementing networking, it’s not uncommon for developers to check the network’s availability before making a network request.

The reasoning behind such a check is that we can inform the user that their request will fail before we even attempt to make the request.

Sounds like good UX, right?

The question is whether it really is good UX. In this blog post I’d like to explore some of the pros and cons that a user might run into when you implement a network connectivity check with, for example, NWPathMonitor.

A user’s connection can change at any time

Nothing is as susceptible to change as a user’s network connection. One moment they might be on WiFi, the next they’re in an elevator with no connection, and just moments later they’ll be on a fast 5G connection only to switch to a much slower connection when their train enters a huge tunnel.

If you’re preventing a user from initiating a network call when they momentarily don’t have a connection, that might seem extremely weird to them. By the time your alert shows up to tell them there’s no connection, they might have already regained one. And by the time the actual network call gets made, the elevator doors close and… the network call still fails because the user isn’t connected to the internet.

Due to changing conditions, it’s often recommended that apps attempt a network call, regardless of the user’s connection status. After all, the status can change at any time. So while you might be able to successfully kick off a network call, there’s no guarantee you’re able to finish the call.

A much better user experience is to just try the network call. If the call fails due to a lack of internet connection, URLSession will tell you about it, and you can inform the user accordingly.
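A minimal sketch of what that could look like; the function and URL are hypothetical, and the URLError check is just one way to detect the offline case:

import Foundation

func loadProfile(from url: URL) async throws -> Data {
  do {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
  } catch let error as URLError where error.code == .notConnectedToInternet {
    // This is the moment to tell the user they appear to be offline,
    // offer a retry, or fall back to cached content.
    throw error
  }
}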

Speaking of URLSession… there are several ways in which URLSession will help us handle offline usage of our app.

You might have a cached response

If your app is used frequently, and it displays relatively static data, it’s likely that your server will include cache headers where appropriate. This will allow URLSession to locally cache responses for certain requests which means that you don’t have to go to the server for those specific requests.

This means that, when configured correctly, URLSession can serve certain requests without an internet connection.

Of course, that means that the user must have visited a specific URL before, and the server must include the appropriate cache headers in its response, but when that’s all set up correctly, URLSession will serve cached responses automatically without even letting you, the developer, know.

Your user might be offline and most of the app still works fine without any work from your end.
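If you want to give those cached responses a bit more room, you can also size URLCache yourself; a minimal sketch, with arbitrary capacities:

import Foundation

let configuration = URLSessionConfiguration.default
// Respect the server's cache headers; this is the default policy.
configuration.requestCachePolicy = .useProtocolCachePolicy
// Roughly 10 MB in memory and 100 MB on disk for cached responses.
configuration.urlCache = URLCache(memoryCapacity: 10_000_000, diskCapacity: 100_000_000, directoryURL: nil)

let session = URLSession(configuration: configuration)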

This will only work for requests where the user fetches data from the server, so actions like submitting a comment or making a purchase in your app won’t work, but that’s no reason to start putting checks in place before sending a POST request.

As I mentioned in the previous section, the connection status can change at any time, and if URLSession wasn’t able to make the request it will inform you about it.

For situations where your user tries to initiate a request when there’s no active connection (yet) URLSession has another trick up its sleeve; automatic retries.

URLSession can retry network calls automatically upon reconnecting

Sometimes your user will initiate actions that will remain relevant for a little while. Or, in other words, the user will do something (like sending an email) where it’s completely fine if URLSession can’t make the request now and instead makes the request as soon as the user is back online.

To enable this behavior, you must set the waitsForConnectivity property on your URLSession’s configuration to true:

class APIClient {
  let session: URLSession

  init() {
    let config = URLSessionConfiguration.default
    config.waitsForConnectivity = true

    self.session = URLSession(configuration: config)
  }

  func loadInformation() async throws -> Information {
    let (data, response) = try await session.data(from: someURL)
    // ...
  }
}

In the code above, I’ve created my own URLSession instance that’s configured to wait for connectivity if we attempt to make a network call when there’s no network available. Whenever I make a request through this session while offline, the request will not fail immediately. Instead, it remains pending until a network connection is established.

By default, the wait time for connectivity is several days. You can change this to a more reasonable number like 60 seconds by setting timeoutIntervalForResource:

init() {
  let config = URLSessionConfiguration.default
  config.waitsForConnectivity = true
  config.timeoutIntervalForResource = 60

  self.session = URLSession(configuration: config)
}

That way a request will remain pending for 60 seconds before giving up and failing with a network error.

If you want to have some logic in your app to detect when URLSession is waiting for connectivity, you can implement a URLSessionTaskDelegate. The delegate’s urlSession(_:taskIsWaitingForConnectivity:) method will be called whenever a task is unable to make a request immediately.
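A minimal sketch of such a delegate; the class name is made up:

import Foundation

final class ConnectivityAwareDelegate: NSObject, URLSessionTaskDelegate {
  func urlSession(_ session: URLSession, taskIsWaitingForConnectivity task: URLSessionTask) {
    // Called when a task can't start because there's no connection yet.
    // A good place to surface a "waiting for network" state in your UI.
    print("Task \(task.taskIdentifier) is waiting for connectivity")
  }
}

let configuration = URLSessionConfiguration.default
configuration.waitsForConnectivity = true

let session = URLSession(
  configuration: configuration,
  delegate: ConnectivityAwareDelegate(),
  delegateQueue: nil
)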

Note that waiting for connectivity won’t retry the request if the connection drops in the middle of a data transfer. This option only applies to waiting for a connection to start the request.

In summary

Handling offline scenarios should be a primary concern for mobile developers. A user’s connection status can change quickly, and frequently. Some developers will “preflight” their requests and check whether a connection is available before attempting to make a request in order to save a user’s time and resources.

The major downside of doing this is that having a connection right before making a request doesn’t mean the connection is there when the request actually starts, and it doesn’t mean the connection will be there for the entire duration of the request.

The recommended approach is to just go ahead and make the request and to handle offline scenarios if / when a network call fails.

URLSession has built-in mechanisms like a cache that can provide data (if possible) when the user is offline, and it has the built-in ability to take a request, wait for a connection to become available, and then start the request automatically.

The system does a pretty good job of helping us support and handle offline scenarios in our apps, which means that checking for connections with utilities like NWPathMonitor usually ends up doing more harm than good.

Choosing between LazyVStack, List, and VStack in SwiftUI

SwiftUI offers several approaches to building lists of content. You can use a VStack if your list consists of a bunch of elements that should be placed on top of each other. Or you can use a LazyVStack if your list is really long. And in other cases, a List might make more sense.

In this post, I’d like to take a look at each of these components, outline their strengths and weaknesses and hopefully provide you with some insights about how you can decide between these three components that all place content on top of each other.

We’ll start off with a look at VStack. Then we’ll move on to LazyVStack and we’ll wrap things up with List.

Understanding when to use VStack

By far the simplest stack component that we have in SwiftUI is the VStack. It simply places elements on top of each other:

VStack {
  Text("One")
  Text("Two")
  Text("Three")
}

A VStack works really well when you only have a handful of items that you want to place on top of each other. Even though you’ll typically use a VStack for a small number of items, there’s no reason you couldn’t do something like this:

ScrollView {
  VStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        Image(systemName: model.iconName)
      }
    }
  }
}

When there are only a few items in models, this will work fine. Whether or not it’s the correct choice… I’d say it’s not.

If your models list grows to maybe 1000 items, you’ll be putting an equal number of views in your VStack. It will require a lot of work from SwiftUI to draw all of these elements.

Eventually this is going to lead to performance issues because every single item in your models is added to the view hierarchy as a view.

Now let's say these views also contain images that must be loaded from the network. SwiftUI is then going to load these images and render them too:

ScrollView {
  VStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        RemoteImage(url: model.imageURL)
      }
    }
  }
}

The RemoteImage in this case would be a custom view that enables loading images from the network.
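As a rough idea of what such a view could look like, here’s a minimal sketch built on AsyncImage; a real implementation might add caching, error handling, and so on:

import SwiftUI

struct RemoteImage: View {
  let url: URL

  var body: some View {
    AsyncImage(url: url) { image in
      image
        .resizable()
        .scaledToFit()
    } placeholder: {
      ProgressView()
    }
  }
}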

When everything is placed in a VStack like I did in this sample, your scrolling performance will be horrendous.

A VStack is great for building a vertically stacked view hierarchy. But once your hierarchy starts to look and feel more like a scrollable list… LazyVStack might be the better choice for you.

Understanding when to use a LazyVStack

The LazyVStack component is functionally mostly the same as a regular VStack. The key difference is that a LazyVStack doesn’t add every view to the view hierarchy immediately.

As your user scrolls down a long list of items, the LazyVStack will add more and more views to the hierarchy. This means that you’re not paying a huge cost up front, and in the case of our RemoteImage example from earlier, you’re not loading images that the user might never see.

Swapping a VStack out for a LazyVStack is pretty straightforward:

ScrollView {
  LazyVStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        RemoteImage(url: model.imageURL)
      }
    }
  }
}

Our drawing performance should be much better with the LazyVStack compared to the regular VStack approach.

In a LazyVStack, we’re free to use any type of view that we want, and we have full control over how the list ends up looking. We don’t gain any out-of-the-box functionality, which can be great if you require a higher level of customization for your list.

Next, let’s see how List is used to understand how this compares to LazyVStack.

Understanding when to use List

Where a LazyVStack provides us with maximum control, a List provides us with useful features right out of the box. Depending on where your list is used (for example, in a sidebar or as a full-screen list), List will look and behave slightly differently.

When you use views like NavigationLink inside of a list, you gain some small design tweaks to make it clear that this list item navigates to another view.
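For example, a sketch like the one below (the view name and the text destinations are placeholders) automatically gets the familiar disclosure indicator on each row:

import SwiftUI

struct SettingsList: View {
  var body: some View {
    NavigationStack {
      List {
        // Each NavigationLink row gets a chevron to indicate navigation
        NavigationLink("General") { Text("General settings") }
        NavigationLink("Notifications") { Text("Notification settings") }
      }
    }
  }
}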

This is very useful for most cases, but you might not need any of this functionality.

List also comes with some built-in designs that allow you to easily create something that either looks like the Settings app, or something a bit more like a list of contacts. It’s easy to get started with List if you don’t require lots of customization.
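You can switch between these built-in designs with the listStyle modifier. As a small sketch, reusing the models from earlier, the inset grouped style resembles the Settings app, while something like .plain looks closer to a plain list of contacts:

List(models) { model in
  Text(model.title)
}
.listStyle(.insetGrouped)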

Just like LazyVStack, a List will lazily evaluate its contents, which means it’s a good fit for larger sets of data.

A super basic example of using List with the models from the example that we saw earlier would look like this:

List(models) { model in 
  HStack {
    Text(model.title)
    RemoteImage(url: model.imageURL)
  }
}

We don’t have to use a ForEach, but we could if we wanted to. This can be useful when you’re using Sections in your list, for example:

List {
  Section("General") {
    ForEach(model.general) { item in 
      GeneralItem(item)
    }
  }

  Section("Notifications") {
    ForEach(model.notifications) { item in 
      NotificationItem(item)
    }
  }
}

When you’re using List to build something like a settings page, you can even skip the ForEach altogether and hardcode your child views:

List {
  Section("General") {
    GeneralItem(model.colorScheme)
    GeneralItem(model.showUI)
  }

  Section("Notifications") {
    NotificationItem(model.newsletter)
    NotificationItem(model.socials)
    NotificationItem(model.iaps)
  }
}

The decision between a List and a LazyVStack for me usually comes down to whether or not I need or want List’s functionality. If I find that I want little to none of List’s features, odds are that I’m going to reach for a LazyVStack in a ScrollView instead.

In Summary

In this post, you learned about VStack, LazyVStack, and List. I explained some of the key considerations and performance characteristics for these components, without digging too deeply into solving every use case and possibility. Especially with List, there’s a lot you can do. The key point is that List is a component that doesn’t always fit what you need from it. In those cases, it’s useful that we have a LazyVStack.

You learned that both List and LazyVStack are optimized for displaying large numbers of views, and that LazyVStack gives you the most flexibility if you’re willing to implement what you need yourself.

You also learned that VStack is really only useful for smaller numbers of views. I love using it for layout purposes, but once I start putting together a list of views I prefer a lazier approach, especially when I’m dealing with an unknown number of items.

Differences between Thread.sleep and Task.sleep explained

In Swift, we have several ways to “suspend” execution of our code. While that’s almost always a bad practice, I’d like to explain why Task.sleep really isn’t as problematic as you might expect when you’re familiar with Thread.sleep.

When you look at examples of debouncing or of implementing a task timeout, they will frequently use Task.sleep to suspend a task for a given amount of time.
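As a quick illustration, a debounce built on Task.sleep could look something like the sketch below; the 300 millisecond delay and the print statement are stand-ins for your real work:

import Foundation

final class SearchDebouncer {
  private var searchTask: Task<Void, Never>?

  func scheduleSearch(for query: String) {
    // Cancel whatever we scheduled for the previous keystroke
    searchTask?.cancel()

    searchTask = Task {
      // Wait briefly; if we get cancelled in the meantime, bail out
      try? await Task.sleep(for: .milliseconds(300))
      guard !Task.isCancelled else { return }

      print("Searching for \(query)") // stand-in for the real search call
    }
  }
}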

The key difference is in how tasks and threads work in Swift.

In Swift concurrency, we often say that tasks replace threads. Or in other words, instead of worrying about threads, we worry about tasks.

While that’s not untrue, it’s also a little bit misleading. It sounds like tasks and threads are mostly analogous to each other, and that’s not the case.

A more accurate mental model is that without Swift concurrency you used Dispatch Queues to schedule work on threads. In Swift concurrency, you use tasks to schedule work on threads. In both cases, you don’t directly worry about thread management or creation.
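As a minimal illustration of that mental model (the printed messages are just placeholders), both approaches hand work to the system rather than to a specific thread:

import Foundation

// GCD: schedule work on a queue; the system maps queues onto threads
DispatchQueue.global().async {
  print("Running on a thread managed by GCD")
}

// Swift Concurrency: schedule work as a task; the runtime maps tasks
// onto its limited pool of threads
Task.detached {
  print("Running on a thread from the cooperative thread pool")
}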

Exploring Thread.sleep

When you suspend execution of a thread using Thread.sleep, you prevent that thread from doing anything other than sleeping. It’s not working on dispatch queues, and it’s not working on tasks.

With GCD that’s bad but not hugely problematic because if there are no threads available to work on our queue, GCD will just spin up a new thread.

Swift Concurrency isn’t as eager to spin up threads; we only have a limited number of threads available.

This means that if you have 4 threads available to your program, Swift Concurrency can use those threads to run dozens of tasks efficiently. Sleeping one of these threads with Thread.sleep means that you now only have 3 threads available to run those same dozens of tasks.

If you hit a Thread.sleep in four tasks, that means you’re now sleeping every thread available to your program and your app will essentially stop performing any work at all until the threads resume.

What about Task.sleep?

Sleeping a task with Task.sleep is, in some ways, quite similar to Thread.sleep. You suspend execution of your task, preventing that task from making progress. The key difference is in how that suspension happens. Sleeping a thread just stops it from working, reducing the number of threads available. Sleeping a task means you suspend the task, which allows the thread that was running your task to start running another task.
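The contrast looks roughly like this (a minimal sketch; the one-second durations are arbitrary):

import Foundation

// Blocks the thread: it can't run any other task while it sleeps
Task.detached {
  Thread.sleep(forTimeInterval: 1)
  print("This thread was blocked for a full second")
}

// Suspends the task: the thread is free to pick up other work in the meantime
Task.detached {
  try? await Task.sleep(for: .seconds(1))
  print("This task was suspended; no thread was blocked")
}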

You’re not starving the system of resources with Task.sleep, and you’re not preventing your code from making forward progress, which is absolutely essential when you’re using Swift Concurrency.

If you find yourself needing to suspend execution in your Swift Concurrency app, you should never use Thread.sleep; use Task.sleep instead. I don’t say never often, but this is one of those cases.

Also, when you find yourself adding a Task.sleep, make sure that you’re using it to solve a real problem and not just because “without sleeping for 0.01 seconds this didn’t work properly”. Those kinds of sleeps usually mask serialization and queueing issues that should be solved instead of hidden.