Choosing between LazyVStack, List, and VStack in SwiftUI

SwiftUI offers several approaches to building lists of content. You can use a VStack if your list consists of a bunch of elements that should be placed on top of each other. Or you can use a LazyVStack if your list is really long. And in other cases, a List might make more sense.

In this post, I’d like to take a look at each of these components, outline their strengths and weaknesses and hopefully provide you with some insights about how you can decide between these three components that all place content on top of each other.

We’ll start off with a look at VStack. Then we’ll move on to LazyVStack and we’ll wrap things up with List.

Understanding when to use VStack

By far the simplest stack component that we have in SwiftUI is the VStack. It simply places elements on top of each other:

VStack {
  Text("One")
  Text("Two")
  Text("Three")
}

A VStack works really well when you only have a handful of items that you want to place on top of each other. And even though you’ll typically use a VStack for a small number of items, there’s no reason you couldn’t do something like this:

ScrollView {
  VStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        Image(systemName: model.iconName)
      }
    }
  }
}

When there are only a few items in models, this will work fine. Whether or not it’s the correct choice… I’d say it’s not.

If your models list grows to maybe 1000 items, you’ll be putting an equal number of views in your VStack. It will require a lot of work from SwiftUI to draw all of these elements.

Eventually this is going to lead to performance issues because every single item in your models is added to the view hierarchy as a view.

Now let's say these views also contain images that must be loaded from the network. SwiftUI is then going to load these images and render them too:

ScrollView {
  VStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        RemoteImage(url: model.imageURL)
      }
    }
  }
}

The RemoteImage in this case would be a custom view that enables loading images from the network.

When everything is placed in a VStack like I did in this sample, your scrolling performance will be horrendous.

A VStack is great for building a vertically stacked view hierarchy. But once your hierarchy starts to look and feel more like a scrollable list… LazyVStack might be the better choice for you.

Understanding when to use a LazyVStack

The LazyVStack component is functionally mostly the same as a regular VStack. The key difference is that a LazyVStack doesn’t add every view to the view hierarchy immediately.

As your user scrolls down a long list of items, the LazyVStack will add more and more views to the hierarchy. This means that you’re not paying a huge cost up front, and in the case of our RemoteImage example from earlier, you’re not loading images that the user might never see.

Swapping a VStack out for a LazyVStack is pretty straightforward:

ScrollView {
  LazyVStack {
    ForEach(models) { model in 
      HStack {
        Text(model.title)
        RemoteImage(url: model.imageURL)
      }
    }
  }
}

Our drawing performance should be much better with the LazyVStack compared to the regular VStack approach.
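If you want to see this laziness in action, a quick debugging sketch like the following (using plain indices instead of a real model type) logs rows as they’re created:

```swift
ScrollView {
  LazyVStack {
    ForEach(0..<1_000, id: \.self) { index in
      Text("Row \(index)")
        .onAppear {
          // Rows are only created as they scroll into view,
          // so this prints incrementally while you scroll.
          print("Row \(index) appeared")
        }
    }
  }
}
```

With a plain VStack, every row is created up front; with the LazyVStack you’ll see these messages trickle in as you scroll.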

In a LazyVStack, we’re free to use any type of view that we want, and we have full control over how the list ends up looking. We don’t gain any out-of-the-box functionality, which can be great if you require a higher level of customization for your list.

Next, let’s see how List is used to understand how this compares to LazyVStack.

Understanding when to use List

Where a LazyVStack provides us with maximum control, a List provides us with useful features right out of the box. Depending on where your list is used (for example, in a sidebar or as a full-screen list), List will look and behave slightly differently.

When you use views like NavigationLink inside of a list, you gain some small design tweaks to make it clear that this list item navigates to another view.

This is very useful for most cases, but you might not need any of this functionality.

List also comes with some built-in designs that allow you to easily create something that either looks like the Settings app, or something a bit more like a list of contacts. It’s easy to get started with List if you don’t require lots of customization.
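Switching between those built-in looks is mostly a matter of applying a list style; a minimal sketch (assuming the same models array from earlier) could look like this:

```swift
List(models) { model in
  Text(model.title)
}
// .insetGrouped resembles the Settings app; .plain is closer
// to a simple list of contacts.
.listStyle(.insetGrouped)
```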

Just like LazyVStack, a List will lazily evaluate its contents which means it’s a good fit for larger sets of data.

A super basic example of using List in the example that we saw earlier would look like this:

List(models) { model in 
  HStack {
    Text(model.title)
    RemoteImage(url: model.imageURL)
  }
}

We don’t have to use a ForEach, but we could if we wanted to. This can be useful when you’re using Sections in your list, for example:

List {
  Section("General") {
    ForEach(model.general) { item in 
      GeneralItem(item)
    }
  }

  Section("Notifications") {
    ForEach(model.notifications) { item in 
      NotificationItem(item)
    }
  }
}

When you’re using List to build something like a settings page, you can even skip using a ForEach altogether and hardcode your child views:

List {
  Section("General") {
    GeneralItem(model.colorScheme)
    GeneralItem(model.showUI)
  }

  Section("Notifications") {
    NotificationItem(model.newsletter)
    NotificationItem(model.socials)
    NotificationItem(model.iaps)
  }
}

The decision between a List and a LazyVStack for me usually comes down to whether or not I need or want List’s functionality. If I find that I want little to none of List’s features, odds are that I’m going to reach for LazyVStack in a ScrollView instead.

In Summary

In this post, you learned about VStack, LazyVStack and List. I explained some of the key considerations and performance characteristics for these components, without digging too deeply into solving every use case and possibility. Especially with List there’s a lot you can do. The key point is that List is a component that doesn’t always fit what you need from it. In those cases, it’s useful that we have a LazyVStack.

You learned that both List and LazyVStack are optimized for displaying large amounts of views, and that LazyVStack comes with the biggest amount of flexibility if you’re willing to implement what you need yourself.

You also learned that VStack is really only useful for smaller amounts of views. I love using it for layout purposes, but once I start putting together a list of views I prefer a lazier approach. Especially when I’m dealing with an unknown number of items.

Differences between Thread.sleep and Task.sleep explained

In Swift, we have several ways to “suspend” execution of our code. While that’s almost always a bad practice, I’d like to explain why Task.sleep really isn’t as problematic as you might expect when you’re familiar with Thread.sleep.

When you look for examples of debouncing or implementing task timeout they will frequently use Task.sleep to suspend a task for a given amount of time.
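To make that concrete, a minimal debouncer built on Task.sleep might look like the sketch below. The Debouncer type and its schedule method are illustrative names, not an existing API, and the sketch isn’t strict-concurrency-checked (under Swift 6 you’d want to isolate it, for example to the main actor):

```swift
final class Debouncer {
  private var task: Task<Void, Never>?

  func schedule(after interval: Duration, action: @escaping @Sendable () async -> Void) {
    // Cancel the previously scheduled work, if any.
    task?.cancel()
    task = Task {
      do {
        // Task.sleep suspends this task without blocking a thread.
        // It throws CancellationError if the task gets cancelled.
        try await Task.sleep(for: interval)
        await action()
      } catch {
        // Cancelled before the interval elapsed; do nothing.
      }
    }
  }
}
```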

The key difference is in how tasks and threads work in Swift.

In Swift concurrency, we often say that tasks replace threads. Or in other words, instead of worrying about threads, we worry about tasks.

While that’s not untrue, it’s also a little bit misleading. It sounds like tasks and threads are mostly analogous to each other, and that’s not the case.

A more accurate mental model is this: without Swift concurrency, you used dispatch queues to schedule work on threads. In Swift concurrency, you use tasks to schedule work on threads. In both cases, you don’t directly worry about thread management or creation.

Exploring Thread.sleep

When you suspend execution of a thread using Thread.sleep you prevent that thread from doing anything other than sleeping. It’s not working on dispatch queues, nor on tasks.

With GCD that’s bad but not hugely problematic because if there are no threads available to work on our queue, GCD will just spin up a new thread.

Swift Concurrency isn’t as eager to spin up threads; we only have a limited number of threads available.

This means that if you have 4 threads available to your program, Swift Concurrency can use those threads to run dozens of tasks efficiently. Sleeping one of these threads with Thread.sleep means that you now only have 3 threads available to run those same dozens of tasks.

If you hit a Thread.sleep in four tasks, that means you’re now sleeping every thread available to your program and your app will essentially stop performing any work at all until the threads resume.

What about Task.sleep?

Sleeping a task with Task.sleep is, in some ways, quite similar to Thread.sleep. You suspend execution of your task, preventing that task from making progress. The key difference is in how that suspension happens. Sleeping a thread just stops it from working, reducing the number of threads available. Sleeping a task suspends only the task, which allows the thread that was running your task to start running another task.

You’re not starving the system of resources with Task.sleep, and you’re not preventing your code from making forward progress, which is absolutely essential when you’re using Swift Concurrency.

If you find yourself needing to suspend execution in your Swift Concurrency app, you should never use Thread.sleep; use Task.sleep instead. I don’t say never often, but this is one of those cases.
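The difference is easy to see side by side (Thread.sleep comes from Foundation; the function names here are just for illustration):

```swift
import Foundation

func pauseBlocking() {
  // Blocks the current thread for two seconds; while it sleeps,
  // that thread can't run any other tasks.
  Thread.sleep(forTimeInterval: 2)
}

func pauseSuspending() async throws {
  // Suspends only this task; the thread is immediately free
  // to pick up other work until the sleep is over.
  try await Task.sleep(for: .seconds(2))
}
```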

Also, when you find yourself adding a Task.sleep you should also make sure that you’re using it to solve a real problem and not just because “without sleeping for 0.01 seconds this didn’t work properly”. Those kinds of sleeps usually mask serialization and queueing issues that should be solved instead of hidden.

Protecting mutable state with Mutex in Swift

Once you start using Swift Concurrency, actors will essentially become your standard choice for protecting mutable state. However, introducing actors also tends to introduce more concurrency than you intended which can lead to more complex code, and a much harder time transitioning to Swift 6 in the long run.

When you interact with state that’s protected by an actor, you have to do so asynchronously. The result is that you’re writing asynchronous code in places where you might never have intended to introduce concurrency at all.

One way to resolve that is to annotate, let’s say, your view model with the @MainActor annotation. This makes sure that all of your code runs on the main actor, which means that it’s thread-safe by default, and it also makes sure that you can safely interact with your mutable state.

That said, this might not be what you're looking for. You might want to have code that doesn't run on the main actor, that's not isolated by global actors or any actor at all, but you just want to have an old-fashioned thread-safe property.

Historically, there are several ways in which we can synchronize access to properties. We used to use Dispatch Queues, for example, when GCD was the standard for concurrency on Apple Platforms.

Recently, the Swift team added something called a Mutex to Swift. With mutexes, we have an alternative to actors for protecting our mutable state. I say alternative, but that's not entirely true. Actors have a very specific role in that they protect our mutable state in a concurrent environment where we want code to be asynchronous. Mutexes, on the other hand, are really useful when we don't want our code to be asynchronous and when the operation we’re synchronizing is quick (like assigning to a property).

In this post, we’ll explore how to use Mutex, when it's useful, and how you choose between a Mutex or an actor.

Mutex usage explained

A Mutex is used to protect state from concurrent access. In most apps, there will be a handful of objects that might be accessed concurrently. For example, a token provider, an image cache, and other networking-adjacent objects are often accessed concurrently.

In this post, I’ll use a very simple Counter object to make sure we don’t get lost in complex details and specifics that don’t impact or change how we use a Mutex.

When you increment or decrement a counter, that’s a quick operation. And in a codebase where the counter is available in several tasks at the same time, we want these increment and decrement operations to be safe and free of data races.

Wrapping your counter in an actor makes sense from a theory point of view because we want the counter to be protected from concurrent accesses. However, when we do this, we make every interaction with our actor asynchronous.

To somewhat prevent this, we could constrain the counter to the main actor, but that means that we're always going to have to be on the main actor to interact with our counter. We might not always be on the same actor when we interact with our counter, so we would still have to await interactions in those situations, and that isn't ideal.

In order to create a synchronous API that is also thread-safe, we could fall back to GCD and have a serial DispatchQueue.
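For reference, that GCD fallback might look like this sketch of the classic pattern (QueueCounter and the queue label are illustrative names):

```swift
import Foundation

final class QueueCounter {
  private let queue = DispatchQueue(label: "com.example.counter")
  private var value = 0

  var count: Int {
    // Reads and writes are funneled through a single serial queue,
    // so all access to `value` is serialized.
    queue.sync { value }
  }

  func increment() {
    queue.sync { value += 1 }
  }
}
```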

Alternatively, we can use a Mutex.

A Mutex is used to wrap a piece of state and it ensures that there's exclusive access to that state. A Mutex uses a lock under the hood and it comes with convenient methods to make sure that we acquire and release our lock quickly and correctly.

When we try to interact with the Mutex’s state, we have to wait for the lock to become available. This is similar to how an actor would work, with the key difference being that waiting for a Mutex is a blocking operation (which is why we should only use it for quick and efficient operations).

Here's what interacting with a Mutex looks like:

import Synchronization

class Counter {
    private let mutex = Mutex(0)

    func increment() {
        mutex.withLock { count in
            count += 1
        }
    }

    func decrement() {
        mutex.withLock { count in
            count -= 1
        }
    }
}

Our increment and decrement functions both acquire the Mutex, and mutate the count that’s passed to withLock.

Our Mutex is defined by calling the Mutex initializer and passing it our initial state. In this case, we pass it 0 because that’s the starting value for our counter.

In this example, I’ve defined two functions that safely mutate the Mutex’s state. Now let’s see how we can get the Mutex’s value:

var count: Int {
    return mutex.withLock { count in
        return count
    }
}

Notice that reading the Mutex’s value is also done with withLock. The key difference from increment and decrement here is that instead of mutating count, I just return it.

It is absolutely essential that we keep our operations inside of withLock short. We do not want to hold the lock for any longer than we absolutely have to, because any threads that are waiting for our lock are blocked while we hold it.

We can expand our example a little bit by adding a get and set to our count. This will allow users of our Counter to interact with count like it’s a normal property while we still have data-race protection under the hood:

var count: Int {
    get {
        return mutex.withLock { count in
            return count
        }
    }

    set {
        mutex.withLock { count in
            count = newValue
        }
    }
}

We can now use our Counter as follows:

let counter = Counter()

counter.count = 10
print(counter.count)

That’s quite convenient, right?

While we now have a type that is free of data races, using it in a context where there are multiple isolation contexts is a bit of an issue when we opt in to Swift 6, since our Counter doesn’t conform to the Sendable protocol.

The nice thing about Mutex and sendability is that mutexes are defined as being Sendable in Swift itself. This means that we can update our Counter to be Sendable quite easily, and without needing to use @unchecked Sendable!

final class Counter: Sendable {
    private let mutex = Mutex(0)

    // ....
}

At this point, we have a pretty good setup; our Counter is Sendable, it’s free of data-races, and it has a fully synchronous API!
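To convince ourselves that this actually holds up under concurrency, we can hammer the counter from a task group. This sketch repeats the Counter from above so it runs on its own, and it assumes the Synchronization module that ships with Swift 6 is available:

```swift
import Synchronization

final class Counter: Sendable {
  private let mutex = Mutex(0)

  func increment() {
    mutex.withLock { count in
      count += 1
    }
  }

  var count: Int {
    mutex.withLock { $0 }
  }
}

let counter = Counter()

// 1,000 concurrent increments; the Mutex guarantees that
// every single one is applied exactly once.
await withTaskGroup(of: Void.self) { group in
  for _ in 0..<1_000 {
    group.addTask { counter.increment() }
  }
}

print(counter.count) // 1000
```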

When we try to use our Counter to drive a SwiftUI view by making it @Observable, things get a little tricky:

struct ContentView: View {
    @State private var counter = Counter()

    var body: some View {
        VStack {
            Text("\(counter.count)")

            Button("Increment") {
                counter.increment()
            }

            Button("Decrement") {
                counter.decrement()
            }
        }
        .padding()
    }
}

@Observable
final class Counter: Sendable {
    private let mutex = Mutex(0)

    var count: Int {
        get {
            return mutex.withLock { count in
                return count
            }
        }

        set {
            mutex.withLock { count in
                count = newValue
            }
        }
    }
}

The code above will compile but the view won’t ever update. That’s because our computed property count is based on state that’s not explicitly changing. The Mutex will change the value it protects but that doesn’t change the Mutex itself.

In other words, we’re not mutating any data in a way that @Observable can “see”.

To make our computed property work with @Observable, we need to manually tell the framework when we're accessing or mutating the property (in this case, the \.count key path). Here's what that looks like:

var count: Int {
    get {
        self.access(keyPath: \.count)
        return mutex.withLock { count in
            return count
        }
    }

    set {
        self.withMutation(keyPath: \.count) {
            mutex.withLock { count in
                count = newValue
            }
        }
    }
}

By calling the access and withMutation methods that the @Observable macro adds to our Counter, we can tell the framework when we’re accessing and mutating state. This will tie into our Observable’s regular state tracking and it will allow our views to update when we change our count property.

Mutex or actor? How to decide?

Choosing between a mutex and an actor is not always trivial or obvious. Actors are really good in concurrent environments when you already have a whole bunch of asynchronous code. When you don't want to introduce async code, or when you're only protecting one or two properties, you're probably in the territory where a mutex makes more sense because the mutex will not force you to write asynchronous code anywhere.

I could pretend that this is a trivial decision and you should always use mutexes for simple operations like our counter and actors only make sense when you want to have a whole bunch of stuff working asynchronously, but the decision usually isn't that straightforward.

In terms of performance, actors and mutexes don't vary that much, so there's not a huge obvious performance benefit that should make you lean in one direction or the other.

In the end, your choice should be based around convenience, consistency, and intent. If you're finding yourself having to introduce a ton of async code just to use an actor, you're probably better off using a Mutex.

Actors should be considered an asynchronous tool that should only be used in places where you’re intentionally introducing and using concurrency. They’re also incredibly useful when you’re trying to wrap longer-running operations in a way that makes them thread-safe. Actors don’t block execution which means that you’re completely fine with having “slower” code on an actor.

When in doubt, I like to try both for a bit and then I stick with the option that’s most convenient to work with (and often that’s the Mutex...).

In Summary

In this post, you've learned about mutexes and how you can use them to protect mutable state. I showed you how they’re used, when they’re useful, and how a Mutex compares to an actor.

You also learned a little bit about how you can choose between an actor or a property that's protected by a mutex.

Making a choice between an actor or a Mutex is, in my opinion, not always easy but experimenting with both and seeing which version of your code comes out easier to work with is a good start when you’re trying to decide between a Mutex and an actor.

Using singletons in Swift 6

Singletons, generally speaking, get a bad rep. People don’t like them, they cause issues, and it’s just not great practice to rely on globally accessible mutable state in your apps. Instead, it’s preferable to practice explicit dependency passing, which makes your code more testable and reliable overall.

That said, sometimes you’ll have singletons. Or, more likely, you’ll want to have a shared instance of something that you need in a handful of places in your app:

class AuthProvider {
  static let shared = AuthProvider()

  // ...
}

In Swift 6, this will lead to issues because Swift 6 doesn’t like non-Sendable types, and it also doesn’t like global mutable state.

In this post, you’ll learn about the reasons that Swift 6 will flag your singletons and shared instances as problematic, and we’ll see what you can do to satisfy the Swift 6 compiler. We’ll run through several different errors that you can get for your shared instances depending on how you’ve structured your code.

Static property 'shared' is not concurrency-safe because it is nonisolated global shared mutable state

We’ll start off with an error that you’ll get for any static property that’s mutable regardless of whether this property is used for a shared instance or not.

For example:

class AuthProvider {
  // Static property 'shared' is not concurrency-safe because it 
  // is nonisolated global shared mutable state
  static var shared = AuthProvider()

  private init() {}
}

class GamePiece {
  // Static property 'power' is not concurrency-safe because it 
  // is nonisolated global shared mutable state
  static var power = 100
}

As you can see, both GamePiece and AuthProvider get the exact same error. They’re not concurrency-safe because they’re not isolated and they’re mutable. That means we might mutate this static var from multiple tasks, and that would lead to data races (and crashes).

To resolve this error, we can take different approaches depending on the usage of our static var. If we really need our static member to be mutable, we should make sure that we can mutate it safely, and that means we need to isolate our mutable state somehow.

Resolving the error when our static var needs to be mutable

We’ll start off by looking at our GamePiece; it really needs power to be mutable because we can upgrade its value throughout the imaginary game I have in mind.

Isolating GamePiece to the main actor

One approach is to isolate our GamePiece or static var power to the main actor:

// we can isolate our GamePiece to the main actor
@MainActor
class GamePiece {
  static var power = 100
}

// or we isolate the static var to the main actor
class GamePiece {
  @MainActor
  static var power = 100
}

The first option makes sense when GamePiece is a class that’s designed to closely work with our UI layer. When we only ever work with GamePiece from the UI, it makes sense to isolate the entire object to the main actor. This simplifies our code and makes it so that we’re not going from the main actor’s isolation to some other isolation and back all the time.

Alternatively, if we don’t want or need the entire GamePiece to be isolated to the main actor, we can also choose to only isolate our static var to the main actor. This means that we’re reading and writing power from the main actor at all times, but we can work with other methods and properties on GamePiece from other isolation contexts too. This approach generally leads to more concurrency in your app, and it will make your code more complex overall.
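With the second option, accessing power from outside the main actor requires an await. A quick sketch of what that looks like (readPower is a made-up function for illustration):

```swift
class GamePiece {
  @MainActor
  static var power = 100
}

// From a nonisolated context we have to hop to the main actor:
func readPower() async -> Int {
  await GamePiece.power
}
```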

There’s a second option that we can reach for, but it’s one that you should only use if constraining your type to a global actor makes no sense.

It’s nonisolated(unsafe).

Allowing static var with nonisolated(unsafe)

Sometimes you’ll know that your code is safe. For example, you might know that power is only accessed from a single task at a time, but you don’t want to encode this into the type by making the property main actor isolated. This makes sense because maybe you’re not accessing it from the main actor but you’re using a global dispatch queue or a detached task.

In these kinds of situations the only real correct solution would be to make GamePiece an actor. But this is often non-trivial, introduces a lot of concurrency, and overall makes things more complex. When you’re working on a new codebase, the consequences wouldn’t be too bad and your code would be more “correct” overall.

In an existing app, you usually want to be very careful about introducing new actors. And if constraining to the main actor isn’t an option you might need an escape hatch that tells the compiler “I know you don’t like this, but it’s okay. Trust me.”. That escape hatch is nonisolated(unsafe):

class GamePiece {
  nonisolated(unsafe) static var power = 100
}

When you mark a static var as nonisolated(unsafe) the compiler will no longer perform data-race protection checks for that property and you’re free to use it however you please.

When things are working well, that’s great. But it’s also risky; you’re now taking on the manual responsibility of preventing data races. And that’s a shame, because Swift 6 aims to help us catch potential data races at compile time!

So use nonisolated(unsafe) sparingly, mindfully, and try to get rid of it as soon as possible in favor of isolating your global mutable state to an actor.

Note that in Swift 6.1 you could make GamePiece an actor and the Swift compiler will allow you to have static var power = 100 without issues. This is a bug in the compiler and still counts as a potential data race. A fix has already been merged to Swift's main branch so I would expect that Swift 6.2 emits an appropriate error for having a static var on an actor.

Resolving the error for shared instances

When you’re working with a shared instance, you typically don’t need the static var to be a var at all. When that’s the case, you can actually resolve the original error quite easily:

class AuthProvider {
  static let shared = AuthProvider()

  private init() {}
}

Make the property a let instead of a var and Static property 'shared' is not concurrency-safe because it is nonisolated global shared mutable state goes away.

A new error will appear though…

Static property 'shared' is not concurrency-safe because non-'Sendable' type 'AuthProvider' may have shared mutable state

Let’s dig into that error next.

Static property 'shared' is not concurrency-safe because non-'Sendable' type may have shared mutable state

While the new error sounds a lot like the one we had before, it’s quite different. The first error complained that the static var itself wasn’t concurrency-safe; this new error isn’t complaining about the static let itself. It’s complaining that we have a globally accessible instance of our type (AuthProvider) that might not be safe to interact with from multiple tasks.

If multiple tasks attempt to read or mutate state on our instance of AuthProvider, every task would interact with the exact same instance. So if AuthProvider can’t handle that correctly, we’re in trouble.

The way to fix this is to make AuthProvider a Sendable type. If you’re not sure that you fully understand Sendable just yet, make sure to read this post about Sendable so you’re caught up.

The short version of Sendable is that a Sendable type is a type that is safe to interact with from multiple isolation contexts.

Making AuthProvider Sendable

For reference types like our AuthProvider being Sendable would mean that:

  • AuthProvider can’t have any mutable state
  • All members of AuthProvider must also be Sendable
  • AuthProvider must be a final class
  • We manually conform AuthProvider to the Sendable protocol

In the sample code, AuthProvider didn’t have any state at all, so fixing the error for our sample is as simple as the following:

final class AuthProvider: Sendable {
  static let shared = AuthProvider()

  private init() {}
}

By making AuthProvider a Sendable type, the compiler will allow us to have a shared instance without any issues because the compiler knows that AuthProvider can safely be used from multiple isolation contexts.

But what if we add some mutable state to our AuthProvider?

final class AuthProvider: Sendable {
  static let shared = AuthProvider()

  // Stored property 'currentToken' of 
  // 'Sendable'-conforming class 'AuthProvider' is mutable
  private var currentToken: String?

  private init() {}
}

The compiler does not allow our Sendable type to have mutable state. It doesn’t matter that this state is private, it’s simply not allowed.

Using nonisolated(unsafe) as an escape hatch again

If we have a shared instance with mutable state, we have several options available to us. We could remove the Sendable conformance and make our static let a nonisolated(unsafe) property:

class AuthProvider {
  nonisolated(unsafe) static let shared = AuthProvider()

  private var currentToken: String?

  private init() {}
}

This works but it’s probably the worst option we have because it doesn’t protect our mutable state from data races.

Leveraging a global actor to make AuthProvider Sendable

Alternatively, we could isolate our type to the main actor just like we did with our static var:

// we can isolate our class
@MainActor
class AuthProvider {
  static let shared = AuthProvider()

  private var currentToken: String?

  private init() {}
}

// or just the shared instance
class AuthProvider {
  @MainActor
  static let shared = AuthProvider()

  private var currentToken: String?

  private init() {}
}

The pros and cons of these solutions are the same as they were for the static var. If we mostly use AuthProvider from the main actor this is fine, but if we frequently need to work with AuthProvider from other isolation contexts it becomes a bit of a pain.

Making AuthProvider an actor

My preferred solution is to either make AuthProvider conform to Sendable like I showed earlier, or to make AuthProvider into an actor:

actor AuthProvider {
  static let shared = AuthProvider()

  private var currentToken: String?

  private init() {}
}

Actors in Swift are always Sendable which means that an actor can always be used as a static let.
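Do note that every interaction with the actor version becomes asynchronous. A sketch with made-up token methods shows the shape of that API:

```swift
actor AuthProvider {
  static let shared = AuthProvider()

  private var currentToken: String?

  private init() {}

  func update(token: String) {
    currentToken = token
  }

  func token() -> String? {
    currentToken
  }
}

// Every call site now needs an await:
await AuthProvider.shared.update(token: "abc")
let token = await AuthProvider.shared.token()
```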

There’s one more escape hatch…

Let’s say we can’t make AuthProvider an actor because we’re working with existing code and we’re not ready to pay the price of introducing loads of actor-related concurrency into our codebase.

Maybe you’ve had AuthProvider in your project for a while and you’ve taken appropriate measures to ensure it’s concurrency-safe.

If that’s the case, @unchecked Sendable can help you bridge the gap.

Using @unchecked Sendable as an escape hatch

Marking our class as @unchecked Sendable can be done as follows:

final class AuthProvider: @unchecked Sendable {
  static let shared = AuthProvider()

  private var currentToken: String?

  private init() {}
}

An escape hatch like this should be used carefully and should ideally be considered a temporary fix. The compiler won’t complain but you’re open to data-races that the compiler can help prevent altogether; it’s like a sendability force-unwrap.
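If you do keep @unchecked Sendable around for a while, make sure the claim is actually true by guarding the mutable state yourself. A common sketch uses an NSLock (the token accessor here is illustrative, not part of the earlier examples):

```swift
import Foundation

final class AuthProvider: @unchecked Sendable {
  static let shared = AuthProvider()

  private let lock = NSLock()
  private var currentToken: String?

  private init() {}

  // Every read and write goes through the lock; this is what
  // makes the @unchecked Sendable claim hold up in practice.
  var token: String? {
    get { lock.withLock { currentToken } }
    set { lock.withLock { currentToken = newValue } }
  }
}
```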

In Summary

Swift 6 allows singletons, there’s no doubt about that. It does, however, impose pretty strict rules on how you define them, and Swift 6 requires you to make sure that your singletons and shared instances are safe to use from multiple tasks (isolation contexts) at the same time.

In this post, you’ve seen several ways to get rid of two shared instance related errors.

First, you saw how you can have static var members in a way that’s concurrency-safe by leveraging actor isolation.

Next, you saw that static let is another way to have a concurrency-safe static member as long as the type of your static let is concurrency-safe. This is what you’ll typically use for your shared instances.

I hope this post has helped you grasp static members and Swift 6 a bit better, and that you’re now able to leverage actor isolation where needed to correctly have global state in your apps.

Using Instruments to profile a SwiftUI app

A key skill for every app developer is being able to profile your app's performance. Your app might look great on the surface, but if it doesn’t perform well, it’s going to feel off—sometimes subtly, sometimes very noticeably. Beautiful animations, slick interactions, and large data sets all fall flat if the app feels sluggish or unresponsive.

Great apps respond instantly. They show that you’ve tapped something right away, and they make interactions feel smooth and satisfying.

To make sure your app behaves like that, you’ll need to keep an eye on its performance. In this post, we’ll look at how you can use Instruments to profile your SwiftUI app. We’ll cover how to detect slow code, track view redraws, and understand when and why your UI updates. If you're interested in a deeper dive into SwiftUI redraws or profiling slow code, check out these posts:

We’ll start by building your app for profiling, then look at how to use Instruments effectively—both for SwiftUI specifics and general performance tracking.

Building an app for profiling

The first step is to build your app using Product > Profile, or by pressing Cmd + I (sidenote: I highly recommend learning shortcuts for commands you use frequently).

Use Product->Profile or cmd+i

This builds your app in Release mode, using the same optimizations and configurations as your production build.

This is important because your development build (Debug mode) isn’t optimized. You might see performance issues in Debug that don’t exist in Release. I recently had this happen while working with large data sets—code that ran pretty horribly in Debug was optimized in Release to the point of no longer being a problem at all.

When this happens, it usually means there’s some inefficiency under the hood, but I wouldn’t spend too much time on issues that disappear in Release mode when you have bigger issues to work on.

Once your app is built and Instruments launches, you’ll see a bunch of templates. For SwiftUI apps, the SwiftUI template is usually the right choice—even if you’re not profiling SwiftUI-specific issues. It includes everything you need for a typical SwiftUI app.

Choosing a template

After picking your template, Instruments opens its main window. Hit the red record button to start profiling. Your app will launch, and Instruments will start collecting data based on the instruments in your selected template. The SwiftUI template collects everything in real time.

Instruments overview

Reading the collected data

Instruments organizes its data into several lanes. You’ll see lanes like View Body, View Properties, and Core Animation Commits. Let’s go through them from top to bottom.

Viewing recorded data

Note that I’m testing on a physical device. Testing on the simulator can work okay for some use cases but results can vary wildly between simulators and devices due to the resources available to each. It’s always recommended to use a device when testing for performance.

The View Body lane

This lane shows how often a SwiftUI view’s body is evaluated. Whenever SwiftUI detects a change in your app’s data, it re-evaluates the body of any views that depend on that data. It then determines whether any child views need to be redrawn.

So, this lane essentially shows you which views are being redrawn and how often. If you click the timing summary, you’ll see how long these evaluations take—total, min, max, and average durations. This helps you identify whether a view’s body is quick or expensive to evaluate.
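To make this concrete, here's a hypothetical view whose body would stand out in the View Body lane: it sorts its data on every evaluation, so each redraw pays that cost again. Moving the sort into your model, computed once, is the usual fix:

```swift
import SwiftUI

struct ItemList: View {
  let items: [String]

  var body: some View {
    // Sorting inside body runs on every evaluation; with a large
    // array this shows up as a slow body in the View Body lane.
    List(items.sorted(), id: \.self) { item in
      Text(item)
    }
  }
}
```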

Exploring the view body lane

By default, Instruments shows data for the entire profiling session. That means a view that was evaluated multiple times may have been triggered by different interactions over time.

Usually, you’ll want to profile a specific interaction. You can do this by dragging across a timeframe in the lane. This lets you zoom in on a specific window of activity—like what happens when you tap a button.

Once you’ve zoomed in, you can start to form a mental model.

For example, if tapping a button increases a counter, you’d expect the counter view’s body to be evaluated. If other views like the button’s parent also redraw, that might be unexpected. Ask yourself: did I expect this body to be re-evaluated? If not, it’s time to look into your code.

In my post on SwiftUI view redraws, I explain more about what can cause SwiftUI to re-evaluate views. It’s worth a read if you want to dig deeper.

View Properties and Core Animation Commits

The View Properties and Core Animation Commits lanes are ones I don’t use very often.

In View Properties, you can see which pieces of state SwiftUI tracked for your views and what their values were. In theory, you can figure out how your data model changed between body evaluations—but in practice, it’s not always easy to read.

Core Animation Commits shows how much work Core Animation or the GPU had to do when redrawing views. Usually, it’s not too heavy, but if your view body takes a long time to evaluate, the commit tends to be heavier too.

I don’t look at this lane in isolation, but it helps to get a sense of how expensive redrawing became after a body evaluation.

Reading the Time Profiler

The Time Profiler might be the most useful lane in the SwiftUI Instruments template. It shows you which code was running on which thread, and how long it was running.

You’re essentially seeing snapshots of the CPU at short intervals. This gives you insight into how long specific functions were active.

When profiling SwiftUI apps, you’ll usually be interested in code related to your data model or views. If a function updates your data and appears slow, or if it’s called from a view body, that might explain a performance issue.

Configuring the time profiler

Getting comfortable with the time profiler takes a bit of practice. I recommend playing around with the call tree settings. I usually:

  • Separate by thread
  • Invert the call tree
  • Hide system libraries

Sometimes, I tweak these settings depending on what I’m trying to find. It’s worth exploring.

In summary

Profiling your code and understanding how to use Instruments is essential if you want to build responsive, high-quality apps. As your app grows, it gets harder to mentally track what should happen during an interaction.

The tricky part about using Instruments is that even with a ton of data, you need to understand what your app is supposed to be doing. Without that, it’s hard to tell which parts of the data matter. Something might be slow—but that might be okay if it’s processing a lot of data.

Still, getting into the habit of profiling your app regularly helps you build a sense of what’s normal and what’s not. The earlier and more often you do this, the better your understanding becomes.

Staying productive as an indie developer

Okay. I’m using the term indie developer loosely here. I don’t consider myself to be an indie developer. But I am independent, mostly. I run my own business where I work on client apps, workshops, my books, this website, my YouTube channel, and more. So I think I qualify as indie, partially.

Either way, in this post I’d like to explore something that I don’t write or talk about much: how do I, as someone who manages my own time and work, make sure that I stay productive (in a sustainable way)?

A lot of folks that I’ve talked to over the years have their own systems that help them plan, structure, and execute work. Some folks are just so eager to always be working that they have no problem staying busy at all. Others, myself included, struggle with maintaining a rhythm, finding balance, and avoiding procrastination.

Over the last half year or so I’ve been very actively trying to figure out what works for me, and in this blog post I’d like to share some of the things that I’ve learned in that time.

It all starts with a plan

The quality of work that I’ll do in a day, week, or month all depends on my plan for that timeframe. If I leave my Friday afternoon with unfinished business, and I spend my Monday trying to figure out where I left off without making a plan for the week, it’s likely that the week will not be very productive.

That’s because once I start working without a plan I’m in a constant mode of trying to figure out what’s next. I’ll find something to do, do it, and then spend a lot of time wondering about “what’s next?”.

Usually that means I take a break, go on social media, or watch some YouTube videos. Basically, I start to procrastinate.

I’ve tried to manage this by allocating every morning and afternoon to a type of work. I started off by separating “client” work and “blog” work. This kind of worked but it was too ambiguous. I’d spend a lot of time trying to figure out which client to work for, and what to work on. Or I’d be thinking about whether I wanted to work on an app, a YouTube video, my blog, or something else.

The result?

I’d end up doing far less than I wanted to. Especially when it came to “blog” work because there’s virtually no accountability there.

So, I solved this problem by planning differently. Instead of allocating stuff to days I’d have a list of things to do for the week. In my calendar I would still block some times for types of work, but my list is leading.

When it’s time to do client work, I look at my list. Who’s on top (the list is sorted by urgency)? That’s what I’ll work on. It’s simple yet effective.

I thought this system would be too loose to allow me to plan properly, but I think it’s working well. I started doing this about three months ago and I can definitely tell that my output is getting better. I’m also feeling better about my work than I did half a year ago.

So, start with a plan. Make sure you always know what’s next. Knowing what you’re supposed to do makes doing it more straightforward.

Find the right environment

I know, we’re all supposed to love working from home. I’m not great at it. I love the idea of it, but I’m really not good at it.

Whenever I work from home, there’s loads of distractions. I can hang around and watch TV for a bit. Do some work in the garden. Run a quick errand. Fold laundry. Cook.

These are all things I either enjoy doing, or things that need to get done one way or another.

Focusing while working from home can be quite a challenge for me, and I’m finding out more and more that my desk isn’t always the place where I’m most productive.

Going some place that’s not the same desk every day can be a big boost for my productivity. I’m not sure whether a change of environment just keeps my brain active, or I’m better at performing certain tasks from a different location (sometimes it’s just a different place in my house), but changing it up works for me.

If you’re feeling unproductive and you feel like you’ve got a handle on knowing what you’re supposed to do but doing it seems hard, try and switch things up a bit. Work from a coffee shop, a co-working space, a library, or your dinner table instead of your desk and see how it feels. Maybe it can help you focus on different things.

Track your work

I used to think that having rigid systems for work would suck the joy out of doing the work. The more I rely on systems and routines the more I find that I enjoy the work I’m doing because it allows me to be more focused, less stressed, and feel more in control.

By making tickets for everything I do, I have a really good sense of where I should focus my attention. I can easily measure how productive I’ve been over time just by looking at the tickets I’ve closed.

I’m using GitHub projects for tracking my work, and I’m enjoying it a lot. I can link my tickets to issues which makes it easy to link everything together. By ordering my tickets based on their priority I’m basically able to run through my list from top to bottom throughout the week and know that I’m always doing the right thing.

In addition to tracking my work through tickets, I’ve created a utility app for myself called Chrona. With Chrona, I can track pomodoro timers throughout my workday. Every timer gets a description of what I’ve worked on and a rating for how well I think I did productivity-wise.

The idea is that Chrona will eventually help me gain more understanding of the way I do my work, and whether I have any times of day where my productivity drops significantly. If there are any patterns in my productivity or the type of work I do, I can anticipate them when I make my plans, or I could decide to structure my work week around the times and days that work well for me.

Chrona is one of four apps that I intend to ship this year, with the main thought being that these apps should be useful to me before I think about how to turn them into products. That’s why Chrona was built to be a simple app that only does a few things. It’s not configurable or customizable, and that’s on purpose. I might add more features in the future if enough folks use the app.

In Summary

Productivity is a complex topic, and if there’s one thing I’ve learned over time it’s that anybody who claims to have the answer for all your productivity woes is lying to you. Yes, there are proven systems out there. But the effectiveness of these systems isn’t guaranteed for everybody. Some folks work differently from others, and that’s completely fine.

If you’re struggling to keep yourself motivated and focused, I highly recommend actively trying to understand what’s making things hard for you. When and why are you procrastinating, and what can you do to fix that? For me, knowing what’s next (planning), where I work (environment), and having objective insights (tracking) seem to be having a positive impact.

The key factor for me seems to be that there’s no point in trying to force any particular system on myself. If it doesn’t work, I try to understand why, and then I make changes to my routines to see whether things improve. It’s an iterative process, and it’s not a quick one.

Implementing Task timeout with Swift Concurrency

Swift Concurrency provides us with loads of cool and interesting capabilities. For example, Structured Concurrency allows us to write a hierarchy of tasks that always ensures all child tasks are completed before the parent task can complete. We also have features like cooperative cancellation in Swift Concurrency which means that whenever we want to cancel a task, that task must proactively check for cancellation, and exit when needed.

One API that Swift Concurrency doesn't provide out of the box is an API to have tasks that timeout when they take too long. More generally speaking, we don't have an API that allows us to "race" two or more tasks.

In this post, I'd like to explore how we can implement a feature like this using Swift's Task Group. If you're looking for a full-blown implementation of timeouts in Swift Concurrency, I've found this package to handle it well, and in a way that covers most (if not all edge cases).

Racing two tasks with a Task Group

At the core of implementing a timeout mechanism is the ability to race two tasks:

  1. A task with the work you're looking to perform
  2. A task that handles the timeout

Whichever task completes first dictates the outcome of our operation. If the task with the work completes first, we return the result of that work. If the task with the timeout completes first, we might throw an error or return some default value.

We could also say that we don't implement a timeout but we implement a race mechanism where we either take data from one source or the other, whichever one comes back fastest.

We could abstract this into a function that has a signature that looks a little bit like this:

func race<T>(
  _ lhs: sending @escaping () async throws -> T,
  _ rhs: sending @escaping () async throws -> T
) async throws -> T {
  // ...
}

Our race function takes two asynchronous closures that are sending, which means that these closures closely mimic the API provided by, for example, Task and TaskGroup. To learn more about sending, you can read my post where I compare sending and @Sendable.

The implementation of our race method can be relatively straightforward:

func race<T>(
  _ lhs: sending @escaping () async throws -> T,
  _ rhs: sending @escaping () async throws -> T
) async throws -> T {
  return try await withThrowingTaskGroup(of: T.self) { group in
    group.addTask { try await lhs() }
    group.addTask { try await rhs() }

    defer { group.cancelAll() }

    return try await group.next()!
  }
}

We're creating a TaskGroup and adding both closures to it. This means that both closures will start making progress as soon as possible (usually immediately). Then, I wrote return try await group.next()!. This line will wait for the next result in our group. In other words, the first task to complete (either by returning something or throwing an error) is the task that "wins".

The other task, the one that's still running, will be marked as cancelled and we ignore its result.

There are some caveats around cancellation that I'll get to in a moment. First, I'd like to show you how we can use this race function to implement a timeout.

Implementing timeout

Using our race function to implement a timeout means that we should pass two closures to race that do the following:

  1. One closure should perform our work (for example load a URL)
  2. The other closure should throw an error after a specified amount of time

We'll define our own TimeoutError for the second closure:

enum TimeoutError: Error {
  case timeout
}

Next, we can call race as follows:

let result = try await race({ () -> String in
  let url = URL(string: "https://www.donnywals.com")!
  let (data, _) = try await URLSession.shared.data(from: url)
  return String(data: data, encoding: .utf8)!
}, {
  try await Task.sleep(for: .seconds(0.3))
  throw TimeoutError.timeout
})

print(result)

In this case, we either load content from the web, or we throw a TimeoutError after 0.3 seconds.

This approach to implementing a timeout doesn't look very nice. We can define another function to wrap up our timeout pattern, and we can improve our Task.sleep by setting a deadline instead of a duration. A deadline ensures that our task never sleeps longer than we intended.

The key difference here is that if our timeout task starts running "late", it will still sleep for 0.3 seconds, which means it might take a bit longer than 0.3 seconds for the timeout to hit. When we specify a deadline, we make sure that the timeout hits 0.3 seconds from now, which means the task might effectively sleep a bit shorter than 0.3 seconds if it started late.

It's a subtle difference, but it's one worth pointing out.

Let's wrap our call to race and update our timeout logic:

func performWithTimeout<T>(
  of timeout: Duration,
  _ work: sending @escaping () async throws -> T
) async throws -> T {
  return try await race(work, {
    try await Task.sleep(until: .now + timeout)
    throw TimeoutError.timeout
  })
}

We're now using Task.sleep(until:) to make sure we set a deadline for our timeout.

Running the same operation as before now looks as follows:

let result = try await performWithTimeout(of: .seconds(0.5)) {
  let url = URL(string: "https://www.donnywals.com")!
  let (data, _) = try await URLSession.shared.data(from: url)
  return String(data: data, encoding: .utf8)!
}

It's a little bit nicer this way since we don't have to pass two closures anymore.

There's one last thing to take into account here, and that's cancellation.

Respecting cancellation

Task cancellation in Swift Concurrency is cooperative. This means that any task that gets cancelled must "accept" that cancellation by actively checking for cancellation and exiting early when cancellation has occurred.

At the same time, TaskGroup leverages Structured Concurrency. This means that a TaskGroup cannot return until all of its child tasks have completed.

When we reach a timeout scenario in the code above, the closure that runs our timeout throws an error. In our race function, the TaskGroup receives this error on the try await group.next() line. This means that we want to throw an error from our TaskGroup closure, which signals that our work is done. However, we can't do this until the other task has also ended.

As soon as we want our error to be thrown, the group cancels all its child tasks. Built-in methods like URLSession's data and Task.sleep respect cancellation and exit early. However, let's say you've already loaded data from the network and the CPU is crunching a huge amount of JSON; that process will not be aborted automatically. This could mean that even though your work timed out, you won't receive a timeout until after your heavy processing has completed.

And at that point you might have still waited for a long time, and you're throwing out the result of that slow work. That would be pretty wasteful.

When you're implementing timeout behavior, you'll want to be aware of this. And if you're performing expensive processing in a loop, you might want to sprinkle some calls to try Task.checkCancellation() throughout your loop:

for item in veryLongList {
  await process(item)
  // stop doing the work if we're cancelled
  try Task.checkCancellation()
}

// no point in checking here, the work is already done...

Note that adding a check after the work is already done and just before you return results doesn't really do much. You've already paid the price and you might as well use the results.

In Summary

Swift Concurrency comes with a lot of built-in mechanisms but it's missing a timeout or task racing API.

In this post, we implemented a simple race function that we then used to implement a timeout mechanism. You saw how we can use Task.sleep to set a deadline for when our timeout should occur, and how we can use a task group to race two tasks.

We ended this post with a brief overview of task cancellation, and how not handling cancellation can lead to a less effective timeout mechanism. Cooperative cancellation is great but, in my opinion, it makes implementing features like task racing and timeouts a lot harder due to the guarantees made by Structured Concurrency.

How to plan a migration to Swift 6

Swift 6 has been available to us for the better part of a year now, and more and more teams are considering or looking at migrating to the Swift 6 language mode. This typically involves trying to turn on the language mode or turning on strict concurrency, seeing a whole bunch of warnings or errors, and then deciding that today is not the day to proceed with this migration.

Today I would like to propose an approach to how you can plan your migration in a way that won’t scare you out of attempting the migration before you’ve even started.

Before you go through this entire post expecting me to tell you how to go to Swift 6 within a matter of days or weeks, I'm afraid I'm going to have to disappoint you.

Migrating to Swift 6, for a lot of apps, is going to be a very slow and lengthy process and it's really a process that you don't want to rush.

Taking an initial inventory

Before you start to migrate your codebase, I would highly recommend that you take inventory of the state of your codebase. This means that you should take a look at how modularized your codebase is, which dependencies you have, and maybe most importantly how much concurrency you’re really using right now. Find out how often you’re explicitly and purposefully leaving the main thread, and try to understand how much of your code will run on the main thread.

You should also look at your team and figure out how up-to-date they are and how comfortable they already are with Swift Concurrency. In the end, the entire team will need to be able to work on and with your Swift 6 codebase.

On a code level, it's essential to understand how much concurrency you actually need, because Swift Concurrency is, by design, going to introduce a lot of concurrency into your app where maybe you don't need it. That’s why it’s so important to figure out the amount of concurrency you’ll require beforehand by analyzing what you have now.

For example, if you have a view and you have a view model, and that view model talks to another layer, then probably you are doing most of the work on the main thread right now.

Once you hit your networking layer, your network calls will run somewhere else, and when your networking related functions invoke their callbacks, those will typically run on a background thread, and then you come back to the main thread to update your UI.

In this scenario, you don't need a lot of concurrency; in fact, I would say that you don't need concurrency beyond what URLSession provides at all. So once you’re adopting Swift Concurrency, you’ll want to understand how you can structure your code to not leave the main thread for every async call.

You might already have adopted async-await, and that's completely fine—it probably means that you do have more concurrency than you actually need. Every nonisolated async function that you write will, by default, run on a background thread. You don’t always need this; you’ll most likely want to explicitly isolate some of your work to the main actor to prevent leveraging concurrency in places where it’s simply not benefitting you.
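As a sketch of this difference (DataLoader and refreshUI are hypothetical names): a nonisolated async function hops to the concurrent executor by default, while a @MainActor function keeps its own code on the main thread even though awaits inside it can still suspend:

```swift
struct DataLoader {
  // A nonisolated async function: by default this runs on the
  // concurrent thread pool, not the main thread.
  func load() async -> [Int] {
    Array(0..<5)
  }
}

// Marking the function @MainActor keeps its own code on the main
// actor; the await below hops off and comes back here afterwards.
@MainActor
func refreshUI(with loader: DataLoader) async {
  let values = await loader.load()
  _ = values // update UI state with values here
}
```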

You'll also want to make sure that you understand how dependent or how coupled your codebase is because the more coupling you have and the less abstractions and modularization you have, the more complexities you might run into. Understanding your codebase deeply is a prerequisite to moving to Swift 6.

Once you understand your codebase, you might want to look at modularizing. I would say this is the best option. It does make migrating a little bit easier.

So let's talk about modularization next.

Modularizing your codebase

When you migrate to Swift 6, you'll find that a lot of objects in your code are being passed from one place to another, and when you start to introduce concurrency in one part of the code, you’re essentially forced to migrate anything that depends on that part of your codebase.

Having a modularized codebase means that you can take your codebase and migrate it over time. You can move component by component, rather than being forced to move everything all at once.

You can use features like @preconcurrency to make sure that your app can still use your Swift 6 modules without running into all kinds of isolation or sendability warnings until your app also opts in to strict concurrency.
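In practice, that looks like annotating the import in the target that hasn’t been migrated yet. Foundation stands in for your own module name here; with a module that actually needs it, this suppresses sendability warnings originating from that module until you opt in to strict concurrency:

```swift
// In the not-yet-migrated target, annotate the import of the
// module in question. The compiler may warn if the annotation
// turns out to be unnecessary, which is a useful signal that
// you can drop it.
@preconcurrency import Foundation
```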

If you don't want to modularize your codebase or you feel your codebase is way too small to be modularized, that's completely fine. I'm just saying that the smaller your components are, the easier your migration is going to be.

Once you know the state your codebase is in and you feel comfortable with how everything is, it's time to turn on strict concurrency checks.

Turning on strict concurrency checks

Before you turn on the Swift 6 language mode, it is recommended to turn on strict concurrency checking for the modules that you want to migrate. You can do this both for SPM packages and for your app target in Xcode.

I would recommend to do this on a module by module basis.

So if you want to refactor your models package first, turn on strict concurrency checks for your model package, but not yet for your app. Turning on strict concurrency for only one module allows you to work on that package without forcing your app to opt into all of the sendability and isolation checks related to the package you’re refactoring.

Being able to migrate one package at a time is super useful because it makes everything a lot easier to reason about since you’re reasoning about smaller bits of your code.

Once you have strict concurrency checks turned on, you're going to see a whole bunch of warnings for the packages and targets where you've enabled strict concurrency, and you can start solving them. For example, it’s likely that you'll run into issues like passing main actor isolated objects to sendable closures.

You'll want to make sure that you understand these errors before you proceed.

You want to make sure that all of your warnings are resolved before you turn on Swift 6 language mode, and you want to make sure that you've got a really good sense of how your code is supposed to work.

The hardest part in solving your strict concurrency warnings is that making the compiler happy sometimes just isn't enough. You'll frequently want to make sure that you actually reason about the intent of your code rather than just making the compiler happy.

Consider the following code example:

func loadPages() {
  for page in 0..<10 {
    loadPage(page) { result in 
      // use result
    }
  }
}

We're iterating over a list of numbers and we're making a bunch of network calls. These network calls happen concurrently and our function doesn't wait for them all to complete. Now, the quickest way to migrate this over to Swift concurrency might be to write an async function and a for loop that looks like this:

func loadPages() async throws {
  for page in 0..<10 {
    let result = try await loadPage(page)
    // use result
  }
}

The meaning of this code has now changed entirely. We're making network calls one by one, and the function doesn't return until every call is complete. If we do want to introduce Swift Concurrency here and keep the same semantics, we would have to create unstructured tasks for every single network call, or we could use a task group and kick off all our network calls in parallel that way.

Using a task group would change the way this function works, because the function would have to wait for the task group to complete rather than just letting unstructured tasks run. In this refactor, it’s crucial to understand what structured concurrency is and when it makes sense to create unstructured tasks.
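A sketch of both options, using a stubbed loadPage(_:) in place of a real network call:

```swift
// Hypothetical stand-in for a real network call.
func loadPage(_ page: Int) async throws -> String {
  "page \(page)"
}

// Option 1: unstructured tasks. Same semantics as the original
// callback version — loadPages() returns immediately and the
// calls run concurrently in the background.
func loadPages() {
  for page in 0..<10 {
    Task {
      let result = try await loadPage(page)
      _ = result // use result
    }
  }
}

// Option 2: a task group. The calls still run in parallel, but
// the function now waits for all of them before returning.
func loadAllPages() async throws -> [String] {
  try await withThrowingTaskGroup(of: String.self) { group in
    for page in 0..<10 {
      group.addTask { try await loadPage(page) }
    }
    return try await group.reduce(into: [String]()) { $0.append($1) }
  }
}
```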

You're having to think about what the intent of the code is before you migrate and then also how and if you want to change that during your migration. If you want to keep everything the same, it's often not enough to keep the compiler happy.

While teaching teams about Swift Concurrency, I've found it really important to know exactly which tools are out there and to think about how you should reason about your code.

Once you've turned on Swift Concurrency checks, it's time to make sure that your entire team knows what to do.

Ensuring your team has all the knowledge they need

I've seen several companies attempt migrations to SwiftUI, Swift Data, and Swift Concurrency. They often take approaches where a small team does all the legwork in terms of exploring and learning these technologies before the rest of the team is requested to learn them too and to adopt them. However, this often means that there's a small team inside of the company that you could consider to be experts. They'll have had access to resources, they'll have had time to train, and once they come up with the general big picture of how things should be done, the rest of the team kind of has to learn on the job. Sometimes this works well, but often this breaks down because the rest of the team simply needs to fully understand what they're dealing with before they can effectively learn.

So I always recommend if you want to migrate over to Swift Concurrency have your team enroll in one of my workshops or use my books or my course or find any other resource that will teach the team everything they need to know. It's really not trivial to pick up Swift Concurrency, especially not if you want to go into strict concurrency mode. Writing async-await functions is relatively easy, but understanding what happens is really what you need if you're planning to migrate and go all-in on concurrency.

Once you've decided that you're going to go for Swift 6 and that you want to level up your team's concurrency skills, make sure you actually give everybody on the team a chance to properly learn!

Migrating from the outside in

Once you've started refactoring your packages and it's time to start working on your app target, I've found that it really makes sense to migrate from the outside in. You could also work from the inside out; in the end, it really depends on where you want to start. That said, I often like to start in the view layer once all the back-end work is done, because it helps me determine at which point in the app I want to leave the main actor (or when to apply a main actor annotation to stay on it).

For example, if you’re using MVVM and you have a view model that holds a bunch of functions, where should these functions run?

This is where the work that you did up front comes into play because you already know that in the old way of doing things the view model would run its functions on the main thread. I would highly recommend that you do not change this. If your view model used to run on the main thread which is pretty much standard, keep it that way.

You'll want to apply a main actor annotation to your view model class.
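For example, a minimal sketch of what that annotation can look like (the view model and its members are hypothetical):

```swift
// Annotating the class with @MainActor isolates all of its state and
// methods to the main actor, matching the old main-thread behavior.
@MainActor
final class ProfileViewModel {
    private(set) var username = ""

    func loadProfile() {
        // Safe to mutate UI-facing state here; we're on the main actor.
        username = "Loaded user"
    }
}
```

With the annotation in place, every property and method of the class is isolated to the main actor, which matches the main-thread behavior your old view model most likely had.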

This is not a bad thing by any means, it's not a hack either. It's a way for you to ensure that you're not switching isolation contexts all the time. You really don't need a ton of concurrency in your app code.

Apple is even considering introducing a language mode for Swift where everything runs on the main actor by default.

So defaulting your view models, and maybe some other objects in your codebase, to the main actor simply makes a lot of sense. Once you start migrating like this, you'll find that you really didn't need that much concurrency, which you should already know because that's what you figured out early on in the process.

This is also where you'll start to encounter warnings related to sendability and isolation contexts. Once you see these warnings and errors, you can decide whether a model should or shouldn't be Sendable, depending on whether the isolation-context switch that's causing the warning is expected.

You can solve sendability problems with actors. That said, making things actors is usually not what you're looking for especially when it's related to models.

However, when you’re dealing with a reference type that has mutable state, that's where you might introduce actors. It’s all about figuring out whether you were expecting to use that type in multiple isolation contexts.
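As a rough sketch (both types below are hypothetical): a value-type model can often simply be marked Sendable, while a reference type with mutable state that's shared across isolation contexts is a candidate for becoming an actor:

```swift
import Foundation

// A value type whose members are immutable and Sendable is trivially Sendable.
struct UserModel: Sendable {
    let id: Int
    let name: String
}

// A reference type with mutable state that's accessed from multiple
// isolation contexts can become an actor to gain data-race safety.
actor ImageCache {
    private var storage: [URL: Data] = [:]

    func data(for url: URL) -> Data? {
        storage[url]
    }

    func store(_ data: Data, for url: URL) {
        storage[url] = data
    }
}
```

The trade-off is that every interaction with the actor becomes asynchronous, which is exactly why you only want to reach for actors when the type genuinely is shared mutable state.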

Having to deeply reason about every error and warning can sometimes feel tedious because it really slows you down. You can easily make something sendable, you could easily make something an actor, and it wouldn't impact your code that much. But you are introducing a lot of complexity into your codebase when you're introducing isolation contexts and when you're introducing concurrency.

So again, you really want to make sure that you limit the amount of concurrency in your app. You typically don't need a lot of concurrency inside an application. I can't stress this enough.

Pitfalls, caveats, and dangers

Migrating to Swift 6 definitely comes with its dangers and uncertainties. If you're migrating everything all at once, you're embarking on a huge refactor that will involve touching almost every single object in your code. If you introduce actors where they really don't belong, suddenly everything in your code becomes concurrent, because interacting with actors is an asynchronous process.

If you didn't follow the steps in this blog post, you're probably going to have asynchronous functions all over the place; they might be members of classes, your views, or anything else. Some of your async functions will be isolated to the main actor, but most of them will be non-isolated by default, which means that they can run anywhere. This also means that if you pass models or objects from your view to your view model to some other place, you're crossing isolation contexts all the time. Sometimes this is completely fine and the compiler can prove that things are actually safe, but in a lot of cases the compiler is going to complain, and you'll be very frustrated because you have no idea what's wrong.

There's also the matter of interacting with Apple's code. Not all of Apple's code is necessarily Swift 6 compatible or Swift 6 friendly. So you might find yourself having to write workarounds for interacting with things like a CLLocationManagerDelegate or other objects that come from Apple's frameworks. Sometimes it's trivial to know what to do once you fully understand how isolation works, but a lot of the time you're going to be left guessing about what makes the most sense.

This is simply unavoidable, and we need Apple to work on their code and their annotations to make sure that we can adopt Swift 6 with full confidence.

At the same time, Apple is evaluating Swift as a language and realizing that Swift 6 is really not yet where they want it to be for general adoption.

If you're adopting Swift 6 right now, there are some things that might change down the line, and you have to be willing to deal with that. If you're not, I would recommend going for strict concurrency without going all-in on Swift 6; you don't want to be doing a ton of work that turns out to be obsolete a couple of Swift versions down the line, and we're probably talking months, not years, before that happens.

In Summary

Overall, I think adopting Swift 6 is a huge undertaking for most teams. If you haven't started already and you're about to start now, I would urge you to take it slow and easy, and to make sure that you understand what you're doing as much as possible every step of the way.

Swift concurrency is pretty complicated, and Apple is still actively working on improving and changing it because they're still learning about things that are causing problems for people all the time. So for that reason, I'm not even sure that migrating to Swift 6 should be one of your primary goals at this point in time.

Understanding everything around Swift 6 I think is extremely useful because it does help you to write better and safer code. However, I do believe that sticking with the Swift 5 language mode and going for strict concurrency is probably your safest bet because it allows you to write code that may not be fully Swift 6 compliant but works completely fine (at least you can still compile your project even if you have a whole bunch of warnings).

I would love to know your thoughts and progress on migrating to Swift 6. In my workshops I always hear really cool stories about companies that are working on their migration and so if you have stories about your migration and your journey with Swift 6, I would love to hear that.

What’s new in Swift 6.1?

The Xcode 16.3 beta is out, which includes a new version of Swift. Swift 6.1 is a relatively small release that comes with bug fixes, quality of life improvements, and some features. In this post, I’d like to explore two of the new features that come with Swift 6.1. One that you can start using immediately, and one that you can opt-in on if it makes sense for you.

The features I’d like to explore are the following:

  1. Changes to Task Groups in Swift 6.1
  2. Changes to member visibility for imported code

We’ll start by looking at the changes in Concurrency’s TaskGroup and we’ll cover member visibility after.

Swift 6.1 and TaskGroup

There have been a couple of changes to concurrency in Swift 6.1. These were mainly small bug fixes and improvements, but one improvement stood out to me: the changes made to TaskGroup. If you're not familiar with task groups, go ahead and read up on them in my blog post right here.

Normally, a TaskGroup is created as shown below where we create a task group and specify the type of value that every child task is going to produce:

await withTaskGroup(of: Int.self) { group in
  for _ in 1...10 {
    group.addTask {
      return Int.random(in: 1...10)
    }
  }
}

Starting in Swift 6.1, Apple has made it so that we no longer have to explicitly define the return type for our child tasks. Instead, Swift can infer the return type of child tasks based on the first task that we add to the group.

That means that the compiler will use the first call to addTask that it finds to determine the return type for all of your child tasks.

In practice, that means that the code below is the equivalent of what we saw earlier:

await withTaskGroup { group in
  for _ in 1...10 {
    group.addTask {
      return Int.random(in: 1...10)
    }
  }
}

Now, as you might expect, this doesn't change the fact that our task groups have to return the same type for every child task.

The code above shows how you can use this new return type inference in Swift 6.1. If you accidentally end up with different return types for your child tasks, like the code below shows, the compiler will present an error telling you that the return type of your call to addTask is incorrect:

await withTaskGroup { group in
  for _ in 1...10 {
    group.addTask {
      return Int.random(in: 1...10)
    }
  }

  group.addTask {
    // Cannot convert value of type 'String' to closure result type 'Int'
    return "Hello, world"
  }
}

Now, if you find that you do want multiple return types, I have a blog post on that, and that approach still works: we can use an enum as the return type for our child tasks, which definitely still is a valid way to have multiple return types in a task group.
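As a quick sketch of that enum-based approach (the enum and its cases are hypothetical):

```swift
// Each child task still returns the same type, but the enum lets that
// type wrap different payloads.
enum FetchResult {
    case count(Int)
    case message(String)
}

let results = await withTaskGroup(of: FetchResult.self) { group -> [FetchResult] in
    group.addTask { .count(42) }
    group.addTask { .message("Hello, world") }

    var collected: [FetchResult] = []
    for await result in group {
        collected.append(result)
    }
    return collected
}
```

Every child task returns a FetchResult, so the group compiles fine, and callers switch over the cases to get at the underlying values.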

I’m quite happy with this change because having to specify the return type for my child tasks always felt a little tedious so it’s great to see the compiler take this job in Swift 6.1.

Next, let’s take a look at the changes to imported member visibility in Swift 6.1.

Imported member visibility in Swift 6.1

In Swift, we have the ability to add extensions to types to enhance or augment functionality that we already have. For example, you could add an extension to an Int to represent it as a currency string or something similar.

If I'm building an app where I'm dealing with currencies, purchases, and handling money, I might have two packages that are imported by my app. Both packages could be dealing with currencies in some way, shape, or form, and I might have an extension on Int that returns a currency String, as I mentioned earlier.

Here's what that could look like.

// CurrencyKit
extension Int {
    func price() -> String {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        formatter.locale = Locale.current

        let amount = Double(self) / 100.0 // Assuming the integer represents cents
        return formatter.string(from: NSNumber(value: amount)) ?? "$\(amount)"
    }
}

// PurchaseParser
extension Int {
    func price() -> String {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        formatter.locale = Locale.current

        let amount = Double(self) / 100.0 // Assuming the integer represents cents
        return formatter.string(from: NSNumber(value: amount)) ?? "$\(amount)"
    }
}

The extension shown above exists in both of my packages, and the return types of these extensions are the exact same (i.e., strings). This means that I can have the following two files in my app, and it's going to be just fine.

// FileOne.swift
import PurchaseParser

func dealsWithPurchase() {
    let amount = 1000
    let purchaseString = amount.price()
    print(purchaseString)
}

// FileTwo.swift
import CurrencyKit

func dealsWithCurrency() {
    let amount = 1000
    let currencyString = amount.price()
    print(currencyString)
}

The compiler will know how to figure out which version of price should be used based on the import in my files and things will work just fine.

However, if I have two extensions on Int with the same function name but different return types, the compiler might actually get confused about which version of the extension I intended to use.

Consider the following changes to PurchaseParser's price method:

extension Int {
    func price() -> Double {
        let amount = Double(self) / 100.0 // Assuming the integer represents cents
        return amount
    }
}

Now, price returns a Double instead of a String. In my app code, I am able to use this extension from any file, even if that file doesn’t explicitly import PurchaseParser. As a result, the compiler isn’t sure what I mean when I write the following code in either of the two files that you saw earlier:

let amount = 1000
let currencyString = amount.price()

Am I expecting currencyString to be a String or am I expecting it to be a Double?

To help the compiler, I can explicitly type currencyString as follows:

let amount = 1000
let currencyString: String = amount.price()

This will tell the compiler which version of price should be used, and my code will work again. However, it’s kind of strange in a way that the compiler is using an extension on Int that’s defined in a module that I didn’t even import in this specific file.

In Swift 6.1, we can opt into a new member visibility mode. This member visibility mode is going to work a little bit more like you might expect.

When I import a specific module like CurrencyKit, I'll only be able to use extensions that were defined in CurrencyKit. This means that in a file that only imports CurrencyKit, I won't be able to use extensions defined in other packages unless I also import those. As a result, the compiler won't be confused by multiple extensions with the same method name anymore, since it can't see what I don't import.

You can opt into this feature by passing the corresponding feature flag to your target. Here's what that looks like in a Swift package:

.executableTarget(
    name: "AppTarget",
    dependencies: [
        "CurrencyKit",
        "PurchaseParser"
    ],
    swiftSettings: [
        .enableExperimentalFeature("MemberImportVisibility")
    ]
),

In Xcode this can be done by passing the feature to the “Other Swift Flags” setting in your project settings. In this post I explain exactly how to do that.

While I absolutely love this feature and think it's a really good change in Swift, it doesn't solve a problem that I've personally run into frequently. That said, I can definitely imagine running into it, so I'm glad there's now a fix we can opt into. Hopefully, this will eventually become a default in Swift.

In Summary

Overall, Swift 6.1 is a pretty lightweight release, and it has some nice improvements that I think really help the language be better than it was before.

What are your thoughts on these changes in Swift 6.1, and do you think that they will impact your work in any way at all?

Why you should keep your git commits small and meaningful

When you're using Git for version control, you're already doing something great for your codebase: maintaining a clear history of changes at every point in time. This helps you rewind to a stable state, track how your code has evolved, and experiment with new ideas without fully committing to them right away.

However, for many developers, Git is just another tool they have to use for work. They write a lot of code, make commits, and push their changes without giving much thought to how their commits are structured, how big their branches are, or whether their commit history is actually useful.

Why Commit Hygiene Matters

As projects grow in complexity and as you gain experience, you'll start seeing commits as more than just a step in pushing your work to GitHub. Instead, commits become checkpoints—snapshots of your project at specific moments. Ideally, every commit represents a logical stopping point where the project still compiles and functions correctly, even if a feature isn’t fully implemented. This way, you always have a reliable fallback when exploring new ideas or debugging issues.

Now, I’ll be honest—I’m not always perfect with my Git hygiene. Sometimes, I get deep into coding, and before I realize it, I’m way past the point where I should have committed. When working on something significant, I try to stage my work in logical steps so that I still have small, meaningful commits. If you don’t do this, the consequences can be frustrating—especially for your teammates.

The Pain of Messy Commits

Imagine you're debugging an issue, and you pinpoint that something broke between two commits. You start looking at the commit history and find something like:

  • wip (Work in Progress)
  • fixing things
  • more updates

None of these tell you what actually changed. Worse, if those commits introduce large, sweeping changes across the codebase, you’re left untangling a mess instead of getting helpful insights from Git’s history.

How Small Should Commits Be?

A good rule of thumb: your commits should be small but meaningful. A commit doesn’t need to represent a finished feature, but it should be a logical step forward. Typically, this means:

  • The project still builds (even if the feature is incomplete).
  • The commit has a clear purpose (e.g., “Refactor JSON parsing to use Decodable”).
  • If you’re adding a function, consider adding its corresponding test in the same commit.

For example, let’s say you’re refactoring JSON parsing to use Decodable and updating your networking client:

  1. Commit 1: Add the new function to the networking client.
  2. Commit 2: Add test scaffolding (empty test functions and necessary files).
  3. Commit 3: Write the actual test.
  4. Commit 4: Implement the feature.
  5. Commit 5: Rename a model or refactor unrelated code (instead of bundling this into Commit 4).

By structuring commits this way, you create a clear and understandable history. If a teammate needs to do something similar, they can look at your commits and follow your process step by step.

The Balance Between Clean Commits and Productivity

While good commit hygiene is important, don’t obsess over it. Some developers spend as much time massaging their Git history as they do writing actual code. Instead, strive for a balance: keep your commits clean and structured, but don’t let perfectionism slow you down.

You really don’t have to pick apart your changes just so you can have the cleanest commits ever. For example, if you’ve fixed a typo in a file that you were working on, you don’t have to make a separate commit for that if it means having to stage individual lines in a file.

On the other hand, if fixing that typo meant you also changed a handful of other files, you might want to put some extra work into splitting that commit up.

Commit Messages: Crafting a Meaningful Story

In addition to the size of your commits, your commit messages also matter. A good commit message should be concise but informative. Instead of vague messages like fix or updated code, consider something more descriptive, like:

  • Refactored JSON parsing to use Decodable
  • Fixed memory leak in caching logic
  • Added unit test for network error handling

By keeping your commit messages clear, you help yourself and others understand the progression of changes without having to dig into the code.

Rewriting Git History When Necessary

Sometimes, you may want to clean up your Git history before merging a branch. This is where tools like interactive rebase come in handy. Using git rebase -i HEAD~n, you can:

  • Squash multiple small commits into one.
  • Edit commit messages.
  • Reorder commits for better readability.

However, be cautious when rewriting history—once commits are pushed to a shared branch, rebasing can cause conflicts for your teammates.

Rebasing on the command line can be tricky, but luckily most Git GUIs offer ways to perform interactive rebasing too. I personally use interactive rebasing a lot, since I like rebasing my branches on main instead of merging main into my feature branches. In my opinion, merge commits aren’t that useful to have, and rebasing allows me to avoid them.

In Summary

In the end, it’s all about making sure that you end up with a paper trail that makes sense, one that’s actually useful when you find yourself digging through history to see what you did and why.

The reality is, you won’t do this often. But when you do, you’ll feel glad that you took the time to keep your commits lean. By keeping your commits small, writing meaningful messages, and leveraging Git’s powerful tools, you ensure that your version control history remains a valuable resource rather than a tangled mess.