SwiftUI’s Bindable property wrapper explained

With the introduction of Xcode 15 beta and its corresponding beta OSes (I would say iOS 17 beta, but of course we also get macOS, iPadOS, and other betas...) Apple has introduced new state management tools for SwiftUI. One of these new tools is the @Bindable property wrapper. In an earlier post I explained that @Binding and @Bindable do not solve the same problem, and that they will co-exist in your applications. In this post, I would like to clarify the purpose and the use cases for @Bindable a little bit better so that you can make better decisions when picking your SwiftUI state property wrappers.

If you prefer learning by video, the key lessons from this blog post are also covered in this video:

The key purpose of @Bindable is to allow developers to create bindings to properties that are part of a model that conforms to the Observable protocol. Typically, you will create these models by annotating them with the @Observable macro:

@Observable
class SearchModel {
  var query: String = ""
  var results: [SearchResult] = []

  // ...
}

When you pass this model to a SwiftUI view, you might end up with something like this:

struct SearchView: View {
  let searchModel: SearchModel

  var body: some View {
    TextField("Search query", text: // ...??)
  }
}

Notice how the searchModel is defined as a plain let. We don't need to use @ObservedObject when a SwiftUI view receives an Observable model from one of its parent views. We also shouldn't be using @State because @State should only be used for model data that is owned by the view. Since we're passed our SearchModel by a parent view, that means we don't own the data source and we shouldn't use @State. Even without adding a property wrapper, the Observable model is able to tell the SwiftUI view when one of its properties has changed. How this works is a topic for a different post; your key takeaway for now is that you don't need to annotate your Observable with any property wrappers to have your view observe it.

Back to SearchView. In the SearchView body we create a TextField, and this TextField needs a binding to a string value. If we were working with an @ObservedObject, or if we owned the SearchModel and defined the property as @State, we would write $searchModel.query to obtain a binding.

When we attempt to do this for our current searchModel property now, we'd see the following error:

var body: some View {
  // Cannot find '$searchModel' in scope
  TextField("Search query", text: $searchModel.query)
}

Because we don't have a property wrapper to create a projected value for our search model, we can't use the $ prefix to create a binding.

To learn more about property wrappers and projected values, read this post.
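As a quick illustration of the mechanism, here's a minimal, hypothetical property wrapper that exposes a projected value; the $ prefix on a wrapped property resolves to whatever projectedValue returns (the Clamped wrapper is made up for this sketch):

```swift
@propertyWrapper
struct Clamped {
    private var value: Int
    let range: ClosedRange<Int>

    init(wrappedValue: Int, range: ClosedRange<Int>) {
        self.range = range
        self.value = min(max(wrappedValue, range.lowerBound), range.upperBound)
    }

    var wrappedValue: Int {
        get { value }
        set { value = min(max(newValue, range.lowerBound), range.upperBound) }
    }

    // `$someProperty` resolves to this projected value
    var projectedValue: ClosedRange<Int> { range }
}

struct Settings {
    @Clamped(range: 0...10) var volume = 5
}

// let settings = Settings()
// settings.volume   // the wrapped Int
// settings.$volume  // the projected value, here the allowed range
```

@Bindable follows the same pattern: its projected value vends Binding values for the wrapped model's properties.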

In order to fix this, we need to annotate our searchModel with @Bindable:

struct SearchView: View {
  @Bindable var searchModel: SearchModel

  var body: some View {
    TextField("Search query", text: $searchModel.query)
  }
}

By applying the @Bindable property wrapper to the searchModel property, we gain access to the $searchModel property because the Bindable property wrapper can now provide a projected value in the form of a Binding.

Note that you only need the @Bindable property wrapper if:

  • You didn't create the model with @State (because you can create bindings to @State properties already)
  • You need to pass a binding to a property on your Observable model

Essentially, you will only need to use @Bindable if in your view you write $myModel.property and the compiler tells you it can't find $myModel. That's a good indicator that you're trying to create a binding to something that can't provide a binding out of the box, and that you'll want to use @Bindable to be able to create bindings to your model.

Hopefully this post helps clear up the purpose and usage of @Bindable a little bit!

What’s the difference between @Binding and @Bindable?

With iOS 17, macOS Sonoma, and the other OSes from this year's generation, Apple has made a couple of changes to how we work with data in SwiftUI. Mainly, Apple has introduced a Combine-free version of @ObservableObject and @StateObject, which takes the shape of the @Observable macro that's part of a new framework called Observation.

One interesting addition is the @Bindable property wrapper. This property wrapper co-exists with @Binding in SwiftUI, and they cooperate to allow developers to create bindings to properties of observable classes. So what's the role of each of these property wrappers? What makes them different from each other?

If you prefer learning by video, the key lessons from this blog post are also covered in this video:

To start, let's look at the @Binding property wrapper.

When we need a view to mutate data that is owned by another view, we create a binding. For example, our binding could look like this:

struct MyButton: View {
    @Binding var count: Int

    var body: some View {
        Button(action: {
            count += 1
        }, label: {
            Text("Increment")
        })
    }
}

The example isn't particularly interesting or clever, but it illustrates how we can write a view that reads and mutates a counter that is owned outside of this view.

Data ownership is a big topic in SwiftUI and its property wrappers can really help us understand who owns what. In the case of @Binding all we know is that some other view will provide us with the ability to read a count, and a means to mutate this counter.

Whenever a user taps on my MyButton, the counter increments and the view updates. This includes the view that originally owned and used that counter.
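For completeness, the owning side of that relationship could look like this (CounterView is a hypothetical parent view; it owns the counter as @State and hands MyButton a binding):

```swift
struct CounterView: View {
    // This view owns the counter, so @State is appropriate
    @State private var count = 0

    var body: some View {
        VStack {
            Text("Current count: \(count)")
            // $count produces a Binding<Int> that MyButton can read and mutate
            MyButton(count: $count)
        }
    }
}
```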

Bindings are used in out of the box components in SwiftUI quite often. For example, TextField takes a binding to a String property that your view owns. This allows the text field to read a value that your view owns, and the text field can also update the text value in response to the user's input.

So how does @Bindable fit in?

If you're familiar with SwiftUI on iOS 16 and earlier, you will know that you can create bindings to @State, @StateObject, @ObservedObject, and a couple of other, similar, property wrappers. On iOS 17 we have access to the @Observable macro, which doesn't enable us to create bindings in the same way that ObservableObject does. Instead, if our @Observable object is a class, we can ask our views to make that object bindable.

This means that we can mark a property that holds an Observable class instance with the @Bindable property wrapper, allowing us to create bindings to properties of our class instance. Without @Bindable, we can't do that:

@Observable
class MyCounter {
    var count = 0
}

struct ContentView: View {
    var counter: MyCounter = MyCounter()

    init() {
        print("init")
    }

    var body: some View {
        VStack {
            Text("The counter is \(counter.count)")
            // Cannot find '$counter' in scope
            MyButton(count: $counter.count)
        }
        .padding()
    }
}

When we make the var counter property @Bindable, we can create a binding to the counter's count property:

@Observable
class MyCounter {
    var count = 0
}

struct ContentView: View {
    @Bindable var counter: MyCounter = MyCounter()

    init() {
        print("init")
    }

    var body: some View {
        VStack {
            Text("The counter is \(counter.count)")
            // This now compiles
            MyButton(count: $counter.count)
        }
        .padding()
    }
}

Note that if your view owns the Observable object, you will usually mark it with @State and create the object instance in your view. When your Observable object is marked as @State you are able to create bindings to the object's properties. This is thanks to your @State property wrapper annotation.
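As a sketch of that situation, reusing the MyCounter class from before (CounterContainerView is a hypothetical owning view):

```swift
struct CounterContainerView: View {
    // This view creates and owns the Observable object, so @State is the right choice
    @State private var counter = MyCounter()

    var body: some View {
        // @State's projected value lets us derive bindings directly,
        // so no @Bindable is needed here
        MyButton(count: $counter.count)
    }
}
```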

However, if your view does not own the Observable object, it wouldn't be appropriate to use @State. The @Bindable property wrapper was created to solve this situation and allows you to create bindings to the object's properties.

Usage of Bindable is limited to classes that conform to the Observable protocol. The easiest way to create an Observable conforming object is with the @Observable macro.

Conclusion

In this post, you learned that the key difference between @Binding and @Bindable is in what they do. The @Binding property wrapper indicates that some piece of state on your view is owned by another view and you have both read and write access to the underlying data.

The @Bindable property wrapper allows you to create bindings for properties that are owned by Observable classes. As mentioned earlier, @Bindable is limited to classes that conform to Observable, and the easiest way to make Observable objects is the @Observable macro.

As you now know, these two property wrappers co-exist to enable powerful data sharing behaviors.

Cheers!

What’s the difference between Macros and property wrappers?

With Swift 5.9 and Xcode 15, we have the ability to leverage Macros in Swift. Macros can either be written with an @ prefix or with a # prefix, depending on where they're being used. If you want to see some examples of Macros in Swift, you can take a look at this repository that sheds some light on both the usage and structure of Macros.

When we look at Macros in action, they can look a lot like property wrappers:

@CustomCodable
struct CustomCodableString: Codable {

  @CodableKey(name: "OtherName")
  var propertyWithOtherName: String

  var propertyWithSameName: Bool

  func randomFunction() {

  }
}

The example above comes from the Macro examples repository. With no other context it's hard to determine whether CodableKey is a property wrapper or a Macro.

One way to find out is to option + click on a Macro which should bring up a useful dialog in Xcode that will make it clear that you're looking at a Macro.

Given how similar Macros and property wrappers look, you might be wondering whether Macros replace property wrappers. Or you might think that they're basically the same thing just with different names.

In reality, Macros are quite different from property wrappers. The key difference is when and where they affect your code and your app.

Property wrappers are executed at runtime. This means that any extra logic that you've added in your property wrapper is applied to your wrapped value while your app is running. This is powerful when you need to manipulate or work with wrapped values in a dynamic fashion.

Macros on the other hand are executed at compile time and they allow us to augment our code by rewriting or expanding code. In other words, Macros allow us to add, rewrite, and modify code at compile time.

For example, there's a #URL Macro that we can use in Xcode 15 to get non-optional URL objects that are validated at compile time. There's also an @Relationship Macro in SwiftData that allows us to generate all the code that's needed to define a relationship between two models.

Without digging too deep into different kinds of Macros and how they are defined, the difference is that Macros defined with a # sign are freestanding. This means that they generate code on their own and aren't applied to an object or property. Macros defined with an @ are applied to something and can't exist on their own like a freestanding Macro can.
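Side by side, the two flavors might look like this (assuming the #URL macro from the examples repository mentioned earlier is available in your project; @Observable ships with the Observation framework):

```swift
import Observation

// Freestanding macro: marked with #, it generates a value on its own
// and isn't attached to any declaration
let feed = #URL("https://www.donnywals.com/feed")

// Attached macro: marked with @, it's applied to a declaration
// and expands that declaration with extra code
@Observable
class Person {
    var name = ""
}
```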

Exploring Macros in-depth is a topic for another post.

We can even apply Macros to entire objects like when you apply the @Observable or @Model Macros to your model definitions. Applying a Macro to an object definition is very powerful and allows us to add tons of features and functionality to the object that the Macro is applied to.

For example, when we look at the @Model Macro we can see that it takes code defined like this:

@Model
final class Item {
    var timestamp: Date

    init(timestamp: Date) {
        self.timestamp = timestamp
    }
}

And transforms it into this:

@Model
final class Item {
    @PersistedProperty
    var timestamp: Date
    {
        get {
            _$observationRegistrar.access(self, keyPath: \.timestamp)
            return self.getValue(for: \.timestamp)
        }

        set {
            _$observationRegistrar.withMutation(of: self, keyPath: \.timestamp) {
                self.setValue(for: \.timestamp, to: newValue)
            }
        }
    }

    init(timestamp: Date) {
        self.timestamp = timestamp
    }

    @Transient
    public var backingData: any BackingData<Item> = CoreDataBackingData(for: Item.self)

    static func schemaMetadata() -> [(String, AnyKeyPath, Any?, Any?)] {
      return [
        ("timestamp", \Item.timestamp, nil, nil)
      ]
    }

    init(backingData: any BackingData<Item>, fromFetch: Bool = false) {
      self.backingData = backingData
      if !fromFetch {
        self.context?.insert(object: self)
      }
    }

    @Transient
    private let _$observationRegistrar = ObservationRegistrar()
}

extension Item : PersistentModel  {}

extension Item : Observable  {}

Notice how much more code that is, and imagine how tedious it would be to write and manage all this code for every SwiftData model or @Observable object you create.

Macros are a real powerhouse, and they will enable us to write shorter, more concise, and less boilerplate-heavy code. I'm excited to see where Macros go, and how they will make their way into more and more places of Swift.

Conclusion

As you learned in this post, the key difference between Macros and property wrappers in Swift is that Macros are evaluated at compile time while property wrappers are useful at runtime. This means that we can use Macros to generate code on our behalf while we compile our app and property wrappers can be used to change behavior and manipulate properties at runtime.

Even though they both share the @ annotation (and Macros can also have the # annotation in some cases), they do not cover the same kinds of features as you now know.

Cheers!

Tips and tricks for exploring a new codebase

As a developer, joining a new project or company is often a daunting and scary task. You have to get acquainted with not just a whole new team of people, but you also have to familiarize yourself with an entirely new codebase that might use new naming conventions, follow patterns that you're not familiar with, or even use tooling that you've never seen before.

There are plenty of reasons to be overwhelmed when you’re a new member of any engineering team, and there’s no reason to feel bad about that.

In the past two years, I’ve done a lot of contracting and consulting which means that I’ve had to explore and understand lots of codebases in short amounts of time. Sometimes even having to explore multiple codebases at once whenever I’d start to work for more than one client in a given week or two.

I guess it's fair to say that I’ve had my fair share of confusion and feeling overwhelmed with new codebases.

In this post, I’d like to provide you with some tips and tricks that I use to get myself comfortable with codebases of any size in a reasonable amount of time.

If you prefer to watch this post as a video, check out the video below:

Meet the team

While it might be tempting to get through your introductory calls as soon as possible so you can spend as much time as possible on navigating and exploring a new codebase, I highly recommend letting the code wait for a little while. Meet the team first.

Getting to know the people that wrote the code that you're working with can truly help to build a better understanding of the codebase as a whole. Ask questions about team dynamics, ongoing projects, and who's an expert on what. Building empathy around the code you'll be working with is a very valuable tool.

Knowing which team members know most about specific features, parts of the codebase, tools that are used in a company, and so on also helps you figure out the right person to ask any questions you might have while you explore the codebase.

For example, when I joined Disney almost six years ago I wasn't all that familiar with SwiftLint. I had heard about it but I had no idea what it did exactly. In the codebase, I saw some comments that looked as follows:

// swiftlint:disable:next cyclomatic_complexity

Of course, I could have pasted this comment into Google and gone down a rabbit hole on what's happening, and I'd probably have learned a lot about SwiftLint. But instead, I chose to figure out who knew most about SwiftLint within the team. Surely that person could help me learn a lot about what SwiftLint was used for and how it works.

I asked my team lead, and luckily it was my team lead who actually knew lots and lots of things about SwiftLint, how it was set up, which linter rules we used, and so on.

We had a good chat and by the end of it, I knew exactly why we had SwiftLint at Disney Streaming, which rules we had disabled or enabled and why, and why it was okay to disable certain rules sometimes.

Google could have taught me that the comment you saw earlier disabled a specific linter rule to allow one exception to the rule.

My coworker taught me not just what that comment did but also why it did that. And why that was okay. And when I should or shouldn’t disable certain linter rules myself.

Another example is a more recent one.

One of my clients had a pretty large codebase that has had many people working on it over the years. There’s some Objective-C in there, lots of Swift, it has UIKit and SwiftUI, multiple architecture patterns, and much more. It’s a proper legacy codebase.

Instead of figuring everything out on my own, I had conversations with lots of team members. Sometimes they were one-on-one conversations but other times I met with two or three people at once.

Through these conversations, I learned about various architectural patterns that existed in the codebase. Which ones they considered to be good fits, and which ones they were looking to phase out. I learned why certain bits of code were still in Objective-C, and which parts of the Objective-C codebase should be refactored eventually.

I learned that certain team members had spent a lot of time working on specific features, patterns, and services within the app. They would tell me why certain decisions were made, and which choices they were and weren’t particularly happy with.

After meeting the team I knew so much more about the project, the codebase, the people working on the project, and how things move and evolve within the team. This was incredibly helpful information to have once I started to explore the codebase. Through knowing the team I knew so much more about the why of some bits of code. And I knew that some code wasn’t worth exploring too much because it would be gone soon.

On top of that, through knowing the team, I felt more empathic about bits of code that I didn’t like or didn’t understand. I know who was likely to have worked on that code. So instead of getting frustrated about that bit of code, I knew who I could ask to learn more about the confusing section of code.

Break things

In addition to meeting the team behind your new codebase, you’ll want to start exploring the codebase itself sooner rather than later. One of the key things to figure out is how the project is set up. Which code is responsible for what? How does one thing impact the other?

Hopefully, the codebase follows some well-established patterns that help you figure this out. Regardless, I find it useful to try and break things while I explore.

By introducing flaws in the business logic for an app on purpose, you can learn a lot about the codebase. Sometimes it helps you uncover certain “this should never happen” crashes where a team member used a force unwrap or wrote a guard let with a fatalError inside.

Other times, things break in more subtle ways where the app doesn't quite work but no errors are shown. Or maybe the app is very good about handling errors and clearly informs you that something didn't go as expected.

When you break the networking layer in your app, you might uncover some hints about how the app handles caching.

By making small changes that most likely break the app you can learn tons. It’s a technique I often use just to see if there are any threads I should start unraveling to learn more and more about the cool details of a codebase.

Of course, you don’t want to go around and start poking at random things. Usually, when I start exploring I’ll choose one or two features that I want to focus on. This is exactly the focus of my next tip.

Focus on a narrow scope

When you join a large enough codebase, the idea of having all of that code in your head at some point sounds impossible. And honestly, it probably is. There’s a good chance that most developers on the team for a large project will have one or two parts of the codebase internalized. They know everything about it. For everything else, they’ll roughly know which patterns the code should follow (because the whole team follows the same patterns) and they might have some sense of how that code interacts with other modules.

Overall though, it’s just not realistic for any team member to know all of the ins and outs of every module or feature in the codebase.

So why would you be attempting to explore the entire codebase all at once?

If you’re hired on a specific team, focus on the code that would be maintained by that team. Start exploring and understanding that code in as much detail as possible, have team members show you how the code works, and see if you can break some of the code.

Sometimes there will be bug tickets or features that you can start looking at to give you a good starting point to begin learning more about a codebase. If that’s the case, you can use your tickets to help you determine your scope. If you’re working on a bug, focus on understanding everything you can about the section of code that seems most likely to be the source of the bug.

And as always, you’ll want to be in touch with the team. Ask them if they can help you find something to focus on initially. When you have a bug ticket to work on, see if somebody on the team can help you kickstart your research; maybe they have some thoughts on where you can start looking first.

And in an ideal world, leverage pair programming to double the speed at which you learn.

Leverage pair programming

One tool that I usually find to be immensely underused is pair programming. In lots of places where I have worked, developers prefer to work alone. Headphones on, deep in the zone. Questions should be initiated on Slack so you’re disturbed as little as possible. Disable notifications if you have to.

There’s absolutely a time and place for deep focused work where you’re not to be disturbed.

However, there’s an enormous benefit in pairing up with a teammate to explore topics and work on features. Especially when you’ve just joined a team, it’s super important you have access to your team members to help you navigate the company, team, and codebase.

When you’re pairing with a teammate during your exploration phase, you can take the wheel. You can start exploring the codebase, asking questions about what you’re seeing as you go. Especially when you have something to work on, this can be extremely useful.

Any question or thought you might have can immediately be bounced off of your programming partner.

Even if you’re not the person taking the wheel, there’s lots of benefit in seeing somebody else navigate the code and project you’ll work on. Pay close attention to certain utilities or tools they use. If you see something you haven’t seen before, ask about it. Maybe those git commands your coworker uses are used by everybody on the team.

Especially when there’s debugging involved it pays dividends to ask for a pairing session. Seeing somebody that’s experienced with a codebase navigate and debug their code will teach you tons about relationships between certain objects for example.

Two people know more than one, and this is especially true while onboarding a new coworker. So next time a new person joins your team, offer them a couple of pair programming sessions. Or if you’re the new joiner see if there’s somebody interested in spending some time with you while working through some problems and exploring the codebase.

Use breakpoints

When I was working on this post I asked the community how they like to explore a codebase and a lot of people mentioned using a symbolic breakpoint on viewDidLoad or viewDidAppear which I found a pretty cool approach to learning more about the different views and view controllers that are used in a project.

A symbolic breakpoint allows you to pause the execution of your program when a certain method is called on code you might not own. For example, you can have a symbolic breakpoint on UIViewController methods which allows you to see whenever a new subclass of UIViewController is added to the navigation hierarchy.

Knowing this kind of stuff is super useful because you’ll be able to learn which view controller(s) belong to which screen quite quickly.

I haven’t used this one a lot myself but I found it an interesting idea so I wanted to include it in this list of tips.

In Summary

When you join a new team, it’s tempting to keep your head down and study your new codebase. In your head, you might think that you’re expected to already know everything about the codebase even though you’re completely new to the project.

You might think that all patterns and practices in the project are industry standard and that you just haven’t worked in places as good as this one before.

All of these kinds of ideas exist in pretty much anybody’s head and they prevent you from properly learning and exploring a new codebase.

In this post, you have learned some tips about why human interaction is extremely important during your exploration phase. You also learned some useful tips for the more technical side of things to help you effectively tackle learning a new codebase.

Good luck on your next adventure into a new codebase!

Understanding unstructured and detached tasks in Swift

When you just start out with learning Swift Concurrency you’ll find that there are several ways to create new tasks. One approach creates a parent / child relationship between tasks, another creates tasks that are unstructured but do inherit some context and there’s an approach that creates tasks that are completely detached from all context.

In this post, I will focus on unstructured and detached tasks. If you’re interested in learning more about child tasks, I highly recommend that you read the following posts:

These two posts go in depth on the relationship between parent and child tasks in Swift Concurrency, how cancellation propagates between tasks, and more.

This post assumes that you understand the basics of structured concurrency which you can learn more about in this post. You don’t have to have mastered the topic of structured concurrency, but having some sense of what structured concurrency is all about will help you understand this post much better.

Creating unstructured tasks with Task.init

The most common way in which you’ll be creating tasks in Swift will be with Task.init which you’ll probably write as follows without spelling out the .init:

Task {
  // perform work
}

An unstructured task is a task that has no parent / child relationship with the place it's created from, so it doesn't participate in structured concurrency. Instead, we create a completely new island of concurrency with its own scope and lifecycle.

However, that doesn’t mean an unstructured task is created entirely independent from everything else.

An unstructured task will inherit two pieces of context from where it’s created:

  • The actor we’re currently running on (if any)
  • Task local values
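The second point can be demonstrated with a @TaskLocal value. In this sketch (the RequestContext type is made up for illustration), the unstructured task reads the task-local value that was current when the task was created:

```swift
enum RequestContext {
    @TaskLocal static var requestID: String?
}

func handleRequest() {
    RequestContext.$requestID.withValue("abc-123") {
        Task {
            // This unstructured task inherits the task-local value,
            // so requestID is "abc-123" here rather than nil
            print(RequestContext.requestID ?? "no id")
        }
    }
}
```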

The first point means that any tasks that we create inside of an actor will participate in actor isolation for that specific actor. For example, we can safely access an actor’s methods and properties from within a task that’s created inside of an actor:

actor SampleActor {
  var someCounter = 0

  func incrementCounter() {
    Task {
      someCounter += 1
    }
  }
}

If we were to mutate someCounter from a context that is not running on this specific actor we’d have to prefix our someCounter += 1 line with an await since we might have to wait for the actor to be available.
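For comparison, here's what that outside access could look like (readCounter is a hypothetical free function that isn't isolated to SampleActor):

```swift
func readCounter(from sample: SampleActor) async {
    // We're not running on the actor, so this access needs an await;
    // we might suspend here until the actor is available
    let current = await sample.someCounter
    print("counter is \(current)")
}
```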

This is not the case for an unstructured task that we’ve created from within an actor.

Note that our task does not have to complete before the incrementCounter() method returns. That shows us that the unstructured task that we created isn’t participating in structured concurrency. If it were, incrementCounter() would not be able to complete before our task completed.

Similarly, if we spawn a new unstructured task from a context that is annotated with @MainActor, the task will run its body on the main actor:

@MainActor
func fetchData() {
  Task {
    // this task runs its body on the main actor
    let data = await fetcher.getData()

    // self.models is updated on the main actor
    self.models = data
  }
}

It’s important to note that the await fetcher.getData() line does not block the main actor. We’re calling getData() from a context that’s running on the main actor but that does not mean that getData() itself will run its body on the main actor. Unless getData() is explicitly associated with the main actor it will always run on a background thread.

However, the task does run its body on the main actor so once we’re no longer waiting for the result of getData(), our task resumes and self.models is updated on the main actor.

Note that while we await something, our task is suspended which allows the main actor to do other work while we wait. We don’t block the main actor by having an await on it. It’s really quite the opposite.

When to use unstructured tasks

You will most commonly create unstructured tasks when you want to call an async annotated function from a place in your code that is not yet async. For example, you might want to fetch some data in a viewDidLoad method, or you might want to start iterating over a couple of async sequences from within a single place.

Another reason to create an unstructured task might be if you want to perform a piece of work independently of the function you're in. This could be useful when you're implementing a fire-and-forget style logging function for example. The log might need to be sent off to a server, but as the caller of the log function we're not interested in waiting for that operation to complete.

func log(_ string: String) {
  print("LOG", string)
  Task {
    await uploadMessage(string)
    print("message uploaded")
  }
}

We could have made the method above async but then we wouldn’t be able to return from that method until the log message was uploaded. By putting the upload in its own unstructured task we allow log(_:) to return while the upload is still ongoing.

Creating detached tasks with Task.detached

Detached tasks are in many ways similar to unstructured tasks. They don’t create a parent / child relationship, they don’t participate in structured concurrency and they create a brand new island of concurrency that we can work with.

The key difference is that a detached task will not inherit anything from the context that it was created in. This means that a detached task will not inherit the current actor, and it will not inherit task local values.

Consider the example you saw earlier:

actor SampleActor {
  var someCounter = 0

  func incrementCounter() {
    Task {
      someCounter += 1
    }
  }
}

Because we used an unstructured task in this example, we were able to interact with our actor's mutable state without awaiting it.

Now let’s see what happens when we make a detached task instead:

actor SampleActor {
  var someCounter = 0

  func incrementCounter() {
    Task.detached {
      // Actor-isolated property 'someCounter' can not be mutated from a Sendable closure
      // Reference to property 'someCounter' in closure requires explicit use of 'self' to make capture semantics explicit
      someCounter += 1
    }
  }
}

The compiler now sees that we’re no longer on the SampleActor inside of our detached task. This means that we have to interact with the actor by calling its methods and properties with an await.
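As a sketch, one way to make the detached version compile is to route the mutation through an actor-isolated method that we await from the detached task:

```swift
actor SampleActor {
    var someCounter = 0

    func incrementCounter() {
        Task.detached {
            // The detached task body is not isolated to the actor,
            // so we hop back onto it with an await.
            await self.increment()
        }
    }

    private func increment() {
        someCounter += 1
    }
}
```

Note that the detached closure must reference self explicitly; capturing the actor is fine because actors are Sendable.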

Similarly, if we create a detached task from an @MainActor annotated method, the detached task will not run its body on the main actor:

@MainActor
func fetchData() {
  Task.detached {
    // this task runs its body on a background thread
    let data = await fetcher.getData()

    // self.models is updated on a background thread
    self.models = data
  }
}

Note that detaching our task has no impact at all on where getData() executes. Since getData() is an async function it will always run on a background thread unless the method was explicitly annotated with an @MainActor annotation. This is true regardless of which actor or thread we call getData() from. It’s not the call site that decides where a function runs. It’s the function itself.

When to use detached tasks

Using a detached task only makes sense when you’re performing work inside of the task body that you want to run away from any actors no matter what. If you’re awaiting something inside of the detached task to make sure the awaited thing runs in the background, a detached task is not the tool you should be using.

Even if you only have a slow for loop inside of a detached task, or you're encoding a large amount of JSON, it might make more sense to put that work in an async function so you can get the benefits of structured concurrency (the work must complete before we can return from the calling function) as well as the benefits of running in the background (async functions run in the background by default).
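For instance, here's a minimal sketch of that alternative, assuming a made-up Payload type: a plain async function keeps the encoding off the caller's actor while the structured await guarantees completion:

```swift
import Foundation

struct Payload: Codable {
    let values: [Int]
}

// A nonisolated async function runs off the main actor by default,
// and awaiting it keeps the work inside structured concurrency:
// the caller cannot return until the encoding is done.
func encodePayload(_ payload: Payload) async throws -> Data {
    try JSONEncoder().encode(payload)
}
```

A caller simply writes `let data = try await encodePayload(payload)` and gets both the background execution and the structured guarantee, with no detached task in sight.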

So a detached task really only makes sense if the work you’re doing should be away from the main thread, doesn’t involve awaiting a bunch of functions, and the work you’re doing should not participate in structured concurrency.

As a rule of thumb, I avoid detached tasks until I find that I really need one, which is only very sporadically.

In Summary

In this post you learned about the differences between detached tasks and unstructured tasks. You learned that unstructured tasks inherit context while detached tasks do not. You also learned that neither a detached task nor an unstructured task becomes a child task of their context because they don’t participate in structured concurrency.

You learned that unstructured tasks are the preferred way to create new tasks. You saw how unstructured tasks inherit the actor they are created from, and you learned that awaiting something from within a task does not ensure that the awaited thing runs on the same actor as your task.

After that, you learned how detached tasks are unstructured, but they don’t inherit any context from where they are created. In practice this means that they always run their bodies in the background. However, this does not ensure that awaited functions also run in the background. An @MainActor annotated function will always run on the main actor, and any async method that’s not constrained to the main actor will run in the background. This behavior makes detached tasks a tool that should only be used when no other tool solves the problem you’re trying to solve.

The basics of structured concurrency in Swift explained

Swift Concurrency heavily relies on a concept called Structured Concurrency to describe the relationship between parent and child tasks. It finds its basis in the fork join model, a model that dates back to the sixties.

In this post, I will explain what structured concurrency means, and how it plays an important role in Swift Concurrency.

Note that this post is not an introduction to using the async and await keywords in Swift. I have lots of posts on the topic of Swift Concurrency that you can find right here. These posts all help you learn specific bits and pieces of modern Concurrency in Swift. For example, how you can use task groups, actors, async sequences, and more.

If you're looking for a full introduction to Swift Concurrency, I recommend you check out my book. In my book I go in depth on all the important parts of Swift Concurrency that you need to know in order to make the most out of modern concurrency features in Swift.

Anyway, back to structured concurrency. We’ll start by looking at the concept from a high level before looking at a few examples of Swift code that illustrates the concepts of structured concurrency nicely.

Understanding the concept of structured concurrency

The concepts behind Swift’s structured concurrency are neither new nor unique. Sure, Swift implements some things in its own unique way but the core idea of structured concurrency can be dated back all the way to the sixties in the form of the fork join model.

The fork join model describes how a program that performs multiple pieces of work in parallel (fork) will wait for all work to complete, receiving the results from each piece of work (join) before continuing to the next piece of work.

We can visualize the fork join model as follows:

Fork Join Model example

In the graphic above you can see that the first task kicks off three other tasks. One of these tasks kicks off some sub-tasks of its own. The original task cannot complete until it has received the results from each of the tasks it spawned. The same applies to the sub-task that kicks off its own sub-tasks.

You can see that the two purple colored tasks must complete before the task labelled as Task 2 can complete. Once Task 2 is completed we can proceed with allowing Task 1 to complete.

Swift Concurrency is heavily based on this model but it expands on some of the details a little bit.

For example, the fork join model does not formally describe a way for a program to ensure correct execution at runtime while Swift does provide these kinds of runtime checks. Swift also provides a detailed description of how error propagation works in a structured concurrency setting.

When any of the child tasks spawned in structured concurrency fails with an error, the parent task can decide to handle that error and allow other child tasks to resume and complete. Alternatively, a parent task can decide to cancel all child tasks and make the error the joined result of all child tasks.

In either scenario, the parent task cannot complete while the child tasks are still running. If there’s one thing you should understand about structured concurrency that would be it. Structured concurrency’s main focus is describing how parent and child tasks relate to each other, and how a parent task cannot complete when one or more of its child tasks are still running.
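The error propagation described above can be sketched with a throwing task group (loadItem here is a hypothetical stand-in for real work):

```swift
// Hypothetical loader; it simply echoes the id after a short pause.
func loadItem(_ id: String) async throws -> String {
    try await Task.sleep(nanoseconds: 10_000_000)
    return "item-\(id)"
}

func loadAll() async throws -> [String] {
    try await withThrowingTaskGroup(of: String.self) { group in
        for id in ["a", "b", "c"] {
            group.addTask { try await loadItem(id) }
        }

        // Iterating the group rethrows the first child error it encounters;
        // at that point the remaining children are cancelled and implicitly
        // awaited before the error becomes the group's result.
        var results = [String]()
        for try await item in group {
            results.append(item)
        }
        return results
    }
}
```

Either way, loadAll() cannot return (normally or by throwing) until every child task has completed.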

So what does that translate to when we explore structured concurrency in Swift specifically? Let’s find out!

Structured concurrency in action

In its simplest and most basic form structured concurrency in Swift means that you start a task, perform some work, await some async calls, and eventually your task completes. This could look as follows:

func parseFiles() async throws -> [ParsedFile] {
  var parsedFiles = [ParsedFile]()

  for file in list {
    let result = try await parseFile(file)
    parsedFiles.append(result)
  }

  return parsedFiles
}

The execution for our function above is linear. We iterate over a list of files, we await an asynchronous function for each file in the list, and we return a list of parsed files. We only work on a single file at a time and at no point does this function fork out into any parallel work.

We know that at some point our parseFiles() function was called as part of a Task. This task could be part of a group of child tasks, it could be a task that was created with SwiftUI’s task view modifier, or it could be a task that was created with Task.detached. We really don’t know, and it doesn’t really matter because regardless of the task that this function was called from, this function will always run the same.

However, we’re not seeing the power of structured concurrency in this example. The real power of structured concurrency comes when we introduce child tasks into the mix. Two ways to create child tasks in Swift Concurrency are to leverage async let or TaskGroup. I have detailed posts on both of these topics so I won’t go in depth on them in this post:

Since async let has the lighter syntax of the two, I will illustrate structured concurrency using async let rather than through a TaskGroup. Note that both techniques spawn child tasks, which means that they both adhere to the rules of structured concurrency even though there are differences in the problems that TaskGroup and async let solve.

Imagine that we’d like to implement some code that follows the fork join model graphic that I showed you earlier:

Fork Join Model example

We could write a function that spawns three child tasks, and then one of the three child tasks spawns two child tasks of its own.

The following code shows what that looks like with async let. Note that I’ve omitted various details like the implementation of certain classes or functions. The details of these are not relevant for this example. The key information you’re looking for is how we can kick off lots of work while Swift makes sure that all work we kick off is completed before we return from our buildDataStructure function.

func buildDataStructure() async -> DataStructure {
  async let configurationsTask = loadConfigurations()
  async let restoredStateTask = loadState()
  async let userDataTask = fetchUserData()

  let config = await configurationsTask
  let state = await restoredStateTask
  let data = await userDataTask

  return DataStructure(config, state, data)
}

func loadConfigurations() async -> [Configuration] {
  async let localConfigTask = configProvider.local()
  async let remoteConfigTask = configProvider.remote()

  let (localConfig, remoteConfig) = await (localConfigTask, remoteConfigTask)

  return localConfig.apply(remoteConfig)
}

The code above implements the same structure that is outlined in the fork join sample image.

We do everything exactly as we’re supposed to. All tasks we create with async let are awaited before the function that we created them in returns. But what happens when we forget to await one of these tasks?

For example, what if we write the following code?

func buildDataStructure() async -> DataStructure? {
  async let configurationsTask = loadConfigurations()
  async let restoredStateTask = loadState()
  async let userDataTask = fetchUserData()

  return nil
}

The code above will compile perfectly fine. You would see a warning about some unused properties but all in all your code will compile and it will run just fine.

The three async let properties that are created each represent a child task and as you know each child task must complete before their parent task can complete. In this case, that guarantee will be made by the buildDataStructure function. As soon as that function returns it will cancel any running child tasks. Each child task must then wrap up what they’re doing and honor this request for cancellation. Swift will never abruptly stop executing a task due to cancellation; cancellation is always cooperative in Swift.

Because cancellation is cooperative Swift will not only cancel the running child tasks, it will also implicitly await them. In other words, because we don’t know whether cancellation will be honored immediately, the parent task will implicitly await the child tasks to make sure that all child tasks are completed before resuming.

How unstructured and detached tasks relate to structured concurrency

In addition to structured concurrency, we have unstructured concurrency. Unstructured concurrency allows us to create tasks that exist as stand-alone islands of concurrency. They do not have a parent task, and they can outlive the task that they were created from; hence the term unstructured. When you create an unstructured task, certain attributes from the source task are carried over. For example, if your source task is main actor bound then any unstructured tasks created from that task will also be main actor bound.

Similarly if you create an unstructured task from a task that has task local values, these values are inherited by your unstructured task. The same is true for task priorities.

However, because an unstructured task can outlive the task that it got created from, an unstructured task will not be cancelled or completed when the source task is cancelled or completed.

An unstructured task is created using the default Task initializer:

func spawnUnstructured() async {
  Task {
    print("this is printed from an unstructured task")
  }
}

We can also create detached tasks. These tasks are both unstructured and completely detached from the context that they were created from. They do not inherit any task local values, they do not inherit the current actor, and they do not inherit priority.

I cover detached and unstructured tasks more in depth right here.

In Summary

In this post, you learned what structured concurrency means in Swift, and what its primary rule is. You saw that structured concurrency is based on a model called the fork join model which describes how tasks can spawn other tasks that run in parallel and how all spawned tasks must complete before the parent task can complete.

This model is really powerful and it provides a lot of clarity and safety around the way Swift Concurrency deals with parent / child tasks that are created with either a task group or an async let.

We explored structured concurrency in action by writing a function that leveraged various async let properties to spawn child tasks, and you learned that Swift Concurrency provides runtime guarantees around structured concurrency by implicitly awaiting any running child tasks before our parent task can complete. In our example this meant awaiting all async let properties before returning from our function.

You also learned that we can create unstructured or detached tasks with Task.init and Task.detached. I explained that both unstructured and detached tasks are never child tasks of the context that they were created in, but that unstructured tasks do inherit some context from the context they were created in.

All in all the most important thing to understand about structured concurrency is that it provides clear and rigid rules around the relationship between parent and child tasks. In particular it describes how all child tasks must complete before a parent task can complete.

Setting up a simple local web socket server

Every once in a while I find myself writing about or experimenting with web sockets. As an iOS developer, I’m not terribly familiar with setting up and programming servers that leverage web sockets beyond some toy projects in college.

Regardless, I figured that since I have some posts that cover web sockets on my blog, I should show you how I set up the socket servers that I use in those posts. Before you read on, I’m going to need you to promise me you won’t take the code I’m about to show you to a production environment…

You promise? Good.

I generally use the [WebSocket (or ws) package from npm](https://www.npmjs.com/package/ws) along with node.js. I chose these technologies because that’s what was around when I first learned about web sockets, and because it works well for my needs. If you prefer different tools and languages that’s perfectly fine of course; I just won’t cover them on here.

Once you have node installed on your machine (go here if you haven’t already installed node.js) you can create a new folder somewhere on your machine, and navigate to that folder in your terminal. Then type npm install ws to install the ws package in your current directory (so make sure you’re in your project folder when typing this!).

After that, create a file called index.mjs (that’s not a typo; it’s a fancy new JavaScript module extension) and add the following contents to it:

import WebSocket, { WebSocketServer } from 'ws';

const wss = new WebSocketServer({port: 8080});

Usually when I’m experimenting I like to do something simple like:

  • For a new connection, start listening for incoming messages and do something in response; for example, close the connection.
  • Send a “connection received” message
  • Send a new message every second
  • When the received connection is closed, stop sending messages over the socket (nobody is listening anymore)

The code to do this looks a bit as follows:

const wss = new WebSocketServer({port: 8080});

wss.on('connection', function connection(ws) {
    ws.on('message', function message(data) {
        console.log('received %s', data);
        ws.close();
    });

    ws.send('connection received');

    var t = setInterval(function() {
        console.log("sending message");
        ws.send('sending message!');
    }, 1000);

    ws.on('close', function close() {
        console.log("received close");
        clearInterval(t);
    });
});

Again, I’m not a professional JavaScript developer so there might be much nicer ways to do the above, but this is what works for the purposes I tend to use web sockets for, which are always purely experimental.

For a full overview of web socket events that you might want to add handlers for, I highly recommend you take a look at the docs for ws.

Iterating over web socket messages with async / await in Swift

In iOS 13, we gained the ability to easily send and receive data using web sockets through URLSession. With async/await, we gained the ability to fetch data from servers using the await keyword and we can iterate over asynchronous sequences using async for loops.

We can even read data from a URL one line at a time by calling the lines property on URL:

let url = URL(string: "https://donnywals.com")!

for try await line in url.lines {
    // use line
}

While this is really cool and allows us to build apps that ingest data in real time if the server supports streaming bodies, we cannot use the lines property to set up a web socket connection and listen for incoming messages and potentially send messages over the same connection too.

In this post, you will learn everything you need to know about building your own mechanism to conveniently iterate over messages from a web socket asynchronously. We will leverage some existing functionality from URLSessionWebSocketTask and AsyncThrowingStream to build our own AsyncSequence that conveniently wraps our URLSessionWebSocketTask.

Note that the resulting code has only had relatively limited testing done so I cannot guarantee that the provided solution will be 100% correct for everything you throw at it. If you find any issues with the final code, feel free to contact me. Bonus points if you’re able to provide some ideas for a potential fix.

Using a web socket without async / await

Before we get started, let's quickly review how to use a web socket without async/await. The code details are outlined in this post. Be sure to read it if you want to learn more about using web sockets in your apps.

let url = URL(string: "ws://127.0.0.1:8080")!
let socketConnection = URLSession.shared.webSocketTask(with: url)
socketConnection.resume()

func setReceiveHandler() {
    socketConnection.receive { result in
        defer { self.setReceiveHandler() }

        do {
            let message = try result.get()
            switch message {
            case let .string(string):
                print(string)
            case let .data(data):
                print(data)
            @unknown default:
                print("unknown message received")
            }
        } catch {
            // handle the error
            print(error)
        }
    }
}

setReceiveHandler()

Notice how, to receive messages from the socket, I must call receive with a completion handler. This method only allows me to receive a single incoming message, so I must re-set my handler after receiving a message to automatically begin listening for the next message.

This is a great example of a situation where an async for loop such as for try await message in socketConnection would make a lot of sense. Unfortunately, this is not possible out of the box. However, URLSessionWebSocketTask provides some form of support for async / await so we’re not entirely out of luck.

A basic implementation of web sockets with async / await

While URLSessionWebSocketTask doesn’t expose an AsyncSequence that emits incoming messages out of the box, it does come with an async version of the receive method you saw earlier.

This allows us to rewrite the example above as an async method as follows:

func setReceiveHandler() async {
    do {
        let message = try await socketConnection.receive()

        switch message {
        case let .string(string):
          print(string)
        case let .data(data):
          print(data)
        @unknown default:
          print("unknown message received")
        }
    } catch {
        print(error)
    }

    await setReceiveHandler()
}

This code works just fine, except we don’t really have a means to stop the recursion here. The code you saw earlier actually has the exact same issue; there’s no condition to stop listening for web socket messages even if the web socket connection has already been closed.

We could improve our code by only recursing if:

  1. We didn’t encounter any errors
  2. The socket connection is still active

This would look a bit as follows:

func setReceiveHandler() async {
    guard socketConnection.closeCode == .invalid else {
        return
    }

    do {
        let message = try await socketConnection.receive()

        switch message {
        case let .string(string):
          print(string)
        case let .data(data):
          print(data)
        @unknown default:
          print("unknown message received")
        }

        await setReceiveHandler()
    } catch {
        print(error)
    }
}

An open web socket’s close code is always set to .invalid to signal that the connection has not (yet) been closed. We can leverage this to check that our connection is still active before waiting for the next message to be received.

This is much better already because we handle closed sockets and failures much more nicely now, but we could improve the readability of this code a tiny bit by leveraging a while loop instead of recursively calling the setReceiveHandler function:

func setReceiveHandler() async {
    var isActive = true

    while isActive && socketConnection.closeCode == .invalid {
        do {
            let message = try await socketConnection.receive()

            switch message {
            case let .string(string):
              print(string)
            case let .data(data):
              print(data)
            @unknown default:
              print("unknown message received")
            }
        } catch {
            print(error)
            isActive = false
        }
    }
}

To me, this version of the code is slightly easier to read but that might not be the case for you. It’s functionally equivalent so you can choose to use whichever option suits you best.

While this code works, I’m not quite happy with where we’ve landed right now. There’s a lot of logic in this function and I would prefer to separate handling the incoming values from the calls to socketConnection.receive() somehow. Ideally, I should be able to write the following:

do {
    for try await message in socketConnection {
        switch message {
        case let .string(string):
            print(string)
        case let .data(data):
            print(data)
        @unknown default:
            print("unknown message received")
        }
    }
} catch {
    // handle error
}

This is much, much nicer from a call-site perspective and it would allow us to put the ugly bits elsewhere.

To do this, we can leverage the power of AsyncStream which allows us to build a custom async sequence of values.

Using AsyncStream to emit web socket messages

Given our end goal, there are a few ways for us to get where we want to be. The easiest way would be to write a function in an extension on URLSessionWebSocketTask that would encapsulate the while loop you saw earlier. This implementation would look as follows:

typealias WebSocketStream = AsyncThrowingStream<URLSessionWebSocketTask.Message, Error>

public extension URLSessionWebSocketTask {    
    var stream: WebSocketStream {
        return WebSocketStream { continuation in
            Task {
                var isAlive = true

                while isAlive && closeCode == .invalid {
                    do {
                        let value = try await receive()
                        continuation.yield(value)
                    } catch {
                        continuation.finish(throwing: error)
                        isAlive = false
                    }
                }
            }
        }
    }
}

To make the code a little bit easier to read, I’ve defined a typealias for my AsyncThrowingStream so we don’t have to look at the same long type signature all over the place.

The code above creates an instance of AsyncThrowingStream that asynchronously awaits new values from the web socket as long as the web socket is considered active and hasn't been closed. To emit incoming messages and potential errors, the continuation's yield and finish methods are used. These methods will either emit a new value (yield) or end the stream of values with an error (finish).

This code works great in many situations, but there is one issue. If we decide to close the web socket connection from the app's side by calling cancel(with:reason:) on our socketConnection, our WebSocketStream does not end. Instead, it will be stuck waiting for messages, and the call site will be stuck too.

Task {
    try await Task.sleep(for: .seconds(5))
    try await socketConnection.cancel(with: .goingAway, reason: nil)
}

Task {    
    do {
        for try await message in socketConnection.stream {
            // handle incoming messages
        }
    } catch {
        // handle error
    }

    print("this would never be printed")
}

If everything works as expected, our web socket connection will close after five seconds. At that point, our for loop should end and our print statement should execute, since the asynchronous stream is no longer active. Unfortunately, this is not the case, so we need to find a better way to model our stream.

URLSessionWebSocketTask does not provide a way for us to detect cancellation. So, I have found that it is best to use an object that wraps the URLSessionWebSocketTask, and to cancel the task through that object. This allows us to both end the async stream we are providing to callers and close the web socket connection with one method call.

Here’s what that object looks like:

class SocketStream: AsyncSequence {
    typealias AsyncIterator = WebSocketStream.Iterator
    typealias Element = URLSessionWebSocketTask.Message

    private var continuation: WebSocketStream.Continuation?
    private let task: URLSessionWebSocketTask

    private lazy var stream: WebSocketStream = {
        return WebSocketStream { continuation in
            self.continuation = continuation

            Task {
                var isAlive = true

                while isAlive && task.closeCode == .invalid {
                    do {
                        let value = try await task.receive()
                        continuation.yield(value)
                    } catch {
                        continuation.finish(throwing: error)
                        isAlive = false
                    }
                }
            }
        }
    }()

    init(task: URLSessionWebSocketTask) {
        self.task = task
        task.resume()
    }

    deinit {
        continuation?.finish()
    }

    func makeAsyncIterator() -> AsyncIterator {
        return stream.makeAsyncIterator()
    }

    func cancel() async throws {
        task.cancel(with: .goingAway, reason: nil)
        continuation?.finish()
    }
}

There’s a bunch of code here, but it’s not too bad. The first few lines are all about setting up some type aliases and properties for convenience. The lazy var stream is essentially the exact same code that you’ve already seen in the URLSessionWebSocketTask extension from before.

When our SocketStream's deinit is called we make sure that we end our stream. There’s also a cancel method that closes the socket connection as well as the stream. Because SocketStream conforms to AsyncSequence we must provide an Iterator object that’s used when we try to iterate over our SocketStreams. We simply ask our internal stream object to make an iterator and use that as our return value.

Using the code above looks as follows:

let url = URL(string: "ws://127.0.0.1:8080")!
let socketConnection = URLSession.shared.webSocketTask(with: url)
let stream = SocketStream(task: socketConnection)

Task {  
    do {
        for try await message in stream {
            // handle incoming messages
        }
    } catch {
        // handle error
    }

    print("this will be printed once the stream ends")
}

To cancel our stream after 5 seconds just like before, you can run the following task in parallel with our iterating task:

Task {
    try await Task.sleep(for: .seconds(5))
    try await stream.cancel()
}

Task {
    // iterate...
}

While this is pretty cool, we do have a bit of an issue here on older iOS versions because of the following bit of code. By older I mean pre-iOS 17.0.

If you're targeting iOS 17 or newer, you can ignore this next part.

private lazy var stream: WebSocketStream = {
    return WebSocketStream { continuation in
        self.continuation = continuation

        Task {
            var isAlive = true

            while isAlive && task.closeCode == .invalid {
                do {
                    let value = try await task.receive()
                    continuation.yield(value)
                } catch {
                    continuation.finish(throwing: error)
                    isAlive = false
                }
            }
        }
    }
}()

The task that we run our while loop in won’t end unless we end our stream from within our catch block. If we manually close the web socket connection using the cancel method we wrote earlier, the call to receive() will receive neither an error nor a value, which means that it will be stuck forever. This was fixed in iOS 17 but is still a problem in older iOS versions.

The most reliable way to fix this is to go back to the callback based version of receive to drive your async stream:

private lazy var stream: WebSocketStream = {
    return WebSocketStream { continuation in
        self.continuation = continuation
        waitForNextValue()
    }
}()

private func waitForNextValue() {
    guard task.closeCode == .invalid else {
        continuation?.finish()
        return
    }

    task.receive(completionHandler: { [weak self] result in
        guard let continuation = self?.continuation else {
            return
        }

        do {
            let message = try result.get()
            continuation.yield(message)
            self?.waitForNextValue()
        } catch {
            continuation.finish(throwing: error)
        }
    })
}

With this approach we don’t have any lingering tasks, and our call site is as clean and concise as ever; we’ve only changed some of our internal logic.

In Summary

Swift Concurrency provides many useful features for writing better code, and Apple quickly adopted async / await for existing APIs. However, some APIs that would be useful are missing, such as iterating over web socket messages.

In this post, you learned how to use async streams to create an async sequence that emits web socket messages. You first saw a fully async / await version that was neat, but had memory and task lifecycle issues. Then, you saw a version that combines a callback-based approach with the async stream.

The result is an easy way to iterate over incoming web socket messages with async / await. If you have any questions, comments, or improvements for this post, please don't hesitate to reach out to me on Twitter.

Understanding Swift Concurrency’s AsyncStream and AsyncThrowingStream

In an earlier post, I wrote about different ways that you can bridge your existing asynchronous code over to Swift’s new Concurrency system that leverages async / await. The mechanisms shown there work great for code where your code produces a single result that can be modeled as a single value.

However, in some cases this isn’t possible because your existing code will provide multiple values over time. This is the case for things like download progress, the user’s current location, and other similar situations.

Generally speaking, these kinds of patterns would be modeled as AsyncSequence objects that you can iterate over using an asynchronous for loop. A basic example of this would be the lines property on URL:

let url = URL(string: "https://donnywals.com")!

for try await line in url.lines {
    // use line
}

But what’s the best way to build your own async sequences? Implementing the AsyncSequence protocol and building your own AsyncIterator sounds tedious and error-prone. Luckily, there’s no reason for you to be doing any of that.

In this post, I will show you how you can leverage Swift’s AsyncStream to build custom async sequences that produce values whenever you need them to.

Producing a simple async stream

An async stream can be produced in various ways. The easiest way to create an async stream is to use the AsyncStream(unfolding:) initializer. Its usage looks like this:

let stream = AsyncStream(unfolding: {
    return Int.random(in: 0..<Int.max)
})

Of course, this example isn’t particularly useful on its own, but it does show how simple the concept of AsyncStream(unfolding:) is. We use this version of AsyncStream whenever we can produce and return values for our async stream. The closure that’s passed to unfolding is async, which means that we can await asynchronous operations from within it. Your unfolding closure will be called every time you’re expected to begin producing a value for your stream. In practice this means that your closure is called, you perform some work, you return a value, and then your closure is called again. This repeats until the for loop is cancelled, the task that contains your async for loop is cancelled, or you return nil from your unfolding closure.
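
To make this concrete, here’s a minimal sketch of a finite unfolding stream. The counter variable and the cut-off at three values are made up for this example; the point is that returning nil ends the stream:

```swift
var counter = 0

// The unfolding closure is called once per element; returning nil
// after three values ends the stream.
let countStream = AsyncStream(unfolding: {
    counter += 1
    return counter <= 3 ? counter : nil
})

var received: [Int] = []
for await number in countStream {
    received.append(number)
}

print(received) // [1, 2, 3]
```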

The AsyncStream(unfolding:) way to produce a stream of values is quite convenient, and it’s particularly useful in situations where:

  • You want to perform async work that needs to be awaited to produce elements
  • You have a need to handle back pressure when bridging an API you own

When you’re bridging an existing API that’s based on delegates, or an API that leverages callbacks to communicate results, you probably won’t be able to use AsyncStream(unfolding:). While it’s the simplest and least error-prone way to build an async stream, I’ve also found it to be the most limiting, and it often doesn’t fit well with bridging existing code over to Swift Concurrency.

More flexibility can be found in the continuation based API for AsyncStream.

Producing an async stream with a continuation

When an asynchronous closure doesn’t quite fit your use case for creating your own async stream, a continuation based approach might be a much better solution for you. With a continuation you have the ability to construct an async stream object and send values over the async stream whenever values become available.

We can do this by creating an AsyncStream using the AsyncStream(build:) initializer:

let stream2 = AsyncStream { cont in
    cont.yield(Int.random(in: 0..<Int.max))
}

The example above creates an AsyncStream that produces a single integer value. This value is produced by calling yield on the continuation. Every time we have a value to send, we should call yield on the continuation with the value that we want to send.

If we’re building an AsyncStream that wraps a delegate based API, we can hold on to our continuation in the delegate object and call yield whenever a relevant delegate method is called.

For example, we could call continuation.yield from within a CLLocationManagerDelegate whenever a new user location is made available to us:

class AsyncLocationStream: NSObject, CLLocationManagerDelegate {
    lazy var stream: AsyncStream<CLLocation> = {
        AsyncStream { (continuation: AsyncStream<CLLocation>.Continuation) -> Void in
            self.continuation = continuation
        }
    }()
    var continuation: AsyncStream<CLLocation>.Continuation?

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {

        for location in locations {
            continuation?.yield(location)
        }
    }
}

The example above is a very naive starting point for creating an async stream of user locations. There are a couple of things we don’t fully take into account, such as starting and stopping location observation, or asking for location permissions.

At its core though, this example is a great starting point for experimenting with async streams.

Note that this approach will not wait for consumers of your async stream to consume a value fully before you can send your next value down the stream. Instead, all values that you send will be buffered in your async stream by default which may or may not be what you want.

In practical terms this means that when you send values down your stream faster than the consuming for loop can process these values, you will end up with a buffer filled with values that will be delivered to the consuming for loop with a delay. This might be exactly what you need, but if the values you send are somewhat time sensitive and ephemeral it would potentially make sense to drop values if the consuming for loop isn’t ready to receive values.

We could decide that we never want to hold on to more than 1 location and that we only want to buffer the last known location to avoid processing stale data. We can do this by setting a buffering policy on our async stream:

lazy var stream: AsyncStream<CLLocation> = {
    AsyncStream(bufferingPolicy: .bufferingNewest(1)) { (continuation: AsyncStream<CLLocation>.Continuation) -> Void in
        self.continuation = continuation
    }
}()

This code passes a bufferingPolicy of .bufferingNewest(1) to our AsyncStream. This means that we will only buffer a single value if the consuming for loop isn’t processing items fast enough, and we will discard older values in favor of keeping only the latest location.
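
To see the effect of a buffering policy in isolation, here’s a small sketch that doesn’t involve Core Location at all. Three values are yielded before anyone starts iterating, and because the buffer only keeps the newest value, the consumer only ever sees the last one:

```swift
// bufferingNewest(1) keeps only the most recent un-consumed value.
let buffered = AsyncStream<Int>(bufferingPolicy: .bufferingNewest(1)) { continuation in
    // These yields happen before any consumer is iterating, so the
    // first two values are dropped from the single-slot buffer.
    continuation.yield(1)
    continuation.yield(2)
    continuation.yield(3)
    continuation.finish()
}

var received: [Int] = []
for await value in buffered {
    received.append(value)
}

print(received) // [3]
```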

When your stream comes to a natural end, you can call finish() on your continuation to end the stream of values.

If your stream might fail with an error, you can also choose to create an AsyncThrowingStream instead of an AsyncStream. The key difference is that consumers of a throwing stream must await new values using try await instead of just await. To make your stream throw an error you can either call finish(throwing:) on your continuation, or you can call yield(with:) using a Result object that represents a failure.
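
As a quick sketch, here’s what a throwing stream might look like; the StreamError type is made up for this example:

```swift
struct StreamError: Error {}

let throwingStream = AsyncThrowingStream<Int, Error> { continuation in
    continuation.yield(1)
    continuation.yield(2)
    // End the stream with an error; the consuming loop receives it
    // as a thrown error after the buffered values are delivered.
    continuation.finish(throwing: StreamError())
}

var values = [Int]()
do {
    for try await value in throwingStream {
        values.append(value)
    }
} catch {
    print("Stream ended with error: \(error)")
}

print(values) // [1, 2]
```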

While the basics of building an AsyncStream aren’t particularly complex, we do need to think carefully about how we manage the lifecycles of the things we create. Especially because we’re not supposed to let our continuations outlive our streams, which is a very easy mistake to make when you’re bridging existing delegate-based code.

Managing your stream’s lifecycle

There are essentially two ways for an async stream to end. First, the stream might naturally end producing values because no further values can be produced. You will call finish on your continuation and you can provide any cleanup that you need to do at the same time. For example, you could set the continuation that you’re holding on to to nil to make sure you can’t accidentally use it anymore.

Alternatively, your stream can end because the task that’s used to run your async stream is cancelled. Consider the following:

let locations = AsyncLocationStream()

let task = Task {
    for await location in locations.stream {
        print(location)
    }
}

task.cancel()

When something like the above happens, we will want to make sure that we don’t call yield on our continuation anymore unless we start a new stream with a new, active, continuation.

We can detect and respond to the end of our stream by setting an onTermination handler on our continuation:

self.continuation?.onTermination = { result in
    print(result)
    self.continuation = nil
}

Ideally we set this handler immediately when we first create our async stream.

In addition to the stream being cancelled or otherwise going out of scope, we could break out of our loop, which will eventually cause our task to finish. Generally speaking, this alone will not end your async stream, so if you want breaking out of your loop to end your stream, you will need to take this into account yourself.

Personally, I’ve found that the easiest way to make sure you do some cleanup is to have some method on your stream producing object to cancel the stream instead of just breaking out of an async for loop. That way, you can perform cleanup and not have a stream that’s sending values even though nobody is listening.
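
To sketch that idea without depending on Core Location, here’s a hypothetical NumberStreamer object; the names send(_:) and stopStreaming() are made up for this example. The object exposes an explicit method to end its stream so consumers don’t have to just break out of the loop:

```swift
final class NumberStreamer {
    private var continuation: AsyncStream<Int>.Continuation?

    lazy var stream: AsyncStream<Int> = {
        AsyncStream { continuation in
            self.continuation = continuation
            // Clean up when the stream ends for any reason, including
            // the consuming task being cancelled.
            continuation.onTermination = { [weak self] _ in
                self?.continuation = nil
            }
        }
    }()

    func send(_ value: Int) {
        continuation?.yield(value)
    }

    // Explicit cancellation point; finishing the stream also triggers
    // the onTermination cleanup above.
    func stopStreaming() {
        continuation?.finish()
    }
}
```

A consumer would iterate over stream and call stopStreaming() once it’s no longer interested, which finishes the stream and performs the cleanup in one place.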

It’s also important to bear in mind that the pattern I showed earlier will only work if one consumer uses your location stream object. You cannot have multiple for loops iterating over a single stream in Swift Concurrency because by default, async sequences lack the ability to share their iterations with multiple loops.

If you're interested in seeing a practical application of async streams to bridge existing code into Swift Concurrency, take a look at this post where I use AsyncStream to iterate over incoming web socket messages.

In Summary

In this post, you learned a lot about async streams and how you can produce your own async sequences. First, you saw the unfolding approach of building an async stream and you learned that this approach is relatively straightforward but might not be very useful for people that need to bridge existing delegate or callback based APIs.

After exploring unfolding for a bit, we took a look at the build closure for async streams. You learned that this approach leverages a continuation object that can be called to produce values if and when needed.

You saw a very rudimentary example of an object that would bridge a CLLocationManager into async / await, and you learned a bit about correctly managing your continuations to prevent sending values into an already completed stream.

If you have any questions or comments for me about this post, please feel free to reach out on Twitter or on Mastodon.

Providing a default value for a SwiftUI Binding

Sometimes in SwiftUI apps I’ll find that I have a model with an optional value that I’d like to pass to a view that requires a non-optional value. This is especially the case when you’re using Core Data in your SwiftUI apps and use auto-generated models.

Consider the following example:

class SearchService: ObservableObject {
  @Published var results: [SearchResult] = []
  @Published var query: String?
}

Let me start by acknowledging that yes, this object can be written with a query: String = "" instead of an optional String?. Unfortunately, we don’t always own or control the models and objects that we’re working with. In these situations we might be dealing with optionals where we’d rather have our values be non-optional. Again, this can be especially true when using generated code (like when you’re using Core Data).

Now let’s consider using the model above in the following view:

struct MyView: View {
  @ObservedObject var searchService: SearchService

  var body: some View {
      TextField("Query", text: $searchService.query)
  }
}

This code will not compile because we need to pass a binding to a non-optional string to our text field. The compiler will show the following error:

Cannot convert value of type Binding<String?> to expected argument type Binding<String>

One of the ways to fix this is to provide a custom instance of Binding that can provide a default value in case query is nil, making it a Binding<String> instead of a Binding<String?>.

Defining a custom binding

A SwiftUI Binding instance is nothing more than a pair of get and set closures that are called whenever somebody reads the current value of the Binding or assigns a new value to it.

Here’s how we can create a custom binding:

Binding(get: {
  return "Hello, world"
}, set: { _ in
  // we can update some external or captured state here
})

The example above essentially recreates Binding's .constant, which is a binding that will always provide the same pre-determined value.

If we were to write a custom Binding that allows us to use $searchService.query to drive our TextField it would look a bit like this:

struct MyView: View {
  @ObservedObject var searchService: SearchService

  var customBinding: Binding<String> {
    return Binding(get: {
      return searchService.query ?? ""
    }, set: { newValue in
      searchService.query = newValue
    })
  }

  var body: some View {
    TextField("Query", text: customBinding)
  }
}

This compiles, and it works well, but if we have several occurrences of this situation in our codebase, it would be nice if we had a better way of writing this. For example, it would be neat if we could write the following code:

struct MyView: View {
  @ObservedObject var searchService: SearchService

  var body: some View {
    TextField("Query", text: $searchService.query.withDefault(""))
  }
}

We can achieve this by adding an extension on Binding with a method that’s available on existing bindings to optional values:

extension Binding {
  func withDefault<T>(_ defaultValue: T) -> Binding<T> where Value == Optional<T> {
    return Binding<T>(get: {
      self.wrappedValue ?? defaultValue
    }, set: { newValue in
      self.wrappedValue = newValue
    })
  }
}

The withDefault(_:) function we wrote here can be called on Binding instances, and in essence it does the exact same thing as the original Binding already did: it reads and writes the original binding’s wrappedValue. However, if the source Binding holds a nil value, we provide our default instead.

What’s nice is that we can now create bindings to optional values with a pretty straightforward API, and we can use it for any kind of optional data.