Setting up a delivery pipeline for your agentic iOS projects

A while back, my app crashed mid-workout at the gym. I uploaded the crash report, gave my AI agent some context, and went back to my set. By the time I finished, there was a pull request waiting for me. I reviewed it, merged it, and had a fixed TestFlight build on my device shortly after — without ever opening Xcode.

That kind of turnaround is only possible because of the delivery pipeline I've built around agentic engineering. And that's what this post is about. Know that this post doesn't introduce anything revolutionary in terms of how I work. But this is a setup that works well for me, and I think in this day and age, it's important for folks to get some insights into what others are doing instead of seeing yet another "I SHIP TONS OF AI CODE" post.

I'm hoping to be a little more balanced than that...

Agentic engineering (aka vibe coding) is becoming more popular by the day. More and more developers are letting AI agents handle large parts of their iOS projects, and honestly, I get it. It's incredibly productive. But it comes with a real risk: when you hand off the coding to an agent, quality and architecture can degrade fast if you don't have the right guardrails in place.

In this post, I want to walk you through the pipeline I use to make sure that even though I do agentic engineering, my product quality stays solid (yes, it involves me reading the code and sometimes tweaking by hand).

We'll cover setting up your local environment, why planning mode matters, automated PR reviews with Cursor's BugBot, running CI builds and tests with Bitrise, and the magic of having TestFlight builds land on your device almost immediately after merging.

If you're interested in my broader thoughts on balancing AI and quality, you might enjoy my earlier post on the importance of human touch in AI-driven development.

Setting up your local environment for agentic engineering

Everything starts locally. Before you even think about CI/CD or automated reviews, you need to make sure your AI agent knows how to write code the way you want it written. The most important tool for this is an agents.md file (or your editor's equivalent, like Cursor rules).

Think of agents.md as a coding standards document for your agent. It tells the agent what language features to prefer, how to structure code, and what conventions to follow. Here's an example of what mine looks like for an iOS project:

## Swift code conventions

- Use 2-space indentation
- Prefer SwiftUI over UIKit unless explicitly targeting UIKit
- Target iOS 26 and Swift 6.2
- Use async/await over completion handlers
- Prefer structured concurrency over unstructured tasks

## Architecture

- Use MVVM with Observable for view models
- Keep views thin; move logic into view models or dedicated services
- Never put networking code directly in a view

## Testing

- Write tests for all new logic using Swift Testing
- Run tests before creating a pull request
- Prefer testing behavior over implementation details

Add this file to the root of your Xcode project, and Xcode 26.3's agent will pick up your rules too.

This file is just a starting point. The thing is, your agents.md is a living document. Every time the agent does something you don't like, you add a rule. Every time you notice a pattern that works well, you codify it. I update mine constantly.

For example, I might notice my agent creating new networking helper classes instead of using the APIClient I already had. So I can add a rule: "Always use the existing APIClient for network requests. Never create new networking helpers." From that moment on, the agent should honor my preferences and use existing code instead of adding new code.

Beyond rules, you can also equip your agent with skills. A skill is a standalone Markdown file that teaches the agent about a specific topic in depth. Where agents.md sets broad rules and conventions, a skill usually contains detailed patterns for how to structure things like SwiftUI navigation, handle Swift Concurrency safely, or work with Core Data. Xcode 26.3 even has an MCP server (you can more or less think of MCP as a predecessor of skills) that can help agents find documentation, best practices, and more.
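To make this concrete, here's a sketch of what a navigation-focused skill file could contain. The file name, structure, and every rule in it are illustrative, not a standard format:

```markdown
# SwiftUI navigation

Use this skill when adding or changing navigation flows.

- Use NavigationStack with a typed path rather than ad-hoc
  NavigationLink destinations scattered through views.
- Model every destination as a case in a Route enum so deep links and
  programmatic navigation share one code path.
- Keep navigation state out of views; views only emit route values,
  and a router object decides what to push or present.
```

The value of a skill over an agents.md rule is depth: the agent only loads it when the topic is relevant, so you can afford to be far more detailed than in your always-on rules file.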

Your local environment is the foundation. Everything that comes after (PR reviews, CI, TestFlight) depends on the agent producing reasonable code in the first place.

Planning before building

This is the step that, in my opinion, carries a ton of value but is easy to skip.

If you use Cursor (or a similar tool), you probably have access to a planning mode. Instead of letting the agent jump straight into writing code, you ask it to make a plan first. The agent outlines what it intends to do — which files it'll change, what approach it'll take, what tradeoffs it's considering — and you review that plan before giving the green light.

The difference between "fire off a prompt and hope for the best" and "review a plan, then execute" is huge. When you review the plan, you catch bad architectural decisions before they become bad code. You can steer the agent toward the right approach without having to undo a bunch of work.

Planning also makes it more obvious when the agent has misunderstood you. If your prompt isn't targeted enough to remove all ambiguity up-front, the agent might confidently assume you meant one thing while you meant another. A funny example: you ask it to "persist this data on device" and the agent writes to UserDefaults when you meant it to create SwiftData models. You can often catch these misunderstandings in planning mode and correct the agent's trajectory.

In practice, my workflow looks like this: I describe what I want in planning mode, the agent proposes an approach, I give feedback or approve, and only then does the agent switch to implementation. Going through planning first can feel slow, but I usually find it makes the output so much better that it's 100% worth it.

For example, when I wanted to add a streaks feature to Maxine, the agent proposed creating an entirely new data model and view model from scratch. In the plan review, I noticed it was going to duplicate logic I already had in my workout history queries. I steered it toward reusing that existing data layer, and the result was cleaner and more maintainable. Without the planning step, I would have ended up with redundant code that I'd have to clean up later.

Automated PR reviews with BugBot

Once the agent has written code and I've done a quick check to review changes, I run the code on my device to make sure things look and feel right. Once I sign off, the agent can make a PR on my repo. If the agent is running in the cloud, I skip this step entirely and the agent will make a PR immediately when it thinks it's done.

This is where BugBot comes in. BugBot is part of Cursor's ecosystem and it automatically reviews your pull requests. It looks for logic issues, edge cases, and unintended changes that I might miss during a quick scan. It can even push fixes directly to the PR branch.

BugBot has been invaluable in my process because even though I do my own PR review, the whole point of agentic engineering is to let the agent handle as much as possible. My goal is to kick off a prompt, quickly eyeball the result, run it on my device, and move on. BugBot acts as an automated safety net that catches what I might not.

Let me give you two examples from Maxine. The first is about edge cases. Maxine recovers your workout if the app crashes. BugBot flagged that there was a possible condition where, if the user tapped "start workout" before the recovery completed, the app would attempt to start a Watch workout twice. Honestly, I considered this scenario nearly impossible in practice — but the code allowed it. Instead of relying on what I couldn't realistically test, BugBot added safeguards to make sure this path was handled properly. That's exactly the kind of thing I'd never catch during a quick eyeball review.

The second is about unintended changes. I once had a PR where I had left behind a few orphaned debugging properties. BugBot spotted them as "probably not part of this change" — the PR description the agent had written didn't mention them (because I did the debugging myself), and no code actually referenced these properties. BugBot removed them. Small thing, but it's the kind of cleanup that keeps your codebase tidy when you're moving fast and reviewing quickly.

Running builds and tests with Bitrise

Even though the agent runs tests locally before I ever see the code, I want a second layer of confidence. That's where CI comes in. I use Bitrise for this, but the same workflow concepts apply to Xcode Cloud, GitHub Actions, or any CI provider that can run xcodebuild.

This step is even more important for my cloud-based agents, because those don't get access to xcodebuild at all.

I have two Bitrise workflows set up for my projects, each triggered by different events.

The test workflow (runs on every PR)

The first workflow is a test-only pipeline that triggers whenever a pull request is opened or updated. The steps are minimal:

  1. Clone the repository
  2. Resolve Swift packages
  3. Run the test suite with xcodebuild test

That's it. No archiving, no signing, no uploading. The only job of this workflow is to answer one question: do the tests still pass? If something the agent wrote (or something BugBot fixed) breaks a test, I know before I merge. And I can tell an agent to go fix whatever Bitrise reported.

I set this up as a trigger on pull requests targeting my main branch. Bitrise picks up the PR automatically, runs the workflow, and reports the result back as a GitHub status check. If it's red, I don't merge.
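A minimal bitrise.yml for this could look roughly like the sketch below. The step IDs, step versions, project path, scheme, and destination are illustrative assumptions, so check them against your own Bitrise setup:

```yaml
format_version: "13"
default_step_lib_source: https://github.com/bitrise-io/bitrise-steplib.git

workflows:
  run_tests:
    steps:
    - git-clone@8: {}            # 1. Clone the repository
    - xcode-test@5:              # 2 + 3. Resolve packages, run xcodebuild test
        inputs:
        - project_path: MyApp.xcodeproj
        - scheme: MyApp
        - destination: platform=iOS Simulator,name=iPhone 16

trigger_map:
- pull_request_target_branch: main
  workflow: run_tests
```

The trigger_map entry is what makes Bitrise pick up every PR against main automatically and report the result back as a status check.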

The release workflow (runs on merge to main)

The second workflow triggers when something is pushed to main — which in practice means when a PR is merged. This one does significantly more:

  1. Clone the repository
  2. Resolve Swift packages
  3. Run the full test suite
  4. Archive the app with release signing
  5. Upload the build to App Store Connect

The test step might feel redundant since we already tested on the PR, but I like having it here as a final safety net. Merges can occasionally introduce issues (especially if multiple PRs land close together), and I'd rather catch that before uploading a broken build.

The archive and upload steps use Bitrise's built-in steps for Xcode archiving and App Store Connect deployment. You set up your signing certificates and provisioning profiles once in Bitrise's code signing tab, and from that point on, every merge produces a signed build that goes straight to TestFlight.
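Sketched as a workflow definition, the release pipeline could look something like this. Again, step IDs, versions, and names are assumptions to adapt to your own project, and the signing configuration lives in Bitrise's code signing tab rather than in this file:

```yaml
workflows:
  release:
    steps:
    - git-clone@8: {}                    # 1. Clone the repository
    - xcode-test@5:                      # 2 + 3. Resolve packages, run tests
        inputs:
        - project_path: MyApp.xcodeproj
        - scheme: MyApp
    - xcode-archive@5:                   # 4. Archive with release signing
        inputs:
        - project_path: MyApp.xcodeproj
        - scheme: MyApp
        - distribution_method: app-store
    - deploy-to-itunesconnect-application-loader@1: {}  # 5. Upload to App Store Connect

trigger_map:
- push_branch: main
  workflow: release
```

The only meaningful difference from the test workflow is the archive and upload tail, triggered by pushes to main instead of pull requests.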

Why tests matter even more with AI

Having a solid test suite is probably the most impactful thing you can do for agentic engineering. Your tests act as a contract. They tell the agent what correct behavior looks like, and they catch regressions in CI even if the agent's local run somehow missed something. Better tests mean more confidence, which means you can let the agent handle more.
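As an illustration of what "testing behavior over implementation details" looks like in Swift Testing, here's a sketch. RestTimer is a made-up stand-in for your app's logic; in a real project the type lives in your app target:

```swift
import Foundation
import Testing

// Hypothetical logic type standing in for real app code.
struct RestTimer {
  var duration: TimeInterval

  func remaining(after elapsed: TimeInterval) -> TimeInterval {
    max(0, duration - elapsed)
  }
}

@Test func restTimerNeverGoesNegative() {
  let timer = RestTimer(duration: 90)

  // We assert on observable behavior (the remaining time), not on how
  // the timer computes it internally.
  #expect(timer.remaining(after: 120) == 0)
}
```

Tests like this survive agent-driven refactors: the agent can restructure the implementation freely, and the contract still holds or fails loudly in CI.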

By the time I actually hit "merge" on a pull request, the code has been through: local tests by the agent, my own quick review, BugBot's automated review, and a green Bitrise build. That's a lot of confidence for very little manual effort.

The magic of fast TestFlight feedback

This is where everything I wrote about so far comes together. Because the release workflow uploads every merge to App Store Connect automatically, every single merge to main results in a TestFlight build — no manual intervention required. You don't open Xcode, you don't archive locally, nothing. You merge, and a few minutes later there's a new build in TestFlight. This closes the loop from "I had an idea" to "I have a build on my device" with minimal friction.

When you're testing your app in the field and you notice something you want to tweak — a layout that feels off, a label that's unclear, a flow that's clunky — you can often just tell your agent what to fix. If the change is simple enough and you're good at prompting and planning, you can have a new build on your device surprisingly quickly. Through your local planning, through the PR, through Bitrise, and onto your device via TestFlight.

Let's go back to the example from the intro of the post...

During one of my workouts with Maxine the app crashed. Right there in the gym, I pulled up Cursor, uploaded the crash report that TestFlight gave me, added some context about what I was doing in the app, and kicked off a prompt. Then I just resumed my workout.

By the time I was done, there was a PR waiting for me. The fix wasn't perfect — I had to nudge a few things — but the bulk of the work was done. I merged it, Bitrise picked it up, and I had a new TestFlight build shortly after. All while I was focused on my workout, not on debugging.

That's what happens when every piece of the pipeline is automated. The agent writes the fix, BugBot reviews it, Bitrise tests and builds it, and TestFlight delivers it. Your job is to steer, not to crank.

Summary

Agentic engineering doesn't mean giving up on quality. It means building the right guardrails so you can move fast without breaking things.

The pipeline I use looks like this: a well-maintained agents.md and AI skills set the foundation locally. Planning mode ensures the agent's approach is sound before it writes a line of code. BugBot catches issues in pull requests that I might miss. Bitrise runs tests on every PR and archives plus uploads on every merge to main. And TestFlight delivers the result to my device automatically.

Each piece reinforces the others. Without good local setup, the agent writes worse code. Without planning, it makes bad architectural decisions. Without BugBot and Bitrise, bugs slip through. Without automatic TestFlight uploads, the feedback loop is too slow to be useful.

To be clear: this pipeline doesn't catch everything. An agent can still write code that passes all tests but is architecturally questionable, and BugBot won't always flag it. You still need to review and think critically. But the combination of all these layers seriously cuts down the risk of shipping something broken — and that's the point. It's about reducing risk, not eliminating it.

If you're prototyping or just exploring an idea, you probably don't need all of this right away. But the moment you have real users depending on your app, this kind of pipeline pays for itself. Set it up once, iterate on your agents.md as you go, and you'll be able to move fast without sacrificing the quality your users expect.


The importance of human touch in AI-driven development

AI is changing how we build apps. That's not news. What might be less obvious is how this shift is forcing us to think differently about what actually matters in development. In this post, I want to share my balanced thoughts on AI-driven coding. I'd like to give you my perspective on why the flood of new apps on the store isn't as scary as it might seem, and how AI is making iteration cheaper than ever. I'd also like to explore how the human touch separates polished products from vibe-coded slop when you're driven by providing a good UX and doing proper user research.

By the end, you'll hopefully be able to make up your own mind on how AI is changing what development means to us.

AI slop and the idea glut

Over the past months, there's been a huge increase in apps being submitted to the App Store. Most of these apps are vibe-coded, built super quickly, and shipped to the store as soon as possible (and often come with a subscription).

To some, this proves that development is now easy. Anybody can convert their ideas into apps in a day. To me, it just proves that ideas were always cheap. Ideas were never the differentiator of what makes an app great. It was always execution.

Before AI, anyone could have an idea for an app. The barrier was building it. Now AI has lowered that barrier significantly, so more ideas are making it to the store. But that doesn't change the fundamental truth: a good idea poorly executed loses to a mediocre idea brilliantly executed. And let's be honest, if you've been around in development long enough you'll know that most ideas aren't really that good anyway. At least not good enough to spend weeks of time building. With AI we can execute any idea in a matter of days or less.

If you ask me, what we're seeing is a flood of apps built quickly without much thought put into the actual user experience. AI can help you ship fast, but it can't tell you whether what you're shipping is actually good. That still requires human judgment, taste, and care. And more importantly, it requires a target audience.

AI makes iteration cheap

Here's where things get interesting. If you care about your product, have a target audience, and want to make something that works incredibly well, AI becomes an invaluable tool for exploration.

You're now able to iterate faster than ever. You can look at more ideas, explore more options, try more possibilities. And you can do all of this in your actual app before you commit. The cost of experimentation has dropped to near zero.

Let me give you a concrete example from Maxine, my fitness app. I recently added a streaks view. Without AI, this would have taken me a day or two to build. With AI, I built it in about 10 minutes. That's essentially a zero-cost investment to try something new and see how it feels.

Now, I did have to make manual adjustments afterward. The AI got me 80% of the way there, but the last 20% needed human attention. And that's fine. That's the point. I ended up with a feature I really like, and it cost me a fraction of the time it would have otherwise. To be perfectly honest, this feature would have been very low on my effort vs. value list even though, once I built it, it started feeling valuable immediately.

I've been doing this constantly with Maxine's UI. Between workout sessions (I work out three times a week), I'll tweak the interface. One day I'll try a new layout, get AI to change things, then actually use it during my next workout. If something feels off, I iterate again. This rapid cycle of build-try-refine is only possible because AI has made the building part so cheap.

The code was never the hard part. The code was, in many ways, the boring part. It was the thing we had to do, the part where we spent hours articulating our thoughts and plans into actual implementation. Now AI handles that, and we can focus on what really matters: building great experiences and tweaking the details to fit our goals.

Using your own app is the differentiator

Building fast is great. But when you're building fast and you're not using the app you're building, you're not seeing what your users see. You're not living life with your app. Often that leads to a pretty average experience.

AI often won't produce much more than average. It can generate reasonable code (if you've set up good guardrails), sensible layouts, decent flows. But it doesn't know what it feels like to actually use your app in the real world. Only you (and your users) do.

Often when I try a new UI in Maxine, I create something that looks pretty in isolation. It might have gotten some feedback from peers based on images, and then I actually use it during a workout and I realize it's not quite right. Maybe it's too bright when I'm tired and sweaty. Maybe a button is too small when I'm rushing between sets. Maybe the information hierarchy makes sense when you're looking at a screenshot but falls apart when you're actually in the moment.

Users will tell you this too. They'll say something is too bright, too big, too small, whatever. And sometimes you can't quite figure out why until you put yourself in their shoes.

This brings me to a UX principle I learned a long time ago when I was in college. Imagine you're building an app that helps users check train times. Your user wants to know: can I still make that train, or do I need to catch the next one?

The instinct might be to build an information-dense screen. Show them all their options. Five trains, departure times, platform numbers, transfer information.

But think about who's using this app. There's a good chance they're in a rush. They might literally be running toward the station, phone in hand, trying to figure out where to go. They don't have time to read five options and compare them.

So instead of giving them everything, give them the one option that's probably what they need. Reduce steps. Make things obvious. Your app might not end up looking as sophisticated or comprehensive when you're sitting at your desk designing it. But once you start using it, once you put yourself in your user's situation, you'll build something that actually serves them.

If you want your UI to be good for what the app does, you have to use your app. You have to build domain knowledge. Sometimes you'll build something that doesn't look right in isolation but works perfectly in context. That's not a bug. That's craft.

The human touch vs. vibe-coded slop

My only real argument against shipping fast with AI is this: don't ship fast for the sake of shipping fast.

AI can help you build slop really quickly. And you don't want to build AI slop. You want to build something good.

Sure, you might be tempted to chase quick releases for quick money. But that doesn't make your apps better. It doesn't build something you're proud of. It doesn't create something users will actually love and stick with.

The strongest argument in favor of AI-driven coding, and one I completely agree with, is that it allows us to focus on what really matters. The implementation details were never the interesting part of building software. The interesting part is figuring out what to build and how it should feel.

AI lets us iterate faster than ever. It lets us try things we wouldn't have bothered trying before because the cost was too high. But the human judgment, the taste, the empathy for your users—that's still entirely on you.

When you combine fast iteration with genuine care for the product, you get something special. When you combine fast iteration with indifference, you get slop.

Summary

AI is changing development, but not in the way the doomsayers suggest. Yes, there are more apps than ever. But ideas were never the differentiator—execution was, and execution still requires human judgment.

The real opportunity here is that AI makes iteration nearly free. You can try things, use them, refine them, and try again. The developers who will stand out are the ones who actually use their own apps, who put themselves in their users' shoes, who care enough to polish beyond what AI generates.

Use AI to speed up the boring parts. Use your own judgment for everything else. That human touch is what separates craft from slop.

Migrating an iOS app from Paid up Front to Freemium

Paid-up-front apps can be a tough sell on the App Store. You might be getting plenty of views on your product page, but if those views aren't converting to downloads, something has to change. That's exactly where I found myself with Maxine: decent traffic, almost no sales.

So I made the switch to freemium, even though I didn't really want to. In the end, the data was pretty obvious, and I've been getting the same feedback from other devs: free downloads with optional in-app purchases convert better and get more users through the door. After thinking about the best way to make the switch, I decided that existing users get lifetime access for free, and new users get 5 workouts before they need to subscribe or buy a lifetime unlock. That should give them plenty of time to properly try and test the app before they commit to buying.

In this post, we'll explore the following topics:

  • How to grandfather in existing users using StoreKit receipt data
  • Testing gotchas you'll run into and how to work around them
  • The release sequence that ensures a smooth transition

By the end, you'll know how to migrate your own paid app to freemium without leaving your loyal early adopters behind.

Grandfathering in users through StoreKit

Regardless of how you implement in-app purchases, you can use StoreKit to check when a user first installed your app. This lets you identify users who paid for the app before it went free and automatically grant them lifetime access.

You can do this using the AppTransaction API in StoreKit. It gives you access to the original app version and original purchase date for the current device. It's a pretty good way to detect users that have bought your app pre-freemium.

Here's how to check the first installed version (which is what I did for Maxine):

import StoreKit

func isLegacyPaidUser() async -> Bool {
  do {
    let appTransaction = try await AppTransaction.shared

    switch appTransaction {
    case .verified(let transaction):
      // The version string from the first install
      let originalVersion = transaction.originalAppVersion

      // Compare against your last paid version
      // For example, if build 27 was your first free release
      if let version = Double(originalVersion), version < 27 {
        return true
      }
      return false

    case .unverified:
      // Transaction couldn't be verified, treat as new user
      return false
    }
  } catch {
    // No transaction available
    return false
  }
}

Since a bug in this logic could cost you revenue, I highly recommend writing a couple of unit tests to ensure your legacy checks work as intended. My approach was to extract a method that takes the version string from AppTransaction and checks it against my target version, so the comparison itself is testable in isolation. I also added tests verifying that users marked pro due to version numbering pass all the checks in my ProAccess helper — for example, that they're allowed to start a new workout.
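For example, pulling the comparison into a pure function makes it trivial to cover with Swift Testing. The function name and cutoff value below are illustrative:

```swift
import Testing

// The raw comparison, extracted so tests don't need a real AppTransaction.
func isLegacyVersion(_ originalAppVersion: String, firstFreeVersion: Double = 27) -> Bool {
  guard let version = Double(originalAppVersion) else { return false }
  return version < firstFreeVersion
}

@Test func usersFromBeforeTheFreemiumSwitchAreLegacy() {
  // Builds before the first free release count as legacy...
  #expect(isLegacyVersion("26"))
  #expect(isLegacyVersion("26.1"))

  // ...the first free build and anything unparseable does not.
  #expect(!isLegacyVersion("27"))
  #expect(!isLegacyVersion("garbage"))
}
```

Your production code then feeds `transaction.originalAppVersion` into this function, and the risky edge cases are pinned down by tests.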

If you want to learn more about Swift Testing, I have a couple of posts in the Testing category to help you get started.

I opted to go for version checking, but you could also use the original purchase date if that fits your situation better:

import StoreKit

func isLegacyPaidUser(cutoffDate: Date) async -> Bool {
  do {
    let appTransaction = try await AppTransaction.shared

    switch appTransaction {
    case .verified(let transaction):
      // When the user first installed (purchased) the app
      let originalPurchaseDate = transaction.originalPurchaseDate

      // If they installed before your freemium launch date, they're legacy
      return originalPurchaseDate < cutoffDate

    case .unverified:
      return false
    }
  } catch {
    return false
  }
}

// Usage: check if installed before your freemium release
let isLegacy = await isLegacyPaidUser(
  cutoffDate: DateComponents(
    calendar: .current,
    year: 2026,
    month: 1,
    day: 30
  ).date!
)

Again, if you decide to ship a solution like this I highly recommend that you add some unit tests to avoid mistakes that could cost you revenue.

The version approach works well when you have clear version boundaries. The date approach is useful if you're not sure which version number will ship or if you want more flexibility.

Once you've determined the user's status, you'll want to persist it locally so you don't have to check the receipt every time:

import StoreKit

actor EntitlementManager {
  static let shared = EntitlementManager()

  private let defaults = UserDefaults.standard
  private let legacyUserKey = "isLegacyProUser"

  var hasLifetimeAccess: Bool {
    defaults.bool(forKey: legacyUserKey)
  }

  func checkAndCacheLegacyStatus() async {
    // Only check if we haven't already determined status
    guard !defaults.bool(forKey: legacyUserKey) else { return }

    let isLegacy = await isLegacyPaidUser()
    if isLegacy {
      defaults.set(true, forKey: legacyUserKey)
    }
  }

  private func isLegacyPaidUser() async -> Bool {
    do {
      let appTransaction = try await AppTransaction.shared

      switch appTransaction {
      case .verified(let transaction):
        if let version = Double(transaction.originalAppVersion), version < 27 {
          return true
        }
        return false
      case .unverified:
        return false
      }
    } catch {
      return false
    }
  }
}

My app is a single-device app, so I don't have multi-device scenarios to worry about. If your app syncs data across devices, you might want a more involved solution. For example, you could store a "legacy pro" marker in CloudKit or on your server so the entitlement follows the user's iCloud account rather than being tied to a single device.

Also, storing in UserDefaults is a somewhat naive approach. Depending on your minimum OS version, you might run your app in a potentially jailbroken environment; this would allow users to tamper with UserDefaults quite easily and it would be much more secure to store this information in the keychain, or to check your receipt every time instead. For simplicity I'm using UserDefaults in this post, but I recommend you make a proper security risk assessment on which approach works for you.
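For reference, a minimal keychain-backed version of the flag could look like this sketch using the Security framework. The service and account strings are placeholders, and a production version should also handle the returned OSStatus values:

```swift
import Foundation
import Security

enum LegacyFlagStore {
  // Base query identifying our item; adjust service/account to your app.
  private static let baseQuery: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrService as String: "com.example.myapp.entitlements",
    kSecAttrAccount as String: "isLegacyProUser",
  ]

  static func markAsLegacy() {
    var attributes = baseQuery
    attributes[kSecValueData as String] = Data([1])
    SecItemDelete(baseQuery as CFDictionary) // replace any existing item
    SecItemAdd(attributes as CFDictionary, nil)
  }

  static var isLegacy: Bool {
    var query = baseQuery
    query[kSecReturnData as String] = true
    var result: AnyObject?
    let status = SecItemCopyMatching(query as CFDictionary, &result)
    return status == errSecSuccess && (result as? Data) == Data([1])
  }
}
```

Unlike UserDefaults, keychain items aren't trivially editable on a compromised device, which makes this a reasonable middle ground between plist storage and re-verifying the receipt on every launch.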

With this code in place, you're all set up to start testing...

Testing gotchas

Testing receipt-based grandfathering has some quirks you should know about before you ship.

TestFlight always reports version 1.0

When your app runs via TestFlight it runs in a sandboxed environment and AppTransaction.originalAppVersion returns "1.0" regardless of which build the tester actually installed. This makes it impossible to test version-based logic through TestFlight alone.

You can get around this using debug builds with a manual toggle that lets you simulate being a legacy user. Add a hidden debug menu or use launch arguments to override the legacy check during development.

#if DEBUG
var debugOverrideLegacyUser: Bool? = nil
#endif

func isLegacyPaidUser() async -> Bool {
  #if DEBUG
  if let override = debugOverrideLegacyUser {
    return override
  }
  #endif

  // Normal receipt-based check...
}
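If you prefer launch arguments over flipping the variable in code, you can map them onto the same override at startup. The argument names here are made up; set whichever you want in your scheme's "Arguments Passed On Launch":

```swift
#if DEBUG
// Call this early (e.g. from your App initializer) to honor scheme
// arguments like "-legacyUser" or "-newUser" during development.
func applyLegacyUserLaunchArguments() {
  let arguments = ProcessInfo.processInfo.arguments
  if arguments.contains("-legacyUser") {
    debugOverrideLegacyUser = true
  } else if arguments.contains("-newUser") {
    debugOverrideLegacyUser = false
  }
}
#endif
```

Because everything is wrapped in `#if DEBUG`, none of this scaffolding ships in your release builds.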

Review sandbox also reports version 1.0...

When Apple reviews your app, they'll want to try the in-app purchase, but due to how the sandbox works, that can be tricky. I worked around this by detecting the sandbox environment and always offering these users the option to buy premium. Here's how you can do that:

import StoreKit

private func isLegacyUser() async -> Bool {
  do {
    let appTransaction = try await AppTransaction.shared

    switch appTransaction {
    case .verified(let transaction):
      // App Review runs in the sandbox; never treat sandbox installs
      // as legacy so reviewers always see the purchase flow
      guard transaction.environment != .sandbox else {
        return false
      }

      // Then run the regular version check
      if let version = Double(transaction.originalAppVersion), version < 27 {
        return true
      }
      return false

    case .unverified:
      return false
    }
  } catch {
    return false
  }
}

Apps in production report a build number, not a version number

To make things extra confusing, TestFlight reports "1.0" as the original version, while in the wild you'll get your build number back as the original app version. So when writing version-based logic, make sure you compare against build numbers, not marketing version numbers.
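Under that assumption, a build-number-based check might look like this sketch, with "27" standing in for your first freemium build number:

```swift
func isLegacyBuild(_ originalAppVersion: String, firstFreeBuild: Int = 27) -> Bool {
  // In production, originalAppVersion carries the build number (e.g. "26").
  // In TestFlight it's always "1.0", which fails the Int conversion,
  // so sandbox installs are treated as non-legacy here.
  guard let build = Int(originalAppVersion) else { return false }
  return build < firstFreeBuild
}
```

If your build numbers aren't plain integers (some projects use "1.0.27"-style build strings), adapt the parsing accordingly before shipping a check like this.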

Reinstalls reset the original version

If a user deletes and reinstalls your app, the originalAppVersion reflects the version they reinstalled, not their very first install. This is a limitation of on-device receipt data. If you've written the user's pro status to the keychain, however, you'd still be able to pull it from there after a reinstall.

Sadly, I haven't found a foolproof way to get around reinstalls resetting the receipt. For my app, this is acceptable: I don't have that many users, so the risk of someone losing their legacy pro access is low.

Device clock manipulation.

Users with incorrect device clocks could work their way around your date-based checks. That's why I went with version-based checking but again, it's all a matter of determining what an acceptable risk is for you and your app.

Making the move

When you're ready to release, the sequence matters. Here's what I did:

  1. Set your app to manual release. In App Store Connect, configure your new version for manual release rather than automatic. This gives you control over timing.

  2. Add a note for App Review. In the reviewer notes, explain that you'll switch the app's price to free before releasing. Something like: "This update transitions the app from paid to freemium. I will change the price to free in App Store Connect before releasing this version to ensure a smooth transition for users."

  3. Wait for approval. Let App Review approve your build while it's still technically a paid app.

  4. Make the app free first. Once approved, go to App Store Connect and change your app's price to free (or set up your freemium pricing tiers).

  5. Then release. After the price change is live, manually release your approved build.

I'm not 100% sure the order matters, but making the app free before releasing felt like the safest approach. It ensures that the moment users can download your new freemium version, they're not accidentally charged for the old paid model.

In Summary

Grandfathering paid users when switching to freemium comes down to checking AppTransaction for the original install version or date. Cache the result locally, and consider CloudKit or server-side storage if you need cross-device entitlements.

Testing is tricky because TestFlight always reports version 1.0 and sandbox receipts don't perfectly mirror production. Use debug toggles and, ideally, a real device with an older App Store build for thorough testing.

When you release, set your build to manual release, add a note for App Review explaining the transition, then make the app free before you tap the release button.

Changing your monetization strategy can feel like admitting defeat, but it's really just iteration. The App Store is competitive, user expectations shift, and what worked at launch might not work six months later. Pay attention to your conversion data, be willing to adapt, and don't let sunk-cost thinking keep you stuck with a model that isn't serving your users or your business.

A Deep Dive into SwiftData migrations

SwiftData migrations are one of those things that feel optional… right until you ship an update and real users upgrade with real data on disk.

In this post we’ll dig into:

  • How to implement schema versions with VersionedSchema
  • When you should introduce new schema versions
  • When SwiftData can migrate automatically and when you’ll have to write manual migrations with SchemaMigrationPlan and MigrationStage
  • How to handle extra complex migrations where you need “bridge” versions

By the end of this post you should have a pretty solid understanding of SwiftData’s migration rules, possibilities, and limitations. More importantly: you’ll know how to keep your migration work proportional. Not every change needs a custom migration stage, but some changes absolutely do.

Implementing simple versions with VersionedSchema

Every data model should have at least one VersionedSchema. Even if you haven't introduced any model updates yet, your initial model should ship as a VersionedSchema.

That gives you a stable starting point. Introducing VersionedSchema after you’ve already shipped is possible, but there's some risk involved with not getting things right from the start.

In this section, I’ll show you how to define an initial schema, how you can reference “current” models cleanly, and when you should introduce new versions.

Defining your initial model schema

If you’ve never worked with versioned SwiftData models before, the nested types that you'll see in a moment can look a little odd at first. The idea is simple though:

  • Each schema version defines its own set of @Model types, and those types are namespaced to that schema (for example ExerciseSchemaV1.Exercise).
  • Your app code typically wants to work with “the current” models without spelling SchemaV5.Exercise everywhere.
  • A typealias lets you keep your call sites clean while still being explicit about which schema version you’re using.

One very practical consequence of this is that you’ll often end up with two kinds of “models” in your codebase:

  • Versioned models: ExerciseSchemaV1.Exercise, ExerciseSchemaV2.Exercise, etc. These exist so SwiftData can reason about schema evolution.
  • Current models: typealias Exercise = ExerciseSchemaV2.Exercise. These exist so the rest of your app stays readable and you don't need to refactor half your code when you introduce a new schema version.

Every model schema that you define will conform to the VersionedSchema protocol and contain the following two static properties:

  • versionIdentifier: a semantic versioning identifier for your schema
  • models: a list of the model types that are part of this schema

A minimal V1 → V2 example

To illustrate a simple VersionedSchema definition, we'll use a tiny Exercise model as our V1.

In V2 we’ll add a notes field. This kind of change is pretty common in my experience and it's a good example of a so-called lightweight migration because existing rows can simply have their notes set to nil.

import SwiftData

enum ExerciseSchemaV1: VersionedSchema {
  static var versionIdentifier = Schema.Version(1, 0, 0)
  static var models: [any PersistentModel.Type] = [Exercise.self]

  @Model
  final class Exercise {
    var name: String

    init(name: String) {
      self.name = name
    }
  }
}

enum ExerciseSchemaV2: VersionedSchema {
  static var versionIdentifier = Schema.Version(2, 0, 0)
  static var models: [any PersistentModel.Type] = [Exercise.self]

  @Model
  final class Exercise {
    var name: String
    var notes: String?

    init(name: String, notes: String? = nil) {
      self.name = name
      self.notes = notes
    }
  }
}

In the rest of your app, you’ll usually want to work with the latest schema’s model types:

typealias Exercise = ExerciseSchemaV2.Exercise

That way you can write Exercise(...) instead of ExerciseSchemaV2.Exercise(...).

Knowing when to introduce new VersionedSchemas

Personally, I only introduce a new version when I make model changes in between App Store releases. For example, I'll ship my app v1.0 with model v1.0. When I want to make any number of model changes in my app version 1.1, I will introduce a new model version too. Usually I'll name the model version 2.0 since that just makes sense to me. Even if I end up making loads of changes in separate steps, I rarely create more than one model version for a single app update. As we'll see in the complex migrations sections there might be exceptions if I need a multi-stage migration but those are very rare.

So, introduce a new VersionedSchema when you make model changes after you've already shipped a model version.

One thing that you'll want to keep in mind is that users can take different migration paths. Some users will update to every single version you release, while others will skip versions.

SwiftData handles these migration paths out of the box, so you don't have to map each one yourself. It's still good to be aware of them though: your model should be able to migrate from any old version to any newer version.

Often, SwiftData can figure out the migration on its own; let's see how that works next.

Automatic migration rules

When you define all of your versioned schemas correctly, SwiftData can easily migrate your data from one version to another. Sometimes, you might want to help SwiftData out by providing a migration plan. I typically only do this for my custom migrations but it's possible to optimize your migration paths by providing migration plans for lightweight migrations too.

What “automatic migration” means in SwiftData

SwiftData can infer certain schema changes and migrate your store without any custom logic. In a migration plan, this is represented as a lightweight stage.

One nuance that’s worth calling out: SwiftData can perform lightweight migrations without you writing a SchemaMigrationPlan at all. But once you do adopt versioned schemas and you want predictable, testable upgrades between shipped versions, explicitly defining stages is the most straightforward way to make your intent unambiguous.

I recommend going for both approaches (with and without plans) at least once so you can experience them and you can decide what works best for you. When in doubt, it never hurts to build migration plans for lightweight migrations even if it's not strictly needed.

Let's see how you would define a migration plan for your data store, and how you can use your migration plan.

enum AppMigrationPlan: SchemaMigrationPlan {
  static var schemas: [any VersionedSchema.Type] = [ExerciseSchemaV1.self, ExerciseSchemaV2.self]
  static var stages: [MigrationStage] = [v1ToV2]

  static let v1ToV2 = MigrationStage.lightweight(
    fromVersion: ExerciseSchemaV1.self,
    toVersion: ExerciseSchemaV2.self
  )
}

In this migration plan, we've defined our model versions and created a lightweight migration stage to go from our v1 to our v2 models. Note that we technically didn't have to build this plan because we're only doing lightweight migrations, but for completeness' sake you can define a migration stage for every model change.

When you create your container, you can tell it to use your plan as follows:

typealias Exercise = ExerciseSchemaV2.Exercise

let container = try ModelContainer(
  for: Exercise.self,
  migrationPlan: AppMigrationPlan.self
)

Knowing when a lightweight migration can be used

The following changes are lightweight changes and don't require any custom logic:

  • Add an optional property (like notes: String?)
  • Remove a property (data is dropped)
  • Make a property optional (non-optional → optional)
  • Rename a property if you map the original stored name

These changes don’t require SwiftData to invent new values. It can either keep the old value, move it, or accept a nil where no value existed before.

Safely renaming values

When you rename a model property, the store still contains the old attribute name. Use @Attribute(originalName:) so SwiftData can convert from old property names to new ones.

@Model
final class Exercise {
  @Attribute(originalName: "name")
  var title: String

  init(title: String) {
    self.title = title
  }
}

When you should not rely on lightweight migration

Lightweight migrations break down when your new schema introduces a requirement that old data can't satisfy; in other words, when SwiftData can't automatically determine how to move from the old model to the new one.

Some examples of model changes that will require a heavy migration are:

  • Adding non-optional properties without a default value
  • Any change that requires a transformation step:
    • parsing / composing values
    • merging or splitting entities
    • changing a value's type
    • data cleanup (dedupe, normalizing strings, fixing invalid states)

If you're making a change that SwiftData can't migrate on its own, you're in manual migration land and you'll want to pay close attention to this section.

A quick note on “defaults”

You’ll sometimes see advice like “just add a default value and you’re fine”. That can be true, but there’s a subtle trap: a default value in your Swift initializer does not necessarily mean existing rows get a value during migration.

If you’re introducing a required field, assume you need to explicitly backfill it unless you’ve tested the migration from a real on-disk store. That's where manual migrations become important.

Performing manual migrations using a migration plan

As you've seen before, a migration plan allows you to describe how you can migrate from one model version to the next. Our example from before leveraged a lightweight migration. We're going to set up a custom migration for this section.

We'll walk through a couple of scenarios with increasing complexity so you can ease into harder migration paths without being overwhelmed.

Assigning defaults for new, non-optional properties

Scenario: you add a new required field like createdAt: Date to an existing model. Existing rows don't have a value for it. To migrate this, we have two options:

  • Option A: make the property optional and accept “unknown”. This would allow us to use a lightweight migration but we might have nil values for createdAt
  • Option B: write a manual migration and keep the property as non-optional

Option B is the cleaner option since it allows us to have a more robust data model. Here’s what this looks like when you actually wire it up. First, define schemas where V2 introduces our createdAt property:

import SwiftData

enum ExerciseCreatedAtSchemaV1: VersionedSchema {
  static var versionIdentifier = Schema.Version(1, 0, 0)
  static var models: [any PersistentModel.Type] = [Exercise.self]

  @Model
  final class Exercise {
    var name: String

    init(name: String) {
      self.name = name
    }
  }
}

enum ExerciseCreatedAtSchemaV2: VersionedSchema {
  static var versionIdentifier = Schema.Version(2, 0, 0)
  static var models: [any PersistentModel.Type] = [Exercise.self]

  @Model
  final class Exercise {
    var name: String
    var createdAt: Date

    init(name: String, createdAt: Date = .now) {
      self.name = name
      self.createdAt = createdAt
    }
  }
}

Next we can add a custom stage that sets createdAt for existing rows. We'll talk about what the willMigrate and didMigrate closures are in a moment; let's look at the migration logic first:

enum AppMigrationPlan: SchemaMigrationPlan {
  static var schemas: [any VersionedSchema.Type] = [ExerciseCreatedAtSchemaV1.self, ExerciseCreatedAtSchemaV2.self]
  static var stages: [MigrationStage] = [v1ToV2]

  static let v1ToV2 = MigrationStage.custom(
    fromVersion: ExerciseCreatedAtSchemaV1.self,
    toVersion: ExerciseCreatedAtSchemaV2.self,
    willMigrate: { _ in },
    didMigrate: { context in
      let exercises = try context.fetch(FetchDescriptor<ExerciseCreatedAtSchemaV2.Exercise>())
      for exercise in exercises {
        exercise.createdAt = Date()
      }
      try context.save()
    }
  )
}

With this change, we assign a sensible default to createdAt for existing rows. As you saw, a custom stage takes two closures: willMigrate and didMigrate. Let's see what those are about next.

Taking a closer look at complex migration stages

willMigrate

willMigrate is run before your schema is applied and should be used to clean up your "old" (existing) data if needed. For example, if you're introducing unique constraints you can remove duplicates from your original store in willMigrate. Note that willMigrate only has access to your old data store (the "from" model). So you can't assign any values to your new models in this step. You can only clean up old data here.
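
As a sketch of that cleanup use case, assuming a hypothetical V2 that introduces a unique constraint on Exercise.name, a dedupe in willMigrate could look like this:

```swift
static let dedupeV1toV2 = MigrationStage.custom(
  fromVersion: ExerciseSchemaV1.self,
  toVersion: ExerciseSchemaV2.self,
  willMigrate: { context in
    // The old schema is still active here, so we fetch the V1 model.
    let exercises = try context.fetch(FetchDescriptor<ExerciseSchemaV1.Exercise>())

    var seenNames = Set<String>()
    for exercise in exercises {
      // Keep the first occurrence of each name; delete later duplicates
      // so the new unique constraint can be applied safely.
      if !seenNames.insert(exercise.name).inserted {
        context.delete(exercise)
      }
    }
    try context.save()
  },
  didMigrate: nil
)
```

Note how all the work happens against the V1 model; the V2 types don't exist yet at this point in the migration.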

didMigrate

After applying your new schema, didMigrate is called. You can assign your required values here. At this point you only have access to your new model versions.

I’ve found that I typically do most of my work in didMigrate, because I'm able to assign data there; I don't often have to prepare my old data for migration.

Setting up extra complex migrations

Sometimes you'll have to do migrations that reshape your data. A common case is introducing a new model where one of the new model’s fields is composed from values that used to be stored somewhere else.

To make this concrete, imagine you started with a model that stores “summary” workout data in a single model:

import SwiftData

enum WeightSchemaV1: VersionedSchema {
  static var versionIdentifier = Schema.Version(1, 0, 0)
  static var models: [any PersistentModel.Type] = [WeightData.self]

  @Model
  final class WeightData {
    var weight: Float
    var reps: Int
    var sets: Int

    init(weight: Float, reps: Int, sets: Int) {
      self.weight = weight
      self.reps = reps
      self.sets = sets
    }
  }
}

Now you want to introduce PerformedSet, and have WeightData contain a list of performed sets instead. You could try to remove weight/reps/sets from WeightData in the same version where you add PerformedSet, but that makes migration unnecessarily hard: you still need the original values to create your first PerformedSet.

The reliable approach here is a bridge-version strategy:

  • V2 (bridge): keep the old fields around under legacy names, and add the relationship
  • V3 (cleanup): remove the legacy fields once the new data is populated

Here’s what the bridge schema could look like. Notice how the legacy values are kept around with @Attribute(originalName:) so they still read from the same stored columns:

enum WeightSchemaV2: VersionedSchema {
  static var versionIdentifier = Schema.Version(2, 0, 0)
  static var models: [any PersistentModel.Type] = [WeightData.self, PerformedSet.self]

  @Model
  final class WeightData {
    @Attribute(originalName: "weight")
    var legacyWeight: Float

    @Attribute(originalName: "reps")
    var legacyReps: Int

    @Attribute(originalName: "sets")
    var legacySets: Int

    @Relationship(inverse: \WeightSchemaV2.PerformedSet.weightData)
    var performedSets: [PerformedSet] = []

    init(legacyWeight: Float, legacyReps: Int, legacySets: Int) {
      self.legacyWeight = legacyWeight
      self.legacyReps = legacyReps
      self.legacySets = legacySets
    }
  }

  @Model
  final class PerformedSet {
    var weight: Float
    var reps: Int
    var sets: Int

    var weightData: WeightData?

    init(weight: Float, reps: Int, sets: Int, weightData: WeightData? = nil) {
      self.weight = weight
      self.reps = reps
      self.sets = sets
      self.weightData = weightData
    }
  }
}

Now you can migrate by fetching WeightSchemaV2.WeightData in didMigrate and inserting a PerformedSet for each migrated WeightData:

static let migrateV1toV2 = MigrationStage.custom(
  fromVersion: WeightSchemaV1.self,
  toVersion: WeightSchemaV2.self,
  willMigrate: nil,
  didMigrate: { context in
    let allWeightData = try context.fetch(FetchDescriptor<WeightSchemaV2.WeightData>())

    for weightData in allWeightData {
      let performedSet = WeightSchemaV2.PerformedSet(
        weight: weightData.legacyWeight,
        reps: weightData.legacyReps,
        sets: weightData.legacySets,
        weightData: weightData
      )

      weightData.performedSets.append(performedSet)
    }

    try context.save()
  }
)

Once you’ve shipped this and you’re confident the data is in the new shape, you can introduce V3 to remove legacyWeight, legacyReps, and legacySets entirely. Because the data now lives in PerformedSet, V2 → V3 is typically a lightweight migration.
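
Here's a hedged sketch of what that cleanup version and stage could look like; WeightSchemaV3 simply drops the legacy fields and keeps the relationship:

```swift
enum WeightSchemaV3: VersionedSchema {
  static var versionIdentifier = Schema.Version(3, 0, 0)
  static var models: [any PersistentModel.Type] = [WeightData.self, PerformedSet.self]

  @Model
  final class WeightData {
    @Relationship(inverse: \WeightSchemaV3.PerformedSet.weightData)
    var performedSets: [PerformedSet] = []

    init(performedSets: [PerformedSet] = []) {
      self.performedSets = performedSets
    }
  }

  @Model
  final class PerformedSet {
    var weight: Float
    var reps: Int
    var sets: Int

    var weightData: WeightData?

    init(weight: Float, reps: Int, sets: Int, weightData: WeightData? = nil) {
      self.weight = weight
      self.reps = reps
      self.sets = sets
      self.weightData = weightData
    }
  }
}

// In your SchemaMigrationPlan:
static let migrateV2toV3 = MigrationStage.lightweight(
  fromVersion: WeightSchemaV2.self,
  toVersion: WeightSchemaV3.self
)
```

Dropping properties is one of the changes SwiftData can infer, which is why this final step doesn't need a custom stage.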

When you find yourself having to perform a migration like this, it can be quite scary and complex, so I highly recommend testing your app thoroughly before shipping. Try migrations from and to different model versions to make sure you don't lose any data.

Summary

SwiftData migrations become a lot less stressful when you treat schema versions as a release artifact. Introduce a new VersionedSchema when you ship model changes to users, not for every little iteration you do during development. That keeps your migration story realistic, testable, and manageable over time.

When you do ship a change, start by asking whether SwiftData can reasonably infer what to do. Lightweight migrations work well when no new requirements are introduced: adding optional fields, dropping fields, or renaming fields (as long as you map the original stored name). The moment your change requires SwiftData to invent or derive a value—like introducing a new non-optional property, changing types, or composing values—you’re in manual migration land, and a SchemaMigrationPlan with custom stages is the right tool.

For the truly tricky cases, don’t force everything into one heroic migration. Add a bridge version, populate the new data shape first, then clean up old fields in a follow-up version. And whatever you do, test migrations the way users experience them: migrate a store created by an older build with messy data, not a pristine simulator database you can delete at will.

A deep dive into Collections, Sequences, and Iterators in Swift

When you write for item in list the compiler quietly sets a lot of machinery in motion. Writing a for loop is usually a pretty mundane task; the syntax isn't complex. But it's always fun to dig a bit deeper and see what happens under the hood. In this post I'll unpack the pieces that make iteration tick so you can reason about loops with the same confidence you already have around optionals, enums, or result builders.

Here’s what you’ll pick up:

  • What Sequence and Collection promise—and why iterators are almost always structs.
  • How for … in desugars, plus the pitfalls of mutating while you loop.
  • How async iteration and custom collections extend the same core ideas.

Understanding Sequence

Sequence is the smallest unit of iteration in Swift and it comes with a very intentional contract: "when somebody asks for an iterator, give them one that can hand out elements until you’re out". That means a conforming type needs to define two associated types (Element and Iterator) and return a fresh iterator every time makeIterator() is called.

public protocol Sequence {
    associatedtype Element
    associatedtype Iterator: IteratorProtocol where Iterator.Element == Element

    func makeIterator() -> Iterator
}

The iterator itself conforms to IteratorProtocol and exposes a mutating next() function:

public protocol IteratorProtocol {
    associatedtype Element
    mutating func next() -> Element?
}

You’ll see most iterators implemented as structs. next() is marked mutating, so a value-type iterator can update its position without any extra ceremony. When you copy the iterator, you get a fresh cursor that resumes from the same point, which keeps iteration predictable and prevents shared mutable state from leaking between loops. Classes can adopt IteratorProtocol too, but value semantics are a natural fit for the contract.

There are two important implications to keep in mind:

  • A sequence only has to be single-pass. It’s perfectly valid to hand out a "consumable" iterator that can be used once and then returns nil forever. Lazy I/O streams or generator-style APIs lean on this behaviour.
  • makeIterator() should produce a fresh iterator each time you call it. Some sequences choose to store and reuse an iterator internally, but the contract encourages the "new iterator per loop" model so for loops can run independently without odd interactions.
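
To see the single-pass case in action, here's a deliberately contrived consumable sequence that acts as its own iterator and hands out each element exactly once:

```swift
// A deliberately single-pass sequence: makeIterator() returns the same
// instance, so a second loop sees an already-drained iterator.
final class OneShot<Element>: Sequence, IteratorProtocol {
    private var elements: [Element]

    init(_ elements: [Element]) {
        self.elements = elements
    }

    func next() -> Element? {
        elements.isEmpty ? nil : elements.removeFirst()
    }

    func makeIterator() -> OneShot {
        self
    }
}

let shot = OneShot([1, 2, 3])
let firstPass = Array(shot)   // [1, 2, 3]
let secondPass = Array(shot)  // [], the sequence was consumed
```

This is perfectly legal under the Sequence contract, which is exactly why you shouldn't assume an arbitrary sequence supports multiple passes.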

If you’ve ever used stride(from:to:by:) you’ve already worked with a plain Sequence. The standard library exposes it right next to ranges, and it’s perfect for walking an arithmetic progression without allocating an array. For example:

for angle in stride(from: 0, through: 360, by: 30) {
    print(angle)
}

This prints 0, 30, 60 … 360 and then the iterator is done. If you ask for another iterator you’ll get a new run, but there’s no requirement that the original one resets itself or that the sequence stores all of its values. It just keeps the current step and hands out the next number until it reaches the end. That’s the core Sequence contract in action.

So to summarize, a Sequence contains n items (we don't know how many because there's no concept of count in a Sequence), and we can ask the Sequence for an Iterator to receive items until the Sequence runs out. As you saw with stride, the Sequence doesn't have to hold all values it will send in memory. It can generate the values every time its Iterator has its next() function called.

If you need multiple passes, random access, or counting, Sequence won’t give you that by itself. The protocol doesn’t forbid throwing the elements away after the first pass; AsyncStream-style sequences do exactly that. An AsyncStream will vend a new value to an async loop, and then it discards the value forever.

In other words, the only promise is "I can vend an iterator". Nothing says the iterator can be rewound or that calling makeIterator() twice produces the same results. That’s where Collection steps in.

Collection’s Extra Guarantees

Collection refines Sequence with the promises we lean on day-to-day: you can iterate as many times as you like, the order is stable (as long as the collection’s own documentation says so), and you get indexes, subscripts, and counts. Swift's Array, Dictionary, and Set all conform to the Collection protocol for example.

public protocol Collection: Sequence {
    associatedtype Index: Comparable

    var startIndex: Index { get }
    var endIndex: Index { get }

    func index(after i: Index) -> Index
    subscript(position: Index) -> Element { get }
}

These extra requirements unlock optimisations. map can preallocate exactly the right amount of storage because a count is available. Note that for a plain Collection, computing count may still walk the elements; it's RandomAccessCollection that guarantees O(1) count. When a Collection also conforms to BidirectionalCollection or RandomAccessCollection, the compiler and standard library can apply even more optimizations for free.

Worth noting: Set and Dictionary both conform to Collection even though their order can change after you mutate them. The protocols don’t promise order, so if iteration order matters to you make sure you pick a type that documents how it behaves.

How for … in Actually Works

Now that you know a bit more about collections and iterating them in Swift, here’s what a simple loop looks like if you were to write one without using for x in y:

var iterator = container.makeIterator()
while let element = iterator.next() {
    print(element)
}

To make this concrete, here’s a small custom sequence that will count down from a given starting number:

struct Countdown: Sequence {
    let start: Int

    func makeIterator() -> Iterator {
        Iterator(current: start)
    }

    struct Iterator: IteratorProtocol {
        var current: Int

        mutating func next() -> Int? {
            guard current >= 0 else { return nil }
            defer { current -= 1 }
            return current
        }
    }
}

Running for number in Countdown(start: 3) executes the desugared loop above. Copy the iterator halfway through and each copy continues independently thanks to value semantics.
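
The standard library's array iterator is a struct too, so you can watch that copy behaviour with a few lines:

```swift
let numbers = [10, 20, 30]
var iterator = numbers.makeIterator()
_ = iterator.next()                  // consumes 10

// Copying a value-type iterator creates an independent cursor that
// resumes from the same position.
var copy = iterator
let fromOriginal = iterator.next()   // 20
let fromCopy = copy.next()           // 20
```

Both cursors advance separately after the copy; neither affects the other.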

One thing to avoid: mutating the underlying storage while you're in the middle of iterating it. With bridged Objective-C collections such as NSMutableArray, doing so traps at runtime with Collection was mutated while being enumerated. Swift's native Array won't trap, because its value semantics mean the loop walks a copy-on-write snapshot of the buffer, but mixing iteration and mutation is still a reliable source of bugs: indexes shift under you, elements get skipped or matched twice, and the loop's intent becomes hard to follow. When you need to cull items, there are safer approaches: call removeAll(where:) which handles the iteration for you, capture the indexes first and mutate after the loop, or build a filtered copy and replace the original once you're done.

Here's what such a bug-prone loop looks like. Imagine a list of tasks where you want to strip the completed ones:

struct TodoItem {
    var title: String
    var isCompleted: Bool
}

var todoItems = [
    TodoItem(title: "Ship blog post", isCompleted: true),
    TodoItem(title: "Record podcast", isCompleted: false),
    TodoItem(title: "Review PR", isCompleted: true),
]

for item in todoItems {
    if item.isCompleted,
       let index = todoItems.firstIndex(where: { $0.title == item.title }) {
        todoItems.remove(at: index) // ⚠️ mutating the array we're iterating
    }
}

This happens to work here, but only by accident: the loop iterates a snapshot of the array, duplicate titles would remove the wrong element, and calling firstIndex on every pass scans the whole array again, an easy way to turn a quick cleanup into O(n²) work. A safer rewrite delegates the traversal:

todoItems.removeAll(where: \.isCompleted)

Because removeAll(where:) owns the traversal, it walks the array once and removes matches in place.

If you prefer to keep the originals around, build a filtered copy instead:

let openTodos = todoItems.filter { !$0.isCompleted }

Both approaches keep iteration and mutation separated, which means you won’t trip over the iterator mid-loop. Everything we’ve looked at so far assumes the elements are ready the moment you ask for them. In modern apps, it's not uncommon to want to iterate over collections (or streams) that generate new values over time. Swift’s concurrency features extend the exact same iteration patterns into that world.

Async Iteration in Practice

Swift Concurrency introduces AsyncSequence and AsyncIteratorProtocol. These look familiar, but the iterator’s next() method can suspend and throw.

public protocol AsyncSequence {
    associatedtype Element
    associatedtype AsyncIterator: AsyncIteratorProtocol where AsyncIterator.Element == Element

    func makeAsyncIterator() -> AsyncIterator
}

public protocol AsyncIteratorProtocol {
    associatedtype Element
    mutating func next() async throws -> Element?
}

You consume async sequences with for await:

for await element in stream {
    print(element)
}

Under the hood the compiler builds a looping task that repeatedly awaits next(). If next() can throw, switch to for try await. Errors propagate just like they would in any other async context.

Most callback-style APIs can be bridged with AsyncStream. Here’s a condensed example that publishes progress updates:

func makeProgressStream() -> AsyncStream<Double> {
    AsyncStream { continuation in
        let token = progressManager.observe { fraction in
            continuation.yield(fraction)
            if fraction == 1 { continuation.finish() }
        }

        continuation.onTermination = { _ in
            progressManager.removeObserver(token)
        }
    }
}

for await fraction in makeProgressStream() now suspends between values. Don’t forget to call finish() when you’re done producing output, otherwise downstream loops never exit.

Since async loops run inside tasks, they should play nicely with cancellation. The easiest pattern is to check for cancellation inside next():

struct PollingIterator: AsyncIteratorProtocol {
    mutating func next() async throws -> Item? {
        try Task.checkCancellation()
        return await fetchNextItem()
    }
}

If the task is cancelled you’ll see CancellationError, which ends the loop automatically unless you decide to catch it.

Implementing your own collections

Most of us never have to build a collection from scratch—and that’s a good thing. Arrays, dictionaries, and sets already cover the majority of cases with battle-tested semantics. When you do roll your own, tread carefully: you’re promising index validity, multi-pass iteration, performance characteristics, and all the other traits that callers expect from the standard library. A tiny mistake can corrupt indices or put you in undefined territory.

Still, there are legitimate reasons to create a specialised collection. You might want a ring buffer that overwrites old entries, or a sliding window that exposes just enough data for a streaming algorithm. Whenever you go down this path, keep the surface area tight, document the invariants, and write exhaustive tests to prove the collection acts like a standard one.

Even so, it's worth exploring a custom implementation of Collection for the sake of studying it. Here’s a lightweight ring buffer that conforms to Collection:

struct RingBuffer<Element>: Collection {
    private var storage: [Element?]
    private var head = 0
    private var tail = 0
    private(set) var count = 0

    init(capacity: Int) {
        storage = Array(repeating: nil, count: capacity)
    }

    mutating func enqueue(_ element: Element) {
        storage[tail] = element
        tail = (tail + 1) % storage.count
        if count == storage.count {
            head = (head + 1) % storage.count
        } else {
            count += 1
        }
    }

    // MARK: Collection
    typealias Index = Int

    var startIndex: Int { 0 }
    var endIndex: Int { count }

    func index(after i: Int) -> Int {
        precondition(i < endIndex, "Cannot advance past endIndex")
        return i + 1
    }

    subscript(position: Int) -> Element {
        precondition((0..<count).contains(position), "Index out of bounds")
        let actual = (head + position) % storage.count
        return storage[actual]!
    }
}

A few details in that snippet are worth highlighting:

  • storage stores optionals so the buffer can keep a fixed capacity while tracking empty slots. head and tail advance as you enqueue, but the array never reallocates.
  • count is maintained separately. A ring buffer might be partially filled, so relying on storage.count would lie about how many elements are actually available.
  • index(after:) and the subscript accept logical indexes (0 up to, but not including, count) and translate them to the right slot in storage by offsetting from head and wrapping with the modulo operator. That bookkeeping keeps iteration stable even after the buffer wraps around.
  • Each accessor defends the invariants with precondition. Skip those checks and a stray index can pull stale data or walk off the end without warning.

Even in an example as small as the one above, you can see how much responsibility you take on once you adopt Collection.
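To see the wrap-around bookkeeping in action, here's the buffer again (repeated so this snippet runs on its own) with a quick demo:

```swift
// The RingBuffer from above, repeated so this snippet is self-contained.
struct RingBuffer<Element>: Collection {
    private var storage: [Element?]
    private var head = 0
    private var tail = 0
    private(set) var count = 0

    init(capacity: Int) {
        storage = Array(repeating: nil, count: capacity)
    }

    mutating func enqueue(_ element: Element) {
        storage[tail] = element
        tail = (tail + 1) % storage.count
        if count == storage.count {
            head = (head + 1) % storage.count
        } else {
            count += 1
        }
    }

    typealias Index = Int
    var startIndex: Int { 0 }
    var endIndex: Int { count }

    func index(after i: Int) -> Int {
        precondition(i < endIndex, "Cannot advance past endIndex")
        return i + 1
    }

    subscript(position: Int) -> Element {
        precondition((0..<count).contains(position), "Index out of bounds")
        return storage[(head + position) % storage.count]!
    }
}

// Enqueue five values into a buffer that only holds three:
var buffer = RingBuffer<Int>(capacity: 3)
for value in 1...5 { buffer.enqueue(value) }

// 1 and 2 were overwritten when the buffer wrapped around,
// and iteration starts from the logical head:
let contents = Array(buffer) // [3, 4, 5]
```

Note that Array(buffer) works for free: Collection refines Sequence, so conforming types get iteration via the default IndexingIterator.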

In Summary

Iteration looks simple because Swift hides the boilerplate, but there’s a surprisingly rich protocol hierarchy behind every loop. Once you know how Sequence, Collection, and their async siblings interact, you can build data structures that feel natural in Swift, reason about performance, and bridge legacy callbacks into clean async code.

If you want to keep exploring after this, revisit the posts I’ve written on actors and data races to see how iteration interacts with isolation. Or take another look at my pieces on map and flatMap to dig deeper into lazy sequences and functional pipelines. Either way, the next time you reach for for item in list, you’ll know exactly what’s happening under the hood and how to choose the right approach for the job.

Using Observations to observe @Observable model properties

Starting with Xcode 26, there's a new way to observe properties of your @Observable models. In the past, we had to use the withObservationTracking function to access properties and receive changes with willSet semantics. In Xcode 26 and Swift 6.2, we have access to an entirely new approach that will make observing our models outside of SwiftUI much simpler.

In this post, we'll take a look at how we can use Observations to observe model properties. We'll also go over some of the possible pitfalls and caveats associated with Observations that you should be aware of.

Setting up an observation sequence

Swift's new Observations object allows us to build an AsyncSequence based on properties of an @Observable model.

Let's consider the following @Observable model:

@Observable 
class Counter {
  var count = 0
}

Let's say we'd like to observe changes to the count property outside of a SwiftUI view. Maybe we're building something on the server or the command line where SwiftUI isn't available, or maybe we're observing this model to kick off some non-UI related process. The specifics don't matter much; the point is that we need to observe our model outside of SwiftUI's automatic change tracking.

To observe our Counter without the new Observations, you'd write something like the following:

class CounterObserver {
  let counter: Counter

  init(counter: Counter) {
    self.counter = counter
  }

  func observe() {
    withObservationTracking { 
      print("counter.count: \(counter.count)")
    } onChange: {
      self.observe()
    }
  }
}

This uses withObservationTracking which comes with its own caveats as well as a pretty clunky API.

When we refactor the above to work with the new Observations, we get something like this:

class CounterObserver {
  let counter: Counter

  init(counter: Counter) {
    self.counter = counter
  }

  func observe() {
    Task { [weak self] in
      let values = Observations { [weak self] in
        guard let self else { return 0 }
        return self.counter.count 
      }

      for await value in values {
        guard let self else { break }
        print("counter.count: \(value)")
      }
    }
  }
}

There are two key steps to observing changes with Observations:

  1. Setting up your async sequence of observed values
  2. Iterating over your observation sequence

Let's take a closer look at both steps to understand how they work.

Setting up an async sequence of observed values

The Observations object that we created in the example is an async sequence. This sequence will emit values whenever a change to our model's values is detected. Note that Observations will only inform us about changes that we're actually interested in. This means that the only properties that we're informed about are properties that we access in the closure that we pass to Observations.

This closure also returns a value. The returned value is the value that's emitted by the async sequence that we create.

In this case, we created our Observations as follows:

let values = Observations { [weak self] in
  guard let self else { return 0 }
  return self.counter.count 
}

This means that we observe and return whatever value our count is.

We could also change our code as follows:

let values = Observations { [weak self] in
  guard let self else { return "" }
  return "counter.count is \(self.counter.count)"
}

This code observes counter.count but our async sequence will provide us with strings instead of just the counter's value.

There are two things about this code that I'd like to focus on: memory management and the output of our observation sequence.

Let's look at the output first, and then we can talk about the memory management implications of using Observations.

Sequences created by Observations automatically observe every property that you access in the closure. In this case we've accessed a single property, so we're informed whenever count changes. If we accessed more properties, a change to any of them would cause us to receive a new value. Whatever we return from the closure is what our async sequence outputs; in this case that's a string, but it can be anything we want. The properties we access don't even have to be part of the return value: accessing a property is enough to have the closure called again, even when you don't use that property to compute your output.

You have probably noticed that my Observations closure contains a [weak self]. Every time a change to our observed properties happens, the Observations closure gets called. That means that internally, Observations will have to somehow retain our closure. As a result of that, we can create a retain cycle by capturing self strongly inside of an Observations closure. To break that, we should use a weak capture.

This weak capture means that we have an optional self to deal with. In my case, I opted to return an empty string instead of nil. That's because I don't want to have to work with an optional value later on in my iteration, but if you're okay with that then there's nothing wrong with returning nil instead of a default value. Do note that returning a default value does not do any harm as long as you're setting up your iteration of the async sequence correctly.

Speaking of which, let's take a closer look at that.

Iterating over your observation sequence

Once you've set up your Observations, you have an async sequence that you can iterate over. This sequence will output the values that you return from your Observations closure. As soon as you start iterating, you will immediately receive the "current" value for your observation.

Iterating over your sequence is done with an async for loop which is why we're wrapping this all in a Task:

Task { [weak self] in
  let values = Observations { [weak self] in
    guard let self else { return 0 }
    return self.counter.count 
  }

  for await value in values {
    guard let self else { break }
    print("counter.count: \(value)")
  }
}

Wrapping our work in a Task means that our Task needs a [weak self] just like our Observations closure does. The reason is slightly different though. If you want to learn more about memory management in tasks that contain async for loops, I highly recommend reading my post on the topic.

When iterating over our Observations sequence we'll receive values in our loop after they've been assigned to our @Observable model. This means that Observations sequences have "did set semantics" while withObservationTracking would have given us "will set semantics".

Now that we know about the happy paths of Observations, let's talk about some caveats.

Caveats of Observations

When you observe values with Observations, the first and main caveat is that memory management is crucial to avoiding retain cycles. You've learned about this in the previous section, and getting it right can be tricky, especially because how and when you unwrap self in your Task matters. Unwrap it before the for loop and you've created a memory leak that lasts until the Observations sequence ends (which it won't).

A second caveat that I'd like to point out is that you can miss values from your Observations sequence if it produces values faster than you're consuming them.

So for example, if we introduce a sleep of three seconds in our loop we'll end up with missed values when we produce a new value every second:

for await value in values {
  guard let self else { break }
  print(value)
  try await Task.sleep(for: .seconds(3))
}

The result of sleeping in this loop while we produce more values is that we will miss values that were sent during the sleep. Every time we receive a new value, we receive the "current" value and we'll miss any values that were sent in between.

Usually this is fine, but if you want to process every value that got produced and processing might take some time, you'll want to implement some buffering of your own. For example, if every produced value resulted in a network call, you wouldn't want to await the network call inside of your loop, since there's a good chance you'd miss values while waiting.
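One way to add that buffering yourself is to drain the source sequence into an AsyncStream as fast as values arrive, and do the slow processing on the stream instead. This is a generic sketch that works for any async sequence, not something tied to Observations specifically:

```swift
// Forwards every element of `source` into an AsyncStream with an
// unbounded buffer, so a slow consumer no longer causes elements
// to be skipped by the source.
func buffered<S: AsyncSequence>(_ source: S) -> AsyncStream<S.Element>
    where S.Element: Sendable {
    AsyncStream { continuation in
        let forwarder = Task {
            do {
                for try await element in source {
                    continuation.yield(element)
                }
            } catch {
                // source failed or was cancelled; stop forwarding
            }
            continuation.finish()
        }
        continuation.onTermination = { _ in forwarder.cancel() }
    }
}
```

The trade-off is memory: an unbounded buffer can grow without limit if the consumer never catches up, so for production code you'd want to pick an explicit AsyncStream buffering policy that matches your needs.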

Overall, I think Observations is a huge improvement over the tools we had before Observations came around. Improvements can be made in the buffering department but I think for a lot of applications the current situation is good enough to give it a try.

How to unwrap [weak self] in Swift Concurrency Tasks?

As a developer who uses Swift regularly, [weak self] should be something that's almost muscle memory to you. I've written about using [weak self] before in the context of when you should generally capture self weakly in your closures to avoid retain cycles. The bottom line of that post is that closures that aren't @escaping will usually not need a [weak self] because the closures aren't retained beyond the scope of the function you're passing them to. In other words, closures that aren't @escaping don't usually cause memory leaks. I'm sure there are exceptions but generally speaking I've found this rule of thumb to hold up.

This idea of not needing [weak self] for all closures is reinforced by the introduction of SE-0269 which allows us to leverage implicit self captures in situations where closures aren’t retained, making memory leaks unlikely.

Later, I also wrote about how Task instances that iterate async sequences are fairly likely to have memory leaks due to this implicit usage of self.

So how do we use [weak self] on Task? And if we shouldn't, how do we avoid memory leaks?

In this post, I aim to answer these questions.

The basics of using [weak self] in completion handlers

As Swift developers, our first instinct is to do a weak -> strong dance in pretty much every closure. For example:

loadData { [weak self] data in 
  guard let self else { return }

  // use data
}

This approach makes a lot of sense. We start the call to loadData, and once the data is loaded our closure is called. Because we don't need to run the closure if self has been deallocated during our loadData call, we use guard let self to make sure self is still there before we proceed.

This becomes increasingly important when we stack work:

loadData { [weak self] data in 
  guard let self else { return }

  processData(data) { [weak self] models in 
    // use models
  }
}

Notice that we use [weak self] in both closures. Once we grab self with guard let self our reference is strong again. This means that for the rest of our closure, self is held on to as a strong reference. Due to SE-0269 we can call processData without writing self.processData if we have a strong reference to self.

The closure we pass to processData also captures self weakly. That's because we don't want that closure to capture our strong reference. We need a new [weak self] to prevent the closure that we passed to processData from creating a (shortly lived) memory leak.

When we take all this knowledge and we transfer it to Task, things get interesting...

Using [weak self] and unwrapping it immediately in a Task

Let's say that we want to write an equivalent of our loadData and processData chain, but they're now async functions that don't take a completion handler.

A common first approach would be to do the following:

Task { [weak self] in
  guard let self else { return }

  let data = await loadData()
  let models = await processData(data)
}

Unfortunately, this code does not solve the memory leak that we solved in our original example.

An unstructured Task you create will start running as soon as possible. This means that if we have a function like the one below, the task will run as soon as the function reaches the end of its body:

func loadModels() {
  // 1
  Task { [weak self] in
    // 3: _immediately_ after the function ends
    guard let self else { return }

    let data = await loadData()
    let models = await processData(data)
  }
  // 2
}

More complex call stacks might push the start of our task back by a bit, but generally speaking, the task will run pretty much immediately.

The problem with guard let self at the start of your Task

Because a Task in Swift starts running as soon as possible, the chance of self getting deallocated between creating and starting the task is very small. It's not impossible, but by the time your Task starts, self is most likely still around.

After we make our reference to self strong, the Task holds on to self until the Task completes. In our case, that means we retain self until the call to processData completes. If we translate this back to callback-based code, here's what the equivalent would look like:

loadData { data in 
  self.processData(data) { models in 
    // for example, self.useModels
  }
}

We don't have [weak self] anywhere. This means that self is retained until the closure we pass to processData has run.

The exact same thing is happening in our Task above.

Generally speaking, this isn't a problem. Your work will finish and self is released. Maybe it sticks around a bit longer than you'd like but it's not a big deal in the grand scheme of things.

But how would we prevent kicking off processData if self has been deallocated in this case?

Preventing a strong self inside of your Task

We could make sure that we never make our reference to self into a strong one. For example, by checking whether self is still around with a nil check, or by guarding the result of processData. I'm using both techniques in the snippet below, although the guard self != nil could be omitted in this case:

Task { [weak self] in
  let data = await loadData()
  guard self != nil else { return }

  guard let models = await self?.processData(data) else {
    return
  }

  // use models
}

The code isn't pretty, but it would achieve our goal.

Let's take a look at a slightly more complex issue that involves repeatedly fetching data in an unstructured Task.

Using [weak self] in a longer running Task

Our original example featured two async calls that, based on their names, probably wouldn't take all that long to complete. In other words, we were solving a memory leak that would typically resolve itself within seconds, and you could argue that it's not actually a leak worth solving.

A more complex and interesting example could look as follows:

func loadAllPages() {
  // only fetch pages once
  guard fetchPagesTask == nil else { return }

  fetchPagesTask = Task { [weak self] in
    guard let self else { return }

    var hasMorePages = true
    while hasMorePages && !Task.isCancelled {
      let page = await fetchNextPage()
      hasMorePages = !page.isLastPage
    }

    // we're done, we could call loadAllPages again to restart the loading process
    fetchPagesTask = nil
  }
}

Let's remove some noise from this function so we can see the bits that are actually relevant to whether or not we have a memory leak. I wanted to show you the full example to help you understand the bigger picture of this code sample...

 Task { [weak self] in
  guard let self else { return }

  var hasMorePages = true
  while hasMorePages {
    let page = await fetchNextPage()
    hasMorePages = !page.isLastPage
  }
}

There. That's much easier to look at, isn't it?

So in our Task we have a [weak self] capture and we immediately unwrap it with guard let self. You already know this won't do what we want. The Task starts running immediately, and self will be held strongly until the task ends. That said, we do want our Task to end if self is deallocated.

To achieve this, we can actually move our guard let self into the while loop:

Task { [weak self] in
  var hasMorePages = true

  while hasMorePages {
    guard let self else { break }
    let page = await fetchNextPage()
    hasMorePages = !page.isLastPage
  }
}

Now, every iteration of the while loop gets its own strong reference to self that's released at the end of the iteration. The next iteration attempts to grab its own strong reference; if that fails because self is now gone, we break out of the loop.

We fixed our problem by capturing a strong reference to self only when we need it, and by making it as short-lived as possible.

In Summary

Most Task closures in Swift don't strictly need [weak self] because the Task generally only exists for a relatively short amount of time. If you find that you do want to make sure that the Task doesn't cause memory leaks, you should make sure that the first line in your Task isn't guard let self else { return }. If that's the first line in your Task, you're capturing a strong reference to self as soon as the Task starts running which usually is almost immediately.

Instead, unwrap self only when you need it and keep the unwrapped self around for as short a time as possible (for example, in a loop's body). You could also use self? to avoid unwrapping altogether; that way you never grab a strong reference to self. Lastly, you could consider not capturing self at all. If you can, capture only the properties you need so that you don't rely on all of self sticking around when you only need parts of it.
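As a sketch of that last point, here's what capturing just a dependency instead of self can look like. The types and names are made up for illustration:

```swift
// A hypothetical dependency that records work done by the task.
actor DownloadLog {
    private(set) var urls: [URL] = []
    func record(_ url: URL) { urls.append(url) }
    func count() -> Int { urls.count }
}

final class Preloader {
    let log = DownloadLog()

    func preload(_ urls: [URL]) -> Task<Void, Never> {
        // Capture only the log: if the Preloader is deallocated mid-flight,
        // the task keeps the log alive but not the whole Preloader.
        Task { [log] in
            for url in urls {
                await log.record(url)
            }
        }
    }
}
```

Because the capture list names log explicitly, there's no reference to self inside the task at all, so there's nothing to unwrap and no retain cycle to worry about.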

Should you opt-in to Swift 6.2’s Main Actor isolation?

Swift 6.2 comes with some interesting Concurrency improvements. One of the most notable changes is that there's now a compiler flag that will, by default, isolate all your (implicitly nonisolated) code to the main actor. This is a huge change, and in this post we'll explore whether or not it's a good change. We'll do this by taking a look at some of the complexities that concurrency introduces naturally, and we'll assess whether moving code to the main actor is the (correct) solution to these problems.

By the end of this post, you should hopefully be able to decide for yourself whether or not main actor isolation makes sense. I encourage you to read through the entire post and to carefully think about your code and its needs before you jump to conclusions. In programming, the right answer to most problems depends on the exact problems at hand. This is no exception.

We'll start off by looking at the defaults for main actor isolation in Xcode 26 and Swift 6. Then we'll move on to determining whether we should keep these defaults or not.

Understanding how Main Actor isolation is applied by default in Xcode 26

When you create a new project in Xcode 26, that project will have two new features enabled:

  • Global actor isolation is set to MainActor.self
  • Approachable concurrency is enabled

If you want to learn more about approachable concurrency in Xcode 26, I recommend you read about it in my post on Approachable Concurrency.

The global actor isolation setting will automatically isolate all your code to either the Main Actor or no actor at all (nil and MainActor.self are the only two valid values).

This means that all code that you write in a project created with Xcode 26 will be isolated to the main actor (unless it's isolated to another actor or you mark the code as nonisolated):

// this class is @MainActor isolated by default
class MyClass {
  // this property is @MainActor isolated by default
  var counter = 0

  func performWork() async {
    // this function is @MainActor isolated by default
  }

  nonisolated func performOtherWork() async {
    // this function is nonisolated so it's not @MainActor isolated
  }
}

// this actor and its members won't be @MainActor isolated
actor Counter {
  var count = 0
}

The result of your code being main actor isolated by default is that your app will effectively be single threaded unless you explicitly introduce concurrency. Everything you do will start off on the main thread and stay there unless you decide you need to leave the Main Actor.

Understanding how Main Actor isolation is applied for new SPM Packages

For SPM packages, it's a slightly different story. A newly created SPM Package will not have its defaultIsolation flag set at all. This means that a new SPM Package will not isolate your code to the MainActor by default.

You can change this by passing defaultIsolation to your target's swiftSettings:

swiftSettings: [
    .defaultIsolation(MainActor.self)
]

Note that a newly created SPM Package also won't have Approachable Concurrency turned on. More importantly, it won't have NonIsolatedNonSendingByDefault turned on. This means there's an interesting difference between code in your SPM Packages and your app target.
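For context, here's roughly how those settings might sit together in a Package.swift target. The target name is a placeholder, and you should double-check the upcoming-feature identifier against your toolchain:

```swift
// Hypothetical target in a Package.swift using swift-tools-version 6.2.
.target(
    name: "MyFeature",
    swiftSettings: [
        // run everything in this target on the Main Actor by default
        .defaultIsolation(MainActor.self),
        // have nonisolated async functions run on the caller's actor
        // (one of the features Approachable Concurrency bundles)
        .enableUpcomingFeature("NonisolatedNonsendingByDefault")
    ]
)
```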

In your app target, everything will run on the Main Actor by default. Any nonisolated async functions that you've defined in your app target will run on the caller's actor. So if you call them from the Main Actor, they'll run on the Main Actor; call them from elsewhere and they'll run there.

In your SPM Packages, code does not run on the Main Actor by default, and nonisolated async functions always run on a background thread.

Confusing isn't it? I know...

The rationale for running code on the Main Actor by default

In a codebase that relies heavily on concurrency, you'll have to deal with a lot of concurrency-related complexity. More specifically, a codebase with a lot of concurrency will have a lot of data race potential. This means that Swift will flag a lot of potential issues (when you're using the Swift 6 language mode) even when you never really intended to introduce a ton of concurrency. Swift 6.2 is much better at recognizing code that's safe even though it's concurrent but as a general rule you want to manage the concurrency in your code carefully and avoid introducing concurrency by default.

Let's look at a code sample where we have a view that leverages a task view modifier to retrieve data:

struct MoviesList: View {
  @State var movieRepository = MovieRepository()
  @State var movies = [Movie]()

  var body: some View {
    Group {
      if movies.isEmpty == false {
        List(movies) { movie in
          Text(movie.id.uuidString)
        }
      } else {
        ProgressView()
      }
    }.task {
      do {
        // Sending 'self.movieRepository' risks causing data races
        movies = try await movieRepository.loadMovies()
      } catch {
        movies = []
      }
    }
  }
}

This code has an issue: sending self.movieRepository risks causing data races.

The reason we're seeing this error is that we're calling a nonisolated async method on an instance of MovieRepository that is isolated to the main actor. That's a problem: inside of loadMovies we have access to self from a background thread, because that's where loadMovies runs. At the same time, our view can access the instance from the main thread, so we are indeed creating a possible data race.

There are two ways to fix this:

  1. Make sure that loadMovies runs on the same actor as its callsite (this is what nonisolated(nonsending) would achieve)
  2. Make sure that loadMovies runs on the Main Actor

Option 2 makes a lot of sense because, as far as this example is concerned, we always call loadMovies from the Main Actor anyway.

Depending on the contents of loadMovies and the functions it calls, we might simply be moving our compiler error from the view over to our repository, because the newly @MainActor-isolated loadMovies internally calls a function on an object that is neither Sendable nor isolated to the Main Actor.

Eventually, we might end up with something that looks as follows:

class MovieRepository {
  @MainActor
  func loadMovies() async throws -> [Movie] {
    let req = makeRequest()
    let movies: [Movie] = try await perform(req)

    return movies
  }

  func makeRequest() -> URLRequest {
    let url = URL(string: "https://example.com")!
    return URLRequest(url: url)
  }

  @MainActor
  func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    let (data, _) = try await URLSession.shared.data(for: request)
    // Sending 'self' risks causing data races
    return try await decode(data)
  }

  nonisolated func decode<T: Decodable>(_ data: Data) async throws -> T {
    return try JSONDecoder().decode(T.self, from: data)
  }
}

We've @MainActor isolated all async functions except for decode. At this point we can't call decode because we can't safely send self into the nonisolated async function decode.

In this specific case, the problem could be fixed by marking MovieRepository as Sendable. But let's assume that we have reasons that prevent us from doing so. Maybe the real object holds on to mutable state.

We could fix our problem by actually making all of MovieRepository isolated to the Main Actor. That way, we can safely pass self around even if it has mutable state. And we can still keep our decode function as nonisolated and async to prevent it from running on the Main Actor.

The problem with the above...

Finding the solution to the issues I describe above is pretty tedious, and it forces us to explicitly opt out of concurrency for specific methods and eventually an entire class. This feels wrong. It feels like we're having to decrease the quality of our code just to make the compiler happy.

In reality, Swift 6.1 and earlier introduced concurrency by default: run as much as possible in parallel and things will be great.

This is almost never true. Concurrency is not the best default to have.

In code that you wrote pre-Swift Concurrency, most of your functions would just run wherever they were called from. In practice, this meant that a lot of your code would run on the main thread without you worrying about it. It simply was how things worked by default and if you needed concurrency you'd introduce it explicitly.

The new default in Xcode 26 restores this behavior, both by running your code on the main actor by default and by having nonisolated async functions inherit the caller's actor.

This means that the example we had above becomes much simpler with the new defaults...

Understanding how default isolation simplifies our code

If we set our default isolation to the Main Actor along with Approachable Concurrency, we can rewrite the code from earlier as follows:

class MovieRepository {
  func loadMovies() async throws -> [Movie] {
    let req = makeRequest()
    let movies: [Movie] = try await perform(req)

    return movies
  }

  func makeRequest() -> URLRequest {
    let url = URL(string: "https://example.com")!
    return URLRequest(url: url)
  }

  func perform<T: Decodable>(_ request: URLRequest) async throws -> T {
    let (data, _) = try await URLSession.shared.data(for: request)
    return try await decode(data)
  }

  @concurrent func decode<T: Decodable>(_ data: Data) async throws -> T {
    return try JSONDecoder().decode(T.self, from: data)
  }
}

Our code is much simpler and safer, and we've inverted one key part of it. Instead of getting concurrency by default, I had to explicitly mark my decode function as @concurrent. By doing this, I ensure that decode is not main actor isolated and always runs on a background thread. Meanwhile, both my async and my synchronous functions in MovieRepository run on the Main Actor. This is perfectly fine because once I hit an await, like I do in perform, the async function I'm in suspends so the Main Actor can do other work until the function I'm awaiting returns.

Performance impact of Main Actor by default

While running code concurrently can increase performance, concurrency doesn't always make things faster. Additionally, while blocking the main thread is bad, we shouldn't be afraid to run code on the main thread.

Whenever a program runs code on one thread, then hops to another, and then back again, there's a performance cost to be paid. It's a small cost usually, but it's a cost either way.

It's often cheaper for a quick operation that started on the Main Actor to stay there than it is to perform that operation on a background thread and hand the result back to the Main Actor. Being on the Main Actor by default makes it much more explicit when you're leaving it, which makes it easier to decide whether you're ready to pay the cost of thread hopping. I can't decide for you what the cutoff is; I can only tell you that there is a cost, and for most apps it's probably small enough to never matter. By defaulting to the Main Actor you avoid paying that cost accidentally, and I think that's a good thing.

So, should you set your default isolation to the Main Actor?

For your app targets it makes a ton of sense to run on the Main Actor by default. It allows you to write simpler code and to introduce concurrency only when you need it. You can still mark objects as nonisolated when you find that they need to be used from multiple actors without awaiting each interaction (models are a good example of objects you'll probably mark nonisolated). You can use @concurrent to ensure certain async functions don't run on the Main Actor, and you can use nonisolated on functions that should inherit the caller's actor. Finding the correct keyword can sometimes be a bit of trial and error, but I typically use either @concurrent or nothing (@MainActor by default). Needing nonisolated is rarer in my experience.

For your SPM packages the decision is less obvious. If you have a Networking package, you probably don't want it to use the Main Actor by default. Instead, you might want to make everything in the package Sendable, for example, or design your Networking object as an actor. It's entirely up to you.

If you're building UI Packages, you probably do want to isolate those to the Main Actor by default since pretty much everything that you do in a UI Package should be used from the Main Actor anyway.

The answer isn't a simple "yes, you should", but I do think that when you're in doubt isolating to the Main Actor is a good default choice. When you find that some of your code needs to run on a background thread you can use @concurrent.

Practice makes perfect, and I hope that by understanding the "Main Actor by default" rationale you can make an educated decision on whether you need the flag for a specific app or Package.

What is Approachable Concurrency in Xcode 26?

Xcode 26 allows developers to opt in to several of Swift 6.2’s features that make concurrency more approachable, through a compiler setting called “Approachable Concurrency” or SWIFT_APPROACHABLE_CONCURRENCY. In this post, we’ll take a look at how to enable Approachable Concurrency and which compiler settings are affected by it.

How to enable approachable concurrency in Xcode?

To enable approachable concurrency, you should go to your project’s build settings and perform a search for “approachable concurrency” or just the word “approachable”. This will filter all available settings and should show you the setting you’re interested in:

By default, this setting is set to No, which means that you’re not using Approachable Concurrency as of Xcode 26 Beta 2. This might change in a future release, and this post will be updated if that happens.

The exact settings that you see enabled under Swift Compiler - Upcoming Features will be different depending on your Swift Language Version. If you’re using the Swift 6 Language Version, you will see everything except the following two settings set to Yes:

  • Infer isolated conformances
  • nonisolated(nonsending) By Default

If you’re using the Swift 5 Language Version like I am in my sample project, you will see everything set to Yes by default if you've created your project in Xcode 26. If you've created your project with Xcode 16 and are using the Swift 5 language mode, you'll find that Approachable Concurrency is not on by default.

To turn on approachable concurrency, set the value to Yes for your target:

This will automatically opt you in to all features shown above. Let’s take a look at all five settings to see what they do, and why they’re important to making concurrency more approachable.

Enabling approachable concurrency in a Swift Package

Packages are a little bit more complex than Xcode projects. By default, a newly created package will use the Swift 6.2 tools version and the Swift 6 language mode. In practice, this means that most of Approachable Concurrency's features will be on by default. There are two features that you'll need to enable manually though:

swiftSettings: [
  .enableUpcomingFeature("NonisolatedNonsendingByDefault"),
  .enableUpcomingFeature("InferIsolatedConformances")
]

If you're using the Swift 5 language mode in your package, your swift settings should look a bit more like this:

swiftSettings: [
   .swiftLanguageMode(.v5),
   .enableUpcomingFeature("NonisolatedNonsendingByDefault"),
   .enableUpcomingFeature("InferIsolatedConformances"),
   .enableUpcomingFeature("InferSendableFromCaptures"),
   .enableUpcomingFeature("DisableOutwardActorInference"),
   .enableUpcomingFeature("GlobalActorIsolatedTypesUsability"),
]

Adding these settings to your package will get you an equivalent setup to that of Xcode when you enable approachable concurrency for your app target.
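
Putting the Swift 6 language mode variant together, a complete manifest might look roughly like this (a sketch; the package and target names are placeholders):

```swift
// swift-tools-version: 6.2
import PackageDescription

let package = Package(
  name: "Networking",
  targets: [
    .target(
      name: "Networking",
      swiftSettings: [
        // The two features that aren't on by default in Swift 6 mode:
        .enableUpcomingFeature("NonisolatedNonsendingByDefault"),
        .enableUpcomingFeature("InferIsolatedConformances"),
      ]
    )
  ]
)
```
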

Which settings are part of approachable concurrency?

Approachable concurrency mostly means that Swift Concurrency will be more predictable in terms of compiler errors and warnings. In lots of cases, Swift Concurrency had strange and hard-to-understand behaviors that resulted in compiler errors that weren’t strictly needed.

For example, the compiler would sometimes complain about a potential data race even when it could have proven that no data race would actually occur at runtime.

With approachable concurrency, we opt-in to a range of features that make this easier to reason about. Let’s take a closer look at these features starting with nonisolated(nonsending) by default.

Understanding nonisolated(nonsending) By Default

The compiler setting for nonisolated(nonsending) is probably the most important one. With nonisolated(nonsending), your nonisolated async functions run on the calling actor’s executor by default. It used to be the case that a nonisolated async function would always run on the global executor. That behavior now changes to be consistent with nonisolated functions that are not async.
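
Here's a small sketch of the kind of function this affects; TextFormatter is a hypothetical type, and the described behavior assumes NonisolatedNonsendingByDefault is enabled:

```swift
import Foundation

struct TextFormatter {
  // This nonisolated async function used to always hop to the global
  // executor. With nonisolated(nonsending) by default, it runs on the
  // calling actor's executor instead: call it from the main actor and
  // it stays on the main actor.
  func trimmed(_ raw: String) async -> String {
    raw.trimmingCharacters(in: .whitespaces)
  }
}
```
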

The @concurrent declaration is also part of this feature. You can study it more in depth in my post on @concurrent.

Understanding Infer Sendable for Methods and Key Path Literals

This compiler flag introduces a less obvious, but still useful, improvement to how Swift handles function values and key paths. It allows function values and unapplied method references to automatically be considered Sendable when everything they capture is Sendable, without forcing developers to jump through hoops.

Similarly, in some cases where you’d use a KeyPath in Swift, the compiler would complain about the key path capturing non-Sendable state even when there was no real potential for a data race.
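
A quick sketch of what the inference buys you; User is a hypothetical type:

```swift
struct User: Sendable {
  var name: String
}

// With InferSendableFromCaptures, this key path literal is inferred to
// be Sendable because User and String are both Sendable, so it can be
// captured in @Sendable closures without compiler complaints. The same
// inference applies to unapplied method references like User.init.
let nameKeyPath = \User.name
```
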

This feature is already always on in the Swift 6 Language Version, and it’s enabled by Approachable Concurrency when you’re using the Swift 5 Language Version (which is the default).

I’ve found that this setting solves a real issue, but not one that I think a lot of developers will immediately benefit from.

Understanding Infer Isolated Conformances

In Swift 6, it’s possible to have protocol conformances that are isolated to a specific global actor. The Infer Isolated Conformances build setting will make it so that protocol conformances on a type that’s isolated to a global actor will automatically be isolated to the same global actor.

Consider the following code:

@MainActor
struct MyModel: Decodable {
}

I’ve explicitly constrained MyModel to the main actor. But without inferred isolated conformances, my conformance to Decodable is not on the main actor, which can result in compiler errors.

That’s why with SE-0470, we can turn on a feature that allows the compiler to automatically isolate our conformance to Decodable to the main actor if the conforming type is also isolated to the main actor.
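
Extending the example above into a sketch (the title property and loadModel helper are hypothetical), the inferred main-actor conformance means decoding happens from main-actor code:

```swift
import Foundation

@MainActor
struct MyModel: Decodable {
  var title: String
}

// Assuming InferIsolatedConformances is enabled, the Decodable
// conformance above is inferred to be @MainActor-isolated, so it's
// meant to be used from main-actor code like this helper:
@MainActor
func loadModel(from data: Data) throws -> MyModel {
  try JSONDecoder().decode(MyModel.self, from: data)
}
```
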

Understanding global-actor-isolated types usability

This build setting is another one that’s always on when you’re using the Swift 6 Language mode. With this feature, the compiler will make it less likely that you need to mark a property as nonisolated(unsafe). This escape hatch exists for properties that can safely be transferred across concurrency domains even when they’re not sendable.

In some cases, the compiler can actually prove that even though a property isn’t sendable, it’s still safe to be passed from one isolation context to another. For example, if you have a type that is isolated to the main actor, its properties can be passed to other isolation contexts without problems. You don’t need to mark these as nonisolated(unsafe) because you can only interact with these properties from the main actor anyway.

This setting also includes other improvements to the compiler that will allow globally isolated types to use non-Sendable state due to the protection that’s imposed by the type being isolated to a global actor.

Again, this feature is always on when you’re using the Swift 6 Language Version, and I think it’s a type of problem that you might have run into in the past so it’s nice to see this solved through a build setting that makes the compiler smarter.

Understanding Disable outward actor isolation inference

This build setting applies to code that’s using property wrappers. This is another setting that’s always on in the Swift 6 language mode and it fixes a rather surprising behavior that some developers might remember from SwiftUI.

This setting is explained in depth in SE-0401 but the bottom line is this.

If you’re using a property wrapper that has an actor-isolated wrappedValue (like @StateObject which has a wrappedValue that’s isolated to the main actor) then the entire type that uses that property wrapper is also isolated to the same actor.

In other words, back when View wasn’t annotated with @MainActor in SwiftUI, using @StateObject in your View would make your View struct @MainActor isolated.

This behavior was implicit and very confusing, so I’m honestly quite glad that it’s gone in the Swift 6 Language Version.
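
To illustrate without depending on SwiftUI, here's a hypothetical property wrapper that mirrors @StateObject's shape, with a main-actor-isolated wrappedValue:

```swift
@propertyWrapper
struct MainActorValue<Value> {
  private var storage: Value
  init(wrappedValue: Value) { storage = wrappedValue }

  @MainActor var wrappedValue: Value {
    get { storage }
    set { storage = newValue }
  }
}

// Under the old SE-0401 rule, using this wrapper silently made all of
// Settings main-actor isolated. With DisableOutwardActorInference,
// Settings stays nonisolated; only access to title requires the
// main actor.
struct Settings {
  @MainActorValue var title = "Hello"
}
```
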

Deciding whether you should opt-in

Now that you know a little bit more about the features that are part of approachable concurrency, I hope that you can see that it makes a lot of sense to opt-in to approachable concurrency. Paired with your code running on the main actor by default for new projects created with Xcode 26, you’ll find that approachable concurrency really does deliver on its promise. It gets rid of certain obscure compiler errors that required weird fixes for non-existent problems.

Ternary operator in Swift explained

The ternary operator is one of those things that exists in virtually every modern programming language. When writing code, a common goal is to make sure that your code is succinct and no more verbose than it needs to be. A ternary expression is a useful tool for achieving this.

What is a ternary?

Ternaries are essentially a quick way to write an if statement on a single line. For example, if you want to tint a SwiftUI button based on a specific condition, your code might look a bit like this:

struct SampleView: View {
  @State var username = ""

  var body: some View {
    Button {} label: {
      Text("Submit")
    }.tint(username.isEmpty ? .gray : .red)
  }
}

The line where I tint the button contains a ternary and it looks like this: username.isEmpty ? .gray : .red. Generally speaking, a ternary always has the following shape <condition> ? <if true> : <else>. You must always provide all three of these "parts" when using a ternary. It's basically a shorthand way to write an if {} else {} statement.

When should you use ternaries?

Ternary expressions are incredibly useful when you're trying to assign a property based on a simple check, like the check for an empty value above. When you start nesting ternaries, or you find yourself evaluating a complex or long expression, it's probably a good sign that you should not use a ternary.
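
Two quick examples of where I'd draw the line; items (and the commented-out isError example) are hypothetical values:

```swift
let items = ["Swift", "Kotlin"]

// Fine: one short condition, two simple outcomes.
let title = items.isEmpty ? "No results" : "\(items.count) results"

// Probably a step too far; a nested ternary like this is better
// written as an if/else statement:
// let style = isError ? (isFatal ? .critical : .warning) : .info
```
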

It's pretty common to use ternaries in SwiftUI view modifiers because they make conditional application or styling fairly straightforward.

That said, a ternary isn't always easy to read so sometimes it makes sense to avoid them.

Replacing ternaries with if expressions

When you're using a ternary to assign a value to a property in Swift, you might want to consider using an if / else expression instead. For example:

let buttonColor: Color = if username.isEmpty { .gray } else { .red }

This syntax is more verbose but it's arguably easier to read. Especially when you make use of multiple lines:

let buttonColor: Color = if username.isEmpty { 
  .gray 
} else {
  .red
}

For now, you're only allowed to have a single expression on each code path, which makes if expressions only marginally better than ternaries for readability. You also can't use if expressions everywhere, so sometimes a ternary is just more flexible.
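
One place where an if expression won't work but a ternary will is directly inside a larger expression, such as a function argument or a string concatenation. The banner(for:) helper below is hypothetical:

```swift
func banner(for username: String) -> String {
  // An if expression can't appear mid-expression like this; a ternary can.
  "Welcome, " + (username.isEmpty ? "guest" : username)
}
```
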

I find that if expressions strike a balance between evaluating longer and more complex expressions in a readable way while also having some of the conveniences that a ternary has.