Loose coupling and the law of Demeter

When you're designing a new component for your codebase, you will usually only think of the component itself and the objects it interacts with directly. If you're designing a component that authenticates a user, you will typically only consider objects directly related to the authentication flow. You'll take into account that there's probably a network call, and maybe a central current user storage object. You don't want to spend time thinking about objects that are related to that network object, because that's not something the component you're designing should care about. Whether the network object caches responses from the server, or whether it uses some sort of configuration object, is not relevant to the needs of the authentication flow. However, it's not uncommon to see code like the following in production:

struct Authenticator {
  let networkingObject: Networking

  func authenticate(using email: String, password: String, completion: @escaping (Result<User, Error>) -> Void) {
    if networkingObject.configuration.requiresAuthentication {
      // authenticate
    }

    // proceed because we don't need to authenticate
  }
}

Code like the preceding snippet is called tightly coupled; it relies heavily on another object's implementation. In today's article, I will explain why tightly coupled code is a problem for flexibility, readability, and maintainability. You will also learn about an interesting principle called the law of Demeter. Let's dive right in, shall we?

Understanding why tight coupling is problematic

In the introduction of this article I showed you a code snippet that contained the following line of code:

networkingObject.configuration.requiresAuthentication

At first glance, you'll probably think there is nothing wrong with this code. The Authenticator has access to a Networking object. And we know that the networking object has a configuration, so we can access that configuration, and we can check whether the network's configuration says that authentication is required. Based on that line of code, we can draw a couple of conclusions:

  • The Authenticator depends on a configuration value to determine whether authentication is required.
  • The network's configuration cannot be private because that would prevent Authenticator from reading it.
  • The network's configuration cannot be removed or changed significantly because that would break the Authenticator.

The three conclusions that I listed are all implicit. If you're a developer who's tasked with making changes to the Networking object, and you've never worked on the Authenticator, it's unlikely that you're aware of the Authenticator and its needs. And even if you took a quick look at the Authenticator, it's not likely that you'd realize that changing the Networking configuration will break your Authenticator. Even the smallest changes to the Networking object can now impact Authenticator, which is undesirable.

We call this kind of situation "tight coupling". The Authenticator depends heavily on the Networking object's implementation details. You might even say that the Authenticator has an implicit dependency on a configuration object because, to function properly, it reads the requiresAuthentication property of that configuration. Both implicit dependencies and tight coupling are problematic because they make refactoring and changing code really complicated. Changing details in one object might cause other objects to break in unexpected ways. This can be frustrating, it often results in a significant increase in development time for new features and bug fixes, and it makes paying off tech debt incredibly hard.

Not only is code like this harder to maintain, it's also surprisingly complicated to write in the first place. To write code that is tightly coupled to other code, you need to know a lot about the rest of your codebase. You can only stitch together a chain of properties like networkingObject.configuration.requiresAuthentication when you have deep knowledge of the system and all components in it. However, in my experience, this kind of code is not written because the author made a conscious decision to do so. It's often written when the design of the code hasn't been given much thought up front and the path to the desired outcome is paved while you're walking it. What I mean by this is that a developer who writes tightly coupled code often tackles problems as they present themselves rather than anticipating how the code should work ahead of time.

There are many ways to avoid writing tightly coupled code that has implicit dependencies. Let's take a look at one of these approaches now.

Decoupling code using the law of Demeter

The law of Demeter, also known as the principle of least knowledge, presents a set of rules that should be followed in any codebase. The short explanation of this principle is that objects are only allowed to access properties, functions, and parameters that are directly available to them. If you keep that in mind, do you see how the following code violates this rule?

self.networkingObject.configuration.requiresAuthentication

I added self here to make it clear how deep we have to go to read the requiresAuthentication property. It's okay for self to access networkingObject since it's a property that's owned by self. Since the networkingObject exposes certain properties and methods that self is meant to interact with, it's also okay to access configuration if we need to. The real problem is with accessing requiresAuthentication. By accessing requiresAuthentication, self knows far too much about the structure of the code it depends on. One way to sum up the rules of the law of Demeter that I like is provided on its Wikipedia page:

  • Each unit should have only limited knowledge about other units: only units "closely" related to the current unit.
  • Each unit should only talk to its friends; don't talk to strangers.
  • Only talk to your immediate friends.

This list makes the idea very clear. Everything you access should be as close to the place you're accessing it from as possible.

So what if we still want to read requiresAuthentication by going through the Networking object? You might argue that the Networking object should be the source of truth for whether authentication is required, which is fine. The important part is that we depend on as few implementation details as possible. So if the Networking object's configuration property holds the requiresAuthentication property, we could refactor the code in Networking to the following:

struct Networking {
  private let configuration: Configuration

  var requiresAuthentication: Bool { configuration.requiresAuthentication }

  // The rest of the implementation
}

By refactoring the code as shown above, the Authenticator can now use networkingObject.requiresAuthentication rather than networkingObject.configuration.requiresAuthentication, and that's much better. We no longer depend on the configuration, so we're free to make it private and change it as needed. The source of the requiresAuthentication property is now hidden, so we no longer violate the law of Demeter.
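
To complete the picture, here's a sketch of what the call site inside Authenticator could look like after this refactor. The body of authenticate is an assumption based on the snippet from the introduction; the only point being made is the property access:

struct Authenticator {
  let networkingObject: Networking

  func authenticate(using email: String, password: String, completion: @escaping (Result<User, Error>) -> Void) {
    // One level deep: Authenticator only talks to its direct collaborator.
    if networkingObject.requiresAuthentication {
      // authenticate and call completion
      return
    }

    // proceed because we don't need to authenticate
  }
}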

The big win here is that by only accessing properties that are "one level deep", we can now refactor our code more freely, and we only need to know about objects that we need to interact with directly. All dependencies are clear and there are no surprises hidden in our code anymore.

The downside is that this approach might require you to wrap a lot of code in very short methods or computed properties as we did in the refactored code snippet I showed you earlier. When you stick to the law of Demeter, it's not uncommon to have methods that look like the following example:

struct UserManager {
  // Other code

  func getAuthenticationStatus(completion: @escaping (Result<AuthenticationStatus, Error>) -> Void) {
    authenticator.getAuthenticationStatus(completion: completion)
  }
}

It might seem tedious to write code like this at first. After all, it's much simpler to just write manager.authenticator.getAuthenticationStatus { _ in } than to create a new method that only calls a different method. While this can be tedious at first, you'll find that once you need to make changes to how the Authenticator determines whether somebody is authenticated, it's very convenient to be able to limit the work you're doing to a single place.

In summary

Today's article was something different than you might be used to from me. Instead of learning about something cool that exists on iOS or in Swift, you learned how loose coupling works in practice. I explained that loose coupling allows you to refactor with confidence and that it allows you to write code that is as independent of other objects as possible. You also saw that tight coupling can lead to incredibly complex code that is very hard to refactor.

I then explained to you that there is a rule about how much objects should know about each other and that this rule is called the law of Demeter, or the principle of least knowledge. You saw that applying this law in your code leads to easy-to-use public interfaces and that it enforces loose coupling, which means that your code is much more flexible and simpler to reason about than code that doesn't adhere to the law of Demeter.

The law of Demeter is one of the few principles that I try to apply in every project, regardless of size, complexity or purpose. If you're not sure how to apply it in your project, have questions or have feedback, I'd love to hear from you on Twitter.

Sequencing tasks with DispatchGroup

When you're building apps, there are times when you need to perform certain tasks before executing the next task. Imagine a scenario where you need to make a couple of API calls to a web server, and you can only begin processing the fetched information, so it can be used in your app, once all of those calls have finished. Usually, you want to perform this work as efficiently as possible. In the example I just outlined, this might mean that you want to fire off all of your API calls at once, and begin processing immediately when all calls have finished.

In today's article, I will show you how to do this using a GCD (Grand Central Dispatch) feature called Dispatch Groups. I will first explain what we're going to build, and then I will show you how to complete the task using the DispatchGroup class.

Understanding the problem that a Dispatch Group solves

In this article, I will show you how to build an advanced system that collects information from a couple of resources before it stitches that information together so it can be used by another operation. Examine the following graph of operations:

Operation Graph

The diagram above is a simplified version of a flow that I once had to build. A lot of complex caching was involved in the original problem but for the purposes of this article I distilled it down to a handful of tasks. When a user logs in, certain information is retrieved. To limit the number of calls made to the movie webserver, we first collect all the required information about the user. The user profile contains a reference to the user's all-time favorite movie. Every ticket that belongs to the user also has a reference to the movie that the ticket is for. The user's favorites are, you guessed it, also references to movies. So once we have all the references to movies, we can bundle them together and get all movie data in one API call rather than three.

To achieve this without Dispatch Groups you could wrap each individual operation in an object, and when one task object finishes you check the status of all task objects to see if all tasks are now finished. If they are, you can trigger the next operation. This might work for a while, but it takes a lot of manual bookkeeping, and race conditions between operations that finish at pretty much the same moment might make it seem like not all operations have finished when in reality they have.

Dispatch Groups solve exactly this problem. With a Dispatch Group, you remove the burden of wrapping and monitoring every task you want to execute. Instead, you register enter and leave events on the Dispatch Group. You also tell the Dispatch Group what to do when all enter invocations have had a matching leave invocation. The idea is simple enough, right? Create a group, register a bunch of enter events, leave when a task completes and the group will automatically do what you need it to do when all work is done. Amazingly, that is exactly how DispatchGroup works!

Note:
If you've done work with operations and operation queues where operations depend on each other, you might be wondering why I didn't mention them in this section. You might even prefer them for problems similar to the one I highlighted. Operation queues with operations and dependencies are indeed a fantastic way to solve problems like these. However, they are more complex than using a Dispatch Group directly and also require a bit more work to get going.

Let's see how you would implement the tasks outlined in the diagram from the beginning of this section, shall we?

Using a Dispatch Group

To use a Dispatch Group, the first thing you need to do is create one. So let's define a function that we'll expand until it's a complete step in our syncing process:

func collectMovieIds() {
  let group = DispatchGroup()
  var movieIds = Set<Int>()

  // we'll write the implementation of this function here
}

This function declares two local variables: one for the Dispatch Group, and one for the movie ids we collect along the way. When all movie ids are collected, we need to kick off the next operation. Let's update the function so it does that as soon as all of the Dispatch Group's tasks have finished:

func collectMovieIds() {
  let group = DispatchGroup()
  var movieIds = Set<Int>()

  // All code snippets after this one go in this area

  group.notify(queue: DispatchQueue.global()) {
    self.handleMovieIds(movieIds)
  }
}

To specify what the Dispatch Group does after all its tasks are completed, you call the notify(queue:work:) method on the Dispatch Group. The queue argument is an important one because it specifies where the closure that's passed as the work argument is executed. If you want to update the UI when all tasks are done, you could pass DispatchQueue.main. If you want to kick off the next API call as we do here, DispatchQueue.global() is probably more appropriate. If you want to learn more about Dispatch Queues and when you need to use the main queue, you can read my post called "Appropriately using DispatchQueue.main".

For convenience, we're going to assume that a UserService object is available to our function and that it can perform all API calls we need. The following snippet would go in between the movieIds set and the call to notify(queue:work:):

group.enter()
userService.fetchProfile { profile in
  movieIds.insert(profile.allTimeFavorite)
  group.leave()
}

group.enter()
userService.fetchFavorites { favorites in
  for favorite in favorites {
    movieIds.insert(favorite.movie)
  }
  group.leave()
}

group.enter()
userService.fetchTickets { tickets in
  for ticket in tickets {
    movieIds.insert(ticket.movie)
  }
  group.leave()
}

Note how every operation enters the group right before it starts its task, and it leaves the group when each completion closure is called. The Dispatch Group will keep track of all calls to enter() and leave(), and it will call the closure that you passed to notify(queue:work:) once all calls to enter() have a corresponding leave() call.
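
For reference, the snippets in this section assume a UserService along the lines of the sketch below. The protocol and the model types are placeholders of my own; the real API could look quite different. Also note that if these completion handlers can run on different queues at the same time, you'd want to serialize the inserts into movieIds (for example via a serial dispatch queue), because Set isn't thread-safe.

struct Profile {
  let allTimeFavorite: Int // a movie id
}

struct Favorite {
  let movie: Int
}

struct Ticket {
  let movie: Int
}

protocol UserService {
  func fetchProfile(_ completion: @escaping (Profile) -> Void)
  func fetchFavorites(_ completion: @escaping ([Favorite]) -> Void)
  func fetchTickets(_ completion: @escaping ([Ticket]) -> Void)
}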

You don't have to keep the group inside of a function as I did here. You're free to abstract each operation into a different object and pass the group around. For example, you could refactor the function we wrote in this section to look a little bit like this:

func collectMovieIds() {
  let group = DispatchGroup()
  var movieIds = Set<Int>()

  userService.fetchProfile(group: group) { profile in
    movieIds.insert(profile.allTimeFavorite)
  }

  userService.fetchFavorites(group: group) { favorites in
    for favorite in favorites {
      movieIds.insert(favorite.movie)
    }
  }

  userService.fetchTickets(group: group) { tickets in
    for ticket in tickets {
      movieIds.insert(ticket.movie)
    }
  }

  group.notify(queue: DispatchQueue.global()) {
    print("Completed work: \(movieIds)")
    // Kick off the movies API calls
    PlaygroundPage.current.finishExecution() // only needed when running in a playground; requires import PlaygroundSupport
  }
}

In this example, it's the job of the UserService to call enter() and leave() on the group when needed. As long as that's done correctly you can pass your group around as much as you want.
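
To give you an idea of what that could look like, here's a minimal sketch of one of these group-aware methods, written as an extension on the hypothetical UserService from earlier:

extension UserService {
  func fetchProfile(group: DispatchGroup, completion: @escaping (Profile) -> Void) {
    // Enter before the asynchronous work starts...
    group.enter()

    fetchProfile { profile in
      completion(profile)
      // ...and leave once the completion handler has done its work.
      group.leave()
    }
  }
}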

If you were expecting to see or write a lot more code, I'm sorry. There isn't anything more to DispatchGroup that you need to know in order to implement a flow similar to the one I showed you in the diagram from the previous section.

In summary

When you need to implement processes in your app that run asynchronously, and you need to wait on all of them to complete before doing the next thing, Dispatch Groups are a tool worth exploring. With its relatively simple API, DispatchGroup packs quite a punch. It provides powerful mechanisms to orchestrate and manage complicated processes in your app. When used wisely and correctly, it can really help you clean up your code.

The most important things to remember about using DispatchGroup in your code are that every enter() call needs to have a corresponding call to leave(), and that you need to call notify(queue:work:) to tell the DispatchGroup what to do when all of its work is completed.
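
As an aside, notify(queue:work:) isn't the only way to respond to a group becoming balanced. DispatchGroup also has a blocking wait() (and wait(timeout:)) that you can use when you're already on a background queue and a synchronous style reads better. A tiny sketch:

let group = DispatchGroup()

group.enter()
DispatchQueue.global().async {
  // perform some work
  group.leave()
}

// Blocks the current thread until every enter() has a matching leave().
// Never do this on the main thread; it would freeze your UI.
group.wait()
print("All work is done")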

If you have any questions about Dispatch Groups, have feedback or anything else, don't hesitate to reach out on Twitter.

Breaking an app up into modules

As apps grow larger and larger, their complexity tends to increase too. And quite often, the problems you're solving become more specific and niche over time as well. If you're working on an app like this, it's likely that at some point you will notice that there are parts of your app that you know like the back of your hand, and other parts you may have never seen before. Moreover, these parts might end up talking to each other in ways that seem to make no sense. As teams and apps grow, boundaries begin to form naturally in a codebase, but at the same time those boundaries are not enforced properly. In turn, this leads to a more complicated codebase that becomes harder to test, harder to refactor and harder to reason about.

In today's article, I will explain how you can break a project up into multiple modules that each specialize in performing a related set of tasks. I will provide some guidance on how you can implement this in your team, and how you can identify parts of your app that would work well as a module. While the idea of modularizing your codebase sounds great, there are also some caveats that I will touch upon. By the end of this article, you should be able to make an informed decision on whether you should break your codebase up into several modules, or whether it's better to keep your codebase as-is.

Before I get started, keep in mind that I'm using the term module interchangeably with target, or framework. Throughout this article, I will use the term module unless I'm referring to a specific setting in Xcode or text on the screen.

Determining whether you should break up your codebase

While the idea of breaking your codebase up into multiple modules that you could, in theory, reuse across projects sounds very attractive, it's important that you make an educated decision. Blindly breaking your project up into modules will lead to a modularized codebase where every module needs to use every other module in order to work. If that's the case, you're probably better off not modularizing at all because the boundaries between modules are still unclear, and nothing works in isolation. So how do you decide whether you should break your project up?

Before you can answer that question, it's important to understand the consequences and benefits of breaking a project up into multiple modules. I have composed a list of things that I consider when I decide whether I should break something out into its own module:

  • Does this part of my code have any (implicit) dependencies on the rest of my codebase?
  • Is it likely that I will need this in a different app (for example a tvOS counterpart of the iOS app)?
  • Can somebody work on this completely separate from the app?
  • Does breaking this code out into a separate module make it more testable?
  • Am I running into any problems with my current setup?

There are many more considerations you might want to factor into your decision, but I think that if you answer "yes" to at least three out of the five bullet points above, it might make sense to break part of your code out into a separate module. Especially if you're running into problems with your current setup. I firmly believe that you shouldn't attempt to fix what isn't broken, so breaking a project up into modules for the sake of breaking it up is always a bad idea in my opinion. Any task that you perform without a goal or underlying problem is essentially doomed to fail, or at least to introduce problems that you didn't have before.

As with most things in programming the decision to break your project up is not one that's clear cut. There are always trade-offs and sometimes the correct path forward is more obvious than other times. For example, if you're building an app that will have a tvOS flavor and an iOS flavor, it's pretty clear that using a separate module for shared business logic is a good idea. You can share the business logic, models and networking client between both apps while the UI is completely different. If your app will only work on the iPhone, or even if it works on the iPhone and iPad, it's less clear that you should take this approach.

The same is true if your team works on many apps and you keep introducing the same boilerplate code over and over. If you find yourself doing this, try to package up the boilerplate code in a framework and include it as a separate module in every project. It will allow you to save lots of time and bug fixes are automatically available to all apps. Beware of app-specific code in your module though. Once you break something out into its own module, you should try to make sure that all code that's in the module works for all consumers of that module.

Identifying module candidates

Once you've decided that you have a problem, and modularizing your codebase can fix this problem, you need to identify the scope of your modules. There are several obvious candidates:

  • Data storage layers
  • Networking clients
  • Model definitions
  • Boilerplate code that's used across many projects
  • ViewModels or business logic that's used on both tvOS and iOS
  • UI components or animations that you want to use in multiple projects

This list is in no way exhaustive, but I hope it gives you an idea of what might make sense as a specialized module. If you're starting a brand new project, I don't recommend defaulting to creating modules for the components listed above. Whether you're starting a new project or refactoring an existing one, you need to think carefully about whether you need something to be its own module or not. Successfully breaking code up into modules is not easy, and doing so prematurely makes the process even harder.

After identifying a good candidate for a module, you need to examine its code closely. In your apps, you will usually use the default access level of objects, properties, and methods. The default access level is internal, which means that anything in the same module (your app) can access them. When you break code out into its own module, it will have its own internal scope. This means that by default, your application code cannot access any code that's part of your module. When you want to expose something to your app, you must explicitly mark that thing as public. Examine the following code for a simple service object and try to figure out what parts should be public, private or internal:

protocol Networking {
  func execute(_ endpoint: Endpoint, completion: @escaping (Result<Data, Error>) -> Void)

  // other requirements
}

enum Endpoint {
  case event(Int)

  // other endpoints
}

struct EventService {
  let network: Networking

  func fetch(event id: Int, completion: @escaping (Result<Event, Error>) -> Void) {
    network.execute(.event(id)) { result in
      do {
        let data = try result.get()
        let event = try JSONDecoder().decode(Event.self, from: data)
        completion(.success(event))
      } catch {
        completion(.failure(error))
      }
    }
  }
}

There's a good chance that you immediately identified the network property on EventService as something that should be private. You're probably used to marking things as private because that's common practice in any codebase, regardless of whether it's split up or not. Deciding what's internal and public is probably less straightforward. I'll show you my solution first, and then I'll explain why I would design it like that.

// 1
internal protocol Networking {
  func execute(_ endpoint: Endpoint, completion: @escaping (Result<Data, Error>) -> Void)

  // other requirements
}

// 2
internal enum Endpoint {
  case event(Int)

  // other endpoints
}

// 3
public struct EventService {
  // 4
  private let network: Networking

  // 5
  public func fetch(event id: Int, completion: @escaping (Result<Event, Error>) -> Void) {
    network.execute(.event(id)) { result in
      do {
        let data = try result.get()
        let event = try JSONDecoder().decode(Event.self, from: data)
        completion(.success(event))
      } catch {
        completion(.failure(error))
      }
    }
  }
}

Note that I explicitly added the internal access level. I only did this for clarity in the example; internal is the default access level, so in your own codebase it's up to you whether you want to add it explicitly. Let's go over the comments one by one so I can explain my choices:

  1. Networking is marked internal because the user of the EventService doesn't have any business using the Networking object directly. Our purpose is to allow somebody to retrieve events, not to allow them to make any network call they want.
  2. Endpoint is marked internal for the same reason I marked Networking as internal.
  3. EventService is public because I want users of my module to be able to use this service to retrieve events.
  4. network is private; nobody has any business talking to the EventService's Networking object other than the service itself, not even within the same module.
  5. fetch(event:completion:) is public because it's how users of my module should interact with the events service.

Identifying your module's public interface helps you to identify whether the code you're abstracting into a module can stand on its own, and it helps you decide whether the abstraction would make sense. A module where everything is public is typically not a great module. The purpose of a module is that it can perform a lot of work on its own and that it enforces a natural boundary between certain parts of your codebase.
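
One detail that's easy to overlook when you move a type like EventService into its own module is that initializers are not automatically public. Even for a public struct, the synthesized memberwise initializer is at most internal, so the app has no way to create an EventService unless you declare a public initializer yourself. And because Networking is internal, that public initializer can't take a Networking parameter. Here's a minimal sketch of one way to handle that, where DefaultNetworking is a hypothetical internal implementation of the Networking protocol:

public struct EventService {
  private let network: Networking

  // Networking is internal to the module, so it can't appear in a public initializer.
  // Instead, the module wires up a sensible default implementation itself.
  public init() {
    self.network = DefaultNetworking()
  }

  // The rest of the implementation
}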

Creating new modules in Xcode

Once you've decided that you want to pull a part of your app into its own module, it's time to make the change. In Xcode, go to your project's settings and add a new target using the Add Target button:

Add Target Button

In the next window that appears, scroll all the way down and select the Framework option:

Select Add Framework

In the next step, give your module a name, choose the project that your module will belong to, and the application that will use your module:

Configure new module window

This will add a new target to the list of targets in your project settings, and you will now have a new folder in the project navigator. Xcode also adds your new module to the Frameworks, Libraries, and Embedded Content section of your app's project settings:

Framework added to app

Drag all files that you want to move from your application to your module into the new folder, and make sure to update the Target Membership for every file in the File Inspector tab on the right side of your Xcode window:

Target Membership settings

Once you have moved all files from your app to your new module, you can begin to apply the public, private and internal modifiers as needed. If your code is already loosely coupled, this should be a trivial exercise. If your code is a bit more tightly coupled, it will be harder to do. When everything is in place, you should (eventually) be able to run your app again without issues. Keep in mind that depending on the size of your codebase this task might be non-trivial and even a bit frustrating. If the process gets too frustrating, you might want to take a step back and try to split your code up without modules for now. Try to make sure that objects are clean, and that they exist in as much isolation as possible.

Maintaining modules in the long run

Once you have split your app up into multiple modules, you are maintaining several codebases. This means that you might often refactor your app, but some of your modules might remain untouched for a while. This is not a bad thing; if everything works, and you have no problems, you might not need to change a thing. Depending on your team size, you might even find that certain developers spend more time in certain modules than others, which might result in slightly different coding styles in different modules. Again, this is not a bad thing. What's important to keep in mind is that you're probably growing to a point where it's unreasonable to expect that every developer on your team has equal knowledge about every module. What matters is that the public interface for each module is stable, predictable, and consistent.

Having multiple modules in your codebase introduces interesting possibilities for the future. In addition to maintaining modules, you might find yourself completely rewriting, refactoring or swapping out entire modules if they become obsolete or if your team has decided that an overhaul of a module is required. Of course, this is highly dependent on your team, projects, and modules but it's not unheard of to make big changes in modules over time. Personally, I think this is where separate modules make a large difference. When I make big changes in a module I can do this without disrupting other developers. A big update of a module might take weeks and that's okay. As long as the public API remains intact and functional, nobody will notice that you're making big changes.

If this sounds good to you, keep in mind that the smaller your team is, the more overhead you will have when maintaining your modules. Especially if every module starts maturing and living a life of its own, it becomes more and more like a full-blown project. And if you use one module in several apps, you will always have to ensure that your module remains compatible with those apps. Maintaining modules takes time, and you need to be able to put in that time to utilize modularized projects to their fullest.

Avoiding pitfalls when modularizing

Let's say you've decided that you have the bandwidth to create and maintain a couple of modules. And you've also decided that it absolutely makes sense for your app to be cut up into smaller components. What are things to watch for that I haven't already mentioned?

First, keep in mind that application launch times are impacted by the number of frameworks that need to be loaded. If your app uses dozens of modules, your launch time will be impacted negatively. This is true for external dependencies, but it's also true for code that you own. Moreover, if you have modules that depend on each other, it will take iOS even longer to resolve all dependencies that must be loaded to run your app. The lesson here is to not go overboard and create a module for every UI component or network service you have. Try to keep the number of modules you have low, and only add new ones when it's needed.

Second, make sure that your modules can exist in isolation. If you have five modules in your app and they all import each other in order to work, you haven't achieved much. Your goal should be to write your code so it's flexible and separate from the rest of your app and other modules. It's okay for a networking module to require a module that defines all of your models and some business logic, or maybe your networking module imports a caching module. But when your networking code has to import your UI library, that's a sign that you haven't separated concerns properly.

And most importantly, don't modularize prematurely or when your codebase isn't ready for it. If splitting your app into modules is a painful process where you're figuring out many things at once, it's a good idea to take a step back and restructure your code first. Think about how you would modularize your code later, and try to structure your code accordingly. Practicing that structure, even without the enforced boundary that modules provide, can be a valuable step when preparing your code to be turned into a framework.

In summary

In today's article, you have learned a lot about splitting code up into modules. Everything I wrote in this post is based on my own experiences and opinions, and what works for me might not work for you. Unfortunately, this is the kind of topic where there is no silver bullet. I hope I've been able to provide you with some guidance to help you decide whether a modularized codebase is something that fits your team and project, and I hope that I have given you some good examples of when you should or should not split your code up.

You also saw how you can create a new framework in Xcode, and how you can move your existing application code into it. I also explained that it's important to properly apply the public, private and internal access modifiers.

To wrap it up, I gave you an idea of what you need to keep in mind in regards to maintaining modules and some pitfalls you should try to avoid. If you have any questions left, if you have an experience to share or if you have feedback on this article, don't hesitate to reach out to me on Twitter.

Using preconditions, assertions, and fatal errors in Swift

As developers, we are often told that we should avoid crashing our apps at all costs. It's why we are told that we shouldn't force unwrap our optionals, that we should avoid unowned references, and that we should never use try! in production code. In today's article, I would like to offer you a counter opinion to this "never crash" train of thought. I firmly believe that sometimes we should crash, and that not crashing can sometimes be worse than crashing because it might leave your app in an unresponsive state, or it might hide a problem that you won't know about until your app is in the App Store and gets its first one-star reviews.

In this post, we'll spend some time exploring the following topics:

  • Understanding how your code can fail
  • Crashing your app in production
  • Crashing your app during development

It's important to make the distinction between crashing during development and in production because you'll often want to be less forgiving during development than you'd be in production. I'm going to show you several of Swift's built-in assertion mechanisms that you can use to verify that your app is in an expected state and can continue to function.

Understanding how your code can fail

There are many things that can go wrong in code. And there are many ways for code to communicate errors and propagate them through your application. For example, a method might use a Result object to specify whether an operation succeeded, or if an error occurred. This is common in asynchronous code that uses callbacks.

When code executes synchronously or uses Swift Concurrency's async functions, it's common for methods to be marked with throws to indicate that the method will either complete successfully and return a value if relevant or that the function will throw an error if something goes wrong. Both Result and throw force developers to think about what should happen when something goes wrong. Throws does this by requiring a do / catch block and a try keyword in front of your call. Result does this by requiring you to extract the success or error from the Result object.
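
To make that distinction concrete, here's a small example of the same failable operation expressed once as a throwing function and once through a Result-based completion handler. The ParsingError type and the function names are made up for illustration:

import Foundation

enum ParsingError: Error {
  case invalidData
}

// A synchronous function that throws; the caller must use try and handle the error.
func parseAge(from string: String) throws -> Int {
  guard let age = Int(string) else {
    throw ParsingError.invalidData
  }

  return age
}

// An asynchronous function that reports its outcome through a Result.
func loadAge(from string: String, completion: @escaping (Result<Int, Error>) -> Void) {
  DispatchQueue.global().async {
    completion(Result { try parseAge(from: string) })
  }
}

do {
  let age = try parseAge(from: "42")
  print(age)
} catch {
  print("Parsing failed: \(error)")
}

loadAge(from: "not a number") { result in
  switch result {
  case .success(let age):
    print(age)
  case .failure(let error):
    print("Loading failed: \(error)")
  }
}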

Both Result and throws are examples of errors that can and should be handled by the developer. It's also possible for a function to be written in a way that doesn't allow a developer to handle and recover from errors.

Consider this example:

let array = [String]()
print(array[1])

The above code will crash when you run it. The reason for this is that the subscript function on Array in Swift considers accessing an index that is not in the array to be a programmer error.

In other words, they consider it a bug in your code that you should fix. The example above crashes both in production and development. In the Swift source code, you can see that Apple uses _precondition() to crash your program because the error is not recoverable and the program should be considered to be in an invalid state.

A very similar way to crash your app when something is wrong is to use fatalError(). While the result is the same (your app crashes), fatal errors should be used less often than precondition failures because they carry a slightly different meaning. A precondition failure is usually used in response to something a programmer is doing, for example accessing an invalid index on an array. A fatal error is more unexpected. For example, imagine that the program knows that the index you're accessing on an array is valid, but the object has gone missing somehow. This would be unexpected and warrants a fatalError() because it's unlikely that your program can still run reliably when objects randomly go missing from memory. To be honest, I doubt you would even be able to detect something like that.

The last way of failing that I want to highlight is assertionFailure(). An assertion failure is very similar to a precondition failure. The main difference is that an assertion failure is not always evaluated. When your app is running in debug mode and you call assertionFailure() in your code, your app crashes. However, when you export your app for the app store, the assertion failures are stripped from your code and they won't crash your app. So assertion failures are a fantastic way for you to add loads of sanity checks to your codebase, and crash safely without impacting the user experience of your app in production.
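
Note that next to assertionFailure(), preconditionFailure(), and fatalError(), the standard library also offers assert(_:_:) and precondition(_:_:). These take a condition and only trap when that condition is false, and they follow the same debug/release behavior as their *Failure counterparts. A small sketch:

func withdraw(amount: Int, from balance: Int) -> Int {
  // Only evaluated in debug builds; compiled out of release builds.
  assert(amount >= 0, "Withdrawals should never be negative")

  // Evaluated in both debug and release builds.
  precondition(amount <= balance, "Can't withdraw more than the current balance")

  return balance - amount
}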

All in all, we roughly have the following ways of communicating errors in Swift:

  • Swift's Result type.
  • Throwing errors.
  • Crashing with preconditionFailure().
  • Crashing with fatalError().
  • Crashing during development using assertionFailure().

I would like to spend the next two sections on using the last three failure modes on the preceding list. Note that this list might be incomplete, or that some nuances could be missing. I've simply listed the errors that I see most commonly in code from others, and in my own code.

Crashing your app in production

When your app is on the App Store, there are a couple of things you should care about:

  • Your app shouldn't crash.
  • Your app should be responsive.
  • You want to know about any problems your users run into.

It's not always easy to hit all three targets. Especially if you avoid crashing at all costs, it might happen that instead of crashing, your app shows a blank screen. The user will usually remedy this by force-closing your app and trying again, and if that works, it's a problem that you are not made aware of. So at that point, you miss two out of three marks; your app wasn't responsive, and you didn't know about the problem.

Sometimes, this is okay. Especially if it's unlikely to ever happen again. But if it does happen again, you might not hear about it until the first bad reviews for your app start pouring in. One good way for you to be made aware of these kinds of problems is through logging. I won't go into any details on logging right now, but it's good practice to log any unexpected states you encounter to a server or other kind of remote log. That way, your app may have been unresponsive, but at least you have a record of the problem in your database.

This approach is great if you encounter a recoverable or rare error in production. It's not so great that your app may have been unresponsive, but at least you have captured the bad state and you can fix it in future releases. This is especially useful if the error or bug has a limited impact on your app's functionality.

There are also times when your app is more severely impacted and it makes little to no sense to continue operating your app in its current state. For example, if your app uses a login mechanism to prevent unauthorized access to certain parts of your app, it might make sense to use a preconditionFailure if your app is about to show an unauthenticated user something they aren't supposed to see. A problem like this would indicate that something went wrong during development, and a programming error was made. Similar to how accessing an out of bounds index on Array is considered a programming error. Let's look at a quick example of how preconditionFailure can be used:

override func viewDidLoad() {
  super.viewDidLoad()

  guard dataProvider.isUserLoggedIn == true else {
    preconditionFailure("This page should never be loaded for an unauthenticated user")
  }
}

Note that the return keyword can be omitted in this guard's else clause because there is no way for the program to continue after a preconditionFailure.

Or maybe your app is supposed to include a configuration file that is read when your app launches. If this file is missing and your app has no way to determine a default or fallback configuration, it makes perfect sense to crash your app with a fatalError. In that specific example, your app shouldn't even pass Apple's App Review because it doesn't work at all, and bundled files don't typically go missing from people's devices randomly. And when they do, it's probably because the app was tampered with and the user should expect things to crash. So when your app is live in the App Store, it's pretty safe to expect any files you bundled with the app to exist and be valid, and to crash if this is not the case.
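
Here's a sketch of what that could look like in practice. The config.json file name and the Configuration type are placeholders I made up for this example:

import Foundation

struct Configuration: Decodable {
  let apiBaseURL: URL
}

func loadConfiguration() -> Configuration {
  guard let url = Bundle.main.url(forResource: "config", withExtension: "json"),
        let data = try? Data(contentsOf: url),
        let configuration = try? JSONDecoder().decode(Configuration.self, from: data) else {
    // Without its bundled configuration the app can't function at all,
    // so continuing makes no sense.
    fatalError("Missing or corrupt config.json in the app bundle")
  }

  return configuration
}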

The major difference between staying unresponsive until the user manually kills your app and proactively crashing is the message it sends to the user. An unresponsive UI could mean anything, and while it's frustrating, it's likely that the user will kill the app and relaunch it. They will expect whatever issue they ran into to be a rare occurrence, and it probably won't happen again. Crashing is more serious; it tells your user that something is seriously wrong.

I can't tell you which of the two is the better approach. In my opinion, you need to find a balance between recovering from errors if possible, leaving your app in a potentially unexpected state and crashing. What's maybe most important here is that you make sure that you know about the issues users run into through logging unexpected states and crashing so you get crash reports.

Crashing your app during development

While you need to be careful about crashing in production, you can be a little less forgiving during development. When your app is run in debug mode, fatal errors and preconditions work the same as they do in your App Store build. However, you have one more tool at your disposal during development: assertion failures.

Assertion failures are best used in places where your app might still work okay but is not in an expected state. These are the cases where you would want your production builds to send a log message to a server so you know that something was wrong. During development, however, you can call assertionFailure and crash instead. Doing this will make sure that you and your teammates see this error, and you can decide whether it's something you can fix before shipping your app.

The nice thing about assertion failures is that they are only evaluated during development, so you can use them quite liberally in your code to make sure that your code doesn't do anything unexpected, and that no mistakes go unnoticed through silent failures. Let's look at the same example I showed earlier for unauthenticated access to a part of your app, except this time we're less restrictive.

override func viewDidLoad() {
  super.viewDidLoad()

  guard dataProvider.isUserLoggedIn == true else {
    assertionFailure("This page should never be loaded for an unauthenticated user")
    dataLogger.log("Attempted to show view controller X to unauthenticated user.", level: .error)
    return
  }
}

In the above example, the view would still be shown to the user in production and an error message is logged to a data logger. We need to return from the guard's else clause here because an assertion failure does not guarantee that it will terminate your app.

In summary

There are many ways for developers to communicate errors in their apps. Some are very friendly, like a Result type with an error state, or using throws to throw errors. Others are more rigorous, like fatalError, preconditionFailure and assertionFailure. These errors crash your app, sending a very strong message to you and your users.

In this article, I have explained several reasons for crashing your apps, and I've shown you that different ways of crashing your app make sense in different circumstances. The examples that I used for preconditionFailure and assertionFailure might seem far-fetched. When I wrote them down, I realized that the obvious fix for the problem is to not show a restricted view controller in the first place if a user isn't logged in. If you thought the same, you'd be right. But the problem we're solving here is a different one. By performing precondition and assertion checks in this hypothetical case, we make sure that any mistakes your teammates might make are caught. You don't always know whether your app attempts to present a restricted view controller to a non-authenticated user until something in your app starts yelling at you. And that's what assertions and preconditions are both really good at. So I hope that with that in mind, the example makes sense to you.

If you have any questions, or feedback for me, please reach out on X.

Testing your push notifications without a third party service

Many apps rely on push notifications to inform their users about interesting updates, important events, or interactions on social media that a user probably wants to know about. It's the perfect way to grab your users' attention and inform them about information they are likely interested in. To send push notifications, a lot of companies and developers make use of third-party services like Firebase, Amazon, Pusher, and others.

Today, I would like to show you a simple way to send push notifications without needing any of these services. Personally, I like to use this approach in the early stages of development because I don't want to have to go through a third-party dashboard or integrate an SDK, and I prefer to keep everything on my own machine.

It's also much quicker to set up prototypes or test cases if you don't need to involve an external service in your workflow. In today's article you will learn the following:

  • Preparing your app and certificates for push notifications
  • Writing a script to send notifications

Note that I mentioned services in this introduction, not libraries or frameworks. As you will see in the second section of this article, I use a small JavaScript library to help me send push notifications because that actually saves me a lot of time.

Preparing your app and certificates for push notifications

Preparing for push notifications is a two-step process:

  1. Prepare and register the app
  2. Create the needed certificates

Preparing your app for push notifications

In order for your app to be able to receive push notifications, you need to add the Push Notifications entitlement to your project. To do this, go to the Signing & Capabilities tab in your project settings, click the + Capability button and add the Push Notifications capability to your app. Doing this will update the certificate that Xcode manages for you, and it allows you to create the needed certificates for your test script. It also allows you to register your app for push notifications.

To register your app for push notifications, you need to call the following method. Make sure to do this after you've asked the user for permission to send notifications.

UIApplication.shared.registerForRemoteNotifications()
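
For completeness, here's a minimal sketch of asking the user for permission first and only registering once permission has been granted:

import UIKit
import UserNotifications

UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { granted, error in
  guard granted else { return }

  // UIApplication must be used from the main thread.
  DispatchQueue.main.async {
    UIApplication.shared.registerForRemoteNotifications()
  }
}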

Also, implement the following two AppDelegate methods:

func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
  let deviceTokenString = deviceToken.map { String(format: "%02x", $0) }.joined()
  print(deviceTokenString)
}

func application(_ application: UIApplication, didFailToRegisterForRemoteNotificationsWithError error: Error) {
  print(error.localizedDescription)
}

When your app successfully registers for push notifications, the device token is printed to the console. Make sure to keep the token somewhere where you can easily access it so you can use it when we write the script that will send our test notifications.

Generating the required certificates

There are two ways to generate your push certificates: manually, or using a tool called Fastlane. I will briefly outline both options here.

Generating certificates manually

To generate your push certificates manually, you need to log in to the Apple Developer Portal and go to the Certificates, Identifiers & Profiles section. Click the + button to create a new certificate and choose the iOS Apple Push Notification service SSL (Sandbox) certificate.

iOS Apple Push Notification service SSL (Sandbox) option

Click continue and find the App ID that you want to send push notifications to. If your app doesn't show up, make sure you have added the push notifications entitlement to your app in Xcode.

The next step is to create a certificate signing request. To do this, open the Keychain Access app on your Mac and choose Keychain Access -> Certificate Assistant -> Request a Certificate from a Certificate Authority from the menu. Enter an email address and name in the required fields, and leave the CA Email address empty. Also, make sure to check the Saved to disk option.

Certificate Sign request example

Click continue and store the certificate in a convenient place, for example on your desktop.

Go back to the developer portal where you should see a screen that asks you to Upload a Certificate Signing Request. Choose the request you just created using Keychain Access. After clicking continue you should be able to download your freshly generated certificate.

After downloading the certificate, double click it so it's added to your keychain. Once it's added, make sure to select the Certificates option in the list of keychain Categories:

Certificates option selected

Now find the certificate you just added, expand it using the arrow that should be visible next to it. Select both the certificate and the private key, right-click and choose the Export 2 items... option. Store the .p12 file that Keychain will export in a place where you can easily find it, for example, your desktop.

Open your terminal, navigate to the folder where you've stored your .p12 file and type the following command. Make sure to replace <your filename> with the filename you chose in the previous step:

openssl pkcs12 -in <your filename>.p12 -out certs.pem -nodes -clcerts

This command will generate a .pem file which is needed to connect to Apple's push service. Move the generated file to the folder where you'll be writing your script. I like to keep the certs file and the script itself all in the same folder. Of course, you're free to do whatever you want if something else works better for you.

Generating certificates with Fastlane

By far the easier option and the one I prefer is to generate the needed .pem files with Fastlane. If you have Fastlane installed, use the following command to generate a .pem file for your app:

fastlane pem --development

You will be asked to log in to your Apple ID and to provide the bundle identifier for the app that you need a .pem file for. Once you've done this Fastlane will generate three files for you. Copy the .pem file to a place where you can easily reference it from the push notification script. Like I mentioned earlier, I like to keep them all in the same folder but you can store them wherever you want.

Writing a script to send push notifications

Before you get started writing your push script, make sure you have Node.js installed. Navigate to the folder where you'll create your push script and use the following command to install node-apn, the helper library we'll use to send notifications:

npm install --save node-apn

The preceding command will pull down the node-apn package from npm, Node.js' package manager (comparable to SPM in the Swift world), and install it in the current directory. Next, create a new JavaScript file. Call it whatever you want; I will call it send_push.js. Sending push notifications with node-apn is fairly simple. First, import the package and create the push service:

const apn = require("apn");

let provider = new apn.Provider({
  "cert": "certs.pem",
  "key": "certs.pem",
});

Next, create a notification to send to your app:

let notification = new apn.Notification();
notification.alert = "Hello, this is a test!";
notification.badge = 1337;

The Notification object that's created has all the properties that you might normally see in an APNs payload. For more information on what can be included in the payload, refer to Apple's documentation and the documentation for node-apn.

After creating the notification, all you need to do is grab the push token that you obtained from your app in the first section of this article, and send the notification:

let token = "<your-token>";

provider.send(notification, token).then( (response) => {
  console.log("done");
});

To run this script, open your terminal, navigate to the folder that your script is in and run the following command:

node send_push.js

This will run your JavaScript file using Node.js, and you should see a push message appear on your device! Pretty cool, right? Even if you've never written a line of JavaScript in your life, modifying this sample script should be fairly straightforward, and both Apple's documentation and node-apn's documentation should be able to point you in the right direction if you get stuck.

In summary

In today's article, you learned how you can set up push notifications in your app, how you can generate the certificates that are used to send push notifications either manually or through Fastlane, and you saw how you can create a simple Node.js script to send push notifications. Personally, I love using Fastlane to generate my .pem file, and sending push notifications through a simple script feels so much more flexible than having to use a third-party provider during development.

Of course, when your app is finished and you deploy it to the App Store, it might make much more sense for you to integrate Firebase push messages in your app. They handle all the complicated stuff like keeping track of push tokens and certificates, and they know how to send push notifications without blowing up their servers. However, if you don't send many push messages, or if you have some knowledge of back-end development, it might be feasible for you to run your own push notification service. It really depends on what you're comfortable with.

That said, during development I personally prefer using a local script to send test push notifications. And maybe you will too now that you realize it's not extremely complicated to do. If you have any questions, feedback or anything else, please feel free to reach out on Twitter.

Scheduling daily notifications on iOS using Calendar and DateComponents

On iOS, there are several ways to send notifications to users. And typically every method of sending push notifications has a different goal. For example, when you're sending a remote push notification to a user, you will typically do this because something interesting happened outside of the user's device. Somebody might have sent them a message for example, or something exciting happened during a sports game.

However, when we schedule notifications on the device locally, we typically do this as a reminder, or in response to a user action. Like, for example, entering or exiting a geofence. In today's post, I'm going to focus on scheduling notifications that are essentially repeating reminders. They are notifications that are scheduled to show up every day at a certain time, or maybe every Monday at 09:00 am, or on every first day of the month. The same technique can be used to schedule each of these notifications.

In today's post you will learn about the following:

  • Working with the Calendar and DateComponents.
  • Scheduling notifications using a UNCalendarNotificationTrigger.

Let's get started with learning about the Calendar, shall we?

Working with the Calendar and DateComponents

If you have worked extensively with dates and times on a global scale, you know that pretty much any assumption you may have about how dates work is wrong. A good way to explore how complicated dates can be is by playing with the Calendar object that is part of the Foundation framework. If you're reading this, it's likely that you're used to the Gregorian calendar. That's the calendar that typically has 365 days in a year unless it's a leap year, and 60 seconds in a minute unless a leap second is added. I don't want to go into how messy dates are, so I'm going to leave it at that, but let's have some fun exploring calendars!

Take a look at the following code:

var gregorianCalendar = Calendar(identifier: .gregorian)
var japaneseCalendar = Calendar(identifier: .japanese)
var hebrewCalendar = Calendar(identifier: .hebrew)

func currentDate(for calendar: Calendar) -> DateComponents {
  calendar.dateComponents([.year, .month, .day], from: Date())
}

print("Gregorian", currentDate(for: gregorianCalendar))
print("Japanese", currentDate(for: japaneseCalendar))
print("Hebrew", currentDate(for: hebrewCalendar))

How different do you expect the output for each call to currentDate(for:) to be? Will the years all be the same? What about the months? Or the day of the month? Let's look at the output of the preceding code:

Gregorian year: 2019 month: 12 day: 11 isLeapMonth: false 
Japanese year: 1 month: 12 day: 11 isLeapMonth: false 
Hebrew year: 5780 month: 3 day: 13 isLeapMonth: false 

If you guessed that the dates would all be completely different, you were right! The Calendar object can take dates, like the current date and represent them as DateComponents for whatever the Calendar represents. So we can take the current date on your machine, and convert it to a date representation that makes sense for the calendar you use to represent dates.

This also works the other way around: we can use DateComponents to represent a date that's based on a specific Calendar:

var components = DateComponents()
components.year = 1989
components.month = 11
components.day = 15

func date(for components: DateComponents, using calendar: Calendar) -> Date? {
  return calendar.date(from: components)
}

print("Gregorian", date(for: components, using: gregorianCalendar)!)
print("Japanese", date(for: components, using: japaneseCalendar)!)
print("Hebrew", date(for: components, using: hebrewCalendar)!)

The preceding code creates a single DateComponents object and asks each calendar to turn it into a Date. On my machine the current calendar is Gregorian, so these components describe my birthday: November 15th, 1989. Let's see which moments in time the Japanese and Hebrew calendars map those same components to:

Gregorian 1989-11-14 23:00:00 +0000
Japanese 4007-11-14 23:00:00 +0000
Hebrew -1771-06-25 23:40:28 +0000

Notice how the Gregorian date is not what you would expect. The DateComponents were configured for November 15th, yet the printed date is November 14th. The reason for this is that we didn't specify a time zone on the components, so the calendar interprets them in the device's local time zone. The resulting Date represents midnight local time and, because printing a Date always displays it in GMT/UTC and my local time zone is ahead of GMT, the printed value falls on November 14th at 23:00. If you set components.timeZone to the time zone that you want to represent your date in, you will get the expected output. So setting the time zone on the components object like this: components.timeZone = TimeZone(identifier: "UTC") will produce the following output:

Gregorian 1989-11-15 00:00:00 +0000
Japanese 4007-11-15 00:00:00 +0000
Hebrew -1771-06-26 00:00:00 +0000

Much better. As you might realize by now, DateComponents describe dates using their components. In addition to year, month and day it's possible to specify things like era, minute, second, quarter, microsecond and much more. It's also possible to take a date and extract its DateComponents as I've shown in the first code snippet of this section.
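
To get a feel for this, here's a small sketch (the exact output depends on when and where you run it) that extracts a handful of components from the current date:

let calendar = Calendar.current
let nowComponents = calendar.dateComponents([.era, .year, .month, .day, .hour, .weekday], from: Date())

// In the Gregorian calendar, weekday 1 is Sunday and 7 is Saturday
print(nowComponents.weekday ?? 0)
print(nowComponents.hour ?? 0)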

Now that you have some working knowledge of Calendar and DateComponents, let's see how this applies to scheduling recurring notifications with UNCalendarNotificationTrigger.

Scheduling notifications using a UNCalendarNotificationTrigger

You probably didn't come here to spend a bunch of time reading about calendars and dates; you wanted to see how to schedule recurring notifications based on the time of day, day of the week, day of the month, or pretty much any other similar rule. To get there, though, I first had to show you some examples of calendars. If you've never seen calendars and date components before, it would be incredibly hard to explain UNCalendarNotificationTrigger to you.

I'm going to assume that you have basic knowledge of asking a user for permission to send them notifications, and what the best way is to do this. If you need a quick reminder, here's how you ask for notification permissions in code:

UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { success, error in

  if error == nil && success {
    print("we have permission")
  } else {
    print("something went wrong or we don't have permission")
  }
}

Once you have the user's permission to send them notifications, you can schedule your local notifications as needed. Let's start with a simple example. Imagine sending the user a local notification every day at 09:00 am. You would use the following code to do this:

// 1
var dateComponents = DateComponents()
dateComponents.hour = 9
let trigger = UNCalendarNotificationTrigger(dateMatching: dateComponents, repeats: true)

// 2
let content = UNMutableNotificationContent()
content.title = "Daily reminder"
content.body = "Enjoy your day!"

let randomIdentifier = UUID().uuidString
let request = UNNotificationRequest(identifier: randomIdentifier, content: content, trigger: trigger)

// 3
UNUserNotificationCenter.current().add(request) { error in
  if error != nil {
    print("something went wrong")
  }
}

The preceding code snippet creates a UNCalendarNotificationTrigger using a DateComponents object where only the hour is set to 9. This means that the notification will trigger as soon as the user's current date reaches the ninth hour of the day. Keep in mind that for this to work, the calendar the user's device uses and the one you had in mind when configuring the DateComponents must match. Since the trigger evaluates your DateComponents against the user's current calendar, it's not a problem in this simple case. But if you allow the user to configure their reminder, and your UI assumes a Gregorian calendar while the user's device uses a different calendar, you might run into trouble.
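
To tie this back to the examples from the introduction, here's a quick sketch of two more configurations: every Monday at 09:00 am and the first day of every month at 09:00 am. The weekday value assumes the Gregorian calendar, where weekday 1 is Sunday and 2 is Monday:

var mondayMorning = DateComponents()
mondayMorning.weekday = 2 // Monday in the Gregorian calendar
mondayMorning.hour = 9

var firstOfTheMonth = DateComponents()
firstOfTheMonth.day = 1
firstOfTheMonth.hour = 9

let mondayTrigger = UNCalendarNotificationTrigger(dateMatching: mondayMorning, repeats: true)
let monthlyTrigger = UNCalendarNotificationTrigger(dateMatching: firstOfTheMonth, repeats: true)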

You can configure your date components however you want, and every time the current date on the device matches your criteria, a notification will be shown to the user. If you want to show a notification only on Wednesdays at 2:00 pm, you could use the following DateComponents configuration:

var dateComponents = DateComponents()
dateComponents.hour = 14
dateComponents.weekday = 4 // in the Gregorian calendar weekday 1 is Sunday, so Wednesday is 4

Keep in mind that weekday numbers belong to the calendar, not to the locale's first day of the week. In the Gregorian calendar, weekday 1 is always Sunday, even in locales where the displayed week starts on a Monday, and other calendars may number their days differently. If you build UI that lets users pick a weekday, look at the current calendar's weekdaySymbols and firstWeekday properties rather than hardcoding numbers. Date math is quite complicated, so make sure to always triple-check your work.
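
A small experiment, assuming a device set to the Gregorian calendar (the printed values will vary per locale), shows the difference between the two:

let calendar = Calendar.current

// For the Gregorian calendar this prints ["Sunday", "Monday", ..., "Saturday"]
print(calendar.weekdaySymbols)

// 1 in locales where the week starts on Sunday, 2 where it starts on Monday
print(calendar.firstWeekday)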

With this knowledge, you should now be able to schedule recurring notifications using DateComponents!

In Summary

In this article, you learned that dates are not the same everywhere in the world. Some calendars that are used in certain places of the world are completely different from the Gregorian calendar that I am familiar with. Luckily, the Calendar and DateComponents objects from Foundation make these differences somewhat manageable because they allow us to convert from a date represented in one calendar, to the same moment in time on another calendar. You saw an example of this in the first section of this article.

Once you learned the basics of Calendar and DateComponents, you learned how to use your new knowledge to schedule recurring notifications based on DateComponents using the UNCalendarNotificationTrigger class. It's really cool to be able to schedule notifications using this trigger because it allows you to set up notifications that repeat indefinitely and fire at a certain time, day of the week, day of the month or any other interval that you can represent using DateComponents.

If you have any questions, feedback or anything else for me, don't hesitate to reach out on Twitter. I love hearing from you!

Handling deeplinks in your app

iOS 14 introduced a new way to build apps where you no longer need an App- and SceneDelegate for SwiftUI apps. Learn how to handle deeplinks for these apps in this article.

On iOS, it's possible to send users from one app to the next or to send them from a webpage into an app. A link that's used to send a user to an app is typically called a deeplink. Often, there is more than one way for a deeplink to be passed to your app. Because of this, it's not always trivial to handle deeplinks. In today's article, I will go over two kinds of deeplinks that you can support in your apps, and I will show you how you can create a deeplink handler that can handle virtually any deeplink you throw at it. This article is divided into the following topics:

  • Using a custom URL scheme to handle deeplinks
  • Adding Universal Links to your app
  • Interpreting a deeplink and navigating to the correct view

Ready to level up your deeplink knowledge? Let's get going with custom URL schemes!

Using a custom scheme to handle deeplinks

A popular way for apps to support deeplinks is through custom URL schemes. A URL scheme is a prefix that is placed before your URL. For example, if you make a request to a webserver, the scheme is typically https://, and if you make use of websockets in your app, you might use the wss:// scheme. It's also possible to define your own, custom URL scheme, for example deeplink-example:// or myapplication://. These schemes won't mean anything on devices that don't have your app installed, and using a custom scheme in a web browser like Safari won't work. Only your app knows how to handle your custom URL scheme, and iOS knows how to forward URLs that use your scheme to your app.

Before you can handle custom URL schemes in your app, you must register the URL schemes you want to support in your Info.plist file. Registering your URL scheme helps iOS determine which app it should ask to open a deeplink, but it's also used to protect the privacy of users. Apps cannot check whether iOS can handle URL schemes that they haven't declared as queryable in their Info.plist. If you could check for any URL scheme at any time, it would be possible for apps to detect other installed apps by randomly asking iOS whether it can open as many URL schemes as possible. Luckily for your users, iOS limits the number of schemes an app can query, and apps must declare the schemes they wish to query up front.
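
To make that concrete, here's a hedged sketch of the query side; the otherapp:// scheme is made up, and canOpenURL(_:) only gives a meaningful answer for schemes listed under the LSApplicationQueriesSchemes key in your Info.plist:

let schemeURL = URL(string: "otherapp://")!

if UIApplication.shared.canOpenURL(schemeURL) {
  print("an app that handles otherapp:// is installed")
}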

To register a custom URL scheme for your app, you can either add entries to your Info.plist by hand, or you can go through the Info tab in your project's settings. I prefer the latter. The Info tab has a section called URL Types. This is where you register your custom URLs. An example of a custom scheme looks as shown in the following screenshot:

URL Types entry

The Identifier field is set to our app's bundle identifier, the URL Schemes field contains our custom scheme, in this case deeplink-example, and the other fields are empty. Adding a scheme here will automatically update the Info.plist, as shown in the following screenshot:

URL in Info.plist

Because I added the deeplink-example scheme to this app's Info.plist it can now handle URLs that look as follows:

deeplink-example://some.host/a/path

When your app is asked to handle this deeplink, the scene(_:openURLContexts:) method is called on your scene delegate. If you're using the older app delegate rather than a scene delegate, the application(_:open:options:) method would be called on your app delegate object.

You can validate whether your scheme was added and works correctly by creating a simple view controller, adding a button to it and making it call a method that does the following:

let url = URL(string: "deeplink-example://donnywals.com/")!
UIApplication.shared.open(url, options: [:], completionHandler: nil)

And in your open URL method (scene(_:openURLContexts:) if you're using a scene delegate and application(_:open:options:) if you're only using an app delegate) you could print the following:

func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
  for context in URLContexts {
    print("url: \(context.url.absoluteURL)")
    print("scheme: \(context.url.scheme)")
    print("host: \(context.url.host)")
    print("path: \(context.url.path)")
    print("components: \(context.url.pathComponents)")
  }
}

Note:
All examples used for URL handling will be based on the scene delegate. The only major difference between the app delegate and scene delegate is that the app delegate receives a single URL that it can use directly. The scene delegate receives a Set of URLContext objects where the URL has to be extracted from the context.

The preceding code takes the URL that should be opened and prints several of its components. I will explain more about these components later, in the Interpreting a deeplink and navigating to the correct view section.

For now, you know all you need to know about custom URL schemes. Let's take a short look at Universal Links before we move on to handling deeplinks.

Adding Universal Links to your app

In the previous section, you saw how easy it is to add any URL scheme to your app. The example I just showed you is fine, but it's also possible to register schemes for apps you don't own, like uber:// or facebook://. Because anyone can claim a scheme like that, apps can use UIApplication.shared.canOpenURL(_:) to figure out whether a user has certain other apps installed, and anybody can potentially hijack your custom URL scheme, which is not ideal. You can get around these issues by using Universal Links instead of URL schemes.

To add Universal Links to your app, you need to add the Associated Domains capability to your app. In your project settings, go to the Signing & Capabilities tab and click the + icon in the top left corner. Look for the Associated Domains capability and add it. After doing this you need to add your domains to the capability. When you add domains for Universal Links, you should use the following format: applinks:<your domain name>. The following screenshot shows the entry for donnywals.com:

App Links entry

After doing this, you need to upload a file to the server that you serve your website from. This file is called the apple-app-site-association file and it's used to verify that your app and domain belong together. Apple will look for this file at the root of your domain. In my case, that would be https://donnywals.com/apple-app-site-association. The apple-app-site-association file should use a JSON format and contain contents like the following:

{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "A4VDH56G8B.com.donnywals.deeplinks",
        "paths": ["*"]
      }
    ]
  }
}

The apps key is required but should be left empty. The details array lists the apps that belong to this domain and the paths they should handle; the format of each appID is as follows: <team identifier>.<app bundle id>.

Important:
Once you've added this file to your domain, make sure to reinstall your app if you're running it from Xcode. The system appears to only download your apple-app-site-association file when your app is installed or updated.

Once the system has matched your app and domain to each other, all links found in iOS that point to the domain you're using for Universal Links will be sent to your app unless the user chooses otherwise. If you only want to open very specific URLs in your app, consider using a subdomain for Universal Links rather than using your website's regular domain name. Make sure to update the domain in your app capability settings and make sure that your server can serve the apple-app-site-association file.

When the system decides that your app should open a Universal Link, it calls scene(_:continue:) on your scene delegate, or application(_:continue:restorationHandler:) on the app delegate. This method receives an instance of NSUserActivity with NSUserActivityTypeBrowsingWeb set as its activityType, and the URL that needs to be opened as the webpageURL of the user activity. The following shows how you could extract the target URL from a user activity:

func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
  guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
    let urlToOpen = userActivity.webpageURL else {
      return
  }

  print(urlToOpen)
}

Interpreting a deeplink and navigating to the correct view

Probably the most interesting part of implementing deeplinks is determining how your app will handle them. Since the best implementation depends on a lot of things, I will show you a simple implementation that you should be able to adapt for most applications. I'm not going to show you how this might work with the coordinator pattern, and I won't be using any complex helper objects. Both are well suited for handling deeplinks, but I want to make sure you understand how you can handle deeplinks with a very plain setup rather than get bogged down in the details of a specific architecture.

Note:
In the code examples, I will be working from the scene delegate. Keep in mind that if you don't have a scene delegate, all code should go in your app delegate; application(_:open:options:) will be the entry point for all deeplinks that use a custom URL scheme, and application(_:continue:restorationHandler:) is the entry point for Universal Links.

So far, I have shown you the following code:

func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
  for context in URLContexts {
    print("url: \(context.url.absoluteURL)")
    print("scheme: \(context.url.scheme)")
    print("host: \(context.url.host)")
    print("path: \(context.url.path)")
    print("components: \(context.url.pathComponents)")
  }
}

func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
  guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
    let urlToOpen = userActivity.webpageURL else {
      return
  }

  print(urlToOpen)
}

The scene delegate can be asked to open multiple URLs. In practice, this shouldn't happen if you're handling normal deeplinks, so let's simplify this code a little bit.

guard let context = URLContexts.first else { return }

print("url: \(context.url.absoluteURL)") // https://app.donnywals.com/post/10
print("scheme: \(context.url.scheme)") // https
print("host: \(context.url.host)") // app.donnywals.com
print("path: \(context.url.path)") // /post/10
print("components: \(context.url.pathComponents)") // ["/", "posts", "10"]

Every URL is constructed using components, and the preceding code lists the most important ones. I added some comments to show you how any URL can be deconstructed into its components. When you're handling a deeplink, you're usually most interested in the pathComponents array because that contains the most useful information in your URL. For this reason, it's often not necessary to distinguish between Universal Links and custom schemes when you're handling deeplinks. If you do need to make that distinction, the scheme property of the URL contains the information you need. Let's see how we could implement a deeplink handling strategy to handle two different links:

  • https://app.donnywals.com/post/10
  • https://app.donnywals.com/settings/profile

Since a lot of apps use tab bars for their main navigation and navigation controllers for their sub-navigation, I will do the same in my example of handling deeplinks.

The following code passes a URL from each of the URL handling entry points to a new method that picks apart the path components and determines whether we're showing a settings page or a post.

func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
  guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
    let urlToOpen = userActivity.webpageURL else {
      return
  }

  handleURL(urlToOpen)
}

func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
  guard let urlToOpen = URLContexts.first?.url else { return }

  handleURL(urlToOpen)
}

func handleURL(_ url: URL) {
  guard url.pathComponents.count >= 3 else { return }

  let section = url.pathComponents[1]
  let detail = url.pathComponents[2]

  switch section {
  case "post":
    guard let id = Int(detail) else { break }
    navigateToItem(id)
  case "settings":
    navigateToSettings(detail)
  default: break
  }
}

Since the first component in the path components array is always "/", we need to look at the second and third entries to retrieve the section of the app we need to show, and the detail page we need to show. Depending on the section, we then call different methods to handle navigating to the appropriate page. The reason for this is mostly so the code remains clear, concise and easy to read. Let's look at navigateToItem(_:):

func navigateToItem(_ id: Int) {
  // 1
  guard let tabBarController = window?.rootViewController as? UITabBarController else {
    return
  }

  // 2
  guard let viewControllers = tabBarController.viewControllers,
    let listIndex = viewControllers.firstIndex(where: { $0 is ListNavigationController }),
    let listViewController = viewControllers[listIndex] as? ListNavigationController else { return }

  // 3
  listViewController.popToRootViewController(animated: false)
  tabBarController.selectedIndex = listIndex

  // 4
  let detailViewController = ListDetailViewController(item: id)
  listViewController.pushViewController(detailViewController, animated: true)
}

This method grabs the window's root view controller and casts it to a tab bar controller. This is really only done so we can access the tab bar controller's viewControllers property. We then look for the position of the ListNavigationController that shows our posts, and we make sure that we can extract the ListNavigationController from the view controllers array.

The purpose of steps one and two is to obtain the view controller that will show the post detail and to find its position within the tab bar controller's tab bar items. In step three, the tab bar controller's selectedIndex is set to the index of our list view controller, and since the list view controller is a navigation controller, we pop it to its root view controller.

Depending on your app, you might not need to pop to the root view controller. Or maybe your app shows modal view controllers; if that's the case, you might want to call dismiss() on both your tab bar controller and your navigation controller, as sketched below.
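
A minimal sketch of that variation, reusing the tabBarController and listViewController from the method above:

// Dismiss anything that's presented modally before switching tabs
tabBarController.dismiss(animated: false)
listViewController.dismiss(animated: false)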

In step four, the target view controller is created and the navigation controller is told to present the view controller.

Let's look at the implementation of navigateToSettings(_:) to see if it's any different:

func navigateToSettings(_ detailType: SettingsDetailViewController.DetailType) {
  guard let tabBarController = window?.rootViewController as? UITabBarController else {
    return
  }

  guard let viewControllers = tabBarController.viewControllers,
    let settingsIndex = viewControllers.firstIndex(where: { $0 is SettingsNavigationController }),
    let settingsViewController = viewControllers[settingsIndex] as? SettingsNavigationController else { return }

  settingsViewController.popToRootViewController(animated: false)
  tabBarController.selectedIndex = settingsIndex

  let detailViewController = SettingsDetailViewController(type: detailType)
  settingsViewController.pushViewController(detailViewController, animated: true)
}

Examine this code closely, and you'll find that the process of showing the settings page is pretty much the same as it is for the posts page.

At this point, your app will happily open links as long as it's already running in the background. When it's not, and a user is sent to your app through a deeplink, the logic you've implemented so far won't run.

If your app is launched to open a deeplink, the link is passed along in the connectionOptions of your scene(_:willConnectTo:options:) method in the SceneDelegate, and it can be accessed through the connection options' urlContexts property. To handle the link, you can call scene(_:openURLContexts:) with the contexts that are present on the connectionOptions:

func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
  self.scene(scene, openURLContexts: connectionOptions.urlContexts)
}

Similarly, if your app is launched to open a Universal Link, you can inspect connectionOptions.userActivities to see if there are any user activities for your app to handle. If this is the case, you might grab the first user activity and pass it to scene(_:continue:):

func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
  if let userActivity = connectionOptions.userActivities.first {
    self.scene(scene, continue: userActivity)
  } else {
    self.scene(scene, openURLContexts: connectionOptions.urlContexts)
  }
}

Note that you might receive several user activities, so the URL that you're supposed to open could very well be the second or third item in the user activities set. Make sure to account for this in your app if needed.
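
Here's a hedged sketch of that idea; instead of blindly grabbing the first activity, it looks for the browsing activity that carries the Universal Link and falls back to the URL contexts otherwise:

func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
  if let webActivity = connectionOptions.userActivities.first(where: { $0.activityType == NSUserActivityTypeBrowsingWeb }) {
    self.scene(scene, continue: webActivity)
  } else {
    self.scene(scene, openURLContexts: connectionOptions.urlContexts)
  }
}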

You need to make sure that you call these methods after you've set up and configured your scene and window. If your UI isn't ready yet, you won't be able to activate the correct screen in your app.

Note:
If you're using the AppDelegate instead of the SceneDelegate, the link is passed along with the launchOptions dictionary in application(_:didFinishLaunchingWithOptions:) under the UIApplication.LaunchOptionsKey.url key.

The way of handling deeplinks I have just shown you should give you a good idea of how deeplink handling might fit into your app. If you're using Storyboards, you could obtain a reference to your storyboard and make it instantiate view controllers, or ask the storyboard to perform segues, depending on the path components in the URL you're handling; a sketch of this follows below. When you're implementing deeplinks in your app, it's important to keep your implementation flexible and avoid hardcoding things like the indexes of tabs in a tab bar controller. If your tab bar items or their ordering ever change, your deeplinking logic might have to change too, which is typically not what you want. It's better to look up the appropriate indices and view controllers dynamically, like I've shown in my example.
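
As a rough illustration, here's a hypothetical sketch of the Storyboard route, meant to sit in the scene delegate next to navigateToItem(_:); the "Main" storyboard name and the "PostDetail" identifier are assumptions for the sake of the example, not part of the project shown above:

func navigateToItemUsingStoryboard(_ id: Int) {
  let storyboard = UIStoryboard(name: "Main", bundle: nil)
  let detailViewController = storyboard.instantiateViewController(withIdentifier: "PostDetail")

  // In a real app you would hand the id to the view controller here, then push
  // or present it in whatever way fits your navigation setup
  guard let tabBarController = window?.rootViewController as? UITabBarController,
    let navigationController = tabBarController.selectedViewController as? UINavigationController else { return }

  navigationController.pushViewController(detailViewController, animated: true)
}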

In Summary

Deeplink functionality is something a lot of apps need but it's not always easy to get started with adding deeplinks for your app. Do you use a custom URL scheme? Or are Universal Links a better fit for you? Or maybe a combination of both?

In today's article, I have shown you how to add support for both Universal Links and URL schemes. You know how each behaves and what you need to do to register each type of deeplink on your app so iOS knows where to send users when they encounter one of your links.

You also saw an example of deeplink handling code. Again, you might want to adapt this to fit better in your architecture or you might have to make some changes to make this example work with Storyboards, but the idea remains the same. Extract the path, determine what the user should see and find a good path to get there. Usually, this involves setting a selected tab on a tab bar controller and pushing a view controller on a navigation controller.

If you have any questions, feedback or additions for me about this article, don't hesitate to contact me on Twitter.

Thanks to Mat Gadd for making me realize Universal Links are handled in a different way than custom url schemes.

Measuring performance with os_signpost

One of the features that got screen time at WWDC 2018 but never really took off is the signposting API, also known as os_signpost. Built on top of Apple’s unified logging system, signposts are a fantastic way for you to gain insight into how your code behaves during certain operations.

In this post, I will show you how to add signpost logging to your app, and how you can analyze the signpost output using Instruments.

Adding signposts to your code

If you’re familiar with OSLog and already use it in your app, adding signpost logging should be fairly simple for you. If you have never used OSLog before, don’t worry, you can follow along with this post just fine.

Once you have determined exactly what operation in your code you want to measure, you need to create a so-called log handle that you can use as an entry point for your signpost logging. Recently I wanted to measure the difference in execution speed of certain quality of service settings on an operation queue. I built a simple sample app and added my log handle to my view controller. In your code, you should add it near the operation you want to measure, for example by making the log handle a property on your networking object, view model, view controller or anything else. What’s important is that it’s an instance property, since you want to make sure the log handle can be accessed from anywhere within the code you’re measuring.

You can create a log handle as follows:

import os.log

let logHandler = OSLog(subsystem: "com.dw.networking", category: "qos-measuring")

A log handle always belongs to a subsystem. You might consider your entire app to be a subsystem, or maybe you consider different components in your app to be their own subsystems. You should give your subsystem a name that’s unique to your app, and it’s good practice to use reverse DNS notation for the naming. You also need to specify a category; in this case, I chose one that describes the thing I’m measuring with the signpost we’ll add next. Note that the preceding code imports the os.log framework. The signpost API is part of this framework, so we need to import it in order to use signposts.

In a very simple example, you might want to add signposts in a way similar to what the following code snippet shows:

func processItem(_ item: Item) {
  os_signpost(.begin, log: logHandler, name: "Processing", "begin processing for %{public}s", item.id)

  // do some work
  os_signpost(.event, log: pointsOfInterestHandler, name: "Processing", "reached halfway point for %{public}s", item.id)
  // do some more work

  os_signpost(.end, log: logHandler, name: "Processing", "finished processing for %{public}s", item.id)
}

Note that there are three different event types used in the preceding code:

  • .begin
  • .event
  • .end

The .begin event is used to mark the start of an operation, and the .end event is used to mark the end of an operation. In this example, the system will use the name as a way of identifying each operation to link up the operation start and end events. We can also add points of interest that occur during an event, for example when you reach the halfway point. You add points of interest using the .event event type.

In order to log .event events, you need a special log handler that specializes in points of interest. You create such a log handler as follows:

let pointsOfInterestHandler = OSLog(subsystem: "dw.qostest", category: .pointsOfInterest)

It works pretty much the same as the normal logger, except you use a predefined category.

Also note the final two arguments passed to os_signpost: "finished processing for %{public}s", item.id. The first of these two arguments is a format string. Depending on the number of placeholders in the format string, the first, second, third, etc. arguments after the format string will be used to fill the placeholders. You can specify placeholders as either {public} or {private}. Specifying neither will default to {private}. Values passed to public placeholders are visible in the console, even if your app is running without the Xcode debugger attached. So if you’re handling sensitive data, make sure to mark your placeholders as private.

The s after the placeholder’s access level specifier marks that the value that’s used to fill the placeholder will be a string. You could also use a number instead of a string if you replace s with a d. Apple recommends that you only use strings and numbers for your placeholders in order to keep your logs simple and lightweight.
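
As a small, hedged example of how these placeholders combine (the file name and the number are made up for illustration, and pointsOfInterestHandler is the handle defined above):

os_signpost(.event, log: pointsOfInterestHandler, name: "Processing", "processed %{public}s in %{public}d milliseconds", "thumbnail.png", 42)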

This example is very simple; everything occurs in the same method, and we could use a string to link our signpost begin and end. But what if we have multiple operations running that all use signposts? If they all have the same name, they will start interfering with each other. In that case, you can use an OSSignpostID. You can create OSSignpostID objects in two ways:

let uniqueID = OSSignpostID(log: logHandler)
let idBasedOnObject = OSSignpostID(log: logHandler, object: anObject)

If you use the first method, you need to keep a reference to the identifier around so you can use it to correctly link .begin and .end events together. If your operation is strongly related to an instance of a class, for example, if each instance only runs one operation, or if you’re manipulating an object that’s a class in your operation, you can use the second method to obtain an OSSignpostID. When you create an identifier using an object, you always get the same OSSignpostID back as long as you’re using the same instance of the object. Note that the object must be a class; you can’t use value types for this.

You can use an OSSignpostID in your code as follows:

class ImageManipulator {
  // other properties, like the logHandler and the item that's being processed
  lazy var signpostID = OSSignpostID(log: logHandler, object: self)

  func start() {
    os_signpost(.begin, log: logHandler, name: "Processing", signpostID: signpostID, "begin processing for %{public}s", item.id)
    // do things
  }

  func end() {
    os_signpost(.end, log: logHandler, name: "Processing", signpostID: signpostID, "finished processing for %{public}s", item.id)
  }
}

Our signposts are now uniquely identified through the signpostID that gets generated based on the ImageManipulator itself. Note that this object is now expected to only work on one image at a time. If you were to use this object for multiple operations in parallel, you’d need to either create a unique OSSignpostID for each operation or, for example, generate the identifier based on the image.
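
For the latter approach, here’s a hedged sketch; it assumes each call works on a UIImage instance (any class instance would do) and reuses the logHandler from before:

func process(_ image: UIImage) {
  // The same image instance always produces the same signpost ID
  let signpostID = OSSignpostID(log: logHandler, object: image)

  os_signpost(.begin, log: logHandler, name: "Processing", signpostID: signpostID, "begin processing")
  // do the actual image manipulation work here
  os_signpost(.end, log: logHandler, name: "Processing", signpostID: signpostID, "finished processing")
}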

Reading signposts with Instruments

Once you’ve added signposts to your code, you can view them in Console.app, or you can analyze them with Instruments. To do this, run your app with Instruments like you normally would (cmd + i or Product -> Profile) and select a blank Instruments template:

New Instrument Window

In the blank Instrument window, click the + icon in the top right, find the os_signpost instrument and double click it to add it to your Instruments session. Also, add the points of interest instrument from the same menu.

Add signpost Instrument

After doing that, hit record and use your app so a bunch of signposts are logged, and you have some data to look at:

Instruments overview

If you have the os_signpost track selected, Instruments will group measurements for each of its begin and end signposts based on your signpost message. So if you’re using the same message for operations, as we have in the earlier examples, performing the same operation over and over will cause Instruments to group those operations. And more importantly, Instruments will tell you the maximum duration, minimum duration, average duration and more for each operation. That way, you can easily measure the performance of the things your app does, without relying on print statements or date calculations that might negatively impact your code!

In summary

In this post, you’ve seen that Instruments and os_signpost are a powerful team that can help you gain insight into your code. You can use signposts as a way of doing regular logging to Console.app, but they’re also very well suited for low-impact performance measurements of your code if you combine signposts with Instruments. Both signposts and Instruments are tools you might not need or use all the time, but knowing they exist, what they do and when to use them is essential to learning more about the code you write, and ultimately to becoming a better developer.

If you have feedback, questions or anything else regarding this post for me, please reach out on Twitter. I love hearing from you.

Using Xcode’s memory graph to find memory leaks

There are many reasons for code to function suboptimally. In another post, I have shown you how to use the Time Profiler to measure the time spent in each method in your code, and how to analyze the results. While a lot of performance-related problems can be discovered, analyzed and fixed using that tool, memory usage often has to be debugged slightly differently, especially when it's related to memory leaks.

In today's post, I will show you how to use the Memory Graph tool in Xcode to analyze the objects that are kept in memory for your app, and how to use this tool to discover memory leaks. I will focus specifically on retain cycles today.

Activating the Memory Graph

When you run your app with Xcode, you can click the memory debugger icon that's located between your code and the console, or at the bottom of your Xcode window if you don't have the console open:

Memory debugger icon

When you click this icon, Xcode will take a snapshot of your app's memory graph and the relationships that every object has to other objects. Your app's execution will be paused and Xcode will show you all objects that are currently in memory. Note that this might take a little while, depending on how big your app is.

Example memory graph

In the sidebar on the left-hand side, Xcode shows a full list of all objects that it has discovered. When you select an object in the sidebar, the middle section will show your selected object, and the relationships it has to other objects. Sometimes it's a big graph, like in the screenshot. Other times it's a smaller graph with just a couple of objects.

If Xcode spots a relationship that it suspects to be a memory leak, or retain cycle, it will add a purple square with an exclamation mark behind the object in the sidebar. In the screenshot you just saw, it's quite obvious where the purple squares are. If they're harder to spot, or you just want to filter for memory leaks, you can do so using the filter menu at the bottom of the sidebar, as shown in the following screenshot:

Filtered view

The screenshot above shows that instances of two different objects are kept in memory while Xcode thinks they shouldn't be. When you click one of them, the problem becomes visible immediately.

Retain cycle image

The DataProvider and DetailPage in this example are pointing at each other. A classic example of a retain cycle. Let's see how this occurs and what you can do to fix it.

Understanding how retain cycles occur and how to fix them

In iOS, objects are removed from memory when there are no other objects that keep a strong reference to them. Every instance of an object you create in your app has a retain count. Any time you pass a reference to your object to a different place in your code, its retain count is increased because there is now one more object pointing at the location in memory for that object.

This principle of retain counts mostly applies to classes. Because when you pass around an instance of a class in your code, you're really passing around a memory reference, which means that multiple objects point to the same memory address. When you're passing around value types, the value is copied when it's passed around. This means that the retain count for a value type is typically always one; there is never more than one object pointing to the memory address of a value type.

In order for an object to be removed from memory, its reference count must be zero; no objects should be referencing that address in memory. When two objects hold a reference to each other, which is often the case when you're working with delegates, it's possible that the reference count for either object never reaches zero because they keep each other alive. Note that I mentioned a strong reference at the beginning of this section. I did that on purpose: if there is such a thing as a strong reference, surely there is such a thing as a weak reference, right? There is!

Weak references are references to instances of reference types that don't increase the reference count of the object the reference points to. The principles that apply here are exactly the same as those behind using weak self in closures. By making the delegate property of an object weak, the delegate and its owner don't keep each other alive, and both objects can be deallocated. In the example we were looking at, this means that we need to change the following code:

class DataProvider {
  var delegate: DataDelegate?

  // rest of the code
}

Into the following:

class DataProvider {
  weak var delegate: DataDelegate?

  // rest of the code
}

For this to work, DataDelegate must be constrained to classes; you can do this by adding : AnyObject to your protocol declaration. For example:

protocol DataDelegate: AnyObject {
  // requirements
}

When you run the app again and use the memory graph to look for retain cycles, you'll notice that there are no more purple squares and the memory graph looks exactly like you'd expect.
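
If you want an extra sanity check on top of the memory graph, a hedged trick is to add a deinit to the object that used to leak; the DetailPage below is assumed to be a view controller, purely for illustration:

class DetailPage: UIViewController {
  // the DataProvider setup and the rest of the page's code goes here

  deinit {
    // If this prints when the page is dismissed, the retain cycle is gone
    print("DetailPage was deallocated")
  }
}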

In Summary

In this article, I have shown you that you can use Xcode to visualize and explore the memory graph of your app. This helps you to find memory leaks and retain cycles. When clicking on an object that's in memory, you can explore its relationship with other objects, and ultimately you can track down retain cycles. You also learned what a retain cycle is, how they occur and how you can break them.

If you have questions, feedback or anything else for me, don't hesitate to reach out on Twitter.

Finding slow code with Instruments

Every once in a while we run into performance problems. One thing you can do when this happens is to measure how long certain things in your code take. You can do this using signposts. However, there are times when we need deeper insights in our code. More specifically, sometimes you simply want to know exactly how long each function in your code takes to execute. You can gain these insights using the Time Profiler Instrument. In today's article, I will show you how you can use the Time Profiler to analyze your code, and how you can optimize its output so you can gain valuable insights.

Exploring the Time Profiler Instrument

If you want to analyze your app, you need to run it for profiling. You can do this by pressing cmd+i or by using the Product -> Profile menu item. When your app is done compiling, it will be installed on your device and Instruments will launch. In the window that appears when Instruments launches, pick the Time Profiler template:

Instruments template selection

When you select this template, Instruments will launch a new Instruments session with several tracks.

Empty Instruments window

The one you're most interested in is the Time Profiler track. When you select the Time Profiler track, the table under the Instruments timeline will show your app's objects and their methods, and how much time is spent in each method. To profile your app, unlock your device and hit the record button in the top left corner. Use your app like you normally would and make sure to spend some time with the feature you're most interested in. Instruments will begin filling up with measurements from your code, as shown in the following screenshot.

Instruments with measurements

The Time Profiler takes samples of what your app's threads are doing every couple of milliseconds to create a picture of what is running, and when. Based on this, the Time Profiler estimates how much time is spent in each method. The flip side here is that the Time Profiler is not suited for fine-grained, high-resolution profiling of your code. If that is what you need, you should use signposts instead.

Note
It's always best to run your app on a real device if you want to run the Time Profiler on it. The Simulator has all the processing power of your development machine at its disposal, so measurements will be very skewed if you profile your app using the Simulator.

Once you feel like you've captured enough data to work with, you can begin analyzing your measurements.

Analyzing the Time Profiler's measurements

By default, Instruments will show its measurements from the inside out. The topmost item in the tree is your app, followed by several threads. Note how Instruments displays the number of seconds spent in each thread. This counter only increments if your app is actively processing data on the corresponding thread. Since you're probably not really interested in working your way from the inside out, and also not in system libraries, it's a good idea to change the way Instruments visualizes data. At the bottom of the window there's a button named Call Tree; if you click it, you can specify how Instruments should display its measurements. I always use the following settings:

Instruments settings

On the surface, not much will seem to have changed. Your code is still separated by thread, but if you expand the threads, your code is listed first because the call tree is now shown from the outside in rather than from the inside out. Every time you drill down one level deeper, Instruments shows what method called the method you're drilling into.

In the app I've been profiling here, I was looking for the reason it took so long to update my UI after an image finished downloading. I can see that a lot of time is spent in my performTask method. This is the method that's responsible for fetching and processing the image, and ultimately passing it to the UI. There's also some time spent in the UIImage initializer, which is called from the performTask method, as shown in the following screenshot:

Instruments readout

Based on these findings, you would conclude that something fishy might be happening in performTask because we're spending all of our time there. If the UIImage initialization was slow, we would be spending way more time in that initializer. And since the code spends so much time in performTask, but not in the UIImage initializer, this is a good first guess.

In this case, I made performTask slow on purpose. After loading an image, I would write it to the phone's documents directory a couple of times, and also convert it to a UIImage not once, but five times before updating the UI. A potential fix would be to update the UI immediately, before persisting the image to the documents directory, and to remove the conversion loop that's obviously not needed.
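
To make that concrete, here's a hedged sketch of what such a fix could look like; updateUI(with:) and persistImageData(_:) are hypothetical helpers, not methods from the actual project:

func handleDownloadedImage(_ data: Data) {
  // Create the UIImage once instead of five times
  guard let image = UIImage(data: data) else { return }

  // Update the UI right away so the user isn't waiting on disk I/O
  DispatchQueue.main.async {
    self.updateUI(with: image)
  }

  // Persist the image data to the documents directory once, off the main thread
  DispatchQueue.global(qos: .utility).async {
    self.persistImageData(data)
  }
}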

In summary

From personal experience, I can tell you that the Time Profiler Instrument is an extremely valuable tool in an iOS developer's toolbox. If your UI doesn't scroll as smoothly as you want, if your device runs hot every time you use your app, or if you see CPU and memory usage rise in Xcode all the time, the Time Profiler is extremely helpful for gaining an understanding of what exactly your code is doing. When you profile your code and know what's going on, you can start researching performance problems in your code with more confidence.

If you have any questions about the Time Profiler, have feedback or just want to reach out, you can find me on Twitter.