Customizing how Codable objects map to JSON data

In the introductory post for this series you learned the basics of decoding and encoding JSON to and from your Swift structs. In that post, you learned that your JSON object is essentially a dictionary, and that the JSON dictionary's keys are mapped to your Swift object's properties. When encoding, your Swift properties are used as keys in the encoded JSON output dictionary.

Unfortunately, we don't always have the luxury of a one-to-one mapping between JSON and our Swift objects.

For example, the JSON you're working with might use snake case (snake_case) instead of camel case (camelCase) for its keys. Of course, you could write your Decodable object's properties using snake case so they match the JSON you're decoding, but that would lead to very unusual Swift code.

In addition to the case styling not being the same, your JSON data might even have completely different names for some things that you don't want to use in your Swift objects.

Fortunately, both of these situations can be solved by either setting a key decoding (or encoding) strategy on your JSONDecoder or JSONEncoder, or by specifying a custom list of coding keys that map JSON keys to the properties on your Swift object.

Automatically mapping from and to snake case

When you're interacting with data from a remote source, it's common that this data is returned to you as a JSON response. Depending on how the remote server is configured, you might receive a server response that looks like this:

{
    "id": 10,
    "full_name": "Donny Wals",
    "is_registered": false,
    "email_address": "[email protected]"
}

This JSON is perfectly valid. It represents a single JSON object with four keys and several values. If you were to define this model as a Swift struct, it'd look like this:

struct User: Codable {
  let id: Int
  let full_name: String
  let is_registered: Bool
  let email_address: String
}

This struct is valid Swift, and if you were to decode User.self from the JSON I showed you earlier, everything would work fine.

However, this struct doesn't follow Swift conventions and best practices. In Swift, we use camel case so instead of full_name, you'll want to use fullName. Here's what the struct would look like when all properties are converted to camel case:

struct User: Codable {
  let id: Int
  let fullName: String
  let isRegistered: Bool
  let emailAddress: String
}

Unfortunately, we can't use this struct to decode the JSON from earlier. Go ahead and try with the following code:

let jsonData = """
{
  "id": 10,
  "full_name": "Donny Wals",
  "is_registered": false,
  "email_address": "[email protected]"
}
""".data(using: .utf8)!

do {
  let decoder = JSONDecoder()
  let user = try decoder.decode(User.self, from: jsonData)
  print(user)
} catch {
  print(error)
}

You'll find that the following error is printed to the console:

keyNotFound(CodingKeys(stringValue: "fullName", intValue: nil), Swift.DecodingError.Context(codingPath: [], debugDescription: "No value associated with key CodingKeys(stringValue: \"fullName\", intValue: nil) (\"fullName\").", underlyingError: nil))

This error means that the JSONDecoder could not find a corresponding key in the JSON data for the fullName property.

The simplest way to make our decoding work is to configure the JSONDecoder's keyDecodingStrategy to be .convertFromSnakeCase. This automatically makes the JSONDecoder map full_name from the JSON data to fullName in the struct by converting from snake case to camel case.

Let's look at an updated sample:

do {
  let decoder = JSONDecoder()
  decoder.keyDecodingStrategy = .convertFromSnakeCase
  let user = try decoder.decode(User.self, from: jsonData)
  print(user)
} catch {
  print(error)
}

This will successfully decode a User instance from jsonData because all instances of snake casing in the JSON data are automatically mapped to their camel cased counterparts.

If you need to encode an instance of User into a JSON object that uses snake casing, you can use a JSONEncoder like you normally would, and set its keyEncodingStrategy to .convertToSnakeCase:

do {
  let user = User(id: 1337, fullName: "Donny Wals",
                  isRegistered: true,
                  emailAddress: "[email protected]")

  let encoder = JSONEncoder()
  encoder.keyEncodingStrategy = .convertToSnakeCase
  let data = try encoder.encode(user)

  print(String(data: data, encoding: .utf8)!)
} catch {
  print(error)
}

The output for this code is the following:

{"id":1337,"full_name":"Donny Wals","email_address":"[email protected]","is_registered":true}

The ability to automatically convert from/to snake case is really useful when you're dealing with a server that uses snake case instead of camel case.

Using a custom key decoding strategy

Since there is no single standard for what a JSON response should look like, some servers use arbitrary patterns for their JSON keys. For example, you might encounter a service that uses keys that look like this: USER_ID. In that case, you can specify a custom key encoding or decoding strategy.

Let's take a look at a slightly modified version of the JSON you saw earlier:

{
  "ID": 10,
  "FULL_NAME": "Donny Wals",
  "IS_REGISTERED": false,
  "EMAIL_ADDRESS": "[email protected]"
}
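
In the decoding code below, I'll assume this JSON is available as a Data instance named uppercasedJson, created with the same multiline string approach as jsonData earlier:

let uppercasedJson = """
{
  "ID": 10,
  "FULL_NAME": "Donny Wals",
  "IS_REGISTERED": false,
  "EMAIL_ADDRESS": "[email protected]"
}
""".data(using: .utf8)!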

Since these keys all follow a nice and clear pattern, we can specify a custom strategy to convert our keys to lowercase, and then convert from snake case to camel case. Here's what that would look like:

do {
  let decoder = JSONDecoder()

  // assign a custom strategy
  decoder.keyDecodingStrategy = .custom({ keys in
    return FromUppercasedKey(codingKey: keys.first!)
  })

  let user = try decoder.decode(User.self, from: uppercasedJson)
  print(user)
} catch {
  print(error)
}

The closure that's passed to the custom decoding strategy takes an array of keys, and it's expected to return a single key. We're only interested in a single key so I'm grabbing the first key here, and I use it to create an instance of FromUppercasedKey. This object is a struct that I defined myself, and it conforms to CodingKey which means I can return it from my custom key decoder.

Here's what that struct looks like:

struct FromUppercasedKey: CodingKey {
  var stringValue: String
  var intValue: Int?

  init?(stringValue: String) {
    self.stringValue = stringValue
    self.intValue = nil
  }

  init?(intValue: Int) {
    self.stringValue = String(intValue)
    self.intValue = intValue
  }

  // here's the interesting part
  init(codingKey: CodingKey) {
    let transformedKey = codingKey.stringValue.lowercased()
    let parts = transformedKey.split(separator: "_")
    let upperCased = parts.dropFirst().map({ part in
      return part.capitalized
    })

    self.stringValue = (parts.first ?? "") + upperCased.joined()
    self.intValue = nil
  }
}

Every CodingKey must have a stringValue property and an optional intValue property, as well as two failable initializers: one that takes a stringValue and one that takes an intValue.

The interesting part in my custom CodingKey is init(codingKey: CodingKey).

This custom initializer takes the string value of the coding key it received and transforms it to lowercase. After that, I split the lowercased string on _. This means that FULL_NAME would now be an array that contains the words full and name. I then loop over all entries in that array except for the first one and capitalize the first letter of each word. So in the case of ["full", "name"], upperCased would be ["Name"]. After that I can create a string using the first entry in my parts array ("full") and append the contents of the capitalized array after it (fullName). The result is a camel cased string that maps directly to the corresponding property in my User struct.

Note that the work I do inside init(codingKey: CodingKey) isn't directly related to Codable. It's purely string manipulation to convert strings that look like FULL_NAME to strings that look like fullName.

This example is only made to work with decoding. If you want it to work with encoding you'll need to define a struct that does the opposite of the struct I just showed you. This is slightly more complex because you'll need to find uppercase characters to determine where you should insert _ delimiters to match the JSON that you decoded initially.
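
As a rough illustration of the opposite direction, here's a minimal sketch of a CodingKey struct that turns camel cased keys like fullName back into keys like FULL_NAME. The ToUppercasedKey name and the string manipulation are my own and not part of the original example:

struct ToUppercasedKey: CodingKey {
  var stringValue: String
  var intValue: Int?

  init?(stringValue: String) {
    self.stringValue = stringValue
    self.intValue = nil
  }

  init?(intValue: Int) {
    self.stringValue = String(intValue)
    self.intValue = intValue
  }

  init(codingKey: CodingKey) {
    // insert an underscore before every uppercase character, then uppercase the whole string
    var transformed = ""
    for character in codingKey.stringValue {
      if character.isUppercase {
        transformed.append("_")
      }
      transformed.append(character)
    }

    self.stringValue = transformed.uppercased()
    self.intValue = nil
  }
}

You'd assign this struct through a custom key encoding strategy, just like the decoding example: encoder.keyEncodingStrategy = .custom({ keys in ToUppercasedKey(codingKey: keys.first!) }).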

A custom key decoding strategy is only useful if your JSON response follows a predictable format. Unfortunately, this isn't always the case.

And sometimes, there's nothing wrong with how the JSON is structured, but you just prefer to map the values from the JSON you retrieve to different fields on your Decodable object. You can do this by adding a CodingKeys enum to your Codable object.

Using custom coding keys

Custom coding keys are defined on your Codable objects as an enum that's nested within the object itself. They are mostly useful when you want your Swift object to use keys that are different than the keys that are used in the JSON you're working with.

It's not uncommon for a JSON response to contain one or two fields that you would name differently in Swift. If you encounter a situation like that, it's a perfect reason to use a custom CodingKeys enum to specify your own set of coding keys for that object.

Consider the following JSON:

{
  "id": 10,
  "fullName": "Donny Wals",
  "registered": false,
  "e-mail": "[email protected]",
}

This JSON is slightly messy, but it's not invalid by any means. And it also doesn't follow a clear pattern that we can use to easily transform all keys that follow a specific pattern to something that's nice and Swifty. There are two fields that I'd want to decode into a property that doesn't match the JSON key: registered and e-mail. These fields should be decoded as isRegistered and email respectively.

To do this, we need to modify the User struct that you saw earlier in this post:

struct User: Codable {
  enum CodingKeys: String, CodingKey {
    case id, fullName
    case isRegistered = "registered"
    case email = "e-mail"
  }

  let id: Int
  let fullName: String
  let isRegistered: Bool
  let email: String
}

The only thing that's changed in this example is that User now has a nested CodingKeys enum. This enum defines all keys that we want to extract from the JSON data. You must always add all keys that you need to this enum, so in this case I added id and fullName without a custom mapping. For isRegistered and email, I added a string that represents the JSON key that each property should map to. So isRegistered on User will be mapped to registered in the JSON.

To decode an object that uses custom coding keys, you don't need to do anything special:

do {
  let decoder = JSONDecoder()
  let user = try decoder.decode(User.self, from: jsonData)
  print(user)
} catch {
  print(error)
}

Swift will automatically use your CodingKeys enum when decoding your object from JSON, and it'll also use them when encoding your object to JSON. This is really convenient since there's nothing you need to do other than defining your CodingKeys.
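
For example, encoding a User that uses this CodingKeys enum produces JSON that uses the registered and e-mail keys again. Here's a quick sketch of what that looks like (the exact key order in the output may vary):

do {
  let user = User(id: 10, fullName: "Donny Wals",
                  isRegistered: false,
                  email: "[email protected]")

  let encoder = JSONEncoder()
  let data = try encoder.encode(user)

  // prints something like: {"id":10,"fullName":"Donny Wals","registered":false,"e-mail":"[email protected]"}
  print(String(data: data, encoding: .utf8)!)
} catch {
  print(error)
}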

Your CodingKeys enum must conform to the CodingKey protocol and should use String as its raw value. It should also contain all of your struct's properties as its cases. If you omit a case, Swift won't be able to decode that property and you'd be in trouble. The case itself should always match the struct (or class) property, and its raw value should be the JSON key. If the JSON key is the same as the name you use in your Codable object, you can let Swift infer the JSON key from the case name itself, because a String-backed enum uses the case name as its raw value unless you specify a different one.

In Summary

In this post, you learned how you can use different techniques to map JSON to your Codable object's properties. First, I showed you how you can use built-in mechanisms like keyDecodingStrategy and keyEncodingStrategy to either automatically convert JSON keys to a Swift-friendly format, or to specify a custom pattern to perform this transformation.

Next, you saw how you can customize your JSON encoding and decoding even further with a CodingKeys enum that provides a detailed mapping for your JSON <-> Codable conversion.

Using CodingKeys is very common when you work with Codable in Swift because it allows you to make sure the properties on your Codable object follow Swift best practices without enforcing a specific structure on your server data.

An introduction to JSON parsing in Swift

Virtually every modern application needs some way to retrieve, and use, data from a remote source. This data is commonly fetched by making a network request to a web server that returns data in a JSON format.

When you're working with JavaScript, this JSON data can easily be decoded into a JavaScript object. JavaScript doesn't have strong typing, so a JSON object in JavaScript is really just a JavaScript object.

Objects in JavaScript are very comparable to dictionaries in Swift, except they aren't strongly typed and they have a couple of extra features. But that's way beyond what I want to cover in this post...

In this post, I want to take a look at Swift's Codable protocol.

So why start with JSON?

Well, JSON is arguably the most common data format that we use to exchange data on the web. And Swift's Codable protocol was designed to provide a powerful and useful mechanism to convert JSON data into Swift structs.

What's nice about Codable is that it was designed to not be limited to JSON. Out of the box, Codable can also be used to decode a .plist file into Swift structs, or to convert Swift structs into data for a .plist file.

The post you're looking at is intended to provide an introduction into Swift's Codable protocol, and it's part of a series of posts on this topic. I will focus on showing you how to work with JSON and Codable in Swift. It's good to understand that the principles in this series can be applied to both JSON data, as well as .plist files.

I'll start by explaining what Swift's Codable is. After that, I'll show you how to define a struct that implements the Codable protocol, and I'll explain the basics of encoding and decoding JSON data.

Understanding what Swift's Codable is

The Codable protocol in Swift is really a union of two protocols: Encodable and Decodable. These two protocols are used to indicate whether a certain struct, enum, or class, can be encoded into JSON data, or materialized from JSON data.

When you only want to convert JSON data into a struct, you can conform your object to Decodable. If you only want to transform instances of your struct into Data, you can conform your object to Encodable, and if you want to do both you can conform to Codable.

A lot of Swift's built-in types already conform to Codable by default. For example, Int, String, and Bool are Codable out of the box.

Even dictionaries and arrays are Codable by default as long as the objects that you store in them conform to Codable.

This means that an array defined as Array<String> conforms to Codable already. A dictionary that's defined as Dictionary<String, String> is Codable too.

Arrays and dictionaries both play important roles in JSON because everything in JSON is defined using the equivalent of Swift's arrays and dictionaries.

For example, the following is valid JSON for an array of strings:

["hello", "world"]

And the following is an example of a dictionary in JSON:

{
    "hello": "world",
    "someInt": 10,
    "someBool": true
}

Notice how this dictionary has String as its key and three different kinds of values as its values. In Swift, you might represent a dictionary like this as [String: Any]. If we want to decode this JSON into something useful, we can't use [String: Any]. Because Any isn't Codable, a dictionary that has Any as its value type can't be Codable either.

Luckily, all values for this object are Codable. Remember, Swift's String, Int, and Bool are all Codable!

Earlier I wrote that your structs, enums, and classes can conform to Codable. Swift can generate the code needed to extract data to populate a struct's properties from JSON data as long as all properties conform to Codable.

In this case, that means we would define a struct that has three properties with types String, Int, and Bool. Swift will take care of the rest.

Let's take a look at an example.

Defining a Codable struct

Given a specific JSON object, it's possible for us to figure out and define structs, classes, and enums that represent this JSON data in Swift.

The easiest way to do this is to mirror the JSON structure one-on-one. Elsewhere in this series, you can learn how to customize the mapping between your Codable object and the JSON data you want to encode or decode, and how to write custom logic to extract JSON data for a struct that's completely different from the JSON that's used to populate it. For now, we'll focus on a direct mirror.

Earlier, I showed you this JSON:

{
    "hello": "world",
    "someInt": 10,
    "someBool": true
}

If we'd model this data using a Swift struct, we'd write the following:

struct ExampleStruct: Decodable {
    let hello: String
    let someInt: Int
    let someBool: Bool
}

Notice how I declared my struct as ExampleStruct: Decodable. This means that my struct conforms to Decodable, and I can decode JSON into instances of this struct. If I'd want to encode instances of my struct into JSON data, I would declare my struct as ExampleStruct: Encodable, and to convert in both directions I'd use ExampleStruct: Codable.

In this case, I only want to decode so I'm declaring my struct as Decodable.

Notice how the property names for my struct exactly match the keys in my JSON dictionary. This is important because the code that Swift generates behind the scenes for you when you compile your code assumes that the keys in your JSON match the property names of your Decodable object.

The properties of my struct are all Decodable themselves, which means that Swift can automatically generate the code needed to decode JSON data into my struct.

Let's take a look at a more complex JSON structure:

{
  "status": "active",
  "objects": [
    {
      "id": 1,
      "name": "Object one",
      "available": true
    },
    {
      "id": 2,
      "name": "Object two",
      "available": false
    }
  ]
}

In this example, we have a JSON object with two keys, one of them has an array as its value as you can tell by the [] that wrap the value for objects. The array contains more JSON objects. JSON objects are always wrapped by {}.

If we look at this JSON data from the point of view of our struct, we can see that we should define one struct with two properties (status and objects), and that objects should be an array of sorts. This array will hold instances of another struct that has three properties (id, name, and available).

Here's what our Swift models might look like:

struct Response: Decodable {
  let status: String
  let objects: [Product]
}

struct Product: Decodable {
  let id: Int
  let name: String
  let available: Bool
}

Swift can generate code to decode JSON into these structs because Product's properties are all Decodable. This means that Response's properties are also all Decodable since [Product] is Decodable. Remember, arrays are Decodable (or Codable) as long as their Element is Decodable (or Codable).

What's interesting about Codable, is that we can also make enums Codable, as long as they have a raw value that is Codable. For example, we could change the Response's status property to a ResponseStatus enum as follows:

struct Response: Decodable {
  let status: ResponseStatus
  let objects: [Product]
}

enum ResponseStatus: String, Decodable {
  case active = "active"
  case inactive = "inactive"
}

When we attempt to decode our JSON data into Response, the decoding will fail if we receive an unknown value for ResponseStatus.

Depending on your use case, this might be desired, or a problem. Elsewhere in this series, you'll learn how you can write custom decoding logic that allows you to decode unknown values into a special other case with an associated value (case other(String)) that can be used to represent new and unknown enum cases for a Decodable enum.
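
To give you a rough idea of where that's heading, here's a minimal sketch of what such an enum could look like. The details aren't important right now, and the final implementation might differ:

enum ResponseStatus: Decodable {
  case active
  case inactive
  case other(String)

  init(from decoder: Decoder) throws {
    let container = try decoder.singleValueContainer()
    let rawStatus = try container.decode(String.self)

    switch rawStatus {
    case "active":
      self = .active
    case "inactive":
      self = .inactive
    default:
      // any unknown status is preserved instead of failing the decode
      self = .other(rawStatus)
    }
  }
}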

Now that you've seen some examples of how you can define a Decodable struct, let's see how you can decode JSON data into a Decodable struct with a JSONDecoder.

Decoding JSON into a struct

When you've obtained a Data object that represents JSON data, you'll want to decode this data into your Swift struct (or class of course). If you don't have a remote API to practice with, you can define some dummy JSON data using Swift's multiline string syntax as follows:

let exampleData = """
{
  "status": "active",
  "objects": [
    {
      "id": 1,
      "name": "Object one",
      "available": true
    },
    {
      "id": 2,
      "name": "Object two",
      "available": false
    }
  ]
}
""".data(using: .utf8)!

You can call data(using:) on any Swift string to obtain a data representation for that string.

To convert your Data to an instance of your struct, you need a JSONDecoder instance. You can create one as follows:

let decoder = JSONDecoder()

To decode the dummy data I showed you just now into an instance of the Response struct from the previous section, you'd use the following code:

do {
  let jsonDecoder = JSONDecoder()
  let decodedResponse = try jsonDecoder.decode(Response.self,
                                               from: exampleData)

  print(decodedResponse)
} catch {
  print(error)
}

Your JSONDecoder instance has a decode(_:from:) method that you call to convert JSON data into the object of your choosing.

The first argument for this method is the type that you want to decode your data into. In this case, that's Response.self.

The second argument for this method is the data that you want to decode. In this case, that's exampleData.

Because JSON decoding can fail, decode(_:from:) must be called with a try prefix, preferably in a do {} catch {} block.

If something goes wrong, we print the error so we can see what went wrong. The error messages that are surfaced by JSONDecoder are generally very helpful. For example, if our struct contained a property whose key is not present in the JSON data, we would see an error that looks like this:

keyNotFound(CodingKeys(stringValue: "missingObject", intValue: nil), Swift.DecodingError.Context(codingPath: [], debugDescription: "No value associated with key CodingKeys(stringValue: \"missingObject\", intValue: nil) (\"missingObject\").", underlyingError: nil))

We can see that we're dealing with a keyNotFound error. We can find out which key wasn't found by reading the CodingKeys declaration that comes after the error case. In this case, the CodingKeys value tells us that we're trying to extract a value for the missingObject key but that key does not exist in the JSON as noted by the debugDescription.

When you see an error like this it usually means that you made a typo, or your JSON object doesn't always contain a specific key. This can happen when your remote data source doesn't include keys with a nil value.

If you made a typo, you should fix it. If your remote data source omits keys with a nil value, you can mark your property as optional. That way, the property automatically gets a nil value when its key is missing from the JSON response.
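
For example, if the Response struct had a missingObject property that isn't always present in the JSON, declaring it as an optional would make decoding succeed either way; the synthesized decoding logic simply assigns nil when the key is absent. A quick sketch:

struct Response: Decodable {
  let status: ResponseStatus
  let objects: [Product]
  let missingObject: String? // nil whenever the "missingObject" key is absent from the JSON
}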

All errors you might encounter when decoding JSON in Swift follow a similar pattern. Make sure you read your decoding errors if you encounter them because they'll typically provide you with very useful information to debug and fix your models.

Now that you've seen how to decode data, let's take a look at doing the opposite; encoding structs into JSON data.

Encoding a struct to JSON

When you encode data from a struct, class, or enum to JSON data, the end result of your encoding will always be Data. In other words, you decode Data into Decodable objects, and you encode an Encodable object into Data. This data can be written to a file, sent to a server, it could even be persisted using a Core Data entity or UserDefaults. However, the most common goal when encoding objects is to either write the data to a file, or to send it to a server.

Take a look at the following Encodable struct:

struct Product: Codable {
    let id: Int
    let name: String
    let available: Bool
}

Now let's see how you can encode an instance of this struct to Data:

let sampleInput = Product(id: 0, name: "test name", available: true)

do {
  let encoder = JSONEncoder()
  let data = try encoder.encode(sampleInput)
  print(data)
} catch {
  print(error)
}

This code is pretty straightforward, and if you run this in a playground, you'll find that the printed output is the following:

44 bytes

That might be surprising to you. After all, you encoded your struct to JSON data, right?

Well, you did. But Data is data and it's represented as bytes. You can inspect the generated JSON by transforming the data to a string:

if let jsonString = String(data: data, encoding: .utf8) {
  print(jsonString)
}

The output for this code is the following:

{"id":0,"name":"test name","available":true}

Neat! That's a nice JSON string.

Note that this output is not what you should typically send to a server or write to a file. Instead, you should use the Data that was returned by the JSON encoder's encode method. That Data is the binary representation of the String that we just printed.

By default, JSONEncoder will encode your objects into a single-line JSON structure like you just saw. The exampleData that I showed you earlier was nicely formatted on multiple lines. It's possible to configure JSONEncoder to insert newlines and tabs into the output, which allows you to inspect a nicely formatted string representation of the JSON data. You can do this by setting the encoder's outputFormatting to .prettyPrinted:

do {
  let encoder = JSONEncoder()

  encoder.outputFormatting = .prettyPrinted

  let data = try encoder.encode(sampleInput)
  if let jsonString = String(data: data, encoding: .utf8) {
    print(jsonString)
  }
} catch {
  print(error)
}

The output for the code above would look like this:

{
  "id" : 0,
  "name" : "test name",
  "available" : true
}

If you're inspecting a large JSON structure, it's nice to use this pretty printed format. It's not common to need this output format when you write your encoded data to a file, or when you send it to a server. The whitespace is only useful for humans, and it doesn't provide any value to machines that interpret the JSON data.

A more important outputFormatting option is .sortedKeys. When you set the output formatting to .sortedKeys, the generated Data will have your JSON keys sorted alphabetically. This can be useful if your server expects you to format your keys in a specific way, or if you want to compare two different encoded objects to see if their data is the same. If the keys aren't sorted, two Data instances that hold the same JSON data might not be equal due to differences in how their keys are ordered.

Here's an example of the encoded sampleInput from earlier when using a JSONEncoder that has its outputFormatting set to .sortedKeys:

{"available":true,"id":0,"name":"test name"}

The output isn't pretty printed but notice how the encoded keys are now in alphabetical order.

It's not common to have to encode your JSON data using a specific key sorting, but it's good to know this option exists if needed. I know I've needed it a few times when working with third party APIs that had requirements about how the JSON data I sent them was formatted.

You can combine the .sortedKeys and .prettyPrinted options by setting outputFormatting to an array:

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]

In Summary

In this post, you learned everything you need to know to get started with JSON encoding and decoding in Swift. You learned what the Codable protocol is, you learned how Swift automatically generates encoding and decoding logic for objects that conform to Codable, and you learned that Codable is really a union of two protocols: Encodable and Decodable.

I also showed you several examples of decoding JSON into Swift objects, and of encoding Swift objects into JSON.

In future posts, we'll dive deeper into things like CodingKeys, custom encoding and decoding logic, and more advanced examples of how you can work with complex JSON data.

Flattening a nested JSON response into a single struct with Codable

Often, you'll want your Swift models to resemble the JSON that's produced by an external source, like a server, as closely as possible. However, there are times when the JSON you receive is nested several levels deep and you might not consider this appropriate or needed for your application. Or maybe you're only interested in a couple of fields from the JSON response and these fields are hidden several levels deep in the JSON that's returned by a server.

In this post I'll show you how you can use nested containers to decode nested JSON data into a flat struct with a custom init(from:) implementation.

If you're not familiar with implementing a custom init(from:) method, take a look at this post. It describes custom encoding and decoding logic in detail and serves as the basis for the flattening init(from:) we'll be building.

Decoding nested JSON data into a single struct

Consider the following JSON data:

{
  "id": 10,
  "contact_info": {
    "email": "[email protected]"
  },
  "preferences": {
    "contact": {
      "newsletter": true
    }
  }
}

There's a lot of nesting here, and in this case all of this nesting is kind of noisy, but it's very close to the kinds of JSON we sometimes have to work with in production. We can't change the backend in this case, so let's see how this JSON can be decoded into the following struct:

struct User: Decodable {
  let id: Int
  let email: String
  let isSubscribedToNewsletter: Bool
}

This struct does not represent our JSON at all. It's a good representation of the data for usage in an app but we can't go from our JSON to this struct directly without writing a custom init(from:) that leverages multiple CodingKey enums to map the source JSON to our struct.

struct User: Decodable {
  let id: Int
  let email: String
  let isSubscribedToNewsletter: Bool

  enum OuterKeys: String, CodingKey {
    case id, preferences
    case contactInfo = "contact_info"
  }

  enum ContactKeys: String, CodingKey {
    case email
  }

  enum PreferencesKeys: String, CodingKey {
    case contact
  }

  enum ContactPreferencesKeys: String, CodingKey {
    case newsletter
  }

  init(from decoder: Decoder) throws {
    let outerContainer = try decoder.container(keyedBy: OuterKeys.self)
    let contactContainer = try outerContainer.nestedContainer(keyedBy: ContactKeys.self,
                                                              forKey: .contactInfo)
    let preferencesContainer = try outerContainer.nestedContainer(keyedBy: PreferencesKeys.self,
                                                                  forKey: .preferences)
    let contactPreferencesContainer = try preferencesContainer.nestedContainer(keyedBy: ContactPreferencesKeys.self,
                                                                               forKey: .contact)

    self.id = try outerContainer.decode(Int.self, forKey: .id)
    self.email = try contactContainer.decode(String.self, forKey: .email)
    self.isSubscribedToNewsletter = try contactPreferencesContainer.decode(Bool.self, forKey: .newsletter)
  }
}

In this example I've defined several coding key enums. Each enum represents one of the JSON objects that I want to flatten into the User struct.

In the init(from:) method, the first line should look familiar to you if you've written a custom init(from:) before.

let outerContainer = try decoder.container(keyedBy: OuterKeys.self)

This line extracts a container that uses the keys in my OuterKeys enum. The lines after this line are probably new to you:

let contactContainer = try outerContainer.nestedContainer(keyedBy: ContactKeys.self,
                                                          forKey: .contactInfo)
let preferencesContainer = try outerContainer.nestedContainer(keyedBy: PreferencesKeys.self,
                                                              forKey: .preferences)
let contactPreferencesContainer = try preferencesContainer.nestedContainer(keyedBy: ContactPreferencesKeys.self,
                                                                           forKey: .contact)

Instead of extracting a container from the decoder instance, I extract containers from other containers. These containers are keyed by their respective enums and they allow me to dig into the JSON data to get to the data I'm interested in.

In this case, that means that I can extract the id from the outerContainer, the email from the contactContainer and lastly, I can extract the value for isSubscribedToNewsletter from the contactPreferencesContainer.

Using nested containers can be a super powerful approach to flattening your JSON data, but maybe you're just looking for a way to provide a flattened struct and you don't mind defining Decodable structs that mirror your JSON data.

If that's the case, you can simplify your init(from:) quite a bit, and you don't need to write custom coding keys for every intermediate object in your JSON. You do, however, have to define all intermediate structs, which means that your gains are exclusively in the init(from:), as shown in the example below:

struct User: Decodable {
  let id: Int
  let email: String
  let isSubscribedToNewsletter: Bool

  enum CodingKeys: String, CodingKey {
    case id, preferences
    case contactInfo = "contact_info"
  }

  struct ContactInfo: Decodable {
    let email: String
  }

  struct Preferences: Decodable {
    let contact: ContactPreferences

    struct ContactPreferences: Decodable {
      let newsletter: Bool
    }
  }

  init(from decoder: Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    let contactInfo = try container.decode(ContactInfo.self, forKey: .contactInfo)
    let preferences = try container.decode(Preferences.self, forKey: .preferences)

    self.id = try container.decode(Int.self, forKey: .id)
    self.email = contactInfo.email
    self.isSubscribedToNewsletter = preferences.contact.newsletter
  }
}

This approach for decoding the data was pointed out to me by Filip Němeček as an alternative that's easier to understand. I definitely agree that not needing the intermediate containers can be a fantastic bonus. I'll leave it up to you to decide which solution you like better; they each have their own merit in my opinion.

Each of these two approaches takes a little bit of extra work compared to having a model that mirrors your JSON data, but the result of this flattening is quite nice and it doesn't make using your JSONDecoder any more complex:

let decoder = JSONDecoder()
let user = try! decoder.decode(User.self, from: jsonData)

While it's nice that we can flatten this data, let's see how we can write a custom encode(to:) implementation that would allow us to encode and send this User object back to a server in its original shape.

Encoding a flat struct into nested JSON data

Sometimes you'll need to be able to encode and decode your data in order to be able to fetch data from a server and then update it as needed. In these cases, you'll need to write some custom encoding logic to allow converting your flat struct back into the nested JSON data you started out with.

As usual, the encoding part of this example is very similar to the decoding part. Let's look at the encoding counterpart for the first flattening approach:

struct User: Codable {
  let id: Int
  let email: String
  let isSubscribedToNewsletter: Bool

  // coding keys

  init(from decoder: Decoder) throws {
    // unchanged
  }

  func encode(to encoder: Encoder) throws {
    var container = encoder.container(keyedBy: OuterKeys.self)
    var contactContainer = container.nestedContainer(keyedBy: ContactKeys.self,
                                                     forKey: .contactInfo)
    var preferencesContainer = container.nestedContainer(keyedBy: PreferencesKeys.self,
                                                         forKey: .preferences)
    var contactPreferencesContainer = preferencesContainer.nestedContainer(keyedBy: ContactPreferencesKeys.self,
                                                                           forKey: .contact)

    try container.encode(id, forKey: .id)
    try contactContainer.encode(email, forKey: .email)
    try contactPreferencesContainer.encode(isSubscribedToNewsletter, forKey: .newsletter)
  }
}

Note that I've omitted the implementation for init(from:) and the coding key enums. They are unchanged from the previous section.

The implementation for encode(to:) follows the exact same pattern as init(from:). I create all the containers using their respective coding keys, and then I encode the properties of User into the appropriate containers.

Let's take a look at the alternative approach that uses intermediate structs instead of coding keys next:

struct User: Codable {
  let id: Int
  let email: String
  let isSubscribedToNewsletter: Bool

  // coding keys and structs

  init(from decoder: Decoder) throws {
    // unchanged
  }

  func encode(to encoder: Encoder) throws {
    var container = encoder.container(keyedBy: CodingKeys.self)
    let contactPreferences = Preferences.ContactPreferences(newsletter: isSubscribedToNewsletter)
    let preferences = Preferences(contact: contactPreferences)
    let contactInfo = ContactInfo(email: email)

    try container.encode(id, forKey: .id)
    try container.encode(preferences, forKey: .preferences)
    try container.encode(contactInfo, forKey: .contactInfo)
  }
}

In order to encode the original structs into my encoder, I need to create instances of these structs by hand. In this case, that's not a big deal; my structs are very small so this only takes a couple of lines of code.

After initializing my structs, I encode them into my container using the coding keys that I originally used to extract the same structs in my init(from:).

If you decode the data from the beginning of this post into a User and then encode it back into Data, you'll see that the JSON structure is identical with this approach. Nice!
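
A quick way to check this yourself is to round-trip the data and print the re-encoded JSON. This sketch assumes the original nested JSON is available as jsonData:

do {
  let decoder = JSONDecoder()
  let user = try decoder.decode(User.self, from: jsonData)

  let encoder = JSONEncoder()
  let reEncoded = try encoder.encode(user)

  // the printed JSON has the same nested structure as the original
  print(String(data: reEncoded, encoding: .utf8)!)
} catch {
  print(error)
}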

In Summary

In this post I showed you how you can flatten nested JSON data into a single struct by writing your own init(from:) that creates several keyed containers based on the different nested objects in the JSON data we're decoding. I also showed you an alternative approach that uses intermediate structs to decode the data and then assigns values from the decoded objects to the flattened struct. As I said earlier, I'll leave it up to you to decide which approach you prefer; I like them both. After showing you how to decode nested data, you saw how you can encode a flat struct back into nested JSON data.

Writing your own encoding and decoding logic to perform radical transformations like this is something you'll rarely do. It's often more work than it's worth, and it's generally good to have your models mirror the data that you fetch from a remote source. Whether flattening JSON data into a single struct is a good idea will always depend on your reasons and use case. This post is not intended to be advice; it's intended to show you one of the many interesting things that can be done with Swift's encoding and decoding tools.

Preventing unwanted fetches when using NSFetchedResultsController and fetchBatchSize

This article covers a topic that is extensively covered in my Practical Core Data book. This book is intended to help you learn Core Data from scratch using modern techniques and every chapter features sample apps in SwiftUI as well as UIKit whenever this is relevant.

When you use Core Data in a UIKit or SwiftUI app, the easiest way to fetch data for your UI is through a fetched results controller. In SwiftUI, a fetched results controller is best used through the @FetchRequest property wrapper. In UIKit, a fetched results controller can be conveniently set up to provide diffable data source snapshots for your table- or collection view, while SwiftUI's @FetchRequest conveniently updates your UI as needed without requiring any extra work.

If you're somewhat knowledgeable in the realm of Core Data, you've heard about the fetchBatchSize property.

This property is used to fetch your data in batches to prevent having to fetch your entire result set in one go. When you're dealing with a large data set, this can be a huge win.
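
Setting it up only takes a single line on your fetch request. In this sketch I'm assuming a User managed object with a name attribute:

let request: NSFetchRequest<User> = User.fetchRequest()
request.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]
request.fetchBatchSize = 25 // fetch managed objects in batches of 25 instead of all at once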

However, when you're using a fetched results controller with diffable data sources and you set a fetchBatchSize, you'll find that your fetched results controller will initially fetch all of your data using the specified batch size. In other words, your data is retrieved immediately using many small fetches. Once you start scrolling through your list, the fetched results controller will fetch your data again, using the specified batch size.

Because SwiftUI's @FetchRequest is built on top of NSFetchedResultsController, you'll see the exact same problem manifest in a SwiftUI app that uses @FetchRequest with a fetch request that has its fetchBatchSize set.

In this post, I will briefly explain what the problem is exactly, and I'll show you a solution for UIKit apps. A solution for SwiftUI will be published in a separate post.

Understanding the problem

The easiest way to spot a problem like the one I described in the introduction of this post is to enable Core Data's debug launch arguments so you can see the SQL statements that Core Data runs to retrieve and save data.
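
If you haven't used these before, you can add the following argument under Arguments Passed On Launch in your scheme's Run settings to make Core Data log the SQL it executes (higher values produce more verbose output):

-com.apple.CoreData.SQLDebug 1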

When you enable these launch arguments in an app that uses fetchBatchSize combined with a fetched results controller that provides diffable data source snapshots, you'll notice the following:

  1. First, all objectIDs are fetched in the correct order so the fetched results controller (or @FetchRequest which uses a fetched results controller under the hood as far as I can tell) knows the number of items in the result set, and so it knows how to page requests.
  2. Then, all managed objects are fetched in batches that match the batch size you've set.
  3. Lastly, your managed objects are fetched in batches that match your batch size as you scroll through your list.

The second point on this list is worrying. Why does a fetched results controller fetch all data when we expect it to only fetch the first batch? After all, you set a batch size so you don't fetch all data in one go. And now your fetched results controller doesn't just fetch all data up front, it does so in many small batches.

That can't be right, can it?

As it turns out, it seems related to how NSFetchedResultsController constructs a diffable data source snapshot.

I'm not sure how it works exactly, but I am sure that generating the diffable data source snapshot is what triggers these unwanted fetch requests. In a UIKit app, you can quickly verify this by commenting out your NSFetchedResultsControllerDelegate's controller(_:didChangeContentWith:) method. Once you do this, you'll notice that your fetched results controller no longer fetches all of the data.

So how can you work around this?

As it turns out, there's no straightforward way to do this. The best way I've found is to stop using diffable data sources completely and instead use the older delegate methods from NSFetchedResultsControllerDelegate to update your table- or collection view.

In the next section, I'll show you how you can implement the appropriate delegate methods and update an existing collection view. How you build the collection view is up to you, as long as you populate your collection view by implementing the UICollectionViewDataSource methods rather than using diffable data sources.

Preventing unwanted requests in a UIKit app

The easiest way to prevent unwanted requests in a UIKit app is to get rid of the controller(_:didChangeContentWith:) delegate method that's used to have your fetched results controller construct diffable data source snapshots. Instead, you'll want to implement the following four NSFetchedResultsControllerDelegate methods:

  • controllerWillChangeContent(_:)
  • controller(_:didChange:atSectionIndex:for:)
  • controller(_:didChange:at:for:newIndexPath:)
  • controllerDidChangeContent(_:)

I like to abstract my fetched results controllers behind a provider object. For example, an AlbumsProvider, UsersProvider, POIsProvider, and so forth. The name of the provider describes the type of object that this provider object will fetch.

Here's a simple skeleton for a UsersProvider:

class UsersProvider: NSObject {
  fileprivate let fetchedResultsController: NSFetchedResultsController<User>

  let controllerDidChangePublisher = PassthroughSubject<[Change], Never>()
  var inProgressChanges: [Change] = []

  var numberOfSections: Int {
    return fetchedResultsController.sections?.count ?? 0
  }

  init(managedObjectContext: NSManagedObjectContext) {
    let request = User.byNameRequest
    self.fetchedResultsController =
      NSFetchedResultsController(fetchRequest: request,
                                 managedObjectContext: managedObjectContext,
                                 sectionNameKeyPath: nil, cacheName: nil)

    super.init()

    fetchedResultsController.delegate = self
    try! fetchedResultsController.performFetch()
  }

  func numberOfItemsInSection(_ section: Int) -> Int {
    guard let sections = fetchedResultsController.sections,
          sections.endIndex > section else {
      return 0
    }

    return sections[section].numberOfObjects
  }

  func object(at indexPath: IndexPath) -> User {
    return fetchedResultsController.object(at: indexPath)
  }
}

I'll show you the NSFetchedResultsControllerDelegate methods that should be implemented in a moment. Let's go over this class first.

The UsersProvider class contains two properties that you wouldn't need when you're using a diffable data source:

let controllerDidChangePublisher = PassthroughSubject<[Change], Never>()
var inProgressChanges: [Change] = []

The first of these two properties provides a mechanism to tell a view controller that the fetched results controller has informed us of changes. You could use a different mechanism like a callback to achieve this, but I like to use a publisher.

The second property provides an array that's used in the NSFetchedResultsControllerDelegate to collect the different changes that our fetched results controller sends us. These changes are communicated through multiple delegate callbacks because there's one call to a delegate method for each object or section that's changed.

The rest of the code in UsersProvider is pretty straightforward. We have a computed property to extract the number of sections in the fetched results controller, a method to extract the number of items in the fetched results controller, and lastly a method to retrieve an object for a specific index path.

Note that the controllerDidChangePublisher publishes an array of Change objects. Let's see what this Change object looks like next:

enum Change: Hashable {
  enum SectionUpdate: Hashable {
    case inserted(Int)
    case deleted(Int)
  }

  enum ObjectUpdate: Hashable {
    case inserted(at: IndexPath)
    case deleted(from: IndexPath)
    case updated(at: IndexPath)
    case moved(from: IndexPath, to: IndexPath)
  }

  case section(SectionUpdate)
  case object(ObjectUpdate)
}

Change is an enum I've defined to encapsulate changes in the fetched results controller's data.

Now let's move on to the delegate methods. I'll show them all in one go:

extension UsersProvider: NSFetchedResultsControllerDelegate {
  func controllerWillChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
    inProgressChanges.removeAll()
  }

  func controller(_ controller: NSFetchedResultsController<NSFetchRequestResult>, didChange sectionInfo: NSFetchedResultsSectionInfo, atSectionIndex sectionIndex: Int, for type: NSFetchedResultsChangeType) {
    if type == .insert {
      inProgressChanges.append(.section(.inserted(sectionIndex)))
    } else if type == .delete {
      inProgressChanges.append(.section(.deleted(sectionIndex)))
    }
  }

  func controller(_ controller: NSFetchedResultsController<NSFetchRequestResult>, didChange anObject: Any, at indexPath: IndexPath?, for type: NSFetchedResultsChangeType, newIndexPath: IndexPath?) {
    // indexPath and newIndexPath are force unwrapped based on whether they should / should not be present according to the docs.
    switch type {
    case .insert:
      inProgressChanges.append(.object(.inserted(at: newIndexPath!)))
    case .delete:
      inProgressChanges.append(.object(.deleted(from: indexPath!)))
    case .move:
      inProgressChanges.append(.object(.moved(from: indexPath!, to: newIndexPath!)))
    case .update:
      inProgressChanges.append(.object(.updated(at: indexPath!)))
    default:
      break
    }
  }

  func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
    controllerDidChangePublisher.send(inProgressChanges)
  }
}

There's a bunch of code here, but the idea is quite simple. First, the fetched results controller will inform us that it's about to send us a bunch of changes. This is a good moment to clear the inProgressChanges array so we can populate it with the changes that we're about to receive.

The following two methods are called by the fetched results controller to tell us about changes in objects and sections. A section can only be inserted or deleted according to the documentation.

Managed objects can be inserted, moved, deleted, or updated. Note that a moved object might also be updated (it usually is because it wouldn't have moved otherwise). When this happens, you're only informed about the move.

When the fetched results controller has informed us about all changes, we can call send on the controllerDidChangePublisher so we send all changes that were collected to subscribers of this publisher. Usually that subscriber will be your view controller.

Note:
I'm assuming that you understand the basics of Combine. Explaining how publishers work is outside of the scope of this article. If you want to learn more about Combine you can take a look at my free blog posts, or purchase my Practical Combine book.

In your view controller, you'll want to have a property that holds on to your data provider. For example, you might add the following property to your view controller:

let usersProvider: UsersProvider

Your data providers should typically be injected into your view controllers, but a view controller can also initialize its own data provider. Choose whichever approach works best for your app.

What's more interesting is how you should respond to change arrays that are sent by controllerDidChangePublisher. Let's take a look at how I subscribe to this publisher in viewDidLoad():

override func viewDidLoad() {
  super.viewDidLoad()

  // setup code...

  usersProvider.controllerDidChangePublisher
    .sink(receiveValue: { [weak self] updates in
      var movedToIndexPaths = [IndexPath]()

      self?.collectionView.performBatchUpdates({
        for update in updates {
          switch update {
          case let .section(sectionUpdate):
            switch sectionUpdate {
            case let .inserted(index):
              self?.collectionView.insertSections([index])
            case let .deleted(index):
              self?.collectionView.deleteSections([index])
            }
          case let .object(objectUpdate):
            switch objectUpdate {
            case let .inserted(at: indexPath):
              self?.collectionView.insertItems(at: [indexPath])
            case let .deleted(from: indexPath):
              self?.collectionView.deleteItems(at: [indexPath])
            case let .updated(at: indexPath):
              self?.collectionView.reloadItems(at: [indexPath])
            case let .moved(from: source, to: target):
              self?.collectionView.moveItem(at: source, to: target)
              movedToIndexPaths.append(target)
            }
          }
        }
      }, completion: { done in
        self?.collectionView.reloadItems(at: movedToIndexPaths)
      })
    })
    .store(in: &cancellables)
}

This code is rather long but it's also quite straightforward. I use UICollectionView's performBatchUpdates(_:completion:) method to iterate over all changes that we received. I also define an array before calling performBatchUpdates(_:completion:). This array will hold on to all index paths that were the target of a move operation so we can reload those cells after updating the collection view (the app will crash if you move and reload a cell).

By checking whether a change matches the section or object case I know what kind of a change I'm dealing with. Each case has an associated value that describes the change in more detail. Based on this associated value I can insert, delete, move, or reload cells and sections.

I haven't shown you the UICollectionViewDataSource methods that are needed to provide your collection view with data and cells. I'm sure you know how to do this as it'd be no different from a very plain and boring collection view. Just make sure to use your data provider's convenience helpers to determine the number of sections and objects in your collection view.
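
For completeness, here's a minimal sketch of what those data source methods could look like. I'm assuming a view controller called UsersViewController that owns the usersProvider property from earlier; the UserCell type and its configure(with:) method are placeholders for your own cell:

extension UsersViewController: UICollectionViewDataSource {
  func numberOfSections(in collectionView: UICollectionView) -> Int {
    return usersProvider.numberOfSections
  }

  func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    return usersProvider.numberOfItemsInSection(section)
  }

  func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "UserCell", for: indexPath) as! UserCell
    let user = usersProvider.object(at: indexPath)
    cell.configure(with: user) // placeholder for your own cell configuration
    return cell
  }
}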

In Summary

Doing all this work is certainly less convenient than using a diffable data source snapshot, but in the end you'll find that, when you're using a fetchBatchSize, this approach will make sure your fetched results controller doesn't make a ton of unwanted extra fetch requests.

I'm not sure whether the behavior we see with diffable data sources is expected, but it's most certainly inconvenient. Especially when you have a large set of data, fetchBatchSize should help you reduce the time it takes to load data. When your app then proceeds to fetch all data anyway, except with many small requests, you'll find that performance is actually worse than it was when you fetched all data in one go.

If you don't want to do any extra work and have a small data set of maybe a couple dozen items, it might be wise to skip fetchBatchSize if you want to utilize diffable data source snapshots. It takes a bunch of extra work to implement a fetched results controller without diffable data sources, and this extra work might not be worth the trouble if you're not seeing any problems in an app that doesn't use fetchBatchSize.

I will publish a follow-up post that details a fix for the same problem in SwiftUI when you use the @FetchRequest property wrapper. If you have any feedback or questions about this post, you can reach out to me on Twitter. If you want to learn more about Core Data, fetched results controllers, and analyzing performance in Core Data apps, check out my Practical Core Data book.

What does “atomic” mean in programming?

When you're learning about databases or multithreaded programming, it's likely that you'll come across the term "atomic" at some point.

Usually you'll hear the term in the context of an operation. For example, an atomic read / write operation. Or atomic access to a property.

But what does this mean?

Generally, you can summarize atomic as "one at a time".

For example, when accessing or mutating a property is atomic, it means that only one read or write operation can be performed at a time. If you have a program that reads a property atomically, this means that the property cannot change during this read operation.

In Swift, operations on a dictionary are not atomic. This means that in a multithreaded application, a dictionary might be changed from one thread while another thread is reading from it. No thread or operation has exclusive access to your dictionary.

If the operation was atomic, the first read operation would have to finish before the write can start.

Another way to think of an atomic operation is that no observer of an atomic operation can "see" the operation as in-progress. You can observe the operation as not yet started or as completed, but never in between.
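
As a rough illustration, here's a minimal sketch of a dictionary wrapper that makes individual reads and writes atomic by funneling them through a lock. Note that this only makes single operations atomic; a compound read-modify-write sequence would still need to hold the lock for its entire duration:

final class AtomicDictionary<Key: Hashable, Value> {
  private var storage = [Key: Value]()
  private let lock = NSLock()

  // each read takes the lock, so it can never overlap with a write
  func value(forKey key: Key) -> Value? {
    lock.lock()
    defer { lock.unlock() }
    return storage[key]
  }

  // each write takes the lock, so only one mutation happens at a time
  func setValue(_ value: Value, forKey key: Key) {
    lock.lock()
    defer { lock.unlock() }
    storage[key] = value
  }
}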

I wrote a post about an @Atomic property wrapper that I've seen making the rounds. In that post, you can see why this property wrapper does not guarantee exclusive access properly for value types, which can lead to strange results.

If you want to learn more about atomicity and see an example of atomicity in Swift, I highly recommend you give that post a look.

If you're just looking for a definition, think of atomic as exclusive or one at a time. When an operation is performed atomically, you know that no other operations will interfere with your atomic operation.

I hope this quick tip gave you a better idea of what atomic means in programming. If you have any questions about this post or if you have suggestions to make it better, feel free to reach out on Twitter.

10 things iOS developers should focus on in 2021

I know. This is a clickbaity title. And yes, I know that this list is not relevant for everybody. I know that not every iOS developer has to learn everything on this list.

That said, this list is a list of technologies and skills that I think are either already important, or becoming increasingly important this year.

It's a list of technologies and skills that I have learned, plan to learn, or would like to learn this year.

It's also a list that hopefully inspires you to broaden your horizons, and learn new things. Or maybe this list inspires you to refresh your knowledge of things that you've looked at before but haven't paid attention to for a while.

Disclaimer: I wasn't paid to link to any of the creators that I mention in this post. I also don't receive any kickback for any purchases you make through my links (except for my own books, I do make money off those).

Combine

Apple released the Combine framework with iOS 13. Combine is Apple's functional reactive programming framework that's similar to RxSwift, but also very different. Combine's main selling point is that it's a first-party framework. This means that it will be maintained by Apple (hopefully for a long time), and updated with releases of Apple's OSes which is both great, and a downside.

In any event, my goal isn't to convince you that Combine is great. It's also not my goal to convince you Combine isn't great.

I'll leave it up to you to make up your mind about Combine.

There's no denying that Apple is betting big on Combine, and that it's worth taking a look at it.

Especially because SwiftUI makes heavy use of Combine.

If you want to learn more about Combine, I'd like to recommend my Practical Combine book to help you get up and running.

SwiftUI

I don't think you can talk about iOS development these days without at least mentioning SwiftUI. In fact, I think SwiftUI is quickly becoming more and more important in the land of iOS.

At this point, it's not likely that you'll need to know SwiftUI to be employable in the short term. I do believe that SwiftUI is an important framework to learn, and it can certainly give you an edge when looking for jobs.

Some good resources to look at if you want to learn SwiftUI are Apple's tutorials, Paul Hudson's 100 days of SwiftUI, objc.io's Thinking in SwiftUI, Daniel Steinberg's SwiftUI Kickstart, and Majid Jabrayilov's website.

Of course there are many, many more resources. These are just some of my personal favorites.

Whether SwiftUI is production-ready or not is an interesting discussion at this time. There certainly are rough edges, and we're collectively figuring out how to properly write apps in SwiftUI. A popular architecture for SwiftUI apps that you might want to look at is pointfree.co's Composable Architecture.

XCTest

If there's one thing we all know we should do, but regularly skip, can't do, won't do, or simply forget about, it's unit testing.

My personal motivation to write tests whenever I can is that it allows me to know something works rather than thinking it should still work after making changes somewhere else in my codebase. Unless I've tested it, I can't be any more certain than thinking something works. Automated tests make sure that I never forget to test certain features, and they're much, much faster than testing manually.

If you're struggling to convince your manager that you should be writing tests, take a look at my talk from 2019 called Adopting TDD in the Workplace. The bottom line is that testing should be part of your process as a developer. Tests help you write decoupled code, and once they are set up, your tests run all the time. This is much faster and more rigorous than manual testing will ever be.

If you want to learn more about unit testing on iOS I can highly recommend Jon Reid's website and his book iOS Unit Testing by Example.

Collection Views

Apple has been busy improving Collection Views over the past few years. Especially iOS 13's compositional collection view layout and diffable data sources were huge improvements to how we use collection views.

In iOS 14, Apple has made more improvements. For example, we now have a collection view list layout that's extremely flexible, and there's a new way to register and dequeue custom cells called cell registration.

If you're not familiar with collection views, or if you haven't looked at the new features yet I highly recommend that you do. Apple really did a fantastic job on collection views. Make sure to check out Apple's sample app to see most of the new features since iOS 13 in action.

Core Data

Even though Core Data isn't a new framework and has its roots set firmly in the realm of Objective-C, it's still a very relevant technology. Apple has invested significant resources into making Core Data easier and nicer to work with, and they even added the ability to sync with iCloud automatically in iOS 13.

Strictly speaking this wasn't Apple's first attempt to add iCloud syncing to Core Data, but it's certainly Apple's best attempt at doing this.

If you've used Core Data before Apple added NSPersistentContainer in iOS 10 and didn't like it, or if you were told to avoid Core Data because it's clunky, bad, inefficient, or hard to work with, I highly recommend that you take another look.

Apple has lots of information about Core Data on their website, and community members like Antoine van der Lee have written a lot about Core Data.

I have done a lot of Core Data related writing myself, and I released a book on the framework at the start of this year called Practical Core Data that I personally really like and would highly recommend to newcomers and people that haven't looked at Core Data for a while.

Instruments

We all want our apps to be free of memory leaks, dropped frames, and other performance problems.

My favorite way to discover performance issues is the Instruments tool. If you've never looked at Instruments before, I think 2021 should be the year that you change that.

Instruments is a fundamental tool that, in my opinion, deserves a place in every iOS developer's toolbox.

To get started with Instruments, you could take a look at this overview that Apple provides.

If that overview is a bit much, you might like this post I wrote on the Time Profiler which is the Instrument I use most (by far).

Communication Skills

Being able to communicate efficiently as a developer is important. Both verbally, and in written form. In my opinion, we're never done improving how we communicate.

That's why, in 2021 I think it's good to take some time to improve your so-called "soft" skills. This will help you become a better team member, a more efficient communicator, and a better listener.

These are skills that I think developers often underestimate which is why it was important for me to add this to the list. (Thanks for the tip HeidiPuk).

Some resources to help you get started are this talk from Ash Furrow and this interview/podcast episode with Sean Allen and Mayuko.

Practice your communication skills, write often, make sure you listen to people, and ask for feedback on your communication skills when possible. If you do this regularly I'm sure you'll be a much stronger communicator by the end of 2021.

Building Universal Apps

Now this is a technology that I personally want to spend a bunch of time on in 2021. For the past couple of years, Apple has been showing us how to build apps that run on iOS and the Mac. First with Catalyst, later with SwiftUI.

Now that Apple's M1 Macs are out and they can run iOS apps natively, I think it's time to start considering the Mac as a platform that we should write our apps for whenever possible. Similar to how we try to make sure most (if not all) of our apps run on iPads as well as iPhones.

Unfortunately I haven't come across any useful resources for this yet. Apple has some WWDC videos that might be interesting, but since I haven't looked at universal apps just yet, I owe you some links.

If you have good universal app resources for me, let me know.

ARKit (and RealityKit)

As rumors of Apple glasses keep growing stronger and stronger, I think it's likely that we'll eventually see them. Maybe in 2021, maybe later.

However, once these glasses are (inevitably) announced, we'll probably want to build apps for them.

My bet is that once we're able to build apps for glasses, we'll do this on top of Apple's Augmented Reality frameworks.

In my opinion, now is a perfect time to start learning ARKit and build some Augmented Reality experiences. Especially if you're interested in possibly making apps for the rumored Apple glasses.

In addition to documentation and WWDC videos for ARKit, Apple provides lots of resources to help you get started with Augmented Reality.

Async / Await

While this feature isn't officially available in Swift yet, as its bits and pieces are still being reviewed on the Swift forums, I think async/await is one of the biggest new things to focus on this year.

I don't know which Swift version will contain an official async/await release but you can experiment with the feature today if you're using the latest Swift build.

Async/await is going to significantly change how we write asynchronous code in Swift, and I'm super excited about it.

If you want to follow along with its development, you can do so on the Swift forums where all reviews and pitches are published.

In Summary

This list of 10 things you should focus on in 2021 is a list that I think is relevant. Of course, some things might not be relevant for you. Or maybe this list is missing important technologies or skills that you think everybody should focus on.

That's okay, I just hope this list gave you a direction of (new) things to learn. Some of the things on my list have been around for a while, others are brand new. If you don't get around to learning the brand new things this year, that's okay. Learn and investigate at your own pace, and focus on what gets you where you want to go.

If you have any feedback on this list, or if you want to share your focus for 2021, send me a Tweet. I love hearing from you.

Observing the result of saving a background managed object context with Combine

I love posts where I get to write about two of my favorite frameworks at the moment: Combine and Core Data.

When you're working with Core Data, it's common to perform save operations asynchronously using a background context. You could even perform an asynchronous save on the main managed object context.

Consider the following method that I added to an object that I wrote called StorageProvider:

public extension StorageProvider {
  func addTask(name: String, description: String,
               nextDueDate: Date, frequency: Int,
               frequencyType: HouseHoldTask.FrequencyType) {

    persistentContainer.performBackgroundTask { context in
      let task = HouseHoldTask(context: context)
      task.name = name
      task.taskDescription = description
      task.nextDueDate = nextDueDate
      task.frequency = Int64(frequency)
      task.frequencyType = Int64(frequencyType.rawValue)

      do {
        try context.save()
      } catch {
        print("Something went wrong: \(error)")
        context.rollback()
      }
    }
  }
}

My StorageProvider has a property called persistentContainer which is an NSPersistentContainer and it contains several useful features like this convenient method to create a new instance of a HouseHoldTask model. The contents and details of this model are not relevant per se.

It's the asynchronous nature of this method that I want you to consider. Note that even if I use persistentContainer.viewContext.perform, the contents of the perform closure are not executed synchronously; addTask returns before the save is completed in both cases.

Now consider the following SwiftUI code:

struct AddTaskView: View {
  // a bunch of properties

  /// Passed in by the parent. When set to false this view is dismissed by its parent
  @Binding var isPresented: Bool

  let storageProvider: StorageProvider

  var body: some View {
    NavigationView {
      Form {
        // A form that's used to configure a task
      }
      .navigationTitle("Add Task")
      .navigationBarItems(leading: Button("Cancel") {
        isPresented = false
      }, trailing: Button("Save") {
        // This is the part I want you to focus on
        storageProvider.addTask(name: taskName, description: taskDescription,
                                nextDueDate: firstOccurrence, frequency: frequency,
                                frequencyType: frequencyType)
        isPresented = false
      })
    }
  }
}

I've omitted a bunch of code in this example and I added a comment that reads This is the part I want you to focus on for the most interesting part of this code.

When the user taps Save, I create a task and dismiss the AddTaskView by setting its isPresented property to false. In my code the view that presents AddTaskView passes a binding to AddTaskView, allowing the parent of AddTaskView to dismiss this view when appropriate.

However, since addTask is asynchronous, we can't respond to any errors that might occur.

If you want to prevent dismissing AddTaskView before the task is saved, you would usually use the viewContext to save your managed object using performAndWait. Your code runs on the viewContext's queue, and it waits for the closure passed to performAndWait to complete. That means you could return a Result<Void, Error> from your addTask method to communicate the result of your save operation to the user.
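As a rough sketch, a synchronous variant of addTask built on performAndWait could look something like this. The addTaskSynchronously name is made up for this example; the parameters mirror the earlier method:

public extension StorageProvider {
  func addTaskSynchronously(name: String, description: String,
                            nextDueDate: Date, frequency: Int,
                            frequencyType: HouseHoldTask.FrequencyType) -> Result<Void, Error> {
    let context = persistentContainer.viewContext
    var result: Result<Void, Error> = .success(())

    // performAndWait blocks until the closure has finished running on the context's queue
    context.performAndWait {
      let task = HouseHoldTask(context: context)
      task.name = name
      task.taskDescription = description
      task.nextDueDate = nextDueDate
      task.frequency = Int64(frequency)
      task.frequencyType = Int64(frequencyType.rawValue)

      do {
        try context.save()
      } catch {
        context.rollback()
        result = .failure(error)
      }
    }

    return result
  }
}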

Usually, a save operation will be quite fast, and running it on the viewContext doesn't do much harm. Of course, there are exceptions where you want your save operation to run in the background to prevent blocking the main thread. And since most save operations will probably succeed, you might even want to allow the UI to continue operating as if the save operation has already succeeded, and show an alert to the user in the (unlikely) scenario that something went wrong. Or maybe you even want to present an alert in case the save operation succeeded.

An interesting way to achieve this is through Combine. You can wrap the Core Data save operation in a Future and use it to update a StateObject in the main view that's responsible for presenting AddTaskView.

I'll show you the updated addTask method first, and then we'll work our way up to addTask from the main view up.

Here's the adjusted addTask method:

public extension StorageProvider {
  func addTask(name: String, description: String,
               nextDueDate: Date, frequency: Int,
               frequencyType: HouseHoldTask.FrequencyType) -> Future<Void, Error> {
    Future { promise in
      self.persistentContainer.performBackgroundTask { context in
        let task = HouseHoldTask(context: context)
        task.name = name
        task.taskDescription = description
        task.nextDueDate = nextDueDate
        task.frequency = Int64(frequency)
        task.frequencyType = Int64(frequencyType.rawValue)

        do {
          try context.save()
          promise(.success(()))
        } catch {
          print("Something went wrong: \(error)")
          promise(.failure(error))
          context.rollback()
        }
      }
    }
  }
}

This setup is fairly straightforward. I create a Future and fulfill it with a success value if everything is good, or fail it with an Error if something went wrong. Note that the Output for this Future is Void. I'm not really interested in publishing any values when everything went okay. I'm more interested in failures.

Tip:
If you're not familiar with Combine's Futures, check out my post on using Future in Combine.

Next, let's take a look at the main view in this scenario; TasksOverview. This view has an Add Task button and presents the AddTaskView:

struct TasksOverview: View {
  static let dateFormatter: DateFormatter = {
    let formatter = DateFormatter()
    formatter.dateStyle = .long
    return formatter
  }()

  @FetchRequest(fetchRequest: HouseHoldTask.sortedByNextDueDate)
  var tasks: FetchedResults<HouseHoldTask>

  @State var addTaskPresented = false

  // !!
  @StateObject var addTaskResult = AddTaskResult()

  let storageProvider: StorageProvider

  var body: some View {
    NavigationView {
      List(tasks) { (task: HouseHoldTask) in
        VStack(alignment: .leading) {
          Text(task.name ?? "--")
          if let dueDate = task.nextDueDate {
            Text("\(dueDate, formatter: Self.dateFormatter)")
          }
        }
      }
      .listStyle(PlainListStyle())
      .navigationBarItems(trailing: Button("Add new") {
        addTaskPresented = true
      })
      .navigationBarTitle("Tasks")
      .sheet(isPresented: $addTaskPresented, content: {
        // !!
        AddTaskView(isPresented: $addTaskPresented,
                    storageProvider: storageProvider,
                    resultObject: addTaskResult)
      })
      .alert(isPresented: $addTaskResult.hasError) {
        // !!
        Alert(title: Text("Could not save task"),
              message: Text(addTaskResult.error?.localizedDescription ?? "unknown error"),
              dismissButton: .default(Text("Ok")))
      }
    }
  }
}

I added three comments in the code above to the places where you should focus your attention. First, I create an @StateObject that holds an AddTaskResult object. I will show you this object in a moment but it'll be used to determine if we should show an error alert and it holds information about the error that occurred.

The second comment I added shows where I initialize my AddTaskView and you can see that I pass the addTaskResult state object to this view.

The third and last comment shows how I present the error alert.

For posterity, here's what AddTaskResult looks like:

class AddTaskResult: ObservableObject {
  @Published var hasError = false
  var error: Error?
}

It's a simple object with a published property that's used to determine whether an error alert should be shown.

Now all we need is a way to link together the Future that's created in addTask and the TasksOverview which will show an alert if needed. This glue code is written in the AddTaskView.

struct AddTaskView: View {
  // this is all unchanged

  // a new property to hold AddTaskResult
  @ObservedObject var resultObject: AddTaskResult

  var body: some View {
    NavigationView {
      Form {
        // form to create a task
      }
      .navigationTitle("Add Task")
      .navigationBarItems(leading: Button("Cancel") {
        isPresented = false
      }, trailing: Button("Save") {
        // this is where it gets interesting
        storageProvider.addTask(name: taskName, description: taskDescription,
                                nextDueDate: firstOccurrence, frequency: frequency,
                                frequencyType: frequencyType)
          .map { return false }
          .handleEvents(receiveCompletion: { completion in
            if case let .failure(error) = completion {
              self.resultObject.error = error
            }
          })
          .replaceError(with: true)
          .receive(on: DispatchQueue.main)
          .assign(to: &resultObject.$hasError)

        // this view is still dismissed as soon as Save is tapped
        isPresented = false
      })
    }
  }
}

In the code above the most important differences are that AddTaskView now has a resultObject property, and I've added some Combine operators after addTask.

Since addTask now returns a Future, we can apply operators to this Future to transform its output. First, I map the default Void output to false. This means that no errors occurred. Then I use a handleEvents operator with a receiveCompletion closure. This allows me to intercept errors and assign the intercepted error to the resultObject's error property so it can be used in TasksOverview later.

Next, I replace any errors that may have occurred with true which means that an error occurred. Since all UI mutations in SwiftUI must originate on the main thread I use receive(on:) to ensure that the operator that follows it will run on the main thread.

Lastly, I use Combine's assign(to:) subscriber to assign the transformed output (a Bool) of the Future to &resultObject.$hasError. This will modify the TasksOverview's @StateObject and trigger my alert to be shown if hasError was set to true.

Because I use an object that is owned by TasksOverview in my assign(to:) the subscription to my Future is kept alive even after AddTaskView is dismissed. Pretty neat, right?

In Summary

In this post, you saw an example of how you can wrap an asynchronous operation, like saving a background managed object context, in a Combine Future. You saw how you can use @StateObject in SwiftUI to determine if and when an error should be presented, and you saw how you can wire everything up so a Core Data save operation ultimately mutates a property on your state object to present an alert.

Complex data flows like these are a lot of fun to play with, and Combine is an incredibly useful tool when you're dealing with situations like the one I described in this article.

If you have any questions about this article, or if you have any feedback for me, don't hesitate to send me a message on Twitter.

Responding to changes in a managed object context

Working with multiple managed object contexts will often involve responding to changes that were made in one context to update another context. You might not even want to update another context but reload your UI or perform some other kind of update. Maybe you want to do this when a specific context updates, or maybe you want to run some code when any context updates.

In this week's post I will show you how you can listen for changes in managed object contexts, and how you can best use these notifications. I will also show you a convenient way to extract information from a Core Data related Notification object through a nice extension.

Subscribing to Core Data related Notifications

Regardless of your specific needs, Core Data has a mechanism that allows you to be notified when a managed object updates. This mechanism plays a key role in objects like NSFetchedResultsController, which tracks a specific managed object context in order to figure out whether specific objects were inserted, deleted or updated. In addition to this, a fetched results controller also tracks whether the position of a managed object within a result set has changed, which is not something that you can trivially track yourself.

You can monitor and respond to changes in your managed object contexts through NotificationCenter. When your managed object context updates or saves, Core Data will post a notification to the default NotificationCenter object.

For example, you can listen for an NSManagedObjectContext.didSaveObjectsNotification to be notified when a managed object context was saved:

class ExampleViewModel: NSObject {
  override init() {
    super.init()

    let didSaveNotification = NSManagedObjectContext.didSaveObjectsNotification
    NotificationCenter.default.addObserver(self, selector: #selector(didSave(_:)),
                                            name: didSaveNotification, object: nil)
  }

  @objc func didSave(_ notification: Notification) {
    // handle the save notification
  }
}

The example code above shows how you can be notified when any managed object context is saved. The notification you receive here contains a userInfo dictionary that will tell you which objects were inserted, deleted and/or updated. For example, the following code extracts the inserted objects from the userInfo dictionary:

@objc func didSave(_ notification: Notification) {
  // handle the save notification
  let insertedObjectsKey = NSManagedObjectContext.NotificationKey.insertedObjects.rawValue
  print(notification.userInfo?[insertedObjectsKey])
}

Note that NSManagedObjectContext has a nested type called NotificationKey. This type is an enum that has cases for every relevant key that you might want to use. Since the enum case names for the notification keys don't match the strings that you need to access the relevant keys in the userInfo dictionary, it's important that you use the enum's rawValue rather than the enum case directly.

Note that NSManagedObjectContext.NotificationKey is only available on iOS 14.0 and up. For iOS 13.0 and below you can use Notification.Name.NSManagedObjectContextDidSave to listen for save events. For a more complete list of iOS 13.0 notifications I'd like to point you to the "See Also" section on the documentation page for NSManagedObjectContextDidSave which is located here.
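For reference, a pre-iOS 14 version of the observer would look roughly like this, using the older notification name and the classic string-based userInfo keys such as NSInsertedObjectsKey:

// Subscribing, for example in an init
NotificationCenter.default.addObserver(self, selector: #selector(didSave(_:)),
                                        name: .NSManagedObjectContextDidSave, object: nil)

@objc func didSave(_ notification: Notification) {
  // NSInsertedObjectsKey, NSUpdatedObjectsKey and NSDeletedObjectsKey are the pre-iOS 14 userInfo keys
  let inserted = notification.userInfo?[NSInsertedObjectsKey] as? Set<NSManagedObject>
  print(inserted ?? [])
}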

I'm not a big fan of how verbose this userInfo access is, so I like to use an extension on Dictionary to help me out:

extension Dictionary where Key == AnyHashable {
  func value<T>(for key: NSManagedObjectContext.NotificationKey) -> T? {
    return self[key.rawValue] as? T
  }
}

This extension is very simple, but it allows me to write the code from before as follows, which is much cleaner:

@objc func didSave(_ notification: Notification) {
  // handle the save notification
  let inserted: Set<NSManagedObject>? = notification.userInfo?.value(for: .insertedObjects)
  print(inserted)
}

We could take this even further with an extension on Notification specifically for Core Data related notifications:

extension Notification {
  var insertedObjects: Set<NSManagedObject>? {
    return userInfo?.value(for: .insertedObjects)
  }
}

This extension would be used as follows:

@objc func didSave(_ notification: Notification) {
  // handle the save notification
  let inserted = notification.insertedObjects
  print(inserted)
}

I like how clean the call site is here. The main downside is that we can't constrain the extension to Core Data related notifications only, and we'll need to manually add computed properties for every notification key. For example, to extract all updated objects through a Notification extension you'd have to add the following property to the extension I showed you earlier:

var updatedObjects: Set<NSManagedObject>? {
  return userInfo?.value(for: .updatedObjects)
}

It's not a big deal to add these computed properties manually, and it can clean up your code quite a bit so it's worth the effort in my opinion. Whether you want to use an extension like this is really a matter of preference so I'll leave it up to you to decide whether you think this is a good idea or not.

Let's get back on topic; this isn't a section about building convenient extensions after all. It's about observing managed object context changes.

The code I showed you earlier subscribed to the NSManagedObjectContext.didSaveObjectsNotification in a way that would notify you every time any managed object context saves. You can constrain this to a specific context as follows:

let didSaveNotification = NSManagedObjectContext.didSaveObjectsNotification
let targetContext = persistentContainer.viewContext
NotificationCenter.default.addObserver(self, selector: #selector(didSave(_:)),
                                        name: didSaveNotification, object: targetContext)

By passing a reference to a managed object context you can make sure that you're only notified when a specific managed object context was saved.

Imagine that you have two managed object contexts. A viewContext and a background context. You want to update your UI whenever one of your background contexts saves, triggering a change in your viewContext. You could subscribe to all managed object context did save notifications and simply update your UI when any context got saved.

This would work fine if you have set automaticallyMergesChangesFromParent on your viewContext to true. However, if you've set this property to false, you'll find that your viewContext might not yet have merged in the changes from the background context, which means that updating your UI will not always show the latest data.
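If you do want that automatic merging, enabling it is a single line that you'd typically run when setting up your Core Data stack (assuming a persistentContainer property like in the other examples):

persistentContainer.viewContext.automaticallyMergesChangesFromParent = true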

You can make sure that a managed object context merges changes from another managed object context by subscribing to the didSaveObjectsNotification and merging in any changes that are contained in the received notification as follows:

@objc func didSave(_ notification: Notification) {
  persistentContainer.viewContext.mergeChanges(fromContextDidSave: notification)
}

Calling mergeChanges on a managed object context will automatically refresh any managed objects that have changed. This ensures that your context always contains all the latest information. Note that you don't have to call mergeChanges on a viewContext when you set its automaticallyMergesChangesFromParent property to true. In that case, Core Data will handle the merge on your behalf.

In addition to knowing when a managed object context has saved, you might also be interested in when its objects changed. For example, because the managed object merged in changes that were made in another context. If this is what you're looking for, you can subscribe to the didChangeObjectsNotification.

This notification has all the same characteristics as didSaveObjectsNotification except it's fired when a context's objects change. For example when it merges in changes from another context.
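Subscribing works just like it did for the save notification. For example, to be notified whenever the viewContext's objects change (again assuming a persistentContainer property and a hypothetical didChange(_:) handler):

let didChangeNotification = NSManagedObjectContext.didChangeObjectsNotification
NotificationCenter.default.addObserver(self, selector: #selector(didChange(_:)),
                                        name: didChangeNotification,
                                        object: persistentContainer.viewContext)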

The notifications that I've shown you so far always contain managed objects in their userInfo dictionary, which gives you full access to the changed objects as long as you access these objects from the correct managed object context.

This means that if you receive a didSaveObjectsNotification because a context got saved, you can only access the included managed objects on the context that generated the notification. You could manage this by extracting the appropriate context from the notification as follows:

@objc func didSave(_ notification: Notification) {
  guard let context = notification.object as? NSManagedObjectContext,
        let insertedObjects = notification.insertedObjects as? Set<ToDoItem> else {
    return
  }
  context.perform {
    for object in insertedObjects {
      print(object.dueDate)
    }
  }
}

While this works, it's not always appropriate.

For example, it could make perfect sense for you to want to access the inserted objects on a different managed object context for a variety of reasons.

Extracting managed object IDs from a notification

When you want to pass managed objects from a notification to a different context, you could of course extract the managed object IDs and pass them to a different context as follows:

@objc func didSave(_ notification: Notification) {
  guard let insertedObjects = notification.insertedObjects else {
    return
  }

  let objectIDs = insertedObjects.map(\.objectID)

  for id in objectIDs {
    if let object = try? persistentContainer.viewContext.existingObject(with: id) {
      // use object in viewContext, for example to update your UI
    }
  }
}

This code works, but we can do better. In iOS 14 it's possible to subscribe to Core Data's notifications and only receive object IDs. For example, you could use the insertedObjectIDs notification key to obtain the object IDs of all newly inserted objects.

The Notification extension property to get convenient access to insertedObjectIDs would look as follows:

extension Notification {
  // other properties

  var insertedObjectIDs: Set<NSManagedObjectID>? {
    return userInfo?.value(for: .insertedObjectIDs)
  }
}

You would then use the following code to extract managed object IDs from the notification and use them in your viewContext:

@objc func didSave(_ notification: Notification) {
  guard let objectIDs = notification.insertedObjectIDs else {
    return
  }

  for id in objectIDs {
    if let object = try? persistentContainer.viewContext.existingObject(with: id) {
      // use object in viewContext, for example to update your UI
    }
  }
}

It doesn't save you a ton of code but I do like that this notification is more explicit in its intent than the version that contains full managed objects in its userInfo.

In Summary

Notifications can be an incredibly useful tool when you're working with any number of managed object contexts, but I find them most useful when working with multiple managed object contexts. In most cases you'll be interested in the didChangeObjectsNotification for the viewContext only. The reason for this is that it's often most useful to know when your viewContext has merged in data that may have originated in another context. Note that didChangeObjectsNotification also fires when you save a context.

This means that when you subscribe to didChangeObjectsNotification on the viewContext and you insert new objects into the viewContext and then call save(), the didChangeObjectsNotification for your viewContext will fire.

When you use NSFetchedResultsController or SwiftUI's @FetchRequest you may not need to manually listen for notifications often. But it's good to know that these notifications exist, and to understand how you can use them in cases where you're doing more complex and custom work.

If you have any questions about this post, or if you have feedback for me you can reach out to me on Twitter.

Building a concurrency-proof token refresh flow in Combine

Refreshing access tokens is a common task for many apps that use OAuth or another authentication mechanism. No matter what your authentication mechanism is, your tokens will expire (eventually) and you'll need to refresh them using a refresh token. Frameworks like RxSwift and Combine provide convenient ways to build pipelines that perform transformation after transformation on a successful network response, allowing you to grab Data, manipulate it, and transform it into an instance of a model object or anything else.

Programming the not-so-happy path where you need to refresh a token is not as simple. Especially because in an ideal world you only fire a single token refresh request, even if multiple requests fail due to a token error at the same time. You'll want to retry every request as soon as possible without firing more than one token refresh call.

The trick to building something like this is partly a Combine problem (what type of publisher can/should we use) but mostly a concurrency problem (how do we ensure that we only perform a single network call).

In this week's post I'll take a closer look at this problem and show a solution that should be able to hold up even if you're hammering it with token refresh requests.

Setting up a simple networking layer

In this post I will set up a simple mock networking layer that will allow you to experiment with the solution provided in this post even if you don't have a back-end or a token that requires refreshing. I'll start by showing you my models:

struct Token: Decodable {
  let isValid: Bool
}

struct Response: Decodable {
  let message: String
}

enum ServiceErrorMessage: String, Decodable, Error {
  case invalidToken = "invalid_token"
}

struct ServiceError: Decodable, Error {
  let errors: [ServiceErrorMessage]
}

These are just a few simple models. The Token is the key actor here since it's used to authenticate network calls. The Response object models a successful request, and ServiceError and ServiceErrorMessage represent the response that we'll get in case a user isn't authenticated due to a bad or expired token. Your back-end will probably return something entirely different, and your Token will probably have an expiresAt or expiresIn field that you would use to determine if the current device clock is past the token's expected expiration date. Since different servers might use different mechanisms to let their clients know about a token's moment of expiration I won't detail that here. Just make sure that your version of isValid is based on the token's expiration time.
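As an illustration of that last point, a more realistic Token might look something like this sketch. The field names are assumptions, and the rest of this post sticks with the simplified isValid flag from above:

struct Token: Decodable {
  let accessToken: String
  let expiresAt: Date

  // The token counts as valid while the expiration date is in the future.
  // Configure your JSONDecoder's dateDecodingStrategy to match your back-end's date format.
  var isValid: Bool {
    return expiresAt > Date()
  }
}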

The networking itself is modelled by this protocol:

protocol NetworkSession: AnyObject {
  func publisher(for url: URL, token: Token?) -> AnyPublisher<Data, Error>
}

Using a protocol for this will help me swap out URLSession for a mock object that allows me to easily experiment with different responses. Note that I'm using Data as an output for my publisher here. This means that callers for publisher(for:token:) wouldn't have access to the response object that's returned by a data task publisher. That's not a problem for me in this case but if it is for you, make sure that you adjust the output (and adapt the code from this post) accordingly.

Here's what my mock networking object looks like:

class MockNetworkSession: NetworkSession {
  func publisher(for url: URL, token: Token? = nil) -> AnyPublisher<Data, Error> {
    let statusCode: Int
    let data: Data

    if url.absoluteString == "https://donnys-app.com/token/refresh" {
      print("fake token refresh")
      data = """
      {
        "isValid": true
      }
      """.data(using: .utf8)!
      statusCode = 200
    } else {
      if let token = token, token.isValid {
        print("success response")
        data = """
        {
          "message": "success!"
        }
        """.data(using: .utf8)!
        statusCode = 200
      } else {
        print("not authenticated response")
        data = """
        {
          "errors": ["invalid_token"]
        }
        """.data(using: .utf8)!
        statusCode = 401
      }
    }

    let response = HTTPURLResponse(url: url, statusCode: statusCode, httpVersion: nil, headerFields: nil)!

    // Use Deferred future to fake a network call
    return Deferred {
      Future { promise in
        DispatchQueue.global().asyncAfter(deadline: .now() + 1, execute: {
          promise(.success((data: data, response: response)))
        })
      }
    }
    .setFailureType(to: URLError.self)
    .tryMap({ result in
      guard let httpResponse = result.response as? HTTPURLResponse,
            httpResponse.statusCode == 200 else {

        let error = try JSONDecoder().decode(ServiceError.self, from: result.data)
        throw error
      }

      return result.data
    })
    .eraseToAnyPublisher()
  }
}

While there's a bunch of code here, it's really not that complex. The first couple of lines only check which endpoint we're calling and whether we received a valid token. Depending on these variables I either return a refreshed token, a success response, or an error response. In reality you would of course make a call to your server rather than hardcode responses like I did here. I use Combine's Future to publish my prepared response with a delay. I also do some processing on this response, like I would in a real implementation, to check which HTTP status code I ended up with. If I get a non-200 status code I decode the body data into a ServiceError and fail the publisher by throwing an error that we can catch later when we call publisher(for:token:). If I got a 200 status code I return a Data object.

While this code might look a bit silly in the context of my mock, let's take a look at how you can extend URLSession to make it conform to NetworkSession:

extension URLSession: NetworkSession {
  func publisher(for url: URL, token: Token?) -> AnyPublisher<Data, Error> {
    var request = URLRequest(url: url)
    if token != nil {
      // Insert your real token value here; bearer tokens conventionally go in the "Authorization" header
      request.setValue("Bearer <access token>", forHTTPHeaderField: "Authorization")
    }

    return dataTaskPublisher(for: request)
      .tryMap({ result in
        guard let httpResponse = result.response as? HTTPURLResponse,
              httpResponse.statusCode == 200 else {

          let error = try JSONDecoder().decode(ServiceError.self, from: result.data)
          throw error
        }

        return result.data
      })
      .eraseToAnyPublisher()
  }
}

This extension looks a lot more reasonable. It's not quite as useful as I'd like because you'll probably want to have a version of publisher(for:) that takes a URLRequest that you configure in your networking layer. But again, my point isn't to teach you how to abstract your networking layer perfectly. It's to show you how you can implement a token refresh flow in Combine that can deal with concurrent requests. The abstraction I've written here is perfect to provide some scaffolding for this.
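For reference, a URLRequest-based variant of this extension could look something like the following sketch; it mirrors the URL-based version and reuses the same error decoding:

extension URLSession {
  func publisher(for request: URLRequest) -> AnyPublisher<Data, Error> {
    return dataTaskPublisher(for: request)
      .tryMap({ result in
        guard let httpResponse = result.response as? HTTPURLResponse,
              httpResponse.statusCode == 200 else {

          let error = try JSONDecoder().decode(ServiceError.self, from: result.data)
          throw error
        }

        return result.data
      })
      .eraseToAnyPublisher()
  }
}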

The final piece in this puzzle (for now) is a networking object that makes an authenticated request:

struct NetworkManager {
  private let session: NetworkSession

  init(session: NetworkSession = URLSession.shared) {
    self.session = session
  }

  func performAuthenticatedRequest() -> AnyPublisher<Response, Error> {
    let url = URL(string: "https://donnys-app.com/authenticated/resource")!

    return session.publisher(for: url, token: nil)
      .decode(type: Response.self, decoder: JSONDecoder())
      .eraseToAnyPublisher()
  }
}

This code doesn't quite meet the mark, and that's fine. We'll fix it in the next section.

When we call performAuthenticatedRequest we want to obtain a token from somewhere and then pass this token to publisher(for:token:). If it turns out that this token is invalid, we want to try and refresh it exactly once. If we obtain a new token and still aren't authenticated, it's not very likely that refreshing the token again will fix this. It's probably a better idea to ask the user to log in again or head down some other recovery path that's appropriate for your use case. The key component here is that we build an object that can provide tokens to objects that require them, obtain new tokens as needed, and most importantly does this gracefully without duplicate refresh requests. Let's see how.

Building an authenticator

Authenticator, token provider, authentication manager; name it what you will. I will call it an authenticator since it handles the user's authentication status. The name doesn't matter much, just pick something that reads well in your codebase.

The idea of an authenticator is that when asked for a valid token it can go down four routes:

  1. A valid token exists and should be returned
  2. We don't have a token so the user should log in
  3. A token refresh is in progress so the result should be shared
  4. No token refresh is in progress so we should start one

Each of the four scenarios above should produce a publisher, and this should all happen in a single method that returns a publisher that emits a token.

Before I show you my implementation for this method, I want to show you the skeleton for my authenticator:

class Authenticator {
  private let session: NetworkSession
  private var currentToken: Token? = Token(isValid: false)
  private let queue = DispatchQueue(label: "Authenticator.\(UUID().uuidString)")

  // this publisher is shared amongst all calls that request a token refresh
  private var refreshPublisher: AnyPublisher<Token, Error>?

  init(session: NetworkSession = URLSession.shared) {
    self.session = session
  }

  func validToken(forceRefresh: Bool = false) -> AnyPublisher<Token, Error> {
    // magic...
  }
}

Since we'll need to make a network call if the token requires refreshing, the authenticator depends on a NetworkSession. It will also keep track of the current token. In this case I use an invalid token as the default. In an app you'll probably want to grab the current token from the user's keychain and use nil as a default token so you can show a login screen if no token exists.

The authenticator will need to deal with concurrency gracefully, and the refreshPublisher property will be used to determine if a refresh is in progress. Since multiple threads could access refreshPublisher at the same time, we want to make sure that only one of them can read or write refreshPublisher at a time. This is what the queue property is used for. When I kick off a request I assign a value to refreshPublisher, and when the request completes I set this property to nil again.

Learn more about concurrency and synchronizing access in my post on DispatchQueue.sync and DispatchQueue.async.

Note that my validToken method takes a forceRefresh argument. This argument is used to tell the authenticator that it should refresh a token even if it might look like the token is still valid. We'll pass true for this argument in case we get a token error from the server back in the NetworkManager. You'll see why in a moment.

Let's look at the implementation of validToken(forceRefresh:):

func validToken(forceRefresh: Bool = false) -> AnyPublisher<Token, Error> {
  return queue.sync { [weak self] in
    // scenario 1: we're already loading a new token
    if let publisher = self?.refreshPublisher {
      return publisher
    }

    // scenario 2: we don't have a token at all, the user should probably log in
    // (this guard also unwraps self so the rest of the method can use it safely)
    guard let self = self, let token = self.currentToken else {
      return Fail(error: AuthenticationError.loginRequired)
        .eraseToAnyPublisher()
    }

    // scenario 3: we already have a valid token and don't want to force a refresh
    if token.isValid, !forceRefresh {
      return Just(token)
        .setFailureType(to: Error.self)
        .eraseToAnyPublisher()
    }

    // scenario 4: we need a new token
    let endpoint = URL(string: "https://donnys-app.com/token/refresh")!
    let publisher = self.session.publisher(for: endpoint, token: nil)
      .share()
      .decode(type: Token.self, decoder: JSONDecoder())
      .handleEvents(receiveOutput: { [weak self] token in
        self?.currentToken = token
      }, receiveCompletion: { [weak self] _ in
        self?.queue.sync {
          self?.refreshPublisher = nil
        }
      })
      .eraseToAnyPublisher()

    self.refreshPublisher = publisher
    return publisher
  }
}

The entire body for validToken(forceRefresh:) is executed sync on my queue to ensure that I don't have any data races for refreshPublisher. The initial scenario is simple. If we have a refreshPublisher, a request is already in progress and we should return the publisher that we stored earlier. The second scenario occurs if a user hasn't logged in at all or their token went missing. In this case I fail my publisher with an error I defined to tell subscribers that the user should log in. For posterity, here's what that error looks like:

enum AuthenticationError: Error {
  case loginRequired
}

If we have a token that's valid and we're not forcing a refresh, then I use a Just publisher to return a publisher that will emit the existing token immediately. No refresh needed.

Lastly, if we don't have a token that's valid I kick off a refresh request. Note that I use the share() operator to make sure that any subscribers to my refresh request share the output from the initial request. Normally if you subscribe to the same data task publisher more than once, it will kick off a network call for each subscriber. The share() operator makes sure that all subscribers receive the same output without triggering a new request.

I don't subscribe to the output of my refresh request; that's up to the caller of validToken(forceRefresh:). Instead, I use handleEvents to hook into the receiveOutput and receiveCompletion events. When my request produces a token, I cache it for future use. In your app you'll probably want to store the obtained token in the user's keychain. When the refresh request completes (either successfully or with an error) I set refreshPublisher to nil. Note that I wrap this in self?.queue.sync again to avoid data races.

Now that you have an authenticator, let's see how it can be used in the NetworkManager from the previous section.

Using the authenticator in your networking code

Since the authenticator should act as a dependency of the network manager, we'll need to make some changes to its init code:

private let session: NetworkSession
private let authenticator: Authenticator

init(session: NetworkSession = URLSession.shared) {
  self.session = session
  self.authenticator = Authenticator(session: session)
}

The same Authenticator can now be used in every network call you make through a single instance of NetworkManager. All that's left to do is use this authenticator in every network call that NetworkManager can perform.

In this case that's only a single method but for you it could be many, many more methods. Make sure that they all use the same instance of Authenticator.

Let's see what my finished example of performAuthenticatedRequest looks like:

func performAuthenticatedRequest() -> AnyPublisher<Response, Error> {
  let url = URL(string: "https://donnys-app.com/authenticated/resource")!

  return authenticator.validToken()
    .flatMap({ token in
      // we can now use this token to authenticate the request
      session.publisher(for: url, token: token)
    })
    .tryCatch({ error -> AnyPublisher<Data, Error> in
      guard let serviceError = error as? ServiceError,
            serviceError.errors.contains(ServiceErrorMessage.invalidToken) else {
        throw error
      }

      return authenticator.validToken(forceRefresh: true)
        .flatMap({ token in
          // we can now use this new token to authenticate the second attempt at making this request
          session.publisher(for: url, token: token)
        })
        .eraseToAnyPublisher()
    })
    .decode(type: Response.self, decoder: JSONDecoder())
    .eraseToAnyPublisher()
}

Before making my network call I call authenticator.validToken(). This will produce a publisher that emits a valid token. If we already have a valid token then the valid token will be published immediately. If we have a token that appears to be expired, validToken() will fire off a refresh immediately and we'll receive a refreshed token eventually. This means that the token that's passed to the flatMap which comes after validToken() should always be valid unless something strange happened and the validity of our token isn't what it looked like initially.

By using flatMap on validToken() you can grab the token and use it to create a new publisher. In this case that should be your network call.

After my flatMap I use tryCatch. Since the publisher(for:token:) implementation is expected to throw an error and fail the publisher if we receive a non-200 HTTP status code, we'll want to handle this in the tryCatch.

I check whether the error I received in my tryCatch is indeed a ServiceError and that its errors array contains ServiceErrorMessage.invalidToken. If I receive something else this could mean that the authenticator noticed that we don't have a token and it failed with a loginRequired error. It could also mean that something else went wrong. We want all these errors to be forwarded to the caller of performAuthenticatedRequest(). But if we received an error due to an expired token, we'll want to attempt one refresh to be sure we can't recover.

Note that I call validToken and pass forceRefresh: true at this point. The reason for this is that I already called validToken before and didn't force a refresh. The token that the authenticator holds appears to be valid but for some reason it's not. We'll want to tell the authenticator to refresh the token even if the token looks valid.

On the next line I flatMap over the output of validToken(forceRefresh:) just like I did before to return a network call.

Either the flatMap or the tryCatch will produce a publisher that emits Data. I can call decode on this publisher to obtain an instance of Response.

The whole chain is erased to AnyPublisher so my return type for performAuthenticatedRequest() is AnyPublisher<Response, Error>.

It takes some setup, and it's definitely not something you'll wrap your head around easily, but this approach makes a ton of sense once you've let it sink in. Especially because we begin our initial request with an access token that should be valid, and that has already been refreshed if it appeared to be expired locally. A single additional refresh is attempted if the initial token turns out to be invalid anyway, for example because the device clock is off, or because the token was marked as expired on the server for security or other reasons.

If the token was refreshed successfully but we still can't perform our request, it's likely that something else is off and it's highly unlikely that refreshing again will alleviate the issue.

In Summary

In this week's post you saw an approach that uses Combine and DispatchQueue.sync to build a token refresh flow that can handle multiple incoming requests at the same time without firing off new requests when a token refresh is already in progress. The implementation I've shown you will pro-actively refresh the user's token if it's already known that the token is expired. The implementation also features a forced refresh mechanism so you can trigger a token refresh at will, even if the locally cached token appears to be valid.

Flows like these are often built on top of arbitrary requirements, and not every service will work well with the same approach. For that reason I tried to focus on the authenticator itself and the mechanisms that I use to synchronize access and share a publisher, rather than showing you how you can design a perfect networking layer that integrates nicely with my Authenticator. I did show you a basic setup that can be thought of as a nice starting point.

If you have any questions or feedback about this article please let me know on Twitter.

Building a simple remote configuration loader for your apps

Remote configuration is a common practice in almost every app I have worked on. Sometimes these configurations can be large and the implications of a configuration change can be far-reaching while other times a configuration is used to change the number of items shown in a list, or to enable or disable certain features. You can even use remote configuration to set up your networking layer. For example by setting certain headers on a request, providing endpoints for your remote data, and more.

In this week's post I will not go into detail about every possible use case that you might have for a remote configuration. Instead, I will show you how you can create a remote configuration, host it, load it, and cache it for subsequent launches.

While it might be tempting to look at a third-party solution for a feature like this, I want to show you how you can set up a remote configuration yourself because it's much more straightforward than you might think.

Hosting a remote configuration file

There are countless ways to create and host a configuration file. For example, you can generate a configuration file using a Content Management System (CMS), write a JSON file by hand, or generate JSON files by concatenating several JSON files into a single, larger JSON file. In this post we'll keep it simple and write a JSON file by hand.

In terms of hosting your remote configuration file, you could upload the file to a server that you own, use a static file server like Amazon's S3, or use any other platform that allows you to serve JSON files. As you can imagine there are tons of solutions that allow you to serve a static file but I'd like to keep things simple and straightforward.

My personal choice for hosting static files is Amazon's S3. I already have an account there, and creating S3 buckets is fairly simple. And most importantly, it's cheap. Amazon has a generous free tier and even when your data traffic exceeds the free tier, S3 is still very affordable.

You can create an S3 bucket by signing up for an AWS account at https://aws.amazon.com. Just to be clear, I am not affiliated with Amazon and will not receive any money if you create an account through me. They are not paying me to promote their services in any way.

After you've signed up for an AWS account go to the S3 page and click the big Create bucket button. Choose an appropriate name for your bucket (for example com.yourname.app-config) and choose a region. The region you pick is the physical location where your bucket is stored. This means that if you pick EU (Frankfurt) your app config will be served from a server in Frankfurt. It's typically a good idea to choose a region that you would consider to be close to the majority of your users. The closer the server is to your users, the less latency they will experience while loading your app config. However, since we'll be caching the app config and including an initial config in the app bundle later, you don't have to worry about this too much. Your users won't notice if the config loads slightly slower than you'd like.

After choosing a name and region, click Next. On this screen, you can uncheck the Block all public access option and accept the warning that pops up. The whole purpose of this bucket is to be public so you can serve your config from it. Click Next again and then click Create Bucket. That's it, you've created your first Amazon S3 bucket!

To test your bucket, create a file called config.json on your Desktop (or in any other location you find convenient) and add the following contents to it:

{
  "minVersion": "1.0.0"
}

Select your bucket in the overview on your S3 page and click the Upload button to add a file to your bucket. Select your config.json file and set the Managed public permissions field to Grant public read access to this object(s):

Screenshot of the correct settings for the uploaded config file

After doing this, click the Upload button.

Your file will upload and be visible in your bucket. Click the file name to inspect its details. The last field in the detail view is called Object URL and it contains your config's URL. For example my config was uploaded to https://s3.eu-central-1.amazonaws.com/com.donnywals.blog/config.json.

And that's it! You've uploaded your first config to an S3 bucket. Now let's see how you can use this config in an app.

Loading a remote config in your application

Once you have uploaded your app configuration somewhere, you can start thinking about adding it to your app. There are a few important requirements that I think are essential to a good app config setup:

  • The app must be able to function offline or if config loading fails
  • It should be possible to mock the app config for testing purposes
  • Config changes should be applied as soon as possible

The first two requirements are essential in my opinion. If your app doesn't work without loading a configuration first, your users will likely experience frustratingly slow startup times that get worse as their network quality degrades. Worse, they wouldn't be able to use your app at all while they're offline.

The third requirement is a somewhat optional requirement that might not be realistic for every configuration property that you use. For example, if you have config flags for your UI you might not want to immediately redraw your UI when you've loaded a new config. You can probably use your older configuration until the next time the user launches your app. Other features like an update prompt are much easier to implement in a way that's dynamic and applied as soon as your configuration is loaded. In this post I'll show you how you can use Combine to subscribe to changes in your configuration.

First, let's see how you can add a default configuration to your app, load a new configuration, and store this new configuration for later use.

We'll write a ConfigProvider class that takes care of all of this. Since we want to pass this object around without copying it, we must use a class for this object. If you made this a struct, Swift would create a copy every time you pass your config provider to a view model, view, or view controller. This would be fine if the config provider were completely immutable, but it's not. When the app initially launches we'll load a cached config, and when this config updates we want to use the updated configuration, which means we need to mutate the config provider by making it point to a new config.
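
To make the difference concrete, here's a small, standalone sketch of value versus reference semantics. The ValueConfig and ReferenceConfig types below are purely illustrative and aren't part of the config loader we're building:

// Illustrative only: a struct is copied on assignment, a class instance is shared.
struct ValueConfig {
  var minVersion: String
}

class ReferenceConfig {
  var minVersion: String

  init(minVersion: String) {
    self.minVersion = minVersion
  }
}

let structA = ValueConfig(minVersion: "1.0.0")
var structB = structA            // structB is a copy
structB.minVersion = "2.0.0"
print(structA.minVersion)        // "1.0.0", the original is unchanged

let classA = ReferenceConfig(minVersion: "1.0.0")
let classB = classA              // classB points to the same instance
classB.minVersion = "2.0.0"
print(classA.minVersion)         // "2.0.0", both names see the mutation

Because every object that holds on to the same ConfigProvider should see updated configuration values, the reference semantics of a class are exactly what we want here.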

Before we write this ConfigProvider, copy the config.json file you created to your project in Xcode and make sure it's included in your app by setting its Target Membership.

You should also define a struct that your configuration JSON is decoded into. For the sample config you wrote earlier, the struct would look like this:

struct AppConfig: Codable {
  let minVersion: String
}

You can name this struct whatever you want; just make sure its properties match the JSON that you use in your configuration file.
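
If your configuration grows later on, the struct and the JSON simply need to stay in sync. As a purely hypothetical example (the rest of this post sticks with minVersion only), adding a showPromoBanner feature flag would look like this:

// Hypothetical extended config; not used elsewhere in this post.
struct AppConfig: Codable {
  let minVersion: String
  let showPromoBanner: Bool
}

// The matching config.json would then contain:
// {
//   "minVersion": "1.0.0",
//   "showPromoBanner": false
// }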

Let's start building our ConfigProvider by writing a skeleton class that contains the properties and methods that I'd like to use:

class ConfigProvider {

  private(set) var config: AppConfig

  func updateConfig() {
    // here we'll load the config
  }
}

This API is nice and simple. There's a config property that can be used to retrieve the current config, and there's an updateConfig() method that can be used to fetch a remote configuration and update the config property.

The ConfigProvider must be able to load local and remote configuration files as needed. The provider should always be able to return a configuration, even if no remote config was ever loaded. That's why you added config.json to the app bundle earlier.

To keep functionality separated, we'll create two helper objects that are abstracted behind a protocol so they can be swapped out when testing the config provider object. Let's define these protocols first:

import Combine

protocol LocalConfigLoading {
  func fetch() -> AppConfig
  func persist(_ config: AppConfig)
}

protocol RemoteConfigLoading {
  func fetch() -> AnyPublisher<AppConfig, Error>
}

Both protocols are pretty lean. A LocalConfigLoading object is capable of fetching a local configuration and persisting an AppConfig to the file system. A RemoteConfigLoading object is capable of loading a configuration from a remote server. Before we implement objects that conform to these protocols, we can already write the ConfigProvider's implementation.

Let's look at the initializer first:

class ConfigProvider {

  private(set) var config: AppConfig

  private let localConfigLoader: LocalConfigLoading
  private let remoteConfigLoader: RemoteConfigLoading

  init(localConfigLoader: LocalConfigLoading, remoteConfigLoader: RemoteConfigLoading) {
    self.localConfigLoader = localConfigLoader
    self.remoteConfigLoader = remoteConfigLoader

    config = localConfigLoader.fetch()
  }

  func updateConfig() {

  }
}

This is nice and straightforward. The ConfigProvider takes two objects in its initializer and uses the local loader's fetch method to set an initial local configuration.

The updateConfig method will use the remote config loader to fetch a new configuration. To make sure we don't have more than one update request in flight at a time, we'll use a DispatchQueue and dispatch to it synchronously. To learn more about this you can read my post on using DispatchQueue.sync and DispatchQueue.async.

Since we need to subscribe to a publisher to obtain the remote config, we also need a property to hold on to a cancellable. The following code should be added to the ConfigProvider:

private var cancellable: AnyCancellable?
private var syncQueue = DispatchQueue(label: "config_queue_\(UUID().uuidString)")

func updateConfig() {
  syncQueue.sync {
    guard self.cancellable == nil else {
      return
    }

    self.cancellable = self.remoteConfigLoader.fetch()
      .sink(receiveCompletion: { [weak self] completion in
        // clear the cancellable so we can start a new load later
        self?.cancellable = nil
      }, receiveValue: { [weak self] newConfig in
        self?.config = newConfig
        self?.localConfigLoader.persist(newConfig)
      })
  }
}

If we already have a cancellable stored, we return early to avoid making two requests at once. If we don't, the remote loader's fetch() method is called. When this call completes, the cancellable is cleaned up so a later call to updateConfig() can kick off a new request. When a new config is received, it is assigned to self?.config to make it available to users of the config provider. I also call persist on the local config loader to make sure the loaded config is available locally.
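
Because both loaders are hidden behind protocols, you can verify this behavior with simple test doubles. The mocks below are a rough sketch of my own and aren't part of the post's final code; the real loaders follow next:

import Combine

// Hypothetical test doubles; the real loaders are implemented below.
class MockLocalConfigLoader: LocalConfigLoading {
  var persistedConfig: AppConfig?

  func fetch() -> AppConfig {
    AppConfig(minVersion: "1.0.0")
  }

  func persist(_ config: AppConfig) {
    persistedConfig = config
  }
}

class MockRemoteConfigLoader: RemoteConfigLoading {
  var configToReturn = AppConfig(minVersion: "2.0.0")

  func fetch() -> AnyPublisher<AppConfig, Error> {
    Just(configToReturn)
      .setFailureType(to: Error.self)
      .eraseToAnyPublisher()
  }
}

In a unit test you could inject these mocks into a ConfigProvider, call updateConfig(), and assert that config is updated to the mocked remote value and that persist(_:) was called on the local loader.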

Let's look at the implementation for the local config loader:

class LocalConfigLoader: LocalConfigLoading {
  private var cachedConfigUrl: URL? {
    guard let documentsUrl = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first else {
      return nil
    }

    return documentsUrl.appendingPathComponent("config.json")
  }

  private var cachedConfig: AppConfig? {
    let jsonDecoder = JSONDecoder()

    guard let configUrl = cachedConfigUrl,
          let data = try? Data(contentsOf: configUrl),
          let config = try? jsonDecoder.decode(AppConfig.self, from: data) else {
      return nil
    }

    return config
  }

  private var defaultConfig: AppConfig {
    let jsonDecoder = JSONDecoder()

    guard let url = Bundle.main.url(forResource: "config", withExtension: "json"),
          let data = try? Data(contentsOf: url),
          let config = try? jsonDecoder.decode(AppConfig.self, from: data) else {
      fatalError("Bundle must include default config. Check and correct this mistake.")
    }

    return config
  }

  func fetch() -> AppConfig {
    if let cachedConfig = self.cachedConfig {
      return cachedConfig
    } else {
      let config = self.defaultConfig
      persist(config)
      return config
    }
  }

  func persist(_ config: AppConfig) {
    guard let configUrl = cachedConfigUrl else {
      // should never happen, you might want to handle this
      return
    }

    do {
      let encoder = JSONEncoder()
      let data = try encoder.encode(config)
      try data.write(to: configUrl)
    } catch {
      // you could forward this error somewhere
      print(error)
    }
  }
}

The most interesting parts are the fetch() and persist(_:) methods. In fetch() I first access self.cachedConfig. This property returns an optional AppConfig. It checks whether a config is stored in the app's documents directory and decodes it. If no file exists, or if decoding fails, this property is nil, which means we should use the default config that was bundled with the app. That config is loaded by the defaultConfig property, and loading it shouldn't fail; if it does, the project is misconfigured and the fatalError makes that obvious.

After grabbing the default config, I pass it to persist(_:) so it's copied to the documents directory, which means it'll be loaded from there on the next launch.

The persist method encodes the AppConfig that it receives and writes it to the documents directory. It's a whole bunch of code, but the principles this is built on are fairly straightforward.
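
If you want to convince yourself that the caching behaves as expected, a quick round-trip check like the sketch below can help. It's only a playground-style sanity check, not production code:

// Rough sanity check for the persist/fetch round trip.
let localLoader = LocalConfigLoader()

// Persist a config whose version differs from the bundled default.
localLoader.persist(AppConfig(minVersion: "1.2.0"))

// The next fetch should prefer the cached file over the bundled config.json.
let fetched = localLoader.fetch()
print(fetched.minVersion) // "1.2.0" if the write succeeded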

Let's look at the remote config loader next:

class RemoteConfigLoader: RemoteConfigLoading {
  func fetch() -> AnyPublisher<AppConfig, Error> {
    let configUrl = URL(string: "https://s3.eu-central-1.amazonaws.com/com.donnywals.blog/config.json")!

    return URLSession.shared.dataTaskPublisher(for: configUrl)
      .map(\.data)
      .decode(type: AppConfig.self, decoder: JSONDecoder())
      .eraseToAnyPublisher()
  }
}

This class is nice and tiny. It uses Combine to load my configuration file from the S3 bucket that I created earlier. I extract the data from the data task's output, decode it into an AppConfig, and then erase to AnyPublisher to keep the return type nice and clean.
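
One optional tweak, which isn't part of the implementation above, is to bypass URLSession's local cache so that a freshly uploaded config is picked up as soon as possible. A sketch of that variation could look like this:

import Combine
import Foundation

// Variation that always goes to the network instead of serving a cached response.
class UncachedRemoteConfigLoader: RemoteConfigLoading {
  func fetch() -> AnyPublisher<AppConfig, Error> {
    let configUrl = URL(string: "https://s3.eu-central-1.amazonaws.com/com.donnywals.blog/config.json")!

    var request = URLRequest(url: configUrl)
    request.cachePolicy = .reloadIgnoringLocalCacheData

    return URLSession.shared.dataTaskPublisher(for: request)
      .map(\.data)
      .decode(type: AppConfig.self, decoder: JSONDecoder())
      .eraseToAnyPublisher()
  }
}

Whether you need this depends on how aggressively your hosting setup caches responses and how quickly configuration changes should reach your users.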

Let's make one last change. When a new config is loaded we want to be able to respond to this. The easiest way to do this is by making ConfigProvider conform to ObservableObject and marking config as @Published:

class ConfigProvider: ObservableObject {

  @Published private(set) var config: AppConfig

  // rest of the code...
}

When using this config loader in a SwiftUI app you could write something like the following in your App struct:

@main
struct ConfigExampleApp: App {
  let configProvider = ConfigProvider(localConfigLoader: LocalConfigLoader(),
                                      remoteConfigLoader: RemoteConfigLoader())

  var body: some Scene {
    WindowGroup {
      ContentView()
        .environmentObject(configProvider)
        .onAppear(perform: {
          self.configProvider.updateConfig()
        })
    }
  }
}

This code makes the config provider available in the ContentView's environment. It also updates the config when the content view's onAppear is called. You can use the config provider in a SwiftUI view like this:

struct ContentView: View {
  @EnvironmentObject var configProvider: ConfigProvider

  var body: some View {
    Text(configProvider.config.minVersion)
      .padding()
  }
}

When the config updates, your view will automatically rerender. Pretty neat, right?
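
As a hypothetical example of putting minVersion to work, you could compare it to the app's bundle version and surface a message when the installed version is too old. The view below is a simplified sketch of my own that assumes plain "x.y.z" version strings:

import SwiftUI

struct VersionGateView: View {
  @EnvironmentObject var configProvider: ConfigProvider

  // Simplified comparison; real-world version checks often need more care.
  private var needsUpdate: Bool {
    let installed = Bundle.main.infoDictionary?["CFBundleShortVersionString"] as? String ?? "0.0.0"
    return installed.compare(configProvider.config.minVersion, options: .numeric) == .orderedAscending
  }

  var body: some View {
    VStack(spacing: 8) {
      Text("Minimum supported version: \(configProvider.config.minVersion)")

      if needsUpdate {
        Text("Please update to the latest version of this app.")
          .foregroundColor(.red)
      }
    }
    .padding()
  }
}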

In a UIKit app you would add a property to your AppDelegate and inject the config provider into your view controller. The code would look a bit like this:

class AppDelegate: NSObject, UIApplicationDelegate {
  let configProvider = ConfigProvider(localConfigLoader: LocalConfigLoader(),
                                      remoteConfigLoader: RemoteConfigLoader())

  var window: UIWindow?

  func application(_ application: UIApplication,
                   didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]? = nil) -> Bool {

    configProvider.updateConfig()

    let window = UIWindow()
    window.rootViewController = ViewController(configProvider: configProvider)
    window.makeKeyAndVisible()

    self.window = window

    return true
  }
}

To receive configuration changes you can subscribe to the provider's $config property as follows:

configProvider.$config.sink { newConfig in
  // use the new config
}
.store(in: &cancellables)
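
For context, here's a rough sketch of how that subscription could live inside the ViewController that the AppDelegate example creates. The label and layout are placeholders of my own:

import Combine
import UIKit

class ViewController: UIViewController {
  private let configProvider: ConfigProvider
  private let versionLabel = UILabel()
  private var cancellables = Set<AnyCancellable>()

  init(configProvider: ConfigProvider) {
    self.configProvider = configProvider
    super.init(nibName: nil, bundle: nil)
  }

  required init?(coder: NSCoder) {
    fatalError("init(coder:) has not been implemented")
  }

  override func viewDidLoad() {
    super.viewDidLoad()

    view.backgroundColor = .systemBackground

    versionLabel.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(versionLabel)
    NSLayoutConstraint.activate([
      versionLabel.centerXAnchor.constraint(equalTo: view.centerXAnchor),
      versionLabel.centerYAnchor.constraint(equalTo: view.centerYAnchor)
    ])

    // Update the label whenever a new config is published; receive on the
    // main queue because the remote config arrives on a background queue.
    configProvider.$config
      .receive(on: DispatchQueue.main)
      .sink { [weak self] newConfig in
        self?.versionLabel.text = newConfig.minVersion
      }
      .store(in: &cancellables)
  }
}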

Of course the exact usage will vary per app, but I'm sure this should help you to get started. The main point is that you know how you can load a remote config and cache it locally for future usage.

In Summary

In this week's post you have seen several interesting techniques. You learned how you can upload a configuration file for your app to an S3 bucket. You saw how you can load this file and cache it locally for future use. The contents of the config I've shown you are very basic but you can add tons of information to your config file. Some ideas are a minimum version that your users should have installed or feature flags to enable or disable app features remotely.

I've also shown you how you can make your config provider observable so you can react to changes in both SwiftUI and UIKit. This allows you to present popovers or show / hide UI elements as needed by reading values from the config object.

If you have any questions about this post, or if you have any feedback for me, please make sure to reach out on Twitter.