# What’s the difference between Float and Double in Swift?

`Double` and `Float` are both used to represent decimal numbers, but they do so in slightly different ways.

If you initialize a decimal number in Swift as shown below, the compiler will assume that you meant to create a `Double`:

`let val = 3.123 // val is inferred to be Double`
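If you want a `Float` instead, you can say so with an explicit type annotation, or convert an existing `Double`. A minimal sketch (the variable names are illustrative):

```
let inferred = 3.123            // inferred as Double
let explicit: Float = 3.123     // the annotation makes this a Float
let converted = Float(inferred) // converting an existing Double to Float
```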

The reason for this is that `Double` is the more precise of the two types. A `Float` uses **32 bits** of storage, which gives it roughly seven significant decimal digits. Since `Double` uses **64 bits**, it can hold more: roughly fifteen to sixteen significant digits. In practice, that means the following:

```
print(Double.pi) // 3.141592653589793
print(Float.pi) // 3.1415925
```

As you can see, `Double` can represent `pi` far more accurately than `Float`. We can make the difference more obvious if we multiply `pi` by `1000`:

```
print(Double.pi * 1000) // 3141.592653589793
print(Float.pi * 1000) // 3141.5925
```

Both `Double` and `Float` sacrifice some precision after the decimal point when you multiply `pi` by `1000`. For `Float` this means that it is left with only four decimal places, while `Double` still has twelve: the total number of significant digits stays the same, so as the number grows, digits drop off the end.
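One practical consequence: converting a `Double` to a `Float` discards the extra significant digits, and converting back does not recover them. A small sketch of that round trip (variable names are illustrative):

```
let precise = Double.pi        // 3.141592653589793
let narrowed = Float(precise)  // rounded to Float's ~7 significant digits
let widened = Double(narrowed) // back to Double, but the lost digits are gone
```

Values that fit exactly in a `Float`, such as `1.5`, survive the round trip unchanged; values like `pi` do not.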