In Swift, integers are represented with the Int type. For numbers with a fractional part, you use either Float or Double.
Float and Double
A typical price is a decimal number: dollars before the decimal point and cents after it. It can be represented as a Float.
var price: Float = 19.99
var accountValue = 1596.45
// accountValue is Double
Float is a 32-bit, single-precision number; Double is a 64-bit, double-precision number.
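You can check these sizes directly with the standard library's MemoryLayout type:

```swift
// Float occupies 4 bytes (32 bits), Double occupies 8 bytes (64 bits).
print(MemoryLayout<Float>.size)   // 4
print(MemoryLayout<Double>.size)  // 8
```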
If you do not specify the type and assign a decimal number in the declaration, Swift will infer it as a Double. In the code example above, accountValue has the type Double, while price is a Float because its type is explicitly specified.
Double is more useful in scientific computing. The difference between Float and Double comes into play when you need to hold a large number of decimal digits: with the Float below, you start losing digits after the decimal point much sooner than with a Double. A Double gives you precision up to about 15 digits.
var sciNum: Float = 0.1234567
print(sciNum)
// Prints: 0.123457
var sciNum2: Double = 0.1234567890123456
print(sciNum2)
// Prints: 0.123456789012346
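The digit counts above follow from the bit widths of the two significands, which you can query from the standard library:

```swift
// Float stores a 23-bit significand fraction (24 effective bits),
// giving roughly 7 decimal digits; Double stores 52 bits (53 effective),
// giving roughly 15-16 decimal digits.
print(Float.significandBitCount)   // 23
print(Double.significandBitCount)  // 52
```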
In general app programming, you can simply omit the type annotation and assign a decimal value; Swift will infer it as Double. This is much cleaner.
CGFloat
In Core Graphics, the CGFloat type is used in various places. For example, the CGPoint struct contains x and y coordinates, and they are of type CGFloat.
let x: CGFloat = 12.5
let y: CGFloat = 14.5
print("(\(x), \(y))")
// Prints: (12.5, 14.5)
let point = CGPoint(x: x, y: y)
print(point)
// Prints: (12.5, 14.5)
As you can see, the point's coordinates are of type CGFloat. The following will not work:
let x: Float = 12.5
let y: Float = 14.5
let point = CGPoint(x: x, y: y)
// Error - no CGPoint initializer accepts Float
What if you omit the type annotation entirely? The values are then inferred as Double. This case does compile, because CGPoint also provides an initializer that takes Double values (and, since Swift 5.5, Double and CGFloat convert implicitly in most contexts). In general, though, Swift performs no automatic conversion between distinct numeric types, which is why the Float version above fails.
let x = 12.5
let y = 14.5
// x and y will be of type Double
let point = CGPoint(x: x, y: y)
// OK - CGPoint has a Double-based initializer
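When your values do start out as Float and an API wants CGFloat, convert them explicitly with the CGFloat initializer. A short sketch (the variable names here are ours):

```swift
import Foundation  // provides CGFloat and CGPoint (via CoreGraphics on Apple platforms)

let xf: Float = 12.5
let yf: Float = 14.5

// Swift performs no implicit conversion between numeric types,
// so wrap each value in CGFloat(_:) explicitly.
let point = CGPoint(x: CGFloat(xf), y: CGFloat(yf))
// point.x == 12.5, point.y == 14.5
```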
Binary, Octal, and Hexadecimal Representations
Generally, whenever you specify a numeric literal in code, you use decimal notation (digits 0 through 9). Swift also supports other number systems: to write a literal in another base, prefix it with 0 followed by b, o, or x.
Binary numbers are prefixed with 0b.
let binNum = 0b1111_1111
print(binNum)
// prints: 255
The type of binNum is Int: since the literal is an integer, Swift infers Int via type inference. Binary representation uses only 0 and 1, so 255 is written as 11111111. You can place underscores inside the binary literal to make it more readable; grouping the digits in fours mirrors hexadecimal digits, while grouping in threes mirrors octal digits.
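The underscores are purely cosmetic; the compiler ignores them, so the same value can be grouped either way:

```swift
// Grouping in fours lines up with hex digits (0xFF),
// grouping in threes with octal digits (0o377).
let byFours = 0b1111_1111
let byThrees = 0b11_111_111
print(byFours == byThrees)  // true
print(byFours)              // 255
```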
You will also come across hexadecimal literals now and then; one common use is color representation. Each of the RGB components ranges from 0 to 255, or from 00 to FF in hexadecimal. The hexadecimal digits are 0 through 9 and A through F, with values 0 through 15: A represents 10, B represents 11, and so on up to F, which represents 15.
Following is the color white: each of the red, green, and blue components is 255, or FF in hexadecimal notation.
let r: CGFloat = 0xFF, g: CGFloat = 0xFF, b: CGFloat = 0xFF
// UIColor components range from 0.0 to 1.0, so divide by 255
let color = UIColor(red: r / 255, green: g / 255, blue: b / 255, alpha: 1.0)
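Going the other way, you can pull the three components back out of a single packed 0xRRGGBB value with shifts and masks. A small sketch (the variable names are ours):

```swift
// A 24-bit packed color: red = 0xFF, green = 0x00, blue = 0x80.
let packed = 0xFF0080

// Shift each component down to the low byte, then mask off 8 bits.
let red   = (packed >> 16) & 0xFF  // 255
let green = (packed >> 8) & 0xFF   // 0
let blue  = packed & 0xFF          // 128
print(red, green, blue)            // 255 0 128
```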
And finally, the octal representation. This is base-8, meaning the digits go from 0 through 7. In most regular programming you would use decimal notation, with occasional hex and binary; octal comes into play in systems-type programming, where sizes are multiples of 8 bits (a byte is 8 bits, and architectures are 16, 32, or 64 bits wide, each nicely divisible by 8). To write an octal literal, prefix the value with 0o (zero and the letter 'o').
// Number 255, which in binary is 0b11_111_111
let octNumb = 0o377
Numeric Literals with Exponents
Very large or very small numbers can be written with exponents. This representation is typically used in scientific computing. In decimal literals, e/E marks the exponent; in hexadecimal literals, p/P does.
let largeNum = 5e6
print(largeNum)
// Prints: 5000000.0
let hexNumWithExponent = 0x4p8
print(hexNumWithExponent)
// Prints: 1024.0
With decimal numbers, the exponent uses base 10, so 5e6 is 5 * 10^6: a 5 followed by six zeroes, or 5 million.
With hexadecimal numbers, the exponent uses base 2, so 0x4p8 is 4 * 2^8 = 4 * 256 = 1024, i.e. 1K.
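A few equivalences to make the two bases concrete:

```swift
// Decimal exponent, base 10: 5e6 == 5 * 10^6.
let fiveMillion = 5e6
print(fiveMillion == 5_000_000.0)  // true

// A negative exponent moves the point left: 2.5e-3 == 0.0025.
let small = 2.5e-3

// Hex exponent, base 2: 0x4p8 == 4 * 2^8 == 1024.
let oneK = 0x4p8
print(oneK == 1024.0)  // true
```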