# Core / CodeaLuaNumberType


## Codea's Lua number type

'Number' is one of Lua's eight basic types, the other seven being nil, boolean, string, function, userdata, thread, and table. Number represents real numbers.

By default, Lua's number type represents double-precision floating-point numbers. Codea's Lua interpreter, however, uses a different internal representation for numbers: single-precision floating point.

## Single-precision float

The IEEE 754 standard specifies a single-precision float as having:

- sign: 1 bit
- exponent: 8 bits
- significand: 24 bits of precision (23 bits explicitly stored)

making 32 bits (4 bytes) in total.

This gives a precision of 6 to 9 significant decimal digits, and a range for positive values from about 1.4e−45 (the smallest subnormal) up to about 3.4e+38.
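The effect of the 24-bit significand can be reproduced outside Codea. As a sketch, standard Python (which uses doubles) can round-trip a value through the IEEE 754 single-precision format with the `struct` module; the helper name `to_float32` is ours, introduced for illustration:

```python
import struct

def to_float32(x):
    # Pack a Python float (a double) into the IEEE 754 binary32
    # format and unpack it again, rounding it to single precision --
    # the same precision Codea's number type carries.
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(1.5))        # 1.5 -- exactly representable, survives intact
print(to_float32(123456.789)) # close to, but not exactly, 123456.789:
                              # only 6-9 significant digits survive
```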

## Range of precise integers

Integers up to 16,777,216 (2^24) are represented exactly in a single-precision float; beyond that, they may not be. For example:

```
function setup()
    i = 16777215
    print(string.format("%d", i)) -- Output: 16777215
    i = i + 1
    print(string.format("%d", i)) -- Output: 16777216
    i = i + 1
    print(string.format("%d", i)) -- Output: 16777216 (16777217 is not representable)
end
```
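The same cutoff can be checked outside Codea by forcing values through the single-precision format. A sketch in standard Python using the `struct` module (the `to_float32` helper is our own name, not part of Codea or Lua):

```python
import struct

def to_float32(x):
    # Round-trip a double through IEEE 754 binary32, as Codea stores numbers.
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(16777215.0))  # 16777215.0 -- 2**24 - 1, still exact
print(to_float32(16777216.0))  # 16777216.0 -- 2**24, exact
print(to_float32(16777217.0))  # 16777216.0 -- 2**24 + 1 rounds back down
```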

## Rounding errors

Some numbers that can be represented precisely in decimal cannot be represented precisely in binary. Examples are 0.1 and 0.01.

This can result in rounding errors. For example:

```
print(0.1 * 0.1 - 0.01)  -- Output: 9.31323e-10 (not 0)
print(0.1 * 0.1 == 0.01) -- Output: false
```

Code that compares floating-point values may need to allow for this imprecision, typically by testing against a small tolerance rather than for exact equality. For example:

```
epsilon = 0.0001                            -- Set tolerance for error
print(math.abs(0.1 * 0.1 - 0.01) < epsilon) -- Output: true
```
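The same pitfall and the same remedy apply in any binary floating-point arithmetic, including double precision. A sketch in Python (the helper name `approx_equal` is ours, not a standard function):

```python
def approx_equal(a, b, epsilon=1e-4):
    # Compare within a tolerance instead of testing exact equality.
    return abs(a - b) < epsilon

# 0.1 and 0.01 have no exact binary representation, so the product
# differs from 0.01 by a tiny rounding error even in double precision.
print(0.1 * 0.1 == 0.01)             # False
print(approx_equal(0.1 * 0.1, 0.01)) # True
```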
