Decimal vs. Double
More importantly, Decimal does not conform to either BinaryFloatingPoint or FloatingPoint, but begins its protocol conformance at SignedNumeric, which has the single requirement to implement func negate(). This reveals why Decimal cannot be a drop-in replacement for any of Swift’s floating-point types, because most functionality is defined in the lower floating-point protocols. Similar to how mixing numeric types like Int and Double in a single expression is a compiler error in Swift, Decimal does not play nicely with the other numerics. Additionally, some common functionality is missing. For example, Decimal does not conform to Strideable like the other numerics, which means you cannot create ranges of Decimal values. Depending on your problem domain, Decimal can be difficult to adopt.
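To make that friction concrete, here is a short Swift sketch of the limitations described above (the values and names are illustrative):

```swift
import Foundation

let price = Decimal(string: "19.99")!  // prefer strings: float literals pass through Double
let count = 3                          // inferred as Int

// Mixing numeric types is a compile-time error, just as with Int and Double:
// let total = price * count          // error: cannot convert value of type 'Int'
let total = price * Decimal(count)    // explicit conversion required
print(total)                          // 59.97

// No Strideable conformance, so you cannot iterate a range of Decimal:
// for d in Decimal(0)...Decimal(1) { }  // error: ClosedRange<Decimal> is not a Sequence
```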
I don’t generally recommend Decimal for money. I recommend Int, and store everything in the base unit, most commonly “cents.” It’s more efficient and often easier to use.
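A minimal sketch of that approach, with a hypothetical Cents wrapper (the type and names are mine, not from the article):

```swift
import Foundation

struct Cents {
    var amount: Int  // smallest currency unit, e.g. US cents

    static func + (lhs: Cents, rhs: Cents) -> Cents {
        Cents(amount: lhs.amount + rhs.amount)
    }
}

let subtotal = Cents(amount: 1_999) + Cents(amount: 499)   // $19.99 + $4.99
print("\(subtotal.amount / 100).\(String(format: "%02d", subtotal.amount % 100))")  // 24.98
```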
Two benefits of Decimal: (1) You can do precise decimal calculations … e.g. add a Double of 0.1 ten times ≠ 1.0 (!); (2) You want to enjoy more significant digits … e.g. print the Double representation of 12345678901234567890 and it’s not actually 12345678901234567890.
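Both claims are easy to verify; this sketch uses Decimal(string:) so the decimal values are exact rather than round-tripped through Double:

```swift
import Foundation

// (1) Repeated addition: Double accumulates binary rounding error; Decimal does not.
let doubleSum = (0..<10).reduce(0.0) { sum, _ in sum + 0.1 }
print(doubleSum == 1.0)   // false
print(doubleSum)          // 0.9999999999999999

let decimalSum = (0..<10).reduce(Decimal(0)) { sum, _ in sum + Decimal(string: "0.1")! }
print(decimalSum == 1)    // true

// (2) Significant digits: Double carries ~15-17 decimal digits; Decimal up to 38.
let big: Double = 12345678901234567890
print(String(format: "%.0f", big))               // 12345678901234567168
print(Decimal(string: "12345678901234567890")!)  // 12345678901234567890
```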
Storing amounts of money as the smallest representative unit is fair enough, but that only works if, for the life of the program, you intend never to do anything beyond addition and subtraction of those values. Division, percentages, or multiplication (aside from by integers) and you have to find another form anyway (or round/truncate), so why not choose a form that allows them and stick with that form?
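For illustration, here is the kind of rounding decision the commenter means when dividing an integer-cents amount; a common fix is to allocate the remainder explicitly:

```swift
// Splitting 100 cents three ways: integer division silently truncates,
// so the leftover cent has to be distributed by hand.
let total = 100                      // cents
let share = total / 3                // 33 cents; 3 * 33 = 99, one cent vanishes
let remainder = total % 3            // 1 cent left to allocate
let shares = (0..<3).map { $0 < remainder ? share + 1 : share }
print(shares)                        // [34, 33, 33], which sums back to 100
```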
It is ridiculous that, in 2022, we still do not have good hardware acceleration for decimal arithmetic on most mainstream CPUs, but it is possibly more ridiculous that the software support is so poor in most standard libraries.
Harrumph. Using Decimal for money is to be recommended. Assuming a smallest unit for money values (cents, for example) might easily get you into trouble when partial values of that smallest unit are necessary - and yes, that is a thing. Think of the price of a single nail. Or screw.
I would recommend something like https://github.com/dirkschreib/Decimal64. Maybe I’m a little bit biased as the author 😊
That might work if you only have to work with one currency. But the US Dollar is not used everywhere. So you need to convert to lesser-valued currencies, and then you end up with unacceptable rounding errors. There are rules for how you convert and round currencies.
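As a sketch of applying an explicit rounding policy after a conversion, Foundation’s NSDecimalNumber accepts a rounding behavior; the rate, currency pair, and rounding mode below are illustrative, not prescribed by any standard:

```swift
import Foundation

let usdAmount = NSDecimalNumber(string: "19.99")
let rate = NSDecimalNumber(string: "151.37")   // hypothetical USD -> JPY rate

// JPY has no minor unit, so round to 0 decimal places.
let handler = NSDecimalNumberHandler(roundingMode: .bankers,
                                     scale: 0,
                                     raiseOnExactness: false,
                                     raiseOnOverflow: false,
                                     raiseOnUnderflow: false,
                                     raiseOnDivideByZero: false)
let jpyAmount = usdAmount.multiplying(by: rate, withBehavior: handler)
print(jpyAmount)  // 3026 (19.99 * 151.37 = 3025.8863, rounded)
```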
For your consideration, I wrote a library that provides a precise, type-safe way to represent a monetary amount in a given currency. If nothing else, the README includes a discussion of how currencies are modeled according to the ISO 4217 standard, and some of the advantages and limitations of Swift's type system, Decimal type, and Codable.
https://github.com/flight-school/money
This library was extracted from a chapter of my Flight School Guide to Swift Numbers, which you can download as a free sample here: https://flight.school/books/numbers/
The standard Money representation at Large Tech Behemoth where I used to work was a (currency [string], micros [int64]) tuple, so you would represent money in millionths of a dollar or peso or whatever. I'm sure there are edge cases where that would be insufficient, but in practice it worked ~fine and had enough precision even for real world cases like daily interest or whatever, without any floating point math.
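A rough Swift sketch of that (currency, micros) representation; the struct and names here are illustrative, not the actual internal library:

```swift
struct Money {
    var currency: String  // ISO 4217 code, e.g. "USD"
    var micros: Int64     // millionths of the major unit: $1.00 == 1_000_000

    static func + (lhs: Money, rhs: Money) -> Money {
        precondition(lhs.currency == rhs.currency, "currency mismatch")
        return Money(currency: lhs.currency, micros: lhs.micros + rhs.micros)
    }
}

// Daily interest at a 4% annual rate on $1,234.56, with no floating-point math:
let balance = Money(currency: "USD", micros: 1_234_560_000)
let dailyInterestMicros = balance.micros * 4 / 100 / 365  // truncating division
print(dailyInterestMicros)  // 135294 micros, i.e. about $0.14 per day
```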