Wednesday, February 2, 2022

Decimal vs. Double

Jesse Squires:

More importantly, Decimal does not conform to either BinaryFloatingPoint or FloatingPoint, but begins its protocol conformance at SignedNumeric, which has the single requirement to implement func negate(). This reveals why Decimal cannot be a drop-in replacement for any of Swift’s floating-point types — most functionality is defined in the lower floating-point protocols. Similar to how mixing numeric types like Int and Double in a single expression is a compiler error in Swift, Decimal does not play nicely with the other numerics. Additionally, some common functionality is missing. For example, Decimal does not conform to Strideable like the other numerics, which means you cannot create ranges of Decimal values. Depending on your problem domain, Decimal can be difficult to adopt.
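A minimal sketch of what that friction looks like in practice (the values are illustrative):

```swift
import Foundation

let price: Decimal = 19.99        // note: Decimal float literals are converted via Double
let taxRate: Double = 0.0825

// Mixing Decimal and Double in one expression is a compile-time error,
// just as mixing Int and Double is:
// let tax = price * taxRate
// error: binary operator '*' cannot be applied to operands of type 'Decimal' and 'Double'

// You have to convert explicitly:
let tax = price * Decimal(taxRate)
print(tax)

// And because Decimal is not Strideable, there is no stride(from:to:by:)
// overload for it, so stepping through decimal values means converting by hand.
```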

Rob Napier:

I don’t generally recommend Decimal for money. I recommend Int, and store everything in the base unit, most commonly “cents.” It’s more efficient and often easier to use.
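A minimal sketch of the integer-cents approach, with a hypothetical `Cents` wrapper:

```swift
// Hypothetical wrapper: all amounts are whole cents of the base currency.
struct Cents {
    var value: Int               // e.g. 1999 == $19.99
}

let subtotal = Cents(value: 1999)
let shipping = Cents(value: 495)

// Addition and subtraction stay exact, cheap, and overflow-checked:
let total = Cents(value: subtotal.value + shipping.value)   // 2494 == $24.94
```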

Rob Ryan:

Two benefits of Decimal: (1) You can do precise decimal calculations … e.g. add Double of 0.1 ten times ≠ 1.0 (!); (2) You want to enjoy more significant digits … e.g. print Double representation of 12345678901234567890 and it’s not actually 12345678901234567890.
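Both points are easy to reproduce (exact printed output may vary slightly by platform):

```swift
import Foundation

// (1) Binary floating point cannot represent 0.1 exactly:
let d = (1...10).reduce(0.0) { sum, _ in sum + 0.1 }
print(d == 1.0)                     // false
print(d)                            // 0.9999999999999999

let dec = (1...10).reduce(Decimal(0)) { sum, _ in sum + Decimal(string: "0.1")! }
print(dec == 1)                     // true

// (2) Double only carries ~15–17 significant decimal digits:
let big = 12345678901234567890.0
print(String(format: "%.0f", big))  // 12345678901234567168
```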



Storing amounts of money in the smallest representable unit is fair enough, but that only works if, for the life of the program, you never intend to do anything beyond addition and subtraction of those values. As soon as you need division, percentages, or multiplication (other than by integers), you have to find another representation anyway (or round/truncate), so why not choose a form that allows those operations and stick with it?
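A small illustration of that point in integer cents (the numbers are made up):

```swift
// Splitting $1.00 (100 cents) three ways with integer division:
let cents = 100
let share = cents / 3                 // 33 cents; one cent has gone missing
print(share * 3)                      // 99

// Applying an 8.25% tax rate forces you out of whole cents anyway:
let taxed = Double(cents) * 1.0825    // 108.25, so now you must decide how to round
print(taxed)
```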

It is ridiculous that, in 2022, we still do not have good hardware acceleration for decimal arithmetic on most mainstream CPUs, but it is possibly more ridiculous that the software support is so poor in most standard libraries.


Harumpf. Using Decimal for money is to be recommended. Assuming a smallest unit for money values (cents for example) might easily get you into trouble when partial values of that smallest unit are necessary - and yes, that is a thing. Think the price of a single nail. Or screw.
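A tiny illustration of the sub-cent case, assuming a unit price quoted in fractions of a cent:

```swift
import Foundation

// A single screw priced at 0.3 cents cannot be stored as a whole number of cents,
// but it fits Decimal exactly:
let unitPrice = Decimal(string: "0.003")!    // dollars per screw
let quantity  = Decimal(250)
let lineTotal = unitPrice * quantity         // 0.75, exact
print(lineTotal)
```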


I would recommend something like https://github.com/dirkschreib/Decimal64. Maybe I'm a little bit biased as the author 😉


That might work if you only have to work with one currency. But the US Dollar is not used everywhere, so you need to convert to lower-valued currencies, and then you end up with unacceptable rounding errors. There are rules for how you convert and round currencies.
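Foundation lets you spell out such a rounding rule with NSDecimalNumberHandler; the exchange rate below is made up for illustration:

```swift
import Foundation

// Round to 0 fractional digits (e.g. Japanese yen) using banker's rounding.
let handler = NSDecimalNumberHandler(roundingMode: .bankers,
                                     scale: 0,
                                     raiseOnExactness: false,
                                     raiseOnOverflow: false,
                                     raiseOnUnderflow: false,
                                     raiseOnDivideByZero: false)

let usd  = NSDecimalNumber(string: "19.99")
let rate = NSDecimalNumber(string: "148.37")   // hypothetical USD→JPY rate
let jpy  = usd.multiplying(by: rate, withBehavior: handler)
print(jpy)                                     // 2966
```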


For your consideration, I wrote a library that provides a precise, type-safe way to represent a monetary amount in a given currency. If nothing else, the README includes a discussion of how currencies are modeled according to the ISO 4217 standard, and some of the advantages and limitations of Swift's type system, Decimal type, and Codable.

https://github.com/flight-school/money

This library was extracted from a chapter of my Flight School Guide to Swift Numbers, which you can download as a free sample here: https://flight.school/books/numbers/
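The general idea behind such type-safe money types can be sketched with a phantom currency parameter; this is not necessarily the library's actual API, just the shape of the approach:

```swift
import Foundation

// The currency is a phantom type, so amounts in different currencies
// cannot be mixed accidentally.
protocol CurrencyType {
    static var code: String { get }        // ISO 4217 code
    static var minorUnits: Int { get }     // decimal places, e.g. 2 for USD, 0 for JPY
}

enum USD: CurrencyType {
    static let code = "USD"
    static let minorUnits = 2
}

struct MoneyAmount<Currency: CurrencyType> {
    var amount: Decimal

    static func + (lhs: Self, rhs: Self) -> Self {
        Self(amount: lhs.amount + rhs.amount)
    }
}

let a = MoneyAmount<USD>(amount: Decimal(string: "12.34")!)
let b = MoneyAmount<USD>(amount: Decimal(string: "0.66")!)
let c = a + b            // adding a EUR amount to a USD amount would not compile
print(c.amount)          // 13
```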


The standard Money representation at Large Tech Behemoth where I used to work was a (currency [string], micros [int64]) tuple, so you would represent money in millionths of a dollar or peso or whatever. I'm sure there are edge cases where that would be insufficient, but in practice it worked ~fine and had enough precision even for real world cases like daily interest or whatever, without any floating point math.
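A minimal sketch of that (currency, micros) representation; the names are illustrative:

```swift
// One unit = one millionth of the currency's major unit.
struct MicroMoney {
    var currencyCode: String   // e.g. "USD"
    var micros: Int64          // $1.00 == 1_000_000 micros
}

let balance = MicroMoney(currencyCode: "USD", micros: 1_234_560_000)   // $1,234.56

// Daily interest at 0.01%/day stays in integer math:
let dailyInterest = balance.micros / 10_000    // 123_456 micros, i.e. about $0.12
print(dailyInterest)
```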
