Tuesday, 21 October 2008

C#: Decimal type vs. Float and Double types

Have you ever wondered what the difference is between the .NET "decimal" data type and the similar "float" and "double" data types? Or when you should use one instead of the other? This article works through both questions for you and for me.

First of all, take a look at the following C# code:

var f = 1.1f;   // float literal (note the 'f' suffix)
var dbl = 1.1;  // no suffix, so this is a double
var d = 1.1m;   // 'm' suffix makes this a decimal

Response.Write((f + 0.1f).ToString("e20"));
Response.Write("<br />");
Response.Write((dbl + 0.1).ToString("e20"));
Response.Write("<br />");
Response.Write((d + 0.1m).ToString("e20"));

Note: if you declare a float variable like float f = 1.1 you will get a build error saying that a literal of type 'double' cannot be implicitly converted to type 'float' and that you should use an 'F' suffix to create a literal of this type. In other words, any numeric literal with a decimal point and no suffix is treated as a double.

Now here is what the output looks like:

1.20000005000000000000e+000
1.20000000000000020000e+000
1.20000000000000000000e+000

You can see the three results are all slightly different. Why is this? And why did we get 1.20000005 instead of the exact 1.20000000 we seemed to hard-code? The reason is simple - the hardware uses a binary floating point representation rather than a decimal one. Values like 1.1 and 0.1 have no exact representation in base two (binary), so a float or double can only hold the nearest approximation of the true base 10 (decimal) number.
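
To see the approximation directly, here is a small sketch of my own (same Response.Write style as above); the "G17" format prints enough digits to reveal the value a double really stores for 0.1:

// A minimal sketch (mine, not part of the original sample): the round-trip
// "G17" format shows the value a double actually stores for 0.1.
double pointOne = 0.1;
Response.Write(pointOne.ToString());      // 0.1  (default formatting hides the error)
Response.Write("<br />");
Response.Write(pointOne.ToString("G17")); // 0.10000000000000001 - the closest double to 0.1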

The decimal type, on the other hand, is a floating point type that is represented internally in base 10 instead of base two. Since base 10 is our real-world numbering system, any decimal number (within the type's precision and range) can be stored exactly, with no approximation. :) The decimal type is really a software implementation of base 10 arithmetic.
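
As a rough illustration (my own addition, not from the original example), Decimal.GetBits exposes that layout: the value is kept as a 96-bit integer plus a scale that says where the decimal point goes:

// A rough illustration of the internal layout: a decimal is a 96-bit integer
// plus a power-of-ten scale, so 1.20m is stored as the integer 120 with scale 2.
int[] bits = decimal.GetBits(1.20m);
int scale = (bits[3] >> 16) & 0xFF;  // number of digits after the decimal point
Response.Write(string.Format("coefficient (low bits): {0}, scale: {1}", bits[0], scale)); // 120, 2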

Which Type Should I Use?

Since the decimal type represents decimal values exactly and floats do not, why would we still want to use the intrinsic float/double types? Short answer - performance. Decimal arithmetic is done in software rather than in the floating point hardware, and in my speed tests decimal operations ran over 20 times slower than their float counterparts.
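
Your numbers will differ by machine and by operation, but a quick-and-dirty timing loop along these lines (a sketch of my own, not the exact benchmark behind that figure) makes the gap visible:

// A quick-and-dirty timing sketch (not the exact benchmark behind the 20x figure);
// absolute numbers will vary by machine, but decimal should be clearly slower.
var sw = System.Diagnostics.Stopwatch.StartNew();
double dblSum = 0;
for (int i = 0; i < 10000000; i++) dblSum += 1.1;
sw.Stop();
long doubleMs = sw.ElapsedMilliseconds;

sw = System.Diagnostics.Stopwatch.StartNew();
decimal decSum = 0;
for (int i = 0; i < 10000000; i++) decSum += 1.1m;
sw.Stop();
long decimalMs = sw.ElapsedMilliseconds;

Response.Write(string.Format("double: {0} ms, decimal: {1} ms", doubleMs, decimalMs));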

So if you’re writing a financial application for a bank that has to be 100% accurate and performance is not a consideration, use the Decimal type. On the other hand, if you need performance and extremely small floating point variations don’t affect your program, stick with the float and double types.

Other Considerations

Another thing the decimal type can do that the float and double types cannot is encode trailing zeros. For example, 7.5 and 7.50 are distinct values in the decimal type, but there is no way to represent that difference in a standard float/double. Let's look at another example:

double dbl = 1.23 + 1.27;   // double result: the trailing zero is lost
Response.Write(string.Format("double: {0}", dbl));
Response.Write("<br />");
decimal d = 1.23m + 1.27m;  // decimal result: keeps both decimal places
Response.Write(string.Format("decimal: {0}", d));

The output looks like this:

double: 2.5
decimal: 2.50

The first part, which uses a double, outputs 2.5, but the second, which uses a decimal, outputs 2.50 - we didn't even have to specify a format string to get that trailing zero. This can be very useful in applications that deal with dollar amounts.
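
As a final illustration (my own addition), the scale of a decimal literal is preserved too, which plays nicely with currency formatting:

// Decimal keeps the scale of its literals, so a trailing zero written in code survives.
decimal price = 7.50m;
Response.Write(price.ToString());    // 7.50
Response.Write("<br />");
Response.Write(price.ToString("C")); // currency format, e.g. $7.50 (depends on culture)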
