When storing exact numeric values with decimal places in SQL Server, there are two data types to choose from: Decimal and Numeric. Both allow precision and scale to be defined: precision is the maximum number of total digits, and scale is the number of digits to the right of the decimal point. Both are part of the ANSI standard, and both appear to be used the same way in SQL Server; the Microsoft documentation states that the two are synonyms. So why are there two data types for the same purpose?
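For example, a variable declared as decimal(5,2) can hold at most five digits in total, two of them after the decimal point. A minimal sketch (the variable names are just illustrative) showing that decimal and numeric are declared with the same syntax:

declare @Price decimal(5,2) = 123.45;  -- 5 digits total, 2 after the decimal point
declare @Rate numeric(5,2) = 999.99;   -- numeric uses the same precision/scale syntax
select @Price as Price, @Rate as Rate;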
In Joe Celko’s book “SQL for Smarties”, he mentions a slight distinction between the two:
“NUMERIC (p,s) specifies the exact precision and scale to be used. DECIMAL(p,s) specifies the exact scale, but the precision is implementation-defined to be equal or greater than the specified value.”
So there’s a slight difference in the ANSI Standard, but no difference in the SQL Server implementation.
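As a quick check of that claim (a minimal sketch; @d and @n are arbitrary names), the two types accept identical declarations and hold the same value:

declare @d decimal(10,4) = 1234.5678;
declare @n numeric(10,4) = 1234.5678;
select @d as DecimalValue, @n as NumericValue;

-- Both types also appear as separate entries in the system catalog,
-- with the same maximum precision and scale:
select name, precision, scale from sys.types where name in ('decimal', 'numeric');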
I did find it interesting that when an error was raised assigning a value to a decimal variable, the error message referred to a numeric data type.
declare @Value decimal(2,1);
set @Value = 123.456;
Msg 8115, Level 16, State 8, Line 25
Arithmetic overflow error converting numeric to data type numeric.
I use decimal, since that term is more specific than numeric, which sounds like it could be any kind of number, including integers.