The C programming language supports a variety of integer data types. These are
short int
int
long int

The data type int may correspond to either short int or long int. All the above data types may also be either signed (the default) or unsigned.
Declarations may be preceded by the keywords signed or unsigned, and short and long may be written instead of short int and long int. If an integer data type is described as unsigned, the contents of a memory location of that type will always be interpreted as a non-negative number. If you are familiar with binary number representations, this is equivalent to saying that the most significant bit is taken as part of the number rather than as the sign bit.
The ANSI standard requires, indirectly, that a short int occupy at least 16 bits of computer memory and that a long int occupy at least 32 bits. These minimum sizes are usually the actual sizes on most computers. On PCs an unqualified int is usually equivalent to a short int, and on Unix-based systems an unqualified int is usually equivalent to a long int.
If you are uncertain whether your compiler's int defaults to short or long, try the following simple program.
#include <stdio.h>

int main(void)
{
    int x = 30000;
    int y;

    y = x + x;
    printf("twice x is %d\n", y);
    return 0;
}
On a system with a long default int, such as the SUN Sparc Station, the output would be

twice x is 60000

On a system with a short default int, such as the Turbo C compiler running on a PC, the output is likely to be
twice x is -5536
The problem here is that the number 60000 is simply too big to be stored in 16 bits of computer memory. When the arithmetic circuits of the computer generated 60000, the result would not fit into the memory location set aside for it; this is known as arithmetic overflow. The ANSI standard says that behaviour is undefined under such circumstances, which means that anything might happen. It would, perhaps, be better if arithmetic overflow were detected in the same way as division by zero, but practically all computer systems simply truncate the generated sequence of bits, giving wildly inaccurate results. If you are interested in computer architecture you might care to note that -5536 is 60000 - 2 × 32768.
It is equally possible to make the SUN Sparc Station compiler give erroneous results by replacing 30000 with 2000000000 (that's 2 followed by 9 zeroes) in the previous program. Some PC compilers have options to force int to default to long rather than short; check the relevant manuals for details.
Input and output conversions for signed integers are
| conversion | data type |
|---|---|
| %hd | short int |
| %d | int |
| %ld | long int |
#include <stdio.h>

int main(void)
{
    short int si = 400;
    long int li = 400;

    printf(" si = %ld\n", si); /* short wrongly converted as long */
    printf(" li = %hd\n", li); /* long wrongly converted as short */
    return 0;
}
producing the output

 si = 26214800
 li = 400

shows what happens when you get it wrong. The above output was produced using Turbo C on a PC; the SUN Sparc station compiler gave completely correct results. It also shows that function arguments of type short int are promoted to the system default int type. Such promotion has no effect on a PC, but on the SUN Sparc station it accounts for the correct operation of the program.
It is clearly important, when designing and coding programs, to understand the limitations of the int data types in the environment being used. Problems are, not surprisingly, most commonly encountered when moving working programs from a 32-bit int environment to a 16-bit int environment. It might be thought that portability problems could be avoided by declaring all integer variables explicitly long. This is wrong, because many library functions simply take int parameters using the local default, and unwise, because programs running in 16-bit environments would occupy more memory and take longer to do arithmetic than necessary.