C Variables and types

An introduction to dealing with variables in C, and the basic types

C is a statically typed language.

This means that any variable has an associated type, and this type is known at compilation time.

This is very different from how you work with variables in Python, JavaScript, PHP and other dynamically typed languages.

When you create a variable in C, you have to specify its type in the declaration.

In this example we declare a variable age with type int:

int age;

A variable name can contain any uppercase or lowercase letter, digits and the underscore character, but it can’t start with a digit. AGE and Age10 are valid variable names, 1age is not.
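
For example:

int AGE;      /* valid */
int Age10;    /* valid */
int _age;     /* valid, the underscore is allowed */
/* int 1age;     invalid: it starts with a digit */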

You can also initialize a variable at declaration, specifying the initial value:

int age = 37;

Once you declare a variable, you can use it in your program code, and you can change its value at any time using the = operator, for example age = 100; (provided the new value is of the same type).
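
For example:

#include <stdio.h>

int main(void) {
  int age = 37; /* declaration with an initial value */
  age = 100;    /* reassignment using the = operator */
  printf("%d\n", age); /* 100 */
}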

In this case:

#include <stdio.h>

int main(void) {
	int age = 0;
	age = 37.2;
	printf("%u", age);
}

the compiler should raise a warning at compile time (clang does so by default; gcc needs -Wconversion), and the decimal number will be truncated to the integer value 37.

The C built-in data types are int, char, short, long, float, double, long double. Let’s find out more about those.

Integer numbers

C provides us the following types to define integer values: char, int, short, and long.

Most of the time, you’ll likely use an int to store an integer. But in some cases, you might want to choose one of the other 3 options.

The char type is commonly used to store letters of the ASCII chart, but it can also hold small integers, typically from -128 to 127 (whether plain char is signed or unsigned is implementation-defined). It takes exactly 1 byte.
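
For example, the same char can be read both as a letter and as its numeric value (65 is the ASCII code for 'A'):

#include <stdio.h>

int main(void) {
  char c = 'A';
  printf("%c is stored as %d\n", c, c); /* A is stored as 65 */
}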

int takes at least 2 bytes. short takes at least 2 bytes. long takes at least 4 bytes.

As you can see, we are not guaranteed the same values across different environments. We only have an indication. The problem is that the exact numbers that can be stored in each data type depend on the implementation and the architecture.

We’re guaranteed that short is not longer than int. And we’re guaranteed long is not shorter than int.

The ANSI C standard determines the minimum range of each type, and thanks to it we can at least know the minimum values we can expect to have at our disposal.
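
You can inspect the actual limits on your platform using the constants defined in the standard <limits.h> header:

#include <stdio.h>
#include <limits.h>

int main(void) {
  printf("char:  %d to %d\n", CHAR_MIN, CHAR_MAX);
  printf("short: %d to %d\n", SHRT_MIN, SHRT_MAX);
  printf("int:   %d to %d\n", INT_MIN, INT_MAX);
  printf("long:  %ld to %ld\n", LONG_MIN, LONG_MAX);
}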

If you are programming C on an Arduino, different boards will have different limits.

On an Arduino Uno board, int stores a 2-byte value, ranging from -32,768 to 32,767. On an Arduino MKR 1010, int stores a 4-byte value, ranging from -2,147,483,648 to 2,147,483,647. Quite a big difference.

On all Arduino boards, short stores a 2-byte value, ranging from -32,768 to 32,767. long stores a 4-byte value, ranging from -2,147,483,648 to 2,147,483,647.

Unsigned integers

For all the above data types, we can prepend unsigned to make the range start at 0 instead of a negative number. This can make sense in many cases.
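
For example, assuming the common 32-bit int, an unsigned int can hold values up to 4,294,967,295, which a signed int cannot (the variable name distance is just illustrative):

#include <stdio.h>

int main(void) {
  /* assuming 32-bit int: this value does not fit in a signed int */
  unsigned int distance = 4000000000u;
  printf("%u\n", distance); /* 4000000000 */
}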

The problem with overflow

Given all those limits, a question might come up: how can we make sure our numbers do not exceed the limit? And what happens if we do exceed the limit?

If you have an unsigned int number at 255, and you increment it, you’ll get 256 in return, as expected. If you have an unsigned char number at 255, and you increment it, you’ll get 0 in return. It wraps around, starting again from the lowest possible value.

If you have an unsigned char number at 255 and you add 10 to it, you’ll get the number 9:

#include <stdio.h>

int main(void) {
  unsigned char j = 255;
  j = j + 10;
  printf("%u", j); /* 9 */
}

If you have a signed value, the result is not something you can rely on: overflowing a signed type is undefined behavior in C. In practice you’ll often get a surprising, huge number which can vary, like in this case:

#include <stdio.h>

int main(void) {
  char j = 127;
  j = j + 10;
  printf("%u", j); /* 4294967177 */
}

In other words, C does not protect you from going over the limits of a type. You need to take care of this yourself.
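
For example, one way to take care of it yourself is to check against the type’s maximum (available as UCHAR_MAX in the standard <limits.h> header) before doing the arithmetic. A minimal sketch:

#include <stdio.h>
#include <limits.h>

int main(void) {
  unsigned char j = 255;
  unsigned char delta = 10;

  /* only add if the result still fits in an unsigned char */
  if (j <= UCHAR_MAX - delta) {
    j = j + delta;
    printf("new value: %u\n", (unsigned int)j);
  } else {
    printf("adding %u would overflow\n", (unsigned int)delta); /* this branch runs */
  }
}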

Warnings when declaring the wrong type

When you declare a variable and initialize it with the wrong value, your compiler should warn you (this output comes from clang; gcc emits a similar -Woverflow warning):

#include <stdio.h>

int main(void) {
  char j = 1000;
}
hello.c:4:11: warning: implicit conversion from 'int' to
      'char' changes value from 1000 to -24
      [-Wconstant-conversion]
        char j = 1000;
             ~   ^~~~
1 warning generated.

And it also warns you in direct assignments:

#include <stdio.h>

int main(void) {
  char j;
  j = 1000;
}

But not if you increase the number using, for example, +=:

#include <stdio.h>

int main(void) {
  char j = 0;
  j += 1000;
}

Floating point numbers

Floating point types can represent a much larger set of values than integers can, and can also represent fractions, something that integers can’t do.

Using floating point numbers, we represent numbers as decimal numbers times powers of 10.

You might see floating point numbers written as 1.29e-3 or -2.3e+5, and in other seemingly weird ways.
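
For example, in code, the e notation means “times 10 to the power of” (the variable names are just illustrative):

#include <stdio.h>

int main(void) {
  float small = 1.29e-3f; /* 1.29 times 10^-3, i.e. 0.00129 */
  double big = -2.3e+5;   /* -2.3 times 10^5, i.e. -230000 */
  printf("%f\n", small);  /* 0.001290 */
  printf("%f\n", big);    /* -230000.000000 */
}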

The following types are used to represent numbers with decimal points (floating point types): float, double, and long double. All can represent both positive and negative numbers.

The minimum requirement for any C implementation is that float can represent a range between 10^-37 and 10^+37, and it is typically implemented using 32 bits. double can represent a bigger set of numbers, and long double can hold even more numbers.

The exact figures, as with integer values, depend on the implementation.
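
You can query the actual figures on your platform through the constants in the standard <float.h> header:

#include <stdio.h>
#include <float.h>

int main(void) {
  printf("float:  %e to %e, %d decimal digits\n", FLT_MIN, FLT_MAX, FLT_DIG);
  printf("double: %e to %e, %d decimal digits\n", DBL_MIN, DBL_MAX, DBL_DIG);
  printf("long double: %Le to %Le, %d decimal digits\n", LDBL_MIN, LDBL_MAX, LDBL_DIG);
}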

On a modern Intel Mac, a float is represented in 32 bits, with a precision of 24 significant bits; 8 bits are used to encode the exponent. A double is represented in 64 bits, with a precision of 53 significant bits; 11 bits are used to encode the exponent. The long double type is represented in 80 bits, with a precision of 64 significant bits; 15 bits are used to encode the exponent.

How can you determine the exact size of the types on your specific computer? You can write a program to do that:

#include <stdio.h>

int main(void) {
  printf("char size: %lu bytes\n", sizeof(char));
  printf("int size: %lu bytes\n", sizeof(int));
  printf("short size: %lu bytes\n", sizeof(short));
  printf("long size: %lu bytes\n", sizeof(long));
  printf("float size: %lu bytes\n", sizeof(float));
  printf("double size: %lu bytes\n", sizeof(double));
  printf("long double size: %lu bytes\n", sizeof(long double));
}

On my system, a modern Mac, it prints:

char size: 1 bytes
int size: 4 bytes
short size: 2 bytes
long size: 8 bytes
float size: 4 bytes
double size: 8 bytes
long double size: 16 bytes