
C# "Primitives" : Built in types

#1 Curtis Rutland

Reputation: 5106
  • Posts: 9,283
  • Joined: 08-June 10
Posted 22 February 2011 - 11:30 AM


Learning C# Series
"Primitives" : Built In Types

An important distinction that often goes unmentioned when talking about C# is that most of the objects and types used are not native to C# itself. They are defined in the assemblies that are part of the .NET Framework. They're no different from the types you or I can create. But a few of these types are special. These are the "built in types," often called "primitive types," though that is a misnomer.

These built in types are part of the language itself. They all have keywords. In some languages, like C, these types would actually be defined as part of the language. In C# though, they are simply aliases to .NET defined types. That's why they're not properly "primitives," since they're actually hiding standard classes and structs.

Why is this important to learn about?

These basic types are the most important of all to learn, because they are the foundation of everything you will program. These types are the most basic expression of data. If you don't understand how they work from the beginning, anything you build can be flawed.

Definitions of terms used.

Integer: a whole number, with no fractional part.
Floating Point: a number that can be fractional, stored in two parts: significant digits and an exponent.
Value Types: types whose variables directly store the represented value; the opposite of a reference type. This is explained in greater detail later.
Reference Types: types whose variables store a reference to the memory address of the represented value, rather than the actual value itself; the opposite of a value type. This is explained in greater detail later.

Note: All examples were created using Visual Studio 2010, targeting the .NET Framework 4.0. We'll do our best to point out anything that might not work in older versions.

Built in types.

A full listing of all the built in types can be found here.

These built in types can be broken up into three major groups: integral types, floating-point types, and reference types, with two types that don't fit into any of these categories. We'll start with one of the exceptions: bool.

This is the simplest possible type. It can only hold one of two values: true or false (1 or 0, if you prefer to think of it that way). It's also one of the most important types. Bools will be used in practically every program you write, mainly to control the flow of your program. Understanding how to use these means you know how to make your program do things based on conditions.

As I said, bools can only hold one of two values, represented in code by the C# keywords true and false.

Usage Examples
bool t = true;
bool f = false;
bool fromExpr = (t == f); //we'll explain more about this operator later

We'll explain more about how to actually use bools in later tutorials about operators and control structures.
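
As a small preview, here's a sketch of a bool driving an if statement (the names and messages are just made up for illustration; if statements get a proper treatment in the control structures tutorial):

```csharp
bool isRaining = true;

if (isRaining) //the block below only runs when the condition is true
{
    Console.WriteLine("Bring an umbrella.");
}
else
{
    Console.WriteLine("Leave it at home.");
}
```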

Integral Types

Integral types are all integers of varying capacities. There are two factors that define what each can hold: the size of the memory allocated, and whether or not the integer is "signed." Signed defines whether or not the type can hold negative values.

Integral types are all value types. Variables declared as any of these types hold values directly, not references to values.

These types include: sbyte, byte, short, ushort, int, uint, long, and ulong. Follow these links to see what ranges each can hold.

int and long are the two most commonly used integral types, int being the more common of the two. Use int when you need a non-fractional number, and use long if the number you need won't fit in an int. In C#, an int is a 32-bit integer and a long is a 64-bit integer. Both of these can also be represented by literals. Any number without a decimal point that falls within an int's range is considered an int literal; integers larger than that are automatically long literals. Longs within an int's range can be explicitly represented by appending an "L" or "l" to the end of the value (the lowercase "l" is legal, but easy to mistake for the digit 1); see the last line of the following example.

Usage Examples
int i = 10;
int fromHex = 0x1F;
int smallest = int.MinValue;
int biggest = int.MaxValue;
long l = 1000000000000;
long smallLong = 1L;

One other type that must be mentioned among the integral types is char. This is somewhat different from the other integer types, in that each number is mapped to a Unicode character. This type is used to store single characters. One interesting and very important fact to understand is that the character representation of a numeral is not equal to the numeral itself. What I mean is that '1' is not the same thing as 1. As a matter of fact, the numeric value of '1' is 49.

Some characters can only be represented by escape sequences. These always start with a backslash ( \ ). When you look at one, it seems to be made up of multiple characters, but to the compiler, it's interpreted as a single character.

Usage Examples
char c = 'c';
char one = '1';
char alsoOne = (char)49; //casting an integer as a char
char smiley = '\u263A'; //unicode escape sequence for: ☺
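
To drive home the point that '1' and 1 are different: chars convert implicitly to their numeric code, and subtracting '0' recovers a digit's actual value (the digits '0' through '9' are consecutive in Unicode):

```csharp
char digit = '1';
int code = digit;        //49, the character's Unicode value
int value = digit - '0'; //1, the numeric value of the digit
```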

Floating Point Types

Floating point types are all real numbers of varying range and precision. Both of these values are very important to understand! Range defines the maximum and minimum values the variable can hold, and precision determines the number of significant figures the variable holds. Floating point precision is often overlooked, and it can cause a lot of grief if you don't know what's actually going on under the covers.

Let's say, for argument's sake, I have a type that can hold numbers between 0 and 100 million, to a precision of five digits. Some values I can store without any problem: 0, 10, 3.1416, 1000000. None of these will actually lose any precision if stored in my imaginary type. However, a number like this: 12253462 may cause you problems. When this number is stored, only 5 of its digits can be kept, so it is stored as (12253 * 10^3), which if you multiply back out is 12253000. The issue comes when you want to do comparisons on numbers like this. Is 12253462 greater than 12253461? Of course, but if you stored them both in this type, they'd both be truncated to the same value. See the issue? Be sure to know exactly what your floating point variables can store.

There are three floating point types: float, double, and decimal.

Floating points are all value types.

Float is the least flexible of the three, but with the smallest memory footprint (32 bits).
Float minimum: -3.4 × 10^38
Float maximum: +3.4 × 10^38
Float precision: 7 digits

Double (called that because it's twice the size of a float, at 64 bits) has the largest range of any built-in numeric type, but not the greatest precision. This is the most commonly used floating point type.
Double minimum: ±5.0 × 10^−324
Double maximum: ±1.7 × 10^308
Double precision: 15-16 digits

Decimals have the smallest range of all floating-point types, but the greatest precision by a large margin. This type was included to allow for computations with great precision, like monetary transactions.
Decimal minimum: -7.9 × 10^28
Decimal maximum: +7.9 × 10^28
Decimal precision: 28-29 digits


Any floating point literal without a suffix is automatically considered a double. Append an "F" or "f" to represent a float, or an "M" or "m" to represent a decimal. A "D" or "d" can be used to explicitly represent a double. Integral literals are implicitly converted to the proper type, so no suffix is needed for them.

Usage Examples
float f = 3.141f; //"f" suffix
double d = 3.141; //no suffix required
decimal m = 3.141m; //"m" suffix
float fromInt = 10; //an int implicitly converted to a float
double eNot = 6.022e23; //exponential notation. 6.022x10^23
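
As a concrete illustration of the precision discussion above: double stores values in binary, so many decimal fractions are only approximated, while decimal stores them in base 10 and keeps simple decimal arithmetic exact.

```csharp
double d = 3.4 - 3.1;
bool doubleMatches = (d == 0.3);   //false! the result is extremely close to 0.3, but not exact

decimal m = 3.4m - 3.1m;
bool decimalMatches = (m == 0.3m); //true, this decimal math is exact
```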

Reference Types

Reference types are very interesting types. Variables of these types don't actually contain the represented value directly. They contain a reference to the memory address where the value is actually stored. This seems like an odd idea, but understand that some reference types can be huge! Passing these around, making copies to send to methods...that's not a good idea. So instead of sending the entire value, a reference is sent that anything can follow to find the original.

This may seem like a very fine distinction, but it's absolutely imperative you understand this concept, since most of the types you'll be working with later are reference types. When you pass something as a reference, if it's modified, the original is modified! The same is not true for a value type. We will cover this in more depth in our Classes and Objects tutorial later.
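
Here's a sketch of that difference, using an array (arrays are reference types) next to an int; the method names are just made up for illustration:

```csharp
static void IncrementCopy(int n) { n++; }          //changes only the method's own copy
static void OverwriteFirst(int[] a) { a[0] = 99; } //follows the reference to the original

int x = 5;
IncrementCopy(x);
//x is still 5: the method received a copy of the value

int[] nums = { 1, 2, 3 };
OverwriteFirst(nums);
//nums[0] is now 99: both variables referred to the same array
```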

There are two built-in reference types: object and string.

object is, at its core, the basis for every other type construct in C#. In fact, you'll see that instances of other types are often referred to as objects. It's an important distinction that they're not referring to actual System.Objects, but to things that have derived from object, which in C# is everything.

The purpose of object is to serve as a base for everything else. Directly, objects aren't that useful, but they're the fundamental building block for everything else.

The explanation for object is tied heavily to Inheritance and Polymorphism, so we'll discuss this much further in a later tutorial.

string is the second, and far more useful, of the two built in reference types. Strings represent sequences of characters. Simple in concept, really, but incredibly important.

In older languages, there were no actual strings. Strings were actually arrays of single characters, with a terminator character (\0) to define where the string stops. In more modern languages like C#, that array is mostly hidden away, and you can use strings like any other type. String literals are sequences of characters surrounded by double-quotes (the " mark).

Usage Examples
string s = "this is a string";
string s2 = "this too!";

There are lots and lots of things you can do with strings. So many that they deserve their own tutorial. We'll come back to this in our tutorial on String Manipulation.

There's one more built in type that does not fit properly into any other category. This type was added in C# 4 (so you'll need the .NET Framework 4.0 to use it): dynamic. This type can hold literally anything: ints, strings, doubles, IPAddresses, Forms; any type that exists in C# can be assigned to a dynamic.

Dynamics aren't bound at compile time, so there's no compile-time type checking. You can store anything in a dynamic, and attempt to do anything with it as well, such as trying to add a string to an int. If what you attempt turns out to be something that can't be done, an exception will be thrown at runtime.
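
A quick sketch of what that looks like in practice; note that the commented-out line would compile without complaint, but would blow up when it runs:

```csharp
dynamic d = 5;
d = d + 1;          //fine: at runtime, d holds an int, so this is 6
d = "hello";        //also fine: d now holds a string instead
int len = d.Length; //resolved at runtime; len is 5

//d = d - 1;        //would compile, but throws a RuntimeBinderException
                    //at runtime, since string has no - operator
```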

dynamic has very specific uses, none of which are for the beginner (or even intermediate) programmer. Using dynamic can make a few things easier, but it can and will make debugging and programming absolute hell if you use it when you don't need it.

Steer clear of it for now. We'll revisit it at a later date in another tutorial.

In Conclusion

It seems like a lot of information to process, but many of these types you'll never actually see (for instance, I've never seen an sbyte used in the source of any application I've ever worked on), and most of them behave just like all the others in their category. An int has the same methods as a long; they just hold different-sized values.

If I had to pick the most important built-in types to get familiar with, I'd suggest: int, double, char, and string. They're the most common, and if you learn them, you learn all the others in their category.

See all the C# Learning Series tutorials here!

This post has been edited by insertAlias: 22 February 2011 - 03:41 PM

Replies To: C# "Primitives" : Built in types

#2 k0b13r

Reputation: 15
  • Posts: 243
  • Joined: 18-July 06

Posted 22 February 2011 - 11:41 AM


Nice post ;)
I would like to point one little mistake:


"Prepend a "F" or "f" to represent a float, or a "M" or "m" to represent a decimal."

I think it should be 'append' ;)

#3 Curtis Rutland

Reputation: 5106
  • Posts: 9,283
  • Joined: 08-June 10

Posted 22 February 2011 - 11:43 AM

Nice catch. Fixed.

#4 jtenos

Reputation: 2
  • Posts: 5
  • Joined: 31-May 06

Posted 22 February 2011 - 11:53 PM

Regarding the floating point numbers. It's worth mentioning that float and double are approximations, where decimal represents a decimal number exactly (it's a lot more complicated than this, but this is a reasonable way of thinking about it). This makes a huge difference in real-life - any real-world number (such as currency, percentages, etc.) should probably use decimal instead of double, since in the double world, 3.4 - 3.1 does not equal 0.3.


most of the objects and types used are not native to C#

I'd say "all of the objects".


This type was added in C# 4 (so you can't use it without VS 2010)

Just being picky here, but Visual Studio is not required for .NET development - the SDK works just fine without Visual Studio.


the most important built-in types to get familiar with, I'd suggest: int, double, char, and string

I'd add decimal and byte to the list. Decimal for the reason above - it's more useful than double in the majority of scenarios. Byte arrays are used all the time in file access, or really anything related to Streams, which are very important to learn and understand.

#5

Reputation: 20
  • Posts: 55
  • Joined: 15-February 11

Posted 14 April 2011 - 01:44 PM

jtenos, on 22 February 2011 - 11:53 PM, said:

Should probably use decimal instead of double, since in the double world, 3.4 - 3.1 does not equal 0.3.

Since when? :S
