
## Data Structures: Recursion, Stacks, and Trees

### #1 macosxnerd101

• Self-Trained Economist

Reputation: 10094
• Posts: 37,164
• Joined: 27-December 08

Posted 03 November 2010 - 10:17 AM


In this tutorial, I will cover recursion from a data structures perspective. I will begin by discussing recursion and how it is handled on a call stack. I will also discuss modeling recursive algorithms with Trees, as well as the recursive nature of Trees themselves, and how we can take advantage of this recursion when working with them.

Part I- Stacks
While it isn't necessary to understand exactly how Stacks work to understand recursion and implement recursive solutions, it helps when modeling a recursive solution to understand the order in which the calls will be evaluated. Remember- method calls in Java are evaluated based on their ordering on the Call Stack.

So let's get started. What is a Stack? Basically, it is a subset of the List structure which enforces last-in, first-out (LIFO) ordering. Stacks only expose three functions to enforce this. They are:
• peek()- View the element on the top of the Stack (the last element added)
• push()- Add an element to the top of the Stack
• pop()- Remove an element from the top of the Stack
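
These three operations map directly onto java.util.ArrayDeque, which the Java library recommends for stack use. A minimal sketch (the class name and sample strings are my own, for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        // ArrayDeque used as a stack: push/pop/peek all operate on the "top"
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");   // first element in
        stack.push("second");
        stack.push("third");   // last element in

        System.out.println(stack.peek()); // third- last in, first out
        System.out.println(stack.pop());  // third (now removed)
        System.out.println(stack.pop());  // second
        System.out.println(stack.pop());  // first
    }
}
```

Notice that peek() leaves the top element in place, while pop() removes it- the elements come back out in the reverse of the order they went in.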

Now that we've gone over what a Stack is, let's look at a non-recursive example to better understand how the Call Stack evaluates method calls.
void a(){
    b();
    c();
}

void b(){
    d();
}

void c(){}
void d(){}



So above, we see a bunch of methods, some of which call each other. Let's start by invoking a(). Immediately, the invocation of a() is pushed onto the Call Stack as the first element. In order to pop a() from the Call Stack, we must finish evaluating it. However, its first command is to invoke method b(), which is now pushed onto the Call Stack, followed by method d() as we try to evaluate b(). After d() is evaluated, it is popped from the Call Stack, followed immediately by method b(). We now go back to method a() to finish evaluating it, which involves pushing c() onto the Call Stack, and popping it off as it has no commands of its own to push onto the Call Stack. Finally, we can pop a() from the Call Stack, and as the Stack is now empty, we are finished.
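
One way to watch this ordering is to instrument each method with trace output. A sketch (the CallOrder class and the "push"/"pop" labels are my own additions for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class CallOrder {
    // records the order methods enter ("push") and finish ("pop")
    static List<String> trace = new ArrayList<>();

    static void a() { trace.add("push a"); b(); c(); trace.add("pop a"); }
    static void b() { trace.add("push b"); d(); trace.add("pop b"); }
    static void c() { trace.add("push c"); trace.add("pop c"); }
    static void d() { trace.add("push d"); trace.add("pop d"); }

    public static void main(String[] args) {
        a();
        // matches the walkthrough: a, b, and d are pushed; d and b pop;
        // then c is pushed and popped; finally a pops
        System.out.println(String.join(", ", trace));
        // push a, push b, push d, pop d, pop b, push c, pop c, pop a
    }
}
```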

This is exactly how recursion works- except that the method call we are pushing onto the Call Stack is the method we are in. Recursive calls are ended through something called a base case. This is the point where the top call on the Call Stack is evaluated to completion, and then popped from the Stack. This usually affects the previous method calls, allowing them to finish evaluating and be popped off the Call Stack. This occurs in a domino effect until the first call is reached and finishes evaluating.

Let's take a look at an example of how recursion is evaluated on the Call Stack with the classic factorial example.
public int factorial(int n){
    if(n <= 1)
        return 1;

    return n*factorial(n-1);
}



Here, our base case is n <= 1 because that is the point where no more recursive calls are pushed onto the Call Stack, and the previous factorial() calls can start being popped off. In other words, everything starts to come together when we hit the base case of n <= 1. Our recursive call is the invocation of factorial(n-1). So as long as n > 1, we push another call to factorial(), passing n-1.

So if we invoke factorial(3), that is immediately pushed onto the Call Stack. Then we push factorial(2), followed by factorial(1). However, once we reach factorial(1), we hit the base case. As 1 is returned, we have finished evaluating factorial(1), so that is popped off the Call Stack. At this point, everything starts to come together, as popping factorial(1) allows factorial(2) to finish evaluating and be popped from the Call Stack. In turn, this allows factorial(3) to finish evaluating and be popped from the Call Stack. So really, recursion isn't that much different from non-recursive method calls. If you don't feel comfortable with recursion, this is a good way to visualize and conceptualize it.
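
This push/pop sequence can be made visible by instrumenting factorial() the same way as before. A sketch (the trace output is my own addition):

```java
public class FactorialTrace {
    static int factorial(int n) {
        System.out.println("push factorial(" + n + ")");
        int result;
        if (n <= 1) {
            result = 1;                  // base case: nothing more is pushed
        } else {
            result = n * factorial(n - 1); // recursive call pushed here
        }
        // reaching this line means the call is fully evaluated and can pop
        System.out.println("pop factorial(" + n + ") -> " + result);
        return result;
    }

    public static void main(String[] args) {
        // prints pushes for n = 3, 2, 1, then pops in reverse order
        System.out.println(factorial(3)); // 6
    }
}
```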
_________________________________________

Part II.A- Modelling Recursive Solutions with Trees

In this part, I will cover recursion and trees. This is a fairly broad topic, including modeling recursive solutions with trees, understanding how these solutions are evaluated on the Call Stack, and the recursive nature of trees.

Before we delve into this section, I feel it is important to first give a brief introduction to the data structure we are using. Note that this is not a tutorial on Trees from the ground up, but our focus is instead looking at an attribute of this data structure.

Linked List:
Head Node --> Node --> Node --> Node

Tree:
                 Root
              /   |    \
          Node   Node   Node
           |     /  \      \
          Node Node Node   Node
          /  \
       Node  Node



The Tree structure has a few key points to notice. For one, each Node has a varying number of children Nodes. As you can see, the Tree also has varying depths depending on the path followed.

So what types of Recursive algorithms are Trees appropriate to model? Trees are good structures to model divide and conquer recursive algorithms with multiple recursive calls in the method body. Let's go ahead and walk through an example of modeling recursive solutions with a Mergesort algorithm.

So to begin, let's examine some pseudo-code and discuss the algorithm.
mergesort(x){

    if length(x) <= 1
        return

    else if length(x) == 2
        if x[0] > x[1]
            swap(x[0], x[1])
        return

    left <-- elements in x from 0 up to (but not including) length(x)/2
    right <-- elements in x from length(x)/2 through the end of x

    mergesort(left)
    mergesort(right)

    merge(left,right) into x
}



Basically, Mergesort is a divide and conquer algorithm with a recursive implementation. Given some list x, we divide it up into two sub-lists, left and right, and then recursively sort each until we hit one of two base cases: we only have one element or we have two elements. If we have one element, the list is sorted, and if we have two elements, we may need to swap them. The merge() function is used to place the elements in left and right into x in proper order.
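
As a concrete sketch, here is one possible Java translation of the pseudo-code above. The merge() helper is my own two-pointer implementation, since the pseudo-code leaves it unspecified:

```java
import java.util.Arrays;

public class MergesortDemo {
    public static void mergesort(int[] x) {
        if (x.length <= 1) return;           // base case: already sorted
        if (x.length == 2) {                 // base case: swap if needed
            if (x[0] > x[1]) { int t = x[0]; x[0] = x[1]; x[1] = t; }
            return;
        }
        int mid = x.length / 2;
        int[] left = Arrays.copyOfRange(x, 0, mid);          // first half
        int[] right = Arrays.copyOfRange(x, mid, x.length);  // second half

        mergesort(left);   // two recursive calls, so at most
        mergesort(right);  // two children per Node in the Tree model

        merge(left, right, x);
    }

    // place the elements of the sorted halves back into x in order
    private static void merge(int[] left, int[] right, int[] x) {
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            x[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length)  x[k++] = left[i++];
        while (j < right.length) x[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] data = {2, 3, 1, 5, 7, 6, 4}; // the array used in the Tree model
        mergesort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 4, 5, 6, 7]
    }
}
```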

So now that we understand how Mergesort works, let's model it with Trees, given the array [2,3,1,5,7,6,4]. As we are starting with the unsorted array, it makes sense that this is our root Node, or starting point. As we have two recursive calls in Mergesort, it makes sense that we will have no more than two children Nodes per Node. Now, when we recurse and invoke mergesort() on left or right, the sub-list we have created is another Node in our Tree model. Finally, we get down to our base cases on the third level. Notice how, just like on the Call Stack where we don't push any more recursive calls, our base-case Nodes don't have any children.
                  [2,3,1,5,7,6,4]
                 /left          \right
          [2,3,1]                [5,7,6,4]
         /left  \right          /left    \right
     [2,3]      [1]         [5,7]        [6,4]



So how exactly would we evaluate a Tree model on the Call Stack? Basically, we start at the Root Node. From there, two means of traversal exist for Trees: evaluate each of the root's children Nodes before evaluating the grandchildren Nodes, or evaluate each child Node completely (ie., all of its nth-generation descendants) before evaluating the next child Node. The first is called a breadth-first search, and the latter a depth-first search. Tree models are evaluated on the Call Stack using a depth-first traversal. So we start off with our root Node, then go to the first child, [2,3,1] in our example. We then evaluate [2,3,1]'s children Nodes. Note that as [2,3] doesn't have any children, it is popped off the Call Stack immediately after it is evaluated; then [1] is pushed onto the Call Stack and popped the same way. We can then pop [2,3,1] off the Call Stack, push [5,7,6,4] onto it, and evaluate it in the same way. When we get back to the root Node and pop it off the Call Stack, we will have completed the Mergesort routine on the original array.
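
The two traversal orders can be sketched with an explicit stack and queue. The Node class below is hypothetical, built just to mirror the Mergesort model Tree:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class TraversalOrder {
    // a minimal node: a label plus an ordered list of children
    static class Node {
        String label;
        List<Node> children;
        Node(String label, Node... children) {
            this.label = label;
            this.children = Arrays.asList(children);
        }
    }

    // depth-first: a stack, so the most recently discovered Node is expanded next
    static String depthFirst(Node root) {
        StringBuilder order = new StringBuilder();
        Deque<Node> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            order.append(n.label).append(' ');
            // push children in reverse so the leftmost child is expanded first
            for (int i = n.children.size() - 1; i >= 0; i--)
                stack.push(n.children.get(i));
        }
        return order.toString().trim();
    }

    // breadth-first: a queue, so each generation finishes before the next starts
    static String breadthFirst(Node root) {
        StringBuilder order = new StringBuilder();
        Deque<Node> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            Node n = queue.poll();
            order.append(n.label).append(' ');
            queue.addAll(n.children);
        }
        return order.toString().trim();
    }

    public static void main(String[] args) {
        // the Mergesort model Tree from above
        Node root = new Node("[2,3,1,5,7,6,4]",
                new Node("[2,3,1]", new Node("[2,3]"), new Node("[1]")),
                new Node("[5,7,6,4]", new Node("[5,7]"), new Node("[6,4]")));
        System.out.println("DFS: " + depthFirst(root));
        System.out.println("BFS: " + breadthFirst(root));
    }
}
```

The depth-first order matches the Call Stack walkthrough above ([2,3,1] and both its children come before [5,7,6,4]); the breadth-first order visits both halves before any of the base cases.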
________________________________________

Part II.B- Recursive Nature of Trees

The last two sections focused more on understanding how recursion works. This section deals more heavily with Trees and the uses of recursion when working with them.

In the last section, we touched a little on the recursive nature of Trees. In this section, we will look at this in more detail. So what exactly makes Trees recursive? Basically, a Tree is defined by a Root Node with 0 or more Children Nodes. So if we want to get a sub-tree, we basically go down to a child Node. Hence, for the new tree, the Child Node is the new Root Node. So in short, we have a recursive definition.

As we also discussed in the last section, Trees are very similar to Linked Lists. So why exactly should we use recursion with Trees when iteration works just as well with Linked Lists? The reason is that each Node can have a variable number of children Nodes, which allows for varying breadth and depth of the Tree. So a recursive solution allows us to focus on the definition of a Tree: a root Node with n children.

Let's examine this concept with a depth-first search, something we had touched on in the last section with Tree Models of Algorithms. As a refresher, a depth-first search focuses on evaluating all the Children of a single Child Node before proceeding to the next Child Node of the same generation/depth. Hence, we traverse deeper and deeper in the Tree until we hit the bottom. I will be using the TreeNode class from my Tree Data Structure Snippet, with a small excerpt below for reference.
import java.util.List;

public class TreeNode<E> {

    private E value;

    //children nodes
    private List<TreeNode<E>> connections;

    public TreeNode<E> get(int index){
        return connections.get(index);
    }

    public int numberChildren(){return connections.size();}
}



Now, onto the depth-first traversal. What this snippet does is find the maximum depth of the Tree using a recursive depth-first traversal. More important than the specific task, though, is the concept of utilizing the recursive definition of Trees to traverse them. To start the traversal, I would invoke the depthFirst() method, passing the root Node of the Tree I want to traverse. If we pass no Node (null) or a Node with no children, there isn't anything to traverse, so we have hit our base case and the bottom of the Tree.

Basically, what we are doing is comparing the depths of the subtrees rooted at each child, and returning the largest of them. As we go deeper within a subtree, we encounter an increasing number of subtrees- a perfect example of divide-and-conquer (divide up the Tree into subtrees, and get their depths to get the depth of the whole Tree). So to find the depth of Tree 1, we compare the depths of Trees 2 and 3. To find the depth of Tree 2, we compare the depths of Trees 4, 5, and 6. As the name of the algorithm suggests, in order to compare the depth of Tree 2 to the depth of Tree 3, we need to completely evaluate Tree 2's depth first. So Trees 4, 5, 6, 7, 8, and 9 will all be evaluated before we even look at Tree 3.
//the Tree to examine

          1
        /   \
       2     3
     / | \
    4  5  6
    |    / \
    7   8   9
public int depthFirst(TreeNode<E> root){

    //if somehow we don't have a Node to examine
    //or the Node has no children, then we are done examining
    //this particular Tree
    if(root == null || root.numberChildren() == 0)
        return 0;

    int max = 0; //depth of the deepest subtree found so far

    //now repeat this process for
    //each child Node of root, keeping the largest depth
    for(int i = 0; i < root.numberChildren(); i++){
        max = Math.max(depthFirst(root.get(i)), max);
    }

    //now add a tally for this Node's level and return,
    //so the previous call has a value to compare against
    return 1 + max;
}
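
To try the traversal out end-to-end, here is a self-contained sketch. The TreeNode below is a minimal stand-in for the excerpt above, with an addChild() mutator added (the excerpt omits the mutators), building the nine-Node Tree from the diagram:

```java
import java.util.ArrayList;
import java.util.List;

public class DepthDemo {
    // minimal stand-in for the tutorial's TreeNode; addChild() is assumed here
    static class TreeNode<E> {
        E value;
        List<TreeNode<E>> connections = new ArrayList<>();
        TreeNode(E value) { this.value = value; }
        TreeNode<E> addChild(E v) {
            TreeNode<E> child = new TreeNode<>(v);
            connections.add(child);
            return child;
        }
        TreeNode<E> get(int i) { return connections.get(i); }
        int numberChildren() { return connections.size(); }
    }

    // same logic as the depthFirst() snippet above
    static <E> int depthFirst(TreeNode<E> root) {
        if (root == null || root.numberChildren() == 0)
            return 0;
        int max = 0;
        for (int i = 0; i < root.numberChildren(); i++)
            max = Math.max(depthFirst(root.get(i)), max);
        return 1 + max;
    }

    public static void main(String[] args) {
        // build the Tree from the diagram: 1 -> {2 -> {4 -> 7, 5, 6 -> {8, 9}}, 3}
        TreeNode<Integer> root = new TreeNode<>(1);
        TreeNode<Integer> two = root.addChild(2);
        root.addChild(3);
        TreeNode<Integer> four = two.addChild(4);
        two.addChild(5);
        TreeNode<Integer> six = two.addChild(6);
        four.addChild(7);
        six.addChild(8);
        six.addChild(9);

        // longest paths (1-2-4-7 and 1-2-6-8) have three edges
        System.out.println(depthFirst(root)); // 3
    }
}
```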




## Replies To: Data Structures: Recursion, Stacks, and Trees

### #2 toshiro

Reputation: 21
• Posts: 137
• Joined: 27-June 09

Posted 03 November 2010 - 09:43 PM

Just one small detail.

You give a very comprehensive overview of the main structures and their object-oriented characteristics. But I think a section on pointers and their powerful properties needs to be addressed somewhere under the sections about stacks and their linked-list brethren. Java does a poor job of expressing what a pointer really is: a reference to a memory location that points to a series of other contiguous or fragmented memory locations containing a piece of data, or maybe an abstract object construct. This is a critical point that students come into contact with in 100-level data structures courses past the AP exam and Intro to CS. For example, my data structures course was taught in C++ to better analyze pointers and their associated caveats: memory leaks, dangling pointers, and illegal modification, whereas the abstraction Java provides was advantageous in the intro to CS course.

### #3 macosxnerd101

• Self-Trained Economist

Reputation: 10094
• Posts: 37,164
• Joined: 27-December 08

Posted 04 November 2010 - 05:55 AM

I thought pointers were covered in the intro class? They were for me last year in AP CS, so I went on the assumption that my audience was familiar with them. Pointers in Java seem like a topic for a different tutorial to cover; and honestly, I was more focused on the recursion aspects of these data structures vs. covering pointers.