The best way to introduce you to data flow modelling is to throw you in at the deep end!
So without further ado, let's consider an example. We will take a very simple model that adds two numbers together to produce a result. Traditional programming paradigms state that in order to compute the result of an addition x + y = z, the x and y components need to be known before the computation can produce the result. Furthermore, the values of x and y need to be stable at the time the computation is performed, otherwise the result would not be predictable. So far so good! What would happen, though, if the value of x was known but the value of y was not? Obviously, the computation could not be completed.
Now let's imagine that the values of x and y fluctuate continuously, which would mean that the result z also fluctuates continuously. This scenario is not possible to simulate directly using conventional models, so we would have to resort either to sampling at set periods of time or to hardware interrupts. So long as the model was fast enough, we could program a solution that appears to be continuous (an example of this would be the movement of the mouse cursor, where x and y represent the x- and y-coordinates reported by the mouse and z is the position at which the cursor is actually displayed).
The Fibonacci Sequence
To take us one step closer to appreciating where we are heading with all of this, let's extend our simple addition model to deal with the Fibonacci numbers. Let's write down the steps we need to take.
1. Set x = 0 (initialize values)
2. Set y = 1
3. Add x and y to give z (compute result)
4. Set x = y (fix values)
5. Set y = z
6. Go to step 3 (repeat operation)
Once this code has been initiated at step 1, it will carry on indefinitely.
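The steps above translate almost line-for-line into conventional code. In this sketch the only liberty taken is a `count` parameter (my addition) so that the loop terminates instead of carrying on indefinitely:

```cpp
// Direct translation of the six steps into C++; `count` bounds the
// otherwise-infinite loop so the function can return a result.
int fibonacci(int count) {
    int x = 0;          // step 1: initialize values
    int y = 1;          // step 2
    int z = 0;
    for (int i = 0; i < count; ++i) {
        z = x + y;      // step 3: compute result
        x = y;          // step 4: fix values
        y = z;          // step 5
    }                   // step 6: repeat
    return z;
}
```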
Now, let's look at the problem in a slightly different light by imagining the data flows between a set of objects where each object performs a specific task. To implement the sequence of steps would entail the following:
1. Latch a zero into multiplexer one port A; latch a one into multiplexer two port A.
2. Multiplexers select port A as output (internally changing state to output port B on subsequent ticks); multiplexer two's output feeds into multiplexer one port B.
3. The adder takes its inputs from multiplexers one and two and produces a result that feeds into multiplexer two port B.
The following diagram shows this data flow.
Traditional versus Data Flow
In order to clarify what all this is about (my apologies; I sometimes forget how far off the beaten track this concept is), let me try to explain. Let's take a simple sequence of steps.
1. User inputs data into a field called x
2. Using x as a parameter, a function is called to manipulate x into some other value y
3. Using y as a parameter, a function is called to manipulate y into some other value z
4. Value is output
At the end of this sequence of processing, control is returned to the top so that the user can input another value for processing.
Data flow differs in that each step is repeated independently of the other steps. The output from step 1 is placed as the input to step 2, leaving step 1 free to accept another input. Similarly, the output from step 2 is placed as the input to step 3, leaving step 2 free to process the next output from step 1. This is repeated for each of the steps. Data cascades between the processing steps, ultimately resulting in all steps being active at the same time.
Now you may think that this all seems a little odd, verging on the insane, but trust me, there are situations where this approach provides a simpler solution to a problem than conventional techniques. Process control systems and modular software synthesizers are but two examples where data flow is preferable to conventional methodologies.
An implementation example of this technique can be found in the C++ Tutorials here.
This post has been edited by Martyn.Rae: 15 April 2011 - 10:14 PM