2 Replies - 3348 Views - Last Post: 15 January 2013 - 05:24 PM

#1 MitulP91

  • D.I.C Head
  • Reputation: 1
  • Posts: 64
  • Joined: 18-July 12

Time Complexity of Euclid's Algorithm

Posted 15 January 2013 - 11:09 AM

Hello everyone,

I have been given an assignment to code and then find the time complexity of Euclid's algorithm. The code is as follows:

int gcd(int m, int n) {
    int r;
    while (n != 0) {
        r = m % n;
        m = n;
        n = r;
    }
    return m;
}

I had no issues with that part. However, I have no idea how to go about finding the worst-case and average-case efficiencies.
I tried looking online for some information, but it is all flying over my head.

Hopefully someone here can help.


Replies To: Time Complexity of Euclid's Algorithm

#2 mojo666

  • D.I.C Addict
  • Reputation: 408
  • Posts: 882
  • Joined: 27-June 09

Re: Time Complexity of Euclid's Algorithm

Posted 15 January 2013 - 04:29 PM

You could set up an experiment: put a counter in the while loop and print it when the loop finishes (you could even return the counter instead of m). Then try all combinations of m and n in a range and print the worst case and the average.

#3 macosxnerd101

  • Games, Graphs, and Auctions
  • Reputation: 12305
  • Posts: 45,401
  • Joined: 27-December 08

Re: Time Complexity of Euclid's Algorithm

Posted 15 January 2013 - 05:24 PM

The Wikipedia page does a good job of explaining this, I think. Putting it in plainer terms: consider gcd(a, c), where a > c, and let h be the number of digits in c. Then the algorithm is O(h). Another way of saying this is that the algorithm is O(log10(c)). Try it out: if c = 999 (the largest 3-digit number), then log10(999) = 2.9996, or roughly 3. And indeed there are 3 digits in 999, so h = 3.

Cite: http://en.wikipedia....idean_algorithm


If the Euclidean algorithm requires N steps for a pair of natural numbers a > b > 0, the smallest values of a and b for which this is true are the Fibonacci numbers F_{N+2} and F_{N+1}, respectively.[91] This can be shown by induction.[92] If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true are b = 1 and a = 2, which are F_2 and F_3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q_0·b + r_0, and the second step is b = q_1·r_0 + r_1. Since the algorithm is recursive, it required M − 1 steps to find gcd(b, r_0), and their smallest values are F_{M+1} and F_M. The smallest value of a is therefore when q_0 = 1, which gives a = b + r_0 = F_{M+1} + F_M = F_{M+2}. This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory,[93] and also the first practical application of the Fibonacci numbers.[91]

This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10).[94] For if the algorithm requires N steps, then b is greater than or equal to F_{N+1}, which in turn is greater than or equal to φ^{N−1}, where φ is the golden ratio. Since b ≥ φ^{N−1}, then N − 1 ≤ log_φ(b). Since log10(φ) > 1/5, (N − 1)/5 < log10(φ)·log_φ(b) = log10(b). Thus, N ≤ 5·log10(b). Thus, the Euclidean algorithm always needs less than O(h) divisions, where h is the number of digits in the smaller number b.

