Hi everyone, I have been given this assignment. I am not sure which route to take, but I think the algorithm needed would resemble a heap sort. Please correct me if I am wrong. The question goes:

You are given a set of n distinct numbers and you would like to know whether, given a number x, there are two elements of the set that sum to x. Describe an algorithm that runs in O(n lg n) time that solves this problem, explaining why your algorithm works.

# Sorting

## 11 Replies - 1290 Views - Last Post: 10 April 2010 - 07:17 PM

**Replies To:** Sorting

### #2

## Re: Sorting

Posted 04 April 2010 - 05:04 PM

This is actually a restricted form of the knapsack (subset-sum) problem, not just a sorting exercise. One way to solve it would be to sort and iterate, though that wouldn't be the most efficient solution.

I've actually written a snippet to solve a less-restricted version of this problem, if you want to check it out. It tests to see if, given an int[] and a target, the target can be formed from the elements in the array.

Link: http://www.dreaminco...snippet5011.htm
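The linked snippet isn't reproduced here, but a subset-sum check of that general kind might be sketched as follows (a hypothetical recursive version, not the actual linked code):

```python
def can_form_target(nums, target):
    """Return True if some subset of nums sums to target (subset-sum).

    A sketch only: plain recursion, exponential in the worst case,
    which is why this approach can't be claimed to run in O(n log n).
    """
    def helper(i, remaining):
        if remaining == 0:
            return True          # found a subset summing to target
        if i == len(nums):
            return False         # ran out of elements
        # Either include nums[i] in the sum, or skip it.
        return helper(i + 1, remaining - nums[i]) or helper(i + 1, remaining)

    return helper(0, target)
```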

### #3

## Re: Sorting

Posted 04 April 2010 - 06:25 PM

You can split this problem into two parts,

First part is sorting the numbers which takes O(nlogn).

Second part is matching. For each element a[i], compute x - a[i] and binary-search the sorted array for it, which costs O(log n) per lookup; take care not to match a[i] with itself when x = 2·a[i]. Doing this for all n elements takes O(n log n), so the whole algorithm runs in O(n log n).

Knapsack is more general than this approach but maybe not as simple to understand. Also, knapsack is pseudopolynomial so you may not want to claim that it runs in O(nlogn).
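The two-part approach above can be sketched like this (in Python; the function name is illustrative):

```python
from bisect import bisect_left

def has_pair_with_sum(nums, x):
    """Return True if two distinct elements of nums sum to x.

    Sorting costs O(n log n); each of the n binary searches costs
    O(log n), so the whole algorithm is O(n log n).
    """
    a = sorted(nums)                        # part 1: sort
    for i, v in enumerate(a):
        j = bisect_left(a, x - v)           # part 2: binary search for x - v
        # j != i guards against pairing an element with itself
        # when x happens to equal 2 * v.
        if j < len(a) and a[j] == x - v and j != i:
            return True
    return False
```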

### #4

## Re: Sorting

Posted 04 April 2010 - 06:26 PM

If you really wanted to go the sorting route, you could use Radix Sort, which operates in O(n), and search for your target using binary search which takes O(log(n)) time or fibonacci search which takes O(log(x)) time, where x is the quantity of fibonacci numbers you are using. So you've now got the solution down to O(n).

### #5

## Re: Sorting

Posted 04 April 2010 - 06:44 PM

Radix sort is O(n*k), where k can be arbitrarily large, and in the worst case k >> n. I'd also like to know how you accomplish the rest in one run using Fibonacci search. You still need to run it n times, right?

### #6

## Re: Sorting

Posted 04 April 2010 - 06:55 PM

We've got it down to O(n) (or O(kn)) using Radix Sort and Fibonacci search vs. O(n log(n)) with Quicksort or Heapsort and binary (or Fibonacci) search. Worst case, depending on k, an n log(n) algorithm may be more efficient, but that is mostly for small collections with small number sizes.

Quote

I'd also like to know how you accomplish the rest using one run using fibonacci search.

One run? Fibonacci search works very similarly to binary search, except it uses the Fibonacci numbers to narrow down the search range, making it O(log(x)) vs. O(log(n)), so slightly more efficient.
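For reference, a Fibonacci search over a sorted array might look like the sketch below. Note that, like binary search, it still makes O(log n) probes in the array size; the difference is that probe points come from Fibonacci numbers rather than midpoints.

```python
def fibonacci_search(a, x):
    """Return the index of x in sorted list a, or -1 if absent.

    Probe points are chosen using Fibonacci numbers instead of
    midpoints; asymptotically it is in the same class as binary search.
    """
    n = len(a)
    fib2, fib1 = 0, 1            # F(m-2), F(m-1)
    fib = fib1 + fib2            # grow until F(m) >= n
    while fib < n:
        fib2, fib1 = fib1, fib
        fib = fib1 + fib2
    offset = -1                  # end of the eliminated prefix
    while fib > 1:
        i = min(offset + fib2, n - 1)
        if a[i] < x:             # discard the prefix up to i
            fib, fib1, fib2 = fib1, fib2, fib1 - fib2
            offset = i
        elif a[i] > x:           # discard the suffix from i on
            fib, fib1, fib2 = fib2, fib1 - fib2, 2 * fib2 - fib1
        else:
            return i
    if fib1 and offset + 1 < n and a[offset + 1] == x:
        return offset + 1        # one candidate may remain
    return -1
```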

### #7

## Re: Sorting

Posted 04 April 2010 - 10:04 PM

macosxnerd101, on 04 April 2010 - 04:04 PM, said:

This is actually the knapsack problem, not a sort. One way to solve this problem would be to sort and iterate, though it wouldn't be the most efficient solution.

I've actually written a snippet to solve a less-restricted version of this problem, if you want to check it out. It tests to see if, given an int[] and a target, the target can be formed from the elements in the array.

Link: http://www.dreaminco...snippet5011.htm

Thanks macosxnerd101, it seems to do the job, though not in O(n log n) time; perhaps in O(n) time. I'll probably try gloin's method and see if it works.

### #8

## Re: Sorting

Posted 05 April 2010 - 03:44 AM

Actually, macosxnerd101's method is faster (O(n) is better than O(n log n)), but I would argue that he can't prove it runs in O(n) for every problem instance. In particular, it's not what your teacher is asking for in this case.

### #9

## Re: Sorting

Posted 05 April 2010 - 04:41 PM

Radix sort is linear in terms of the total number of input bits. It compares individual bits or pieces of keys, rather than keys in their entirety (like quicksort, for example). It is still essentially an "n log n" algorithm, because the number of bits per key must be at least log n in order to accommodate n distinct keys of input.

So if you're going to compare radix sort with the usual sorts like heapsort or mergesort, using the same criteria to denote "n", they will all have "n log n" complexity.

This post has been edited by **Galois**: 05 April 2010 - 04:43 PM

### #10

## Re: Sorting

Posted 08 April 2010 - 06:48 PM

As we are comparing Radix sort to comparison-based sorting algorithms, the Big-O notation describes the number of comparisons. Since a single comparison takes up to k time, Radix becomes linear in k, and comparison sorts like Mergesort and Quicksort become O(kn log(n)).

### #11

## Re: Sorting

Posted 09 April 2010 - 09:06 AM

When you write Radix as O(kn), "k" does not stand for the time of the fastest comparison. It stands for the number of bits of the largest number. Here is another way of saying what I said in my previous post:

Radix sort starts by taking the first bit of all numbers and putting it in some bucket. This procedure is O(n). Then the radix sort does the same thing for the second bit, third bit, etc. How many times is it going to do this? The number of bits in the largest number. For simplicity, just assume distinct numbers are chosen as n increases. Then the largest number after n choices will be *at least* n, which has log n bits.

If you constrain all your arithmetic to a fixed number of bits, then you can let k be a constant. As far as theory and big-O are concerned, Radix sort is still essentially an O(n log n) algorithm.
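A bit-at-a-time LSD radix sort makes the role of k concrete: one stable pass per bit, k passes in total (a sketch, assuming non-negative integers):

```python
def radix_sort_bits(nums, key_bits):
    """LSD radix sort of non-negative ints, one stable pass per bit.

    Total work is O(k * n) for k = key_bits. With n distinct keys,
    k must be at least log2(n), hence the "n log n" reading above.
    """
    for bit in range(key_bits):
        # Stable partition on the current bit: zeros keep their
        # relative order, then ones keep theirs.
        zeros = [v for v in nums if not (v >> bit) & 1]
        ones = [v for v in nums if (v >> bit) & 1]
        nums = zeros + ones
    return nums
```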

### #12

## Re: Sorting

Posted 10 April 2010 - 07:17 PM

I'll concede here that, theoretically, Radix Sort is arguably an O(n log(n)) algorithm. However, in practice, if you recorded the number of steps for sorting against collection size and performed a linear regression, you would get a strong, positive, linear correlation, with r >= 0.85 at least.
