Complexity Classes P and NP

Posted 01 January 2015 - 10:53 PM


See the attached LaTeX PDF for a typeset version of this tutorial.
Attached File: P_NP.pdf (178.23K)

Introduction
In this tutorial, I will introduce the complexity classes P and NP. Complexity theory is the study of how efficiently a problem can be solved. While there are many more complexity classes beyond P and NP, these are the two most commonly studied. This tutorial will address three points of interest: defining the classes P and NP, formalizing the notion of what constitutes a hard problem, and proving a problem is in one of these classes or hard for one of these classes.

While a deep understanding of computability theory is not necessary, some familiarity with the concept of a Turing Machine is useful, as the complexity classes are defined in terms of Turing Machines. A separate tutorial covers how Turing Machines work from a practical standpoint. I will gloss over the details of Turing Machines as appropriate here, so you will be able to follow along without much background on them.


Defining the Classes P and NP
If you are familiar with the question Does P = NP?, then the chance to win $1 million may be motivation enough to study this topic. For those not familiar with the problem, this section explores what it means and seeks to build a better understanding of it. Let's start with formal definitions of the classes P and NP.

P: A language L is in the complexity class P if and only if L can be decided by a deterministic Turing Machine in time polynomial in the size of the input.

NP: A language L is in the complexity class NP if and only if there exists a non-deterministic Turing Machine that, for every string w in L, can verify that w is in L in time polynomial in |w|. Note that there are no false positives; that is, if w is not in L, the Turing Machine will never accept w.

These definitions are somewhat clunky and may not make much sense right now, so let's unpack them and develop some intuition. The Church-Turing Thesis conjectures that there is no model of computation more powerful than the Turing Machine, so we equate the informal concept of an algorithm with the formal notion of a Turing Machine. That is, the question of whether a suitable Turing Machine exists can be answered by exhibiting an algorithm.

Notice that in both P and NP, the problem is string membership in a language (e.g., is "word" in the English language?). That is, we are asking the question: is w in L? This is a yes or no question. So our Turing Machine (read: algorithm) takes the string as input. If the Turing Machine halts on w and accepts it, then w is in the language L. If the Turing Machine halts on w and rejects it, we conclude that w is not in L. There is a third option as well: the Turing Machine may not halt on w. This can only happen if w is not in L, but we cannot tell, in general, whether an arbitrary Turing Machine will ever halt. So this third option prohibits us from drawing a conclusion.

In the context of the Turing Machine model, languages are the problems we seek to solve. So for example, consider the problem of determining whether a graph has a Hamiltonian circuit. The corresponding language would then be: L_HC = { G : G is a graph that has a Hamiltonian Circuit }. We would then ask if a graph H is in L_HC. In other words, does H have a Hamiltonian Circuit?

So P and NP are sets of decision problems. The problems in P can all be answered correctly in polynomial time, while for the problems in NP, only correct answers of "yes" are guaranteed to be verifiable in polynomial time. So if L_HC is in NP and a graph G is in L_HC, then an algorithm can verify that G has a Hamiltonian Circuit, provided said algorithm is given both G and a Hamiltonian Circuit in G.

The Hamiltonian Circuit problem is actually in the class NP. Some other problems in NP include the Knapsack problem, the Minimum Spanning Tree problem, and the Shortest Path problem (in their decision versions). Incidentally, the Minimum Spanning Tree and Shortest Path problems are also in P, as we have polynomial time algorithms to solve them. From the definition of P (and the examples provided), we can see that P is a subset of NP. We don't know whether NP is contained in P, though. So while we don't know whether P = NP, it is widely believed that P != NP. The importance of the P = NP problem will become clearer in the next section, where the concept of a computationally hard problem is formalized.


(N)P-Hard and (N)P-Complete Problems
In this section, the notion of a computationally hard problem will be formalized, providing the context for why the P = NP problem is important.

Consider two problems H and J, which need not be decision problems. Problem J is at least as hard as problem H if any algorithm to solve J can also be used to solve H. We denote this by H <= J, and we say that H can be reduced to J. Notice that this defines an ordering on problems by difficulty. When working within a particular complexity class, such a reduction usually carries constraints, such as being computable in polynomial time or in a logarithmic amount of space.

Such a reduction would be a function f : H -> J, which transforms each instance of H into an instance of J. The transformation procedure also must satisfy the constraints imposed by the complexity class.
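
To see the role a reduction plays computationally, note that composing the transformation f with any solver for J yields a solver for H. Here is a minimal Python sketch of that idea; the names f and solve_J are placeholders for an arbitrary reduction and solver, not any particular problem:

def solve_H_via_reduction(h_instance, f, solve_J):
    """Solve an instance of H by reducing it to J.

    f:       the reduction, mapping instances of H to instances of J.
    solve_J: any algorithm that solves J.
    """
    # Transform the H-instance into a J-instance, then solve that instance.
    return solve_J(f(h_instance))

If f respects the resource bound of the complexity class in question (polynomial time, logarithmic space, and so on), then solving H is essentially no harder than solving J.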

Now let's define the hardest problems in P and NP.

NP-Hard: A problem Q is NP-Hard if every X in NP is reducible to Q in polynomial time. We denote this as X <=p Q.

That is, given such a transformation f: X -> Q, for every x in X, f(x) is computable in polynomial time with respect to the size of the input.


Similarly, we define P-Hard problems:

P-Hard: A problem Q is P-Hard if for every X in P, X is reducible to Q in log-space. We denote this as X <=l Q.

That is, given such a transformation f : X -> Q, for every x in X, f(x) is computable using a logarithmic amount of space with respect to the size of the input.


Notice that P-Hard and NP-Hard problems need not be in the classes P and NP themselves. A good example is the Traveling Salesman optimization problem (TSPOPT), which is NP-Hard. As the class NP consists only of decision problems, TSPOPT is not in NP.

There are P-Hard and NP-Hard problems in the classes P and NP respectively, though. We call such problems P-Complete and NP-Complete. So an NP-Complete problem has three characteristics:
  • It is a decision problem.
  • A correct answer of "YES" can be verified in polynomial time, given a certificate.
  • All other problems in NP are reducible to our NP-Complete problem in polynomial time.


It follows that all NP-Complete problems are reducible to each other in polynomial time, so all NP-Complete problems are equally hard.

Similarly, P-Complete problems have three characteristics:
  • It is a decision problem.
  • It can be solved in polynomial time.
  • All other problems in P are reducible to our P-Complete problem in log-space.



So what is the relation between a hard problem for a complexity class and computational intractability? A problem is efficiently computable if it can be solved in polynomial time. That is, the problem is in P. Problems that are NP-Complete are believed to be computationally intractable, but we do not know this for certain. In other words, it is unknown if an NP-Complete problem is in P. If this were the case, then P = NP and any problem in NP could be solved in polynomial time. However, this is not believed to be the case, so NP-Complete and NP-Hard problems are generally believed to be computationally intractable.


Writing Complexity Proofs - NP
In this section, we look at writing NP-Completeness proofs and NP-Hardness proofs. These proofs can be a bit lengthy and hard to follow. So the goal is to really break down the proof structure to understand what needs to be shown and how to show each part.

Recall that the definition of NP-Complete states that an NP-Complete problem is in NP and is NP-Hard. To show that a problem is in NP, it suffices to exhibit a polynomial time verification algorithm. That is, the algorithm accepts some certificate and checks it against the given instance, running in polynomial time and accepting exactly when the certificate is valid. Looking back at the Hamiltonian Circuit problem, an example of a certificate would be a Hamiltonian Circuit contained in the graph. It would then be sufficient to construct an algorithm to verify that the provided certificate is indeed a Hamiltonian Circuit of the graph.

The second part of an NP-Completeness proof is to prove that the problem is NP-Hard. That is, it is necessary to show that every problem in NP is reducible in polynomial time to the NP-Hard problem. This is done in one of two ways. The first is by explicitly showing the existence of a reduction for every problem in NP to the problem in question. The second way is by taking an existing NP-Complete or NP-Hard problem and providing a polynomial time reduction to the problem in question. The composition of polynomial time reductions is computable in polynomial time, so this is a valid and much easier approach.

In order to be able to use a reduction, it is necessary first to have an NP-Complete problem. Stephen Cook proved the Boolean Satisfiability problem (SAT) to be NP-Complete, and Richard Karp used this result to prove 21 other problems NP-Complete. The proof that SAT is NP-Complete is quite involved and will not be covered in this tutorial. Instead, we demonstrate the reduction technique.

We begin with some conventions. Recall that NP-Complete problems are decision problems. Each problem has an instance, and a decision question. Think of the instance like a function parameter in computer programming. The problem name is also written in all capital letters. Let's look at the HAMILTONIAN CIRCUIT problem as an example:

HAMILTONIAN CIRCUIT
  • INSTANCE: Let G(V, E) be a simple graph.
  • DECISION: Does there exist a Hamiltonian Circuit in G?


We note that HAMILTONIAN CIRCUIT is NP-Complete, and we will use this to prove that the HAMILTONIAN PATH problem is NP-Complete. The HAMILTONIAN PATH problem is defined similarly to HAMILTONIAN CIRCUIT:

HAMILTONIAN PATH
  • INSTANCE: Let G(V, E) be a simple graph.
  • DECISION: Does there exist a Hamiltonian Path in G?


For convenience, I will abbreviate HAMILTONIAN CIRCUIT as HC, and HAMILTONIAN PATH as HP.

Proof: In order to show that HP is NP-Complete, it will be shown that HP is in NP, and HC <=p HP.

Claim 1: HP is in NP.

In order to show that HP is in NP, a polynomial time verification algorithm will be exhibited. Let G be the graph associated with the instance of HP, and let P be the certificate: a sequence of vertices claimed to be a Hamiltonian Path in G. The algorithm must verify that P is in fact a Hamiltonian Path. It suffices to check that P contains every vertex of G exactly once, that every edge of P is an edge of G, that P is connected, and that all but two vertices, v1 and vn, have degree 2 in P, while v1 and vn have degree 1. This can be checked using a breadth-first traversal of P, which can be done in polynomial time. And so HP is in NP.
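
As a concrete illustration of this verification step, here is a minimal Python sketch of such a verifier. The adjacency-set representation of the graph and the vertex-list representation of the certificate are my own choices; the check is phrased directly in terms of consecutive adjacencies, which is equivalent to the degree-based check above and also runs in polynomial time.

def verify_hamiltonian_path(graph, path):
    """Check in polynomial time that `path` is a Hamiltonian Path of `graph`.

    graph: dict mapping each vertex to the set of its neighbours.
    path:  list of vertices serving as the certificate.
    """
    # The path must visit every vertex of the graph exactly once.
    if len(path) != len(graph) or set(path) != set(graph):
        return False
    # Consecutive vertices on the path must be adjacent in the graph.
    return all(path[i + 1] in graph[path[i]] for i in range(len(path) - 1))

# Example: a path graph on four vertices.
G = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(verify_hamiltonian_path(G, [1, 2, 3, 4]))  # True
print(verify_hamiltonian_path(G, [1, 3, 2, 4]))  # False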


Showing that HC <=p HP is more involved. We have three main parts of the proof: constructing the reduction, arguing that the reduction can be computed in polynomial time, and arguing the validity of the reduction. Constructing the reduction is done by transforming the instance of HC into an instance of HP. To prove validity, we argue that there is an answer of YES to an instance of HC if and only if there is an answer of YES to the corresponding instance of HP (via the transformation). Let's now construct the polynomial time reduction from HC to HP.


Claim 2: HC is polynomial-time reducible to HP.

Let G be the graph associated with the instance of HC. From G, we construct a graph G' such that G has a Hamiltonian Circuit if and only if G' has a Hamiltonian Path. If G is a single vertex or edge, then G' = G and we are done. Otherwise, let x be a vertex of G. Let x' be a copy of x, and let all edges incident to x be copied so that they are incident to x'. That is, if {x, y} is an edge in G, then {x', y} is an edge in G'. Let G' contain all vertices and edges of G, as well as x' and all of these copied edges. Finally, add new vertices v, v' and edges {v, x}, {v', x'} to G'. As G is simple, x has at most |V(G)| - 1 adjacencies, so this operation takes O(|V(G)|) time. Thus, the reduction is polynomial in time.
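
Here is a sketch of this construction in Python. The adjacency-set dictionary and the labels "x'", "v", "v'" for the new vertices are my own choices for illustration; the trivial cases (a single vertex or a single edge) are omitted.

def hc_to_hp(graph, x):
    """Transform an instance of HC into an instance of HP.

    graph: dict mapping each vertex of the simple graph G to its set of neighbours.
    x:     a chosen vertex of G.
    Returns G' such that G has a Hamiltonian Circuit iff G' has a Hamiltonian Path.
    """
    g2 = {u: set(nbrs) for u, nbrs in graph.items()}  # start with a copy of G
    x_copy, v, v_prime = "x'", "v", "v'"              # new vertex labels (assumed unused in G)
    # x' is a copy of x: it receives the same neighbours that x has in G.
    g2[x_copy] = set(graph[x])
    for y in graph[x]:
        g2[y].add(x_copy)
    # Attach the pendant vertices v and v'.
    g2[v] = {x}
    g2[x].add(v)
    g2[v_prime] = {x_copy}
    g2[x_copy].add(v_prime)
    return g2

# A triangle has a Hamiltonian Circuit, so the transformed graph has a Hamiltonian Path.
G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(hc_to_hp(G, 1))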

The validity of the reduction will now be shown, by proving that G has a Hamiltonian Circuit if and only if G' has a Hamiltonian Path. Suppose G has a Hamiltonian Circuit. Start at v in G' and move to x. Then follow the Hamiltonian Circuit from G in G' (this exists, as G is a subgraph of G'). Rather than returning to x at the end, move to x' instead (the last vertex of the circuit is adjacent to x, hence also to x'). Then from x', move to v'. This yields a Hamiltonian Path in G'. Conversely, suppose G' has a Hamiltonian Path. Since v and v' have degree 1, they must be the endpoints of the path, so the path has the form v, x, ..., w, x', v', where w is adjacent to x' in G' and hence adjacent to x in G. Removing v, v', and x' and adding the edge {w, x} turns the path into a Hamiltonian Circuit in G. Hence, G has a Hamiltonian Circuit if and only if G' has a Hamiltonian Path. And so HC <=p HP, and HP is NP-Complete. QED.



So when we have two decision problems, the reduction is quite straightforward. I offer a second proof reducing HC to TSPOPT, the optimization version of the Traveling Salesman Problem. We define TSPOPT as follows.

TSPOPT
  • INSTANCE: Let G(V, E) be a complete graph with a weight on each edge.
  • PROBLEM: Find a minimum cost Hamiltonian Circuit in G.


Notice that TSPOPT is not a decision problem, and so it is not in NP. To show TSPOPT is NP-Hard, it suffices to show a polynomial time reduction from an existing NP-Complete problem to TSPOPT.

Claim: TSPOPT is NP-Hard.

Proof: Let G(V, E) be a simple graph with at least 3 vertices, associated with an instance of HC. Let n = |V(G)|, and construct a weighted complete graph Kn on the n vertices of G, as follows. For each { vi, vj } in E(G), the edge { vi, vj } is given weight 1 in Kn. For each { vi, vj } not in E(G), { vi, vj } is given weight 2 in Kn. This construction examines C(n, 2) edges (where C(n, 2) denotes the binomial coefficient), one at a time, so the reduction is polynomial in time. As the minimum edge weight is 1 and n edges are required for a Hamiltonian Circuit, the optimal solution to this TSPOPT instance has weight at least n.
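
A sketch of this construction in Python; representing edges as frozensets of endpoints and returning the weights as a dictionary are my own choices for illustration.

from itertools import combinations

def hc_to_tsp(vertices, edges):
    """Build the weighted complete graph Kn used in the reduction.

    vertices: list of the vertices of G.
    edges:    set of frozensets {u, v}, the edges of G.
    Returns a dict mapping each unordered pair of vertices to its weight (1 or 2).
    """
    weights = {}
    for u, v in combinations(vertices, 2):   # all C(n, 2) unordered pairs
        pair = frozenset((u, v))
        weights[pair] = 1 if pair in edges else 2
    return weights

# Example: a 4-cycle has a Hamiltonian Circuit, so the optimal tour in K4 costs 4.
V = [1, 2, 3, 4]
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
print(hc_to_tsp(V, E))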

It suffices to show that G has a Hamiltonian Circuit if and only if the optimal TSP tour in the weighted Kn has weight n. Suppose G has a Hamiltonian Circuit. By construction, each edge of G appears in the weighted Kn with weight 1. It suffices to trace the Hamiltonian Circuit from G in Kn, giving a TSP tour of weight n, which is optimal. Conversely, suppose that the optimal TSP tour has weight n. As the tour is a cycle on n vertices, it has n edges. The minimum edge weight in Kn is 1, so each edge in the tour must have weight 1, and by construction each such edge is in G. Thus, tracing the TSP tour produces a Hamiltonian Circuit in G. Thus, HC <=p TSPOPT. And so TSPOPT is NP-Hard. QED.


Writing Complexity Proofs - P
The purpose of this section is to better understand how to construct complexity proofs related to the class P. Recall that P is the class of decision problems which can be correctly answered in polynomial time. We examine two proofs in this section. The first is to demonstrate a reduction showing a problem is in P. The second proof demonstrates the procedure for showing a problem is P-Complete.

Let's first examine how to show a problem is in P. There are two approaches to accomplish this. The first is to explicitly demonstrate a polynomial time algorithm to decide the problem. The other approach is to provide a polynomial time reduction from the given decision problem to an existing problem in P. I will not focus on algorithm correctness, as this topic is too broad for this tutorial. Instead, I will exhibit a polynomial time reduction between two problems in P.

The problem we will examine is 2-SAT, which is defined as follows.

2-SAT
  • INSTANCE: Let x be in {0, 1}^n and consider a conjunction C of clauses formed from the components of x and their negations. Each clause is restricted to contain exactly two literals (components of x or their negations).

  • DECISION: Is there a configuration x in {0, 1}^n such that the expression C(x) evaluates to true?


Note: Observe that 2-SAT is a restriction of the general SAT problem, which is NP-Complete. So there do exist restrictions of NP-Complete problems that are decidable in polynomial time (i.e., are in P). This is important to note and remember when working with special cases of NP-Complete problems.

In order to prove that 2-SAT is in P, we reduce it to the STRONGLY CONNECTED COMPONENT (SCC) problem defined as follows:

STRONGLY CONNECTED COMPONENT
  • INSTANCE: Let G(V, A) be a directed graph.
  • DECISION: Do there exist vertices i, j in V(G) such that there is a directed i -> j path and a directed j -> i path?


This problem can be decided in polynomial time using Tarjan's Algorithm, and so SCC is in P.


Claim: 2-SAT is in P.

Proof: The proof is by reduction to SCC. We begin by constructing the implication graph G, which is a directed graph. The vertices of G are the components of x and their negations, yielding 2n vertices in total. For each clause (xi v xj) in C, add the directed edges (~xi, xj) and (~xj, xi), where the tilde represents negation. (So for example, if a clause contained (~x2 v x3), the edges added to G would be (x2, x3) and (~x3, ~x2).) The reduction to SCC then examines the implication graph to determine whether there is a variable xi such that there is a directed path from xi to ~xi and a directed path from ~xi to xi. The implication graph has 2n vertices and two edges per clause, so the reduction is polynomial in time.
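
Here is a minimal Python sketch of this reduction and the resulting check. The integer encoding of literals (k for x_k, -k for ~x_k) is my own convention, and a plain breadth-first reachability test stands in for an off-the-shelf strongly connected component algorithm such as Tarjan's; both approaches run in polynomial time.

from collections import defaultdict, deque

def two_sat_satisfiable(n, clauses):
    """Decide 2-SAT via the implication graph described above.

    n:       number of variables x1 .. xn.
    clauses: list of pairs of non-zero ints; literal k means x_|k|,
             negated if k < 0 (e.g. (-2, 3) encodes (~x2 v x3)).
    Returns True iff no variable can reach its negation and vice versa.
    """
    graph = defaultdict(set)
    for a, b in clauses:
        graph[-a].add(b)   # ~a implies b
        graph[-b].add(a)   # ~b implies a

    def reachable(src, dst):
        # Plain BFS; polynomial in the size of the implication graph.
        seen, queue = {src}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                return True
            for w in graph[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False

    # C is unsatisfiable exactly when some xi and ~xi reach each other.
    return not any(reachable(i, -i) and reachable(-i, i)
                   for i in range(1, n + 1))

# (x1 v x2) and (~x1 v x2) and (~x2 v x1) is satisfiable (x1 = x2 = TRUE);
# adding (~x1 v ~x2) makes it unsatisfiable.
print(two_sat_satisfiable(2, [(1, 2), (-1, 2), (-2, 1)]))            # True
print(two_sat_satisfiable(2, [(1, 2), (-1, 2), (-2, 1), (-1, -2)]))  # False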

So we need to prove a couple facts:
  • If there exists an a -> b directed path in G, then there exists a directed ~b -> ~a path in G. We will use this fact to justify the existence of a strongly connected component if there is a directed xi -> ~xi path.

  • C is satisfiable if and only if there is no strongly connected component in G containing both a variable and its negation. This will substantiate the validity of the reduction.



Claim 1: If there exists a directed a -> b path in G, then there exists a directed ~b -> ~a path in G.

Proof of Claim 1: Suppose there exists an a -> b directed path in G. By construction, for each edge (c, d) in G, the edge (~d, ~c) is also in G (both come from the clause (~c v d)). So given the a -> b directed path a -> p1 -> ... -> pk -> b, the directed edges (~b, ~pk), ..., (~p1, ~a) exist in G, yielding a directed ~b -> ~a path, as claimed.


Claim 2: C is satisfiable if and only if there is no strongly connected component in G containing both a variable and its negation.

It will first be shown that if C is satisfiable, then no strongly connected component of G contains both a variable and its negation. This will be done by contradiction. Let x be a satisfying configuration of C, and suppose some variable xi lies in a strongly connected component together with ~xi. In particular, there is a directed path xi -> p1 -> ... -> pk -> ~xi in G. Suppose first that xi = TRUE. By construction, the edges of this path correspond to the clauses (~xi v p1), (~p1 v p2), ..., (~p(k-1) v pk), and (~pk v ~xi) of C. Since xi is true, p1 must be true to satisfy the first clause; then p2 must be true, and so on, so each pj must be true. But then both literals of the final clause (~pk v ~xi) are false, contradicting the assumption that x satisfies C. If instead xi = FALSE, then ~xi = TRUE, and the same analysis applied to the directed ~xi -> xi path (which exists, since xi and ~xi lie in the same strongly connected component) again yields a contradiction. Thus, if C is satisfiable, there is no strongly connected component containing both xi and ~xi.

Now suppose there is no strongly connected component of G containing both a variable and its negation. It will be shown that C is satisfiable by constructing a satisfying configuration. For each unmarked vertex v in V(G) such that no directed v -> ~v path exists, mark v as true, mark every vertex reachable from v as true, and mark the negations of the newly marked vertices as false. Repeat this process until all vertices have been marked. By finiteness of the graph, this process terminates, and since no variable shares a strongly connected component with its negation, every vertex eventually gets marked. Now suppose, for contradiction, that the resulting configuration does not satisfy C. Then there is some vertex xi marked true with directed paths xi -> xj and xi -> ~xj for some literal xj. So xi implies both xj and ~xj, which is a fallacy. By Claim 1, the directed xi -> xj path yields a directed ~xj -> ~xi path in G, and composing the xi -> ~xj path with the ~xj -> ~xi path gives a directed xi -> ~xi path. This contradicts the fact that xi was only marked true when no such path exists. So the constructed configuration satisfies C, and C is satisfiable.

Thus, we conclude that C is satisfiable if and only if no strongly connected component of G contains both a variable and its negation, proving Claim 2.

As the polynomial time reduction to SCC is valid and SCC can be decided in polynomial time, it follows that 2-SAT is in P. QED.



The last problem we will explore is the MONOTONE CIRCUIT VALUE (MCV) problem. MCV is P-Complete. Proving this takes a bit of work, and a few prior results are needed. In fact, many P-Completeness proofs are either circuit-related or rely on reductions from circuit-related problems. We start with a few facts.

Fact 1: All Boolean functions f: {0, 1}^n -> {0, 1}^m can be computed using the operations AND, OR, and NOT.

Fact 2: All Boolean functions can be computed using any set of operations equivalent in power to AND, OR, and NOT.

Fact 3: All logical circuits can be written as straight-line programs. That is, only variable assignments and gate evaluations are performed; there are no loops, conditionals, or control structures of any kind.

Now let's define MCV:

MONOTONE CIRCUIT VALUE (MCV)
  • INSTANCE: Let C be a circuit over the monotone basis {AND, OR} (only AND and OR operations are used), and let x in {0, 1}^n be a fixed configuration.

  • DECISION: Does x satisfy C?


Notice that MCV is different from SAT. In SAT, we are given a Boolean expression and are trying to determine if there is a satisfying configuration. In MCV, we are given the circuit and trying to determine if a particular input configuration will satisfy it. In other words, MCV deals with a "here, try this one" for a single input configuration.

An example instance of MCV is x = (0, 1) with C = x1 AND x2. Notice that x takes on specific values here.
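
To make the decision question concrete, here is a minimal Python sketch that evaluates a monotone circuit, given as a straight-line list of gates, on a fixed input. The encoding of the circuit (a list of gates referencing earlier values by index) is my own choice for illustration.

def monotone_circuit_value(x, gates):
    """Evaluate a monotone circuit on the fixed input x.

    x:     tuple of input bits (x1, ..., xn).
    gates: list of (op, i, j) triples with op in {"AND", "OR"}; i and j
           index earlier values (inputs first, then gate outputs in order).
    Returns the value of the last gate, i.e. whether x satisfies C.
    """
    values = list(x)
    for op, i, j in gates:
        if op == "AND":
            values.append(values[i] and values[j])
        else:  # "OR"
            values.append(values[i] or values[j])
    return bool(values[-1])

# The example instance from the text: x = (0, 1), C = x1 AND x2.
print(monotone_circuit_value((0, 1), [("AND", 0, 1)]))  # False: x does not satisfy C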

In order to prove MCV is P-Complete, we reduce from CIRCUIT VALUE (CV), which is P-Complete. The difference between MCV and CV is that CV permits the NOT operation. So CV is defined as follows:

CIRCUIT VALUE (CV)
  • INSTANCE: Let C be a circuit over the basis {AND, OR, NOT} (AND, OR, and NOT operations are used), and let x in {0, 1}^n be a fixed configuration.

  • DECISION: Does x satisfy C?



Since MCV is a special case of CV (circuits without the NOT operation are also instances of CV), MCV is in P. To show MCV is P-Hard, we reduce CV <=l MCV. That is, for each instance of CV, a corresponding instance of MCV will be constructed. This is difficult, though, as it is necessary to eliminate the NOT operations from the CV instance. We do this by pushing the NOT operations down through the circuit using DeMorgan's Laws.

We then construct Dual-Rail Logic (DRL) circuits, where each variable xi from x in the CV instance is represented as (xi, ~xi) in the MCV instance. So any negations we may want are constructed up-front, so the NOT operation becomes unnecessary. The DRL operations are defined as follows:
  • DRL-AND: (x, ~x) AND (y, ~y) = (x AND y, ~(x AND y)) = (x AND y, ~x OR ~y)
  • DRL-OR: (x, ~x) OR (y, ~y) = (x OR y, ~(x OR y)) = (x OR y, ~x AND ~y)
  • DRL-NOT: ~(x, ~x) is given just by twisting the wires, sending x and ~x in opposite directions.


Since the negation of each variable is given upfront in the variable declarations, the DRL operations are all realizable over the monotone basis {AND, OR}. DRL is also just as powerful as the basis {AND, OR, NOT}, so any Boolean function can be computed with DRL.
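
A quick Python sketch of the three DRL operations, with a pair of booleans standing in for the two rails (value, negation); this is just an illustration of the definitions above.

def drl_and(a, b):
    """DRL-AND on dual-rail pairs a = (x, ~x), b = (y, ~y)."""
    return (a[0] and b[0], a[1] or b[1])      # (x AND y, ~x OR ~y)

def drl_or(a, b):
    """DRL-OR on dual-rail pairs."""
    return (a[0] or b[0], a[1] and b[1])      # (x OR y, ~x AND ~y)

def drl_not(a):
    """DRL-NOT: just swap the two rails (twist the wires)."""
    return (a[1], a[0])

# Each Boolean variable is declared up-front as the pair (x, ~x).
x, y = (True, False), (False, True)
print(drl_and(x, drl_not(y)))   # x AND (NOT y) -> (True, False)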

Now let's prove that MCV is P-Complete.

Claim: MCV is P-Complete.

Proof: In order to show MCV is P-Complete, we show that MCV is in P and every problem in P is log-space reducible to MCV (i.e., MCV is P-Hard). As MCV is a special case of CV and CV is in P, it follows that MCV is in P.

To show MCV is P-Hard, we show CV <=l MCV. Let (x, C) be an instance of CV, where x is the input sequence and C is the circuit over the basis {AND, OR, NOT}. We construct C' over the monotone basis {AND, OR} from C by rewriting C as a dual-rail circuit. Let P(C) be the straight-line program representing C, and let P' be the straight-line program used to construct C'. Each line n of P(C) corresponds to line 2n of P', and line 2n+1 of P' corresponds to the negation of line n of P(C).

The NOT operation in P(C) is realized in P' by twisting the wires: the step (2k = NOT 2i) is realized by the steps (2k = 2i + 1) and (2k + 1 = 2i). The AND operation in P(C), (2k = 2i AND 2j), is replaced by the steps (2k = 2i AND 2j) and (2k + 1 = (2i + 1) OR (2j + 1)). Finally, the OR operation (2k = 2i OR 2j) is realized by (2k = 2i OR 2j) and (2k + 1 = (2i + 1) AND (2j + 1)). In this way, P(C) and P' compute the same value on every input (reading the even-numbered rail of P'): P(C) = 1 if and only if P' = 1, and P(C) = 0 if and only if P' = 0. So the reduction is valid.
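
Here is a sketch of this line-by-line rewriting in Python. The program encoding (a list of tuples indexing earlier lines, with the inputs occupying lines 0 through n-1 of P(C)) and the "COPY" instruction standing in for the plain assignment used in the wire twist are my own conventions for illustration.

def to_dual_rail(num_inputs, program):
    """Rewrite a straight-line program over {AND, OR, NOT} into one over
    {AND, OR}, following the convention that line k of P(C) becomes the
    pair of lines (2k, 2k+1) of P', holding a value and its negation.

    program: list of ("NOT", i) or (op, i, j) with op in {"AND", "OR"};
             indices refer to earlier lines, with inputs at lines 0..num_inputs-1.
             Input i of P(C) is assumed to occupy P' lines 2i and 2i+1.
    Returns the instruction list of P'; ("COPY", dst, src) is a plain assignment.
    """
    dual = []
    for k, instr in enumerate(program, start=num_inputs):
        if instr[0] == "NOT":
            _, i = instr
            # Twist the wires: 2k takes the negated rail of i, 2k+1 the positive rail.
            dual.append(("COPY", 2 * k, 2 * i + 1))
            dual.append(("COPY", 2 * k + 1, 2 * i))
        else:
            op, i, j = instr
            other = "OR" if op == "AND" else "AND"
            dual.append((op, 2 * k, 2 * i, 2 * j))                 # 2k = 2i op 2j
            dual.append((other, 2 * k + 1, 2 * i + 1, 2 * j + 1))  # 2k+1 = its negation (DeMorgan)
    return dual

# NOT(x1 AND x2): inputs at lines 0 and 1 of P(C), AND at line 2, NOT at line 3.
print(to_dual_rail(2, [("AND", 0, 1), ("NOT", 2)]))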

It now suffices to argue that the reduction uses a logarithmic amount of space. Generating P' from P(C) can be done using a counter variable: for each step i in P(C), we emit the instructions at lines 2i and 2i+1 of P'. So if there are n steps in P(C), the counter requires ceil(log2(2n + 1)) bits, which grows as 1 + log2(n). So the amount of space required is O(log n). And so we conclude that MCV is P-Complete. QED.
