Motivation
- Optimization problems that allow only integer solutions are usually harder to solve than their fractional counterparts. An example is the knapsack problem, whose fractional form can be solved in linear time, but whose integer form is NP-complete.
- An intuitive way to reduce the computational complexity of integer programming is to approximate the integer solution by the following process:
- Solve the fractional problem instead.
- Round off fractions to integers "intelligently" and
- Know the bounds on error or non-optimality introduced by this approximation.
- In this article, we round off fractions to integers randomly (based on some probability), hence the name randomized rounding.
Introduction
This article is motivated by the "Wiring Problem" as discussed in the book Randomized Algorithms by Motwani and Raghavan. This problem can be stated as follows:
Given a grid of squares, lay out wires between given pairs of squares. The number of wires crossing any boundary of any square must be no more than a given number $\omega_0$. Furthermore, minimize the maximum number of wires that cross any boundary.
Refer to the diagram below that shows a 3x3 grid, with 3 wires (labeled in red). For clarity, boundaries are labeled in blue separately. Each row has 2 internal vertical boundaries and each pair of adjacent rows shares 3 horizontal boundaries, giving $6+6=12$ boundaries for the grid.
We limit ourselves to solutions containing a maximum of one "turn" in the wire. In the above diagram, wires 1 and 2 have one turn in them, while wire 3 has no turns ("it is straight").
Solving this problem exactly means solving an integer programming problem, which is NP-complete. Instead, we will solve it approximately using randomization, which reduces the complexity significantly (down to the complexity of solving the fractional linear programming problem). A mathematical analysis of this solution is given below.
Notation
- We number wires from $1$ to $a$. So, in the above diagram, we have $a=3$. We use the subscript $i$ for wires.
- For each wire $i$, we have 2 variables - $h_i$ and $v_i$ denoting whether the first part of the wire is horizontal or vertical.
- If the first part is horizontal, then $h_i=1$ and $v_i=0$.
- If the first part is vertical, then $h_i=0$ and $v_i=1$.
- So, for all wires $h_i,v_i\in\{0,1\}$
- In the above wiring, {$h_1=0$, $v_1=1$}, {$h_2=1$, $v_2=0$} and {$h_3=0$, $v_3=1$}.
- Additionally, this means that only one of $h_i$ and $v_i$ is 1 and the other is 0, since the first part of any wire can either be horizontal or vertical (at least one) and not both. So, for all wires: \[h_i+v_i=1\]
- We number boundaries from $1$ to $b$. So, in the above diagram, we have $b=12$. We use the subscript $j$ for boundaries.
- For each boundary $j$, we define 2 sets - $H_j$ and $V_j$ - of wires that may pass through that boundary.
- $H_j$ is the set of all wires that pass through boundary $j$ if routed horizontally first (i.e. with $h_i=1$).
- Analogously, $V_j$ is the set of all wires that pass through boundary $j$ if routed vertically first (i.e. with $v_i=1$).
- As examples, $|H_3|=1$ and $|V_3|=0$, while $|H_7|=0$ and $|V_7|=1$. It is important to note that $|H_9|=1$ and $|V_9|=1$, since the number of wires that start out horizontally and pass through boundary 9 is 1 (wire 2).
- Let $F_j$ denote the number of wires passing through boundary $j$. Since that number is no more than $\omega_0$: \[F_j=\sum_{i\in{H_j}}{h_i}+\sum_{i\in{V_j}}{v_i}\leq\omega_0\]
- The first summand corresponds to the number of wires that start out horizontally and pass through boundary $j$.
- The second summand corresponds to those that start out vertically and pass through boundary $j$.
- Their sum, which denotes the total number of wires passing through boundary $j$, obviously has to be no more than the specified constraint, $\omega_0$.
- Note that, in linear programming terminology, each $F_j$ appears on the left-hand side of a constraint; the objective, defined next, is to minimize their maximum.
- Let $F=\max_j{F_j}$, i.e. the maximum number of wires passing through any boundary in a given solution. The solution with the least $F$ value is the optimal solution; in other words, the optimal solution is the one whose busiest boundary carries the fewest wires. The complete program, collecting the constraints above, is summarized right after this list.
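One way to collect the constraints above into a single program is the following compact restatement (mine, with $F$ as an auxiliary variable for the maximum congestion):
\[\begin{aligned}
\text{minimize}\quad & F\\
\text{subject to}\quad & \sum_{i\in H_j} h_i+\sum_{i\in V_j} v_i\;\leq\;F \quad && j=1,\dots,b\\
& F\leq\omega_0\\
& h_i+v_i=1,\quad h_i,v_i\in\{0,1\} \quad && i=1,\dots,a
\end{aligned}\]
The fractional relaxation used in the next section simply replaces $h_i,v_i\in\{0,1\}$ with $0\leq h_i,v_i\leq 1$.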
Randomized Rounding
- We first solve the linear programming relaxation of the problem, allowing fractional values. There are well-known algorithms that solve linear programs in polynomial time.
- Let $\widehat{h_i}$ and $\widehat{v_i}$ denote the fractional solution values (e.g. 0.65 and 0.35; they are still constrained to add up to 1). They are the fractional counterparts of the integer solutions $h_i$ and $v_i$ (e.g. 1 and 0). So we have \[\widehat{F_j}=\sum_{i\in{H_j}}\widehat{h_i}+\sum_{i\in{V_j}}\widehat{v_i}\leq\widehat{\omega}\leq\omega_0\tag{1}\] where $\widehat{F_j}$ is the (generally fractional, e.g. $2.8$) number of wires crossing boundary $j$ in the fractional case, and $\widehat{\omega}$ is the optimal value of the fractional problem, i.e. the maximum of the $\widehat{F_j}$.
- The reason $\widehat{\omega}\leq\omega_0$ is that the fractional solution $\widehat{\omega}$ could potentially be "more optimal" (because it is not constrained by integer values) and hence lower than $\omega_0$.
- Since $h_i$ and $v_i$ denote how wire $i$ is to be oriented, $\widehat{h_i}$ and $\widehat{v_i}$ can be considered as approximations (in fact, probabilities) to this orientation.
- Here comes the crux of the randomized rounding approach. We round off $\widehat{h_i}$ to 1 with probability $\widehat{h_i}$ and to 0 with the remaining probability. E.g. if the fractional linear programming solution has $\widehat{h_i}=0.65$ and consequently, $\widehat{v_i}=0.35$, then we round off $\widehat{h_i}$ to 1 with probability 0.65 and to 0 with probability 0.35.
- Let's call the rounded-off variables $\overline{h_i}$ and $\overline{v_i}$, respectively. Note that these are random variables, not fixed values: $\mathbf{P}(\overline{h_i}=1)=\widehat{h_i}$ and $\mathbf{P}(\overline{h_i}=0)=1-\widehat{h_i}$. Note also that the mean of this variable is $\mathbf{E}(\overline{h_i})=\widehat{h_i}$.
- What can we say about the value of the objective function for these random variables (which denotes the number of wires crossing boundary $j$ in the randomized scenario)? In other words, what can we say about the random variables \[\overline{F_j}=\sum_{i\in{H_j}}\overline{h_i}+\sum_{i\in{V_j}}\overline{v_i}\]
- We derive the expected value (mean) of these random variables. \[\mathbf{E}(\overline{F_j})=\mathbf{E}\left(\sum_{i\in{H_j}}\overline{h_i}+\sum_{i\in{V_j}}\overline{v_i}\right)\] By linearity of expectations, we have \[=\sum_{i\in{H_j}}\mathbf{E}(\overline{h_i})+\sum_{i\in{V_j}}\mathbf{E}(\overline{v_i})=\sum_{i\in{H_j}}\widehat{h_i}+\sum_{i\in{V_j}}\widehat{v_i}\leq\widehat\omega\hspace{4mm}\text{(from (1))}\]
- In other words, $\mathbf{E}(\overline{F_j})\leq\widehat{\omega}\hspace{2mm}\forall{j}$. This means that the mean value of each of the rounded congestions $\overline{F_j}$ is bounded from above by $\widehat\omega$ (which, by the way, is at least as good as our original bound of $\omega_0$, by (1)).
- This means that, on average, the number of wires crossing a boundary in the randomized scenario equals the corresponding value in the fractional scenario, for each boundary. This is huge, since that value is no worse than the value in the integer scenario! A small code sketch of this rounding step is given right after this list.
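To make the rounding step concrete, here is a minimal Python sketch (the function and variable names are my own, and the toy data at the bottom is made up, not taken from the diagram). randomized_rounding flips one biased coin per wire, and congestion recomputes the rounded $\overline{F_j}$ from the boundary sets $H_j$ and $V_j$:

```python
import random

def randomized_rounding(h_hat, seed=None):
    """Round each fractional value h_hat[i] to 1 with probability h_hat[i].

    Returns 0/1 lists (h_bar, v_bar) with h_bar[i] + v_bar[i] == 1 for every
    wire, mirroring the constraint h_i + v_i = 1 of the integer program.
    """
    rng = random.Random(seed)
    h_bar = [1 if rng.random() < p else 0 for p in h_hat]
    v_bar = [1 - h for h in h_bar]
    return h_bar, v_bar

def congestion(h_bar, v_bar, H, V):
    """Rounded congestion F_j for each boundary j.

    H[j] / V[j] are the sets of wires that would cross boundary j if routed
    horizontally-first / vertically-first, respectively.
    """
    return [sum(h_bar[i] for i in H[j]) + sum(v_bar[i] for i in V[j])
            for j in range(len(H))]

# Toy data (made up): 3 wires, 2 boundaries.
h_hat = [0.65, 0.20, 0.90]
H = [{0, 1}, {2}]   # wires crossing each boundary if routed horizontally first
V = [{2}, {0}]      # wires crossing each boundary if routed vertically first
h_bar, v_bar = randomized_rounding(h_hat, seed=0)
print(h_bar, v_bar, congestion(h_bar, v_bar, H, V))
```

Averaging the output of congestion over many independent roundings approaches $\widehat{F_j}$ for each boundary, which is exactly the expectation computed above.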
Probabilistic analysis
- Let $\omega_S$ be the maximum number of wires passing through any boundary in our solution $S$ obtained from the above randomized rounding algorithm.
- We would like to determine how far this $\omega_S$ deviates from the optimal integer solution $\omega_0$ or the fractional optimum $\widehat{\omega}$.
- Graphically, in the diagram below, we are trying to find out how far $\omega_S$ is from $\widehat{\omega}$ or $\omega_0$, if we are willing to accept that the bound may fail with probability at most $\epsilon$.
- If the grid has $n$ squares, we have no more than $2n$ boundaries. This implies that each boundary can be allowed a failure probability of no more than $\epsilon/2n$: by the union bound, the probability that any boundary exceeds its bound is then at most $\epsilon$.
- The Chernoff bound gives us the following: \[\mathbf{P}[X\geq(1+\delta)\mu]<\left\{\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right\}^\mu\] where $\mu$ is the mean and $\delta$ is the relative distance away from the mean. We choose $\delta$ so that the right-hand side equals $\epsilon/2n$.
- In our case, $\mu=\widehat{\omega}$, and we would like to find the value of $\delta$ for which the tail probability is exactly $\epsilon/2n$. The exact value of $\delta$ that achieves this is complicated to write down, and we denote it by $\Delta^{+}(\mu,\epsilon/2n)$.
- Substituting this value into the Chernoff bound, we get that, with probability at least $1-\epsilon$, \[\omega_S<\widehat{\omega}\left(1+\Delta^{+}(\widehat{\omega},\epsilon/2n)\right)\]
- While $\Delta^{+}(\widehat{\omega},\epsilon/2n)$ has no simple closed form, it is small whenever $\widehat{\omega}$ is large compared to $\ln(2n/\epsilon)$; a standard simplification of the Chernoff bound gives $\Delta^{+}=O\left(\sqrt{\ln(2n/\epsilon)/\widehat{\omega}}\right)$ in that regime. A small numerical sketch for computing it is given after this list.
- This means that the denser the wiring (i.e. the larger the fractional optimum $\widehat{\omega}$), the closer the solution given by the randomized case gets to the solution given by the fractional case, and hence to the optimal integer solution, with probability at least $1-\epsilon$.
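To get a feel for how large $\Delta^{+}$ actually is, one can solve the Chernoff equation numerically. The sketch below is a simple bisection of my own (the example numbers at the bottom are made up); it finds the $\delta$ at which $\left[e^{\delta}/(1+\delta)^{(1+\delta)}\right]^{\mu}$ falls to a given tail probability:

```python
import math

def chernoff_bound(mu, delta):
    """Chernoff upper bound on P[X >= (1 + delta) * mu] for a sum of
    independent 0/1 random variables with mean mu:
    (e^delta / (1 + delta)^(1 + delta))^mu, computed in log space."""
    return math.exp(mu * (delta - (1 + delta) * math.log(1 + delta)))

def delta_plus(mu, target, tol=1e-9):
    """Smallest delta with chernoff_bound(mu, delta) <= target, found by
    bisection (the bound is strictly decreasing in delta for delta > 0)."""
    hi = 1.0
    while chernoff_bound(mu, hi) > target:  # grow bracket until bound <= target
        hi *= 2
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if chernoff_bound(mu, mid) > target:
            lo = mid
        else:
            hi = mid
    return hi

# Made-up example values: a 3x3 grid (n = 9 squares), failure probability 0.01,
# and a fractional optimum of 20 wires on the busiest boundary.
n, eps, w_hat = 9, 0.01, 20.0
d = delta_plus(w_hat, eps / (2 * n))
print(f"Delta+ ~ {d:.3f}, so omega_S <= {w_hat * (1 + d):.1f} with probability >= {1 - eps}")
```

Increasing w_hat in this sketch shrinks the computed $\Delta^{+}$, which is the sense in which denser instances are approximated more tightly.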
We took an integer programming problem (which is NP-hard), solved its relaxation using fractional linear programming (which takes polynomial time), used randomization to round each fraction to its integer equivalent, and found that the solution this approximation yields is, with high probability, close to the optimal solution of the original integer programming problem.
References
- Motwani, R. and Raghavan, P., Randomized Algorithms, 2007.