Aspects of OR - Linear Programming (Part 2)

Author: Till Heller
In Part 1 we introduced linear programming and solved LPs using Gurobi and SciPy. But what happens inside these solvers? The answer, in most cases, is the simplex method — an algorithm developed by George Dantzig in 1947 that remains one of the most efficient methods for solving LPs in practice.
The simplex method exploits the geometric insight from Part 1: the optimal solution lies at a vertex of the feasible polytope. Rather than checking all vertices (which can be exponentially many), the simplex method starts at one vertex and moves along edges to neighboring vertices, improving the objective at each step, until it reaches the optimum.
From inequality to equality: slack variables
Recall the standard form LP from Part 1:

$$\max\; c^\top x \quad \text{s.t.} \quad Ax \le b, \quad x \ge 0.$$
To apply the simplex method, we first convert the inequality constraints into equalities by introducing slack variables. For each constraint $a_i^\top x \le b_i$, we add a non-negative variable $s_i \ge 0$ such that:

$$a_i^\top x + s_i = b_i.$$

The slack variable $s_i$ represents the "unused" portion of constraint $i$. If $s_i = 0$, the constraint is binding; if $s_i > 0$, there is slack.

After adding slack variables (one per constraint), we get the augmented form:

$$\max\; c^\top x \quad \text{s.t.} \quad Ax + s = b, \quad x \ge 0, \quad s \ge 0,$$

or equivalently, with $\tilde{x} = (x, s)$:

$$\max\; \tilde{c}^\top \tilde{x} \quad \text{s.t.} \quad \tilde{A}\tilde{x} = b, \quad \tilde{x} \ge 0,$$

where $\tilde{A} = [A \mid I]$ and $\tilde{c} = (c, 0)$.
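Concretely, the augmented data $\tilde{A}$ and $\tilde{c}$ can be assembled in a couple of lines of NumPy (shown here with the constraint data of the production planning example from Part 1):

```python
import numpy as np

# Data of the production planning example from Part 1
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
b = np.array([240.0, 180.0])
c = np.array([70.0, 50.0])

m, n = A.shape
A_tilde = np.hstack([A, np.eye(m)])          # [A | I]
c_tilde = np.concatenate([c, np.zeros(m)])   # (c, 0)
```

Each row of `A_tilde` now encodes one equality constraint of $\tilde{A}\tilde{x} = b$.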
Basic feasible solutions
With $n$ original variables and $m$ slack variables, we have $n + m$ variables but only $m$ equality constraints. A basic solution is obtained by setting $n$ of the variables to zero (the non-basic variables) and solving the resulting $m \times m$ system for the remaining $m$ variables (the basic variables).

If the basic solution also satisfies $\tilde{x} \ge 0$, it is called a basic feasible solution (BFS). The fundamental theorem of linear programming states:
If an LP has an optimal solution, then it has an optimal basic feasible solution.
Each BFS corresponds to a vertex of the feasible polytope. The simplex method systematically moves from one BFS to an adjacent one (differing in exactly one basic variable) while improving the objective.
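For a small problem, we can make this correspondence tangible by brute force: try every choice of $m$ basic variables, solve the resulting square system, and keep the feasible ones. A minimal sketch, using the production planning data from Part 1:

```python
import itertools
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 3.0]])   # production planning data from Part 1
b = np.array([240.0, 180.0])
A_tilde = np.hstack([A, np.eye(2)])       # augmented matrix [A | I]
m, n_total = A_tilde.shape                # m = 2 constraints, 4 variables

bfs_list = []
for basis in itertools.combinations(range(n_total), m):
    B = A_tilde[:, basis]                 # columns of the candidate basis
    if abs(np.linalg.det(B)) < 1e-10:
        continue                          # singular basis: no basic solution
    x_basic = np.linalg.solve(B, b)       # solve for the basic variables
    if np.all(x_basic >= -1e-10):         # feasible: all variables non-negative
        x = np.zeros(n_total)
        x[list(basis)] = x_basic
        bfs_list.append(x[:2])            # original variables = vertex coordinates
```

The four BFS found — $(0,0)$, $(60,0)$, $(0,60)$, $(45,30)$ — are exactly the four vertices of the feasible polygon from Part 1; the two infeasible basic solutions correspond to intersection points outside the feasible region.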
The simplex tableau
The simplex method is typically carried out using a tableau — a compact table that tracks all the information needed at each step. For a minimization problem in augmented form, the initial tableau looks like this:

| Basis | $x_1$ | $\cdots$ | $x_n$ | $s_1$ | $\cdots$ | $s_m$ | RHS |
|---|---|---|---|---|---|---|---|
| $s_1$ | $a_{11}$ | $\cdots$ | $a_{1n}$ | $1$ | $\cdots$ | $0$ | $b_1$ |
| $\vdots$ | $\vdots$ | $\ddots$ | $\vdots$ | $\vdots$ | $\ddots$ | $\vdots$ | $\vdots$ |
| $s_m$ | $a_{m1}$ | $\cdots$ | $a_{mn}$ | $0$ | $\cdots$ | $1$ | $b_m$ |
| obj | $c_1$ | $\cdots$ | $c_n$ | $0$ | $\cdots$ | $0$ | $0$ |

The last row is the objective row (sometimes called the $z$-row). The left column lists the current basic variables. Initially, the slack variables form the basis, giving the starting BFS $x = 0$, $s = b$.
The simplex algorithm
Each iteration of the simplex method performs one pivot operation that swaps one non-basic variable into the basis and one basic variable out. The steps are:
Step 1 — Optimality check. Look at the objective row. If all coefficients $\bar{c}_j$ (called reduced costs) are non-negative (for minimization), the current BFS is optimal. Stop.
Step 2 — Entering variable (pivot column). Choose the non-basic variable with the most negative reduced cost. This variable will enter the basis. The corresponding column is the pivot column.
Step 3 — Leaving variable (pivot row). For each row $i$ where the pivot column entry $a_{ij} > 0$, compute the ratio $b_i / a_{ij}$ of the right-hand side to the pivot column entry. The row with the smallest ratio determines the leaving variable — the basic variable that will exit the basis. This is the minimum ratio test, and it ensures feasibility is maintained. If no entry in the pivot column is positive, the LP is unbounded.
Step 4 — Pivot. Perform row operations to make the pivot element equal to 1 and all other entries in the pivot column equal to 0, just like Gaussian elimination. Update the basis listing.
Step 5 — Repeat from Step 1.
Worked example
Let us solve the production planning problem from Part 1 by hand:

$$\max\; 70x_1 + 50x_2 \quad \text{s.t.} \quad 4x_1 + 2x_2 \le 240, \quad 2x_1 + 3x_2 \le 180, \quad x_1, x_2 \ge 0,$$

where $x_1$ is the number of tables and $x_2$ the number of chairs.
Since this is a maximization problem, we write the objective row as $z - 70x_1 - 50x_2 = 0$ and look for the most negative coefficient. Equivalently, for maximization, one can leave the coefficients un-negated, look for the most positive coefficient, and pivot to increase it.
Initial tableau
Introduce slack variables $s_1, s_2 \ge 0$:

$$4x_1 + 2x_2 + s_1 = 240, \qquad 2x_1 + 3x_2 + s_2 = 180.$$
| Basis | $x_1$ | $x_2$ | $s_1$ | $s_2$ | RHS |
|---|---|---|---|---|---|
| $s_1$ | 4 | 2 | 1 | 0 | 240 |
| $s_2$ | 2 | 3 | 0 | 1 | 180 |
| obj | -70 | -50 | 0 | 0 | 0 |

Current BFS: $x_1 = x_2 = 0$, $s_1 = 240$, $s_2 = 180$; objective $z = 0$.
Iteration 1
Entering variable: $x_1$ has the most negative coefficient, $-70$.

Minimum ratio test: Row 1: $240 / 4 = 60$. Row 2: $180 / 2 = 90$. Minimum is $60$ in Row 1.

Leaving variable: $s_1$ leaves the basis.
Pivot: Divide Row 1 by 4, then eliminate $x_1$ from all other rows.

Row 1: $\tfrac{1}{4} \cdot (4, 2, 1, 0 \mid 240) = (1, 0.5, 0.25, 0 \mid 60)$.

Row 2: $(2, 3, 0, 1 \mid 180) - 2 \cdot \text{Row 1} = (0, 2, -0.5, 1 \mid 60)$.

Obj: $(-70, -50, 0, 0 \mid 0) + 70 \cdot \text{Row 1} = (0, -15, 17.5, 0 \mid 4200)$.
| Basis | $x_1$ | $x_2$ | $s_1$ | $s_2$ | RHS |
|---|---|---|---|---|---|
| $x_1$ | 1 | 0.5 | 0.25 | 0 | 60 |
| $s_2$ | 0 | 2 | -0.5 | 1 | 60 |
| obj | 0 | -15 | 17.5 | 0 | 4200 |

Current BFS: $x_1 = 60$, $x_2 = 0$, $s_2 = 60$; objective $z = 4200$.
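The row operations of this iteration are ordinary Gaussian elimination, so they can be double-checked with a few lines of NumPy:

```python
import numpy as np

# Initial tableau: rows s1, s2, obj; columns x1, x2, s1, s2, RHS
T = np.array([[  4.0,   2.0, 1.0, 0.0, 240.0],
              [  2.0,   3.0, 0.0, 1.0, 180.0],
              [-70.0, -50.0, 0.0, 0.0,   0.0]])

r, c = 0, 0                      # pivot row (Row 1) and pivot column (x1)
T[r] /= T[r, c]                  # scale pivot row so the pivot element is 1
for i in range(3):
    if i != r:
        T[i] -= T[i, c] * T[r]   # eliminate x1 from the other rows

# T now equals the tableau after Iteration 1
```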
Iteration 2
Entering variable: $x_2$ has reduced cost $-15$, the only negative entry.

Minimum ratio test: Row 1: $60 / 0.5 = 120$. Row 2: $60 / 2 = 30$. Minimum is $30$ in Row 2.

Leaving variable: $s_2$ leaves the basis.
Pivot: Divide Row 2 by 2, then eliminate $x_2$ from all other rows.

Row 2: $\tfrac{1}{2} \cdot (0, 2, -0.5, 1 \mid 60) = (0, 1, -0.25, 0.5 \mid 30)$.

Row 1: $(1, 0.5, 0.25, 0 \mid 60) - 0.5 \cdot \text{Row 2} = (1, 0, 0.375, -0.25 \mid 45)$.

Obj: $(0, -15, 17.5, 0 \mid 4200) + 15 \cdot \text{Row 2} = (0, 0, 13.75, 7.5 \mid 4650)$.
| Basis | $x_1$ | $x_2$ | $s_1$ | $s_2$ | RHS |
|---|---|---|---|---|---|
| $x_1$ | 1 | 0 | 0.375 | -0.25 | 45 |
| $x_2$ | 0 | 1 | -0.25 | 0.5 | 30 |
| obj | 0 | 0 | 13.75 | 7.5 | 4650 |
Optimality check: All reduced costs are non-negative. We are done!
Optimal solution: $x_1 = 45$ tables, $x_2 = 30$ chairs, with a maximum profit of €4,650. This matches the result from Part 1.

The simplex method needed only 2 iterations to solve this problem, visiting three of the four vertices of the feasible polytope: $(0, 0) \to (60, 0) \to (45, 30)$.
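As a cross-check, the same LP can be handed to `scipy.optimize.linprog` from Part 1 (which minimizes, so we negate the objective):

```python
import numpy as np
from scipy.optimize import linprog

# linprog minimizes, so negate the objective of the max problem
res = linprog(c=[-70, -50],
              A_ub=[[4, 2], [2, 3]],
              b_ub=[240, 180],
              bounds=[(0, None), (0, None)])

print(res.x)      # optimal point (45, 30)
print(-res.fun)   # optimal value 4650
```

The default HiGHS backend agrees with our hand computation, including the vertex at which the optimum is attained.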
Degeneracy
A BFS is degenerate if one or more basic variables are equal to zero. This can cause the simplex method to perform a pivot that does not change the objective value (a degenerate pivot). In the worst case, the algorithm might cycle — revisiting the same sequence of bases repeatedly without ever making progress.
In practice, cycling is extremely rare, but it can be prevented entirely using Bland's rule: always choose the entering and leaving variables with the smallest index among all eligible candidates. Bland's rule guarantees finite termination of the simplex method.
Other anti-cycling strategies exist (e.g. lexicographic pivoting or perturbation methods), but Bland's rule is the simplest to implement and to reason about.
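A sketch of both selection rules (these are hypothetical helper functions, not part of any solver API; they could replace the "most negative" and "first minimum ratio" choices in a textbook implementation):

```python
import numpy as np

def blands_entering(obj_row, tol=1e-10):
    """Smallest-index variable with a negative reduced cost; None means optimal."""
    for j, coeff in enumerate(obj_row):
        if coeff < -tol:
            return j
    return None

def blands_leaving(column, rhs, basis, tol=1e-10):
    """Among rows attaining the minimum ratio, pick the one whose
    basic variable has the smallest index."""
    ratios = [rhs[i] / column[i] if column[i] > tol else np.inf
              for i in range(len(rhs))]
    best = min(ratios)
    if best == np.inf:
        return None                      # no positive pivot entry: unbounded
    ties = [i for i, r in enumerate(ratios) if r == best]
    return min(ties, key=lambda i: basis[i])
```

On the initial tableau of the worked example, `blands_entering([-70, -50, 0, 0])` still picks $x_1$, since the smallest eligible index happens to coincide with the most negative coefficient.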
Complexity
The simplex method performs remarkably well in practice. For most real-world LPs, the number of iterations is roughly proportional to $m$ (the number of constraints), not exponential. However, the worst-case complexity of the simplex method is exponential — there exist pathological examples (such as the Klee-Minty cube) where the simplex method visits every vertex of the polytope.
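For illustration, one common formulation of the Klee-Minty cube (textbooks use several equivalent variants; this is one of them) maximizes $\sum_{j=1}^{n} 10^{\,n-j} x_j$ subject to $2\sum_{j=1}^{i-1} 10^{\,i-j} x_j + x_i \le 100^{\,i-1}$ for $i = 1, \dots, n$ and $x \ge 0$. A small generator for this family:

```python
import numpy as np

def klee_minty(n):
    """Data (c, A, b) for one Klee-Minty variant: max c^T x s.t. Ax <= b, x >= 0."""
    c = np.array([10.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            A[i, j] = 2 * 10.0 ** (i - j)   # off-diagonal coefficients 2 * 10^(i-j)
        A[i, i] = 1.0
    b = np.array([100.0 ** i for i in range(n)])
    return c, A, b

c, A, b = klee_minty(3)
# c = [100, 10, 1], b = [1, 100, 10000]
# A = [[  1,  0, 0],
#      [ 20,  1, 0],
#      [200, 20, 1]]
```

Starting from the origin, the standard most-negative-coefficient pivoting rule can be driven through all $2^n$ vertices of this distorted cube, so the iteration count grows exponentially in $n$.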
This motivated the development of polynomial-time algorithms for LP:
- Ellipsoid method (Khachiyan, 1979) — theoretically polynomial but slow in practice.
- Interior point methods (Karmarkar, 1984) — both theoretically polynomial and practically efficient. These methods traverse the interior of the polytope rather than its edges.
Modern solvers like Gurobi and CPLEX use both the simplex method and interior point methods, choosing automatically based on the problem structure.
Implementation in Python
While solvers handle the simplex method internally, it is instructive to implement a basic version. Here is a minimal simplex implementation in NumPy:
```python
import numpy as np

def simplex(c, A, b):
    """
    Solve max c^T x subject to Ax <= b, x >= 0.
    Returns (optimal_value, optimal_x).
    """
    m, n = A.shape
    # Build initial tableau with slack variables
    tableau = np.zeros((m + 1, n + m + 1))
    tableau[:m, :n] = A
    tableau[:m, n:n+m] = np.eye(m)
    tableau[:m, -1] = b
    tableau[-1, :n] = -c  # objective row (negated for max)
    basis = list(range(n, n + m))  # slack variables are initial basis
    while True:
        # Step 1: check optimality (all obj row coeffs >= 0)
        obj_row = tableau[-1, :-1]
        if np.all(obj_row >= -1e-10):
            break
        # Step 2: entering variable (most negative in obj row)
        pivot_col = np.argmin(obj_row)
        # Step 3: minimum ratio test
        column = tableau[:m, pivot_col]
        rhs = tableau[:m, -1]
        ratios = np.full(m, np.inf)
        for i in range(m):
            if column[i] > 1e-10:
                ratios[i] = rhs[i] / column[i]
        if np.all(ratios == np.inf):
            raise ValueError('LP is unbounded')
        pivot_row = np.argmin(ratios)
        # Step 4: pivot
        pivot_val = tableau[pivot_row, pivot_col]
        tableau[pivot_row] /= pivot_val
        for i in range(m + 1):
            if i != pivot_row:
                tableau[i] -= tableau[i, pivot_col] * tableau[pivot_row]
        basis[pivot_row] = pivot_col
    # Extract solution
    x = np.zeros(n)
    for i, var in enumerate(basis):
        if var < n:
            x[var] = tableau[i, -1]
    return tableau[-1, -1], x
```
Using this on the production planning example:
```python
c = np.array([70.0, 50.0])
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
b = np.array([240.0, 180.0])

opt_val, opt_x = simplex(c, A, b)
print(f'Tables: {opt_x[0]:.0f}, Chairs: {opt_x[1]:.0f}')
print(f'Profit: €{opt_val:.2f}')
# Tables: 45, Chairs: 30
# Profit: €4650.00
```
This simple implementation lacks features like Bland's rule, two-phase initialization, and numerical stability safeguards — all of which are essential for a production solver. But it captures the core idea: inspect the objective row, pick a pivot column and row, perform the pivot, and repeat.
In Part 3, we will explore the dual of a linear program, the powerful relationship between primal and dual, and how sensitivity analysis helps answer "what if" questions about the problem data.