# Aspects of OR - Linear Programming (Part 1)

Author: Till Heller
Linear Programming (LP) is one of the most fundamental and widely used tools in Operations Research. The idea is simple: optimize a linear objective function subject to linear constraints. Despite this simplicity, LP can model a surprisingly broad class of real-world problems — from production planning and logistics to finance and telecommunications.
The origins of LP go back to the 1940s. George Dantzig developed the simplex method in 1947 while working for the U.S. Air Force on planning and scheduling problems. Around the same time, Leonid Kantorovich in the Soviet Union had independently formulated similar ideas for industrial production optimization, work for which he later received the Nobel Prize in Economics (1975). Today, LP solvers are at the heart of virtually every Operations Research toolkit.
In this first part of the LP mini-series, we introduce the formal definitions, build geometric intuition, and walk through two practical examples with Python implementations. In Part 2, we will look under the hood at the simplex method, and in Part 3 we will explore duality and sensitivity analysis.
## What is a linear program?
A linear program consists of three ingredients:
- Decision variables — the unknowns we want to determine, e.g. $x_1, \dots, x_n$.
- Objective function — a linear function of the decision variables that we want to minimize or maximize.
- Constraints — linear inequalities or equalities that the decision variables must satisfy.
A general LP in standard form can be written as:

$$
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & c^\top x \\
\text{subject to} \quad & Ax \le b \\
& x \ge 0,
\end{aligned}
$$

where $c \in \mathbb{R}^n$ is the cost vector, $A \in \mathbb{R}^{m \times n}$ is the constraint matrix, and $b \in \mathbb{R}^m$ is the right-hand side vector.

Equivalently, in component notation:

$$
\min \sum_{j=1}^{n} c_j x_j
$$

subject to

$$
\sum_{j=1}^{n} a_{ij} x_j \le b_i \quad \text{for } i = 1, \dots, m, \qquad x_j \ge 0 \quad \text{for } j = 1, \dots, n.
$$
Of course, maximization problems fit this framework too: maximizing $c^\top x$ is the same as minimizing $-c^\top x$. Similarly, an equality constraint $a_i^\top x = b_i$ can be expressed as two inequalities ($a_i^\top x \le b_i$ and $-a_i^\top x \le -b_i$), and "greater-than-or-equal" constraints can be flipped by multiplying both sides by $-1$. So the standard form above is fully general.
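To make these transformations concrete, here is a minimal sketch in Python; the helper `to_standard_form` is ours, purely for illustration, and converts a maximization problem with greater-than-or-equal constraints into the standard form above:

```python
import numpy as np

def to_standard_form(c_max, A_ge, b_ge):
    """Convert 'maximize c_max @ x s.t. A_ge @ x >= b_ge, x >= 0'
    into 'minimize c @ x s.t. A @ x <= b, x >= 0'."""
    c = -np.asarray(c_max, dtype=float)  # max c'x  becomes  min (-c)'x
    A = -np.asarray(A_ge, dtype=float)   # Ax >= b  becomes  (-A)x <= (-b)
    b = -np.asarray(b_ge, dtype=float)
    return c, A, b

# Example: maximize 3*x1 + 2*x2 subject to x1 + x2 >= 1, x >= 0
c, A, b = to_standard_form([3, 2], [[1, 1]], [1])
print(c, A, b)  # [-3. -2.] [[-1. -1.]] [-1.]
```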
## Geometric intuition
For problems with two or three variables, LP has a beautiful geometric interpretation. Each constraint defines a half-space, and the intersection of all half-spaces (together with $x \ge 0$) forms a convex polytope — the feasible region.
The objective function $c^\top x = z$ for a fixed value $z$ defines a hyperplane. As we decrease $z$ (for minimization), this hyperplane sweeps across the feasible region. The optimal solution is the last point of contact between the moving hyperplane and the polytope.
A key insight: the optimal solution always occurs at a vertex (corner point) of the polytope, unless the objective is parallel to a face, in which case every point on that face is optimal. This is the geometric foundation of the simplex method, which we will cover in Part 2. In fact, the simplex method works by moving from vertex to vertex along the edges of the polytope, always improving the objective value, until it reaches the optimum.
To illustrate this, consider a small example with two variables:

$$
\begin{aligned}
\max \quad & x_1 + 2x_2 \\
\text{subject to} \quad & x_1 + x_2 \le 3 \\
& x_2 \le 2 \\
& x_1, x_2 \ge 0.
\end{aligned}
$$

The feasible region is a polygon in the $x_1$-$x_2$ plane with vertices at $(0,0)$, $(3,0)$, $(1,2)$, and $(0,2)$. Evaluating the objective at each vertex gives $0$, $3$, $5$, and $4$ respectively. The maximum is $5$, attained at $(1,2)$. Notice that we only needed to check the vertices — not the infinitely many interior points.
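This vertex-checking idea is easy to demonstrate in code. The following sketch enumerates all intersections of pairs of constraint boundaries for the example above, keeps the feasible ones, and picks the best. Such brute-force enumeration only makes sense for tiny problems, but it mirrors the geometry:

```python
import itertools
import numpy as np

# Every vertex of the polygon is the intersection of two constraint
# boundaries; try each pair, keep feasible points, evaluate the objective.
A = np.array([[1.0, 1.0],    # x1 + x2 <= 3
              [0.0, 1.0],    # x2 <= 2
              [-1.0, 0.0],   # x1 >= 0  (written as -x1 <= 0)
              [0.0, -1.0]])  # x2 >= 0  (written as -x2 <= 0)
b = np.array([3.0, 2.0, 0.0, 0.0])
c = np.array([1.0, 2.0])     # objective: maximize x1 + 2*x2

vertices = []
for i, j in itertools.combinations(range(len(A)), 2):
    try:
        point = np.linalg.solve(A[[i, j]], b[[i, j]])
    except np.linalg.LinAlgError:
        continue  # parallel boundaries never intersect
    if np.all(A @ point <= b + 1e-9):  # keep only feasible intersections
        vertices.append(point)

best = max(vertices, key=lambda p: c @ p)
print(best, c @ best)  # -> [1. 2.] 5.0
```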
## Key terminology
Before diving into examples, let us pin down some terminology:
- Feasible solution: a vector that satisfies all constraints.
- Feasible region (or feasible set): the set of all feasible solutions.
- Optimal solution: a feasible solution that achieves the best (min or max) objective value.
- Optimal value: the objective function value at the optimal solution.
- Infeasible LP: an LP whose feasible region is empty (the constraints are contradictory).
- Unbounded LP: an LP where the objective can be improved without limit (the feasible region extends to infinity in the direction of improvement).
- Binding constraint: a constraint that holds with equality at the optimal solution, i.e. $a_i^\top x^* = b_i$.
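Infeasibility and unboundedness are not just theoretical curiosities: solvers detect and report them through status codes. A minimal sketch with Gurobi, assuming a populated model object like the ones built in the examples below:

```python
from gurobipy import GRB

# model: a gurobipy.Model already populated with variables and constraints
model.optimize()
if model.Status == GRB.OPTIMAL:
    print(f'Optimal value: {model.ObjVal}')
elif model.Status == GRB.INFEASIBLE:
    print('Infeasible: the constraints are contradictory.')
elif model.Status in (GRB.UNBOUNDED, GRB.INF_OR_UNBD):
    print('Unbounded: the objective can be improved without limit.')
```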
## Example 1: Production planning
A furniture workshop produces tables and chairs. Each table requires 4 hours of carpentry and 2 hours of finishing, while each chair requires 2 hours of carpentry and 3 hours of finishing. The workshop has 240 hours of carpentry time and 180 hours of finishing time available per week. Each table yields a profit of €70, and each chair yields €50. How many tables and chairs should be produced to maximize profit?
Let $x_1$ be the number of tables and $x_2$ the number of chairs produced per week. The LP is:

$$
\begin{aligned}
\max \quad & 70x_1 + 50x_2 \\
\text{subject to} \quad & 4x_1 + 2x_2 \le 240 \\
& 2x_1 + 3x_2 \le 180 \\
& x_1, x_2 \ge 0.
\end{aligned}
$$
### Solving with Python and Gurobi
An implementation using Gurobi looks as follows:
```python
import gurobipy as gp
from gurobipy import GRB

model = gp.Model('production_planning')

# Decision variables: number of tables and chairs per week
x1 = model.addVar(name='tables', lb=0)
x2 = model.addVar(name='chairs', lb=0)

# Objective: maximize profit (€70 per table, €50 per chair)
model.setObjective(70 * x1 + 50 * x2, GRB.MAXIMIZE)

# Resource constraints: available carpentry and finishing hours
model.addConstr(4 * x1 + 2 * x2 <= 240, 'carpentry')
model.addConstr(2 * x1 + 3 * x2 <= 180, 'finishing')

model.optimize()
print(f'Tables: {x1.X:.0f}, Chairs: {x2.X:.0f}')
print(f'Profit: €{model.ObjVal:.2f}')
```
The solver finds the optimal solution $x_1 = 45$, $x_2 = 30$, with a profit of €4,650. Both constraints are binding at this point — all available carpentry and finishing hours are fully utilized.
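Gurobi exposes the slack of each constraint through its Slack attribute, so one can verify this programmatically after optimization:

```python
# A constraint is binding exactly when its slack is zero.
for constr in model.getConstrs():
    status = 'binding' if abs(constr.Slack) < 1e-6 else 'not binding'
    print(f'{constr.ConstrName}: slack = {constr.Slack:.2f} ({status})')
```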
### Alternative: SciPy
If you do not have access to Gurobi, the same problem can be solved with SciPy's linprog. Note that linprog only minimizes, so we negate the objective:
```python
from scipy.optimize import linprog

# linprog minimizes, so we negate the profit coefficients
c = [-70, -50]
A_ub = [[4, 2], [2, 3]]
b_ub = [240, 180]
x1_bounds = (0, None)  # x1 >= 0
x2_bounds = (0, None)  # x2 >= 0

result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[x1_bounds, x2_bounds], method='highs')
print(f'Tables: {result.x[0]:.0f}, Chairs: {result.x[1]:.0f}')
print(f'Profit: €{-result.fun:.2f}')
```
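In practice it is worth checking `result.success` (or the integer `result.status`) before reading off the solution, since linprog also uses the returned result object to signal infeasible or unbounded models.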
## Example 2: Blending problem
A chemical company produces a fertilizer blend from three raw materials. The blend must contain at least 10% nitrogen and at least 6% phosphorus. The raw materials have the following compositions and costs:
| Raw material | Nitrogen (%) | Phosphorus (%) | Cost (€/kg) |
|---|---|---|---|
| A | 15 | 3 | 8 |
| B | 5 | 10 | 6 |
| C | 10 | 5 | 7 |
The company wants to find the cheapest blend that meets the nutrient requirements. Let $x_A$, $x_B$, $x_C$ denote the fraction of each raw material in the blend (so they must sum to 1). The LP is:

$$
\begin{aligned}
\min \quad & 8x_A + 6x_B + 7x_C \\
\text{subject to} \quad & 15x_A + 5x_B + 10x_C \ge 10 \\
& 3x_A + 10x_B + 5x_C \ge 6 \\
& x_A + x_B + x_C = 1 \\
& x_A, x_B, x_C \ge 0.
\end{aligned}
$$
### Solving with Python and Gurobi
```python
import gurobipy as gp
from gurobipy import GRB

model = gp.Model('blending')

# Decision variables: fraction of each raw material in the blend
x = model.addVars(['A', 'B', 'C'], name='x', lb=0)

# Objective: minimize cost per kg of blend
model.setObjective(8 * x['A'] + 6 * x['B'] + 7 * x['C'], GRB.MINIMIZE)

# Nutrient constraints (in percent, since the fractions sum to 1)
model.addConstr(15 * x['A'] + 5 * x['B'] + 10 * x['C'] >= 10, 'nitrogen')
model.addConstr(3 * x['A'] + 10 * x['B'] + 5 * x['C'] >= 6, 'phosphorus')

# Fractions must sum to 1
model.addConstr(x['A'] + x['B'] + x['C'] == 1, 'blend')

model.optimize()
for mat in ['A', 'B', 'C']:
    print(f'Raw material {mat}: {x[mat].X:.2%}')
print(f'Cost: €{model.ObjVal:.2f}/kg')
```
The optimal cost is €7.00/kg. One optimal blend uses equal parts of the three raw materials (one third each), and at that blend both nutrient constraints are binding: it contains exactly 10% nitrogen and 6% phosphorus. In fact the optimum is not unique here: any blend with $x_A = x_B = t$ and $x_C = 1 - 2t$ for $t$ between $1/3$ and $1/2$ also costs €7.00/kg, so the solver may report any vertex of this optimal face.
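As a sanity check, one can recompute the nutrient content of the returned blend directly from the solution values:

```python
# Evaluate the nutrient rows at the solver's solution; both values
# should meet or exceed the requirements (10% N, 6% P).
nitrogen = 15 * x['A'].X + 5 * x['B'].X + 10 * x['C'].X
phosphorus = 3 * x['A'].X + 10 * x['B'].X + 5 * x['C'].X
print(f'Nitrogen: {nitrogen:.1f}%, Phosphorus: {phosphorus:.1f}%')
```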
## Why LP matters
Linear programming is not just an academic exercise. Modern LP solvers can handle problems with millions of variables and constraints in seconds. Moreover, LP serves as the foundation for more advanced techniques:
- Integer Programming builds on LP: branch-and-bound solvers repeatedly solve LP relaxations as a building block.
- The simplex method and interior point methods are the algorithmic workhorses behind LP.
- Duality theory provides deep insights into the structure of optimization problems and has practical applications in sensitivity analysis and pricing.
We will explore the simplex method in detail in Part 2 of this mini-series.