Fair Division of Goods: Optimizing for Social Welfare
November 13th, 2025
Maximizing collective welfare often conflicts with individual fairness, forcing fundamental trade-offs between efficiency and equity in allocation design.
In Part III, we confronted the harsh reality that perfect individual fairness is often impossible with indivisible goods. We learned to navigate a landscape of relaxed fairness notions—EF1, MMS, PROPm—each capturing a different approximation to the ideal of treating everyone fairly. We discovered algorithms like round-robin that achieve these relaxations efficiently, and we explored the computational and philosophical trade-offs between different fairness criteria.
Throughout Parts I through III, our lens was individual-centric: Does Maya envy Jordan? Did Sam receive his proportional share? Is each sibling treated fairly according to their own assessment? This perspective respects individual rights and entitlements: each heir has a claim to fair treatment that shouldn’t be violated merely to improve aggregate outcomes.
But there’s another way to view allocation problems, one that shifts from individual satisfaction to collective optimization. Instead of asking “Is each person treated fairly?”, we can ask “Does this allocation create the most total value for the group?” or “Does it best serve the family’s collective interests?”
Consider these two allocations of the siblings’ estate:
Allocation A (Fair):
- Achieves EF1: no sibling envies another by more than one item
- Approximately proportional: each receives roughly their 1/3 share
- Total utility across siblings: $1,350,000
Allocation B (Welfare-Maximizing):
- Violates EF1: Jordan envies Maya by multiple items
- Jordan receives less than proportional share
- Total utility across siblings: $1,450,000 (+$100,000 more value created)
Which should Maya, Jordan, and Sam choose?
Allocation A respects individual fairness. No one can claim they were treated unfairly relative to others. But it leaves $100,000 in potential value unrealized. Items go to siblings who value them moderately rather than to siblings who value them most.
Allocation B maximizes total family wealth by better matching items to those who value them most. But it achieves this efficiency by sacrificing Jordan’s individual fairness. Jordan might reasonably object: “I’m an equal heir. Why should my claim to fair treatment be subordinated to maximizing family wealth?”
This tension between individual fairness and collective welfare is one of the deepest problems in both allocation theory and political philosophy. It reflects fundamentally different ways of thinking about what makes an allocation “good”:
The fairness perspective says: Respect each person’s individual rights and entitlements. An allocation is good if no one is treated unfairly, even if that means leaving some value on the table.
The welfare perspective says: Maximize the collective good. An allocation is good if it creates the most total value for the group, even if that means some individuals receive less than a “fair” share.
Neither perspective is obviously wrong. They embody different values: individual rights versus collective flourishing, procedural justice versus consequentialist outcomes, egalitarian treatment versus efficient matching. Most real allocation problems require navigating the tension between the two.
In Part IV, we’ll explore the welfare perspective systematically. We’ll discover:
- Different ways to measure collective good: Should we sum utilities (utilitarianism), multiply them (Nash welfare), or maximize the minimum (Rawlsian egalitarianism)? Each embodies different ethical commitments.
- Computational challenges of welfare maximization: Some welfare functions are easy to optimize (utilitarian welfare for additive valuations takes linear time), while others are NP-hard (maximin welfare is hard even to approximate well).
- Philosophical implications of choosing welfare over fairness: What normative assumptions are we making when we optimize for society rather than protect individuals? When is efficiency worth sacrificing equity?
- The leximin principle: An elegant compromise that combines egalitarian prioritization with Pareto efficiency, showing that we don’t always have to choose between fairness and welfare.
By the end of Part IV, you’ll understand not just how to compute welfare-maximizing allocations, but when to prioritize collective good over individual fairness and what moral commitments you’re making when you do.
For Maya, Jordan, and Sam, this perspective offers new possibilities: Perhaps they care more about maximizing family wealth than about each receiving exactly equal treatment. Perhaps they can maximize welfare and then compensate whoever receives less, creating a Pareto improvement. Or perhaps they’ll discover that welfare optimization with an egalitarian flavor (Nash welfare or leximin) achieves both reasonable fairness and high efficiency.
The question isn’t whether welfare or fairness is “correct”. Both have legitimate claims. The question is: Which values should guide this particular allocation? Part IV gives you the tools to answer that question rigorously, understanding the full implications of your choice.
Let’s begin by understanding what it means to optimize for society, and why there’s more than one way to do it.
Optimizing For Society🔗
Parts I through III focused on individual fairness: ensuring that each agent receives an allocation they find acceptable according to fairness criteria like envy-freeness or proportionality. We’ve asked questions like “Does Maya envy Jordan?” and “Did Sam receive his fair share?” The perspective has been agent-centric: what does each individual deserve or expect?
But there’s another lens through which to view allocation problems: social welfare. Instead of asking whether each individual is satisfied, we can ask: What allocation creates the most total value for society? Which allocation makes the group as a whole best off?
This shift in perspective is profound. Fairness criteria treat agents as individuals with rights and entitlements. Welfare criteria treat agents as members of a collective whose aggregate well-being matters. Sometimes these perspectives align. A fair allocation might also maximize welfare. But often they conflict, forcing difficult trade-offs.
Consider two allocations of our siblings’ estate:
Allocation F (Fair):
- Maya receives {house, car} → value to Maya: $475,000
- Jordan receives {piano, artwork, photos} → value to Jordan: $350,000
- Sam receives {investments, watches, furniture} → value to Sam: $553,000
- Properties: EF1, approximately proportional
- Total value: $475k + $350k + $553k = $1,378,000
Allocation W (Welfare-maximizing):
- Maya receives {house} → value to Maya: $450,000
- Jordan receives {photos, piano, artwork, furniture} → value to Jordan: $360,000
- Sam receives {investments, watches, car} → value to Sam: $578,000
- Properties: Not a Pareto improvement over Allocation F: Maya receives $25,000 less than under F
- Total value: $450k + $360k + $578k = $1,388,000
Allocation W creates $10,000 more total value than Allocation F by better matching items to those who value them most: the furniture moves to Jordan, who values it more than Sam does, and the car moves to Sam. But the gain isn’t free: Maya receives $25,000 less than under Allocation F.
Should we choose the allocation that’s better for Maya individually, or the allocation that’s better for the family as a whole?
The utilitarian says: choose Allocation W. Total value is higher. The $10,000 in additional value outweighs the question of who holds it. Maximize the sum.
The individualist says: choose Allocation F. Maya’s individual claim matters. We shouldn’t make one heir worse off merely to increase the family total. Protect individuals.
The pragmatist says: it depends. How much does Maya care about the loss? Could Jordan and Sam compensate Maya out of their gains, making everyone better off? Is family harmony worth more than $10,000?
This part explores social welfare optimization: the theory, algorithms, computational challenges, and philosophical implications of prioritizing collective good over individual fairness. We’ll discover that:
- Different welfare functions embody different ethical frameworks: Summing utilities (utilitarianism), multiplying utilities (Nash’s egalitarian efficiency), or maximizing the minimum (Rawlsian justice)
- Welfare maximization is often computationally hard: Even when finding fair allocations is easy, finding optimal welfare allocations can be NP-hard
- Efficiency and equity trade off fundamentally: We cannot always maximize both total value and individual fairness simultaneously
- The choice of welfare function is a value judgment: There’s no “correct” way to aggregate individual utilities into social welfare
By the end of this part, you’ll understand how to compute welfare-maximizing allocations, when to prioritize welfare over fairness, and most importantly, what normative commitments you’re making when you choose one social welfare function over another.
The shift from fairness to welfare means adopting a different ethical framework, one that asks not “is each individual treated fairly?” but “is the collective outcome the best we can achieve?” Both perspectives are valid. Understanding both makes you a more sophisticated designer of allocation systems.
Theory: Social Welfare Functions🔗
A social welfare function is a mathematical object that aggregates individual utilities into a single measure of collective well-being. Given an allocation A = (A₁, …, A_n) where agent \( i \) receives bundle \( A_i \) with utility \( v_i(A_i) \), a social welfare function SW computes a single number representing how good this allocation is for society:
\[SW(A) = f(v_1(A_1), v_2(A_2), \ldots, v_n(A_n))\]The function f takes a vector of individual utilities and produces a scalar social welfare value. Different choices of f embody different ethical theories about how individual well-being should be aggregated.
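To make the abstraction concrete, here is a minimal Python sketch of our own (the function names are hypothetical) showing three candidate choices of f applied to the same utility vector; each is defined formally in the subsections that follow:
import math
# Three candidate aggregators f, each mapping a utility vector to one scalar.
# These preview the welfare functions defined formally below.
def utilitarian(utils):
    return sum(utils)                        # total utility
def nash_log(utils):
    return sum(math.log(u) for u in utils)   # same ordering as the product
def maximin(utils):
    return min(utils)                        # utility of the worst-off agent
utils = [475_000, 350_000, 553_000]          # one possible utility vector
print(utilitarian(utils), nash_log(utils), maximin(utils))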
This seemingly simple formulation hides profound philosophical complexity. By choosing a social welfare function, we’re making claims about:
- Whether utilities can be compared across people
- Whether we should prioritize equality or total abundance
- How much weight to give the worst-off versus the best-off
- Whether losing one unit matters more when you have little versus when you have much
Let’s examine the three most important social welfare functions, each corresponding to a major tradition in political philosophy.
Bentham's Utilitarianism: Maximizing the Sum🔗
Intuition: The simplest and historically most influential approach is utilitarian welfare: sum everyone’s utilities. An allocation is better if it produces more total happiness, regardless of how that happiness is distributed.
\[SW_{\text{sum}}(A) = \sum_i v_i(A_i)\]The utilitarian perspective, articulated most famously by Jeremy Bentham in the 18th century and refined by John Stuart Mill, holds that the right action is the one that produces “the greatest happiness for the greatest number.” Applied to allocation: choose the allocation that maximizes total utility across all agents.
Why is this compelling?
Efficiency: Utilitarian welfare naturally captures Pareto efficiency. If we can reallocate to make someone better off without making anyone worse off, total utility increases. Maximizing the sum automatically seeks out such improvements.
Aggregative fairness: From a certain perspective, summing utilities treats everyone equally. Each person’s utility unit counts the same. Adding 10 units to Maya’s utility has the same effect on social welfare as adding 10 units to Jordan’s. This is a form of egalitarianism: equal weight for everyone’s happiness.
Simplicity: Summing is mathematically straightforward. No complex functions, no thresholds, no discontinuities. This makes utilitarian welfare easy to optimize in many contexts.
Real-world resonance: Many policy decisions use utilitarian logic. Cost-benefit analysis sums costs and benefits across people. GDP measures total economic output. Public health often seeks to maximize QALYs (quality-adjusted life years) across populations.
Example with our siblings: Let’s compute utilitarian welfare for several allocations.
Allocation 1:
- Maya: {house, car} → $475,000
- Jordan: {piano, photos, artwork} → $350,000
- Sam: {investments, watches, furniture} → $553,000
- SW_sum = 475 + 350 + 553 = $1,378,000
Allocation 2:
- Maya: {house, furniture, photos} → $478,000
- Jordan: {piano, artwork} → $150,000
- Sam: {investments, car, watches} → $575,000
- SW_sum = 478 + 150 + 575 = $1,203,000
Allocation 3:
- Maya: {house} → $450,000
- Jordan: {piano, photos, artwork, furniture} → $360,000
- Sam: {investments, watches, car} → $578,000
- SW_sum = 450 + 360 + 578 = $1,388,000
From a utilitarian perspective, Allocation 3 is best: it produces the most total value. Is it also fair? Jordan receives items worth $360,000, well above Jordan’s proportional share of $233,333, so proportionality is satisfied. But we still need to check envy-freeness.
Does Jordan envy Maya? Jordan values Maya’s bundle (just the house) at $150,000, and Jordan’s own bundle at $360,000. No envy. Does Maya envy Jordan? Maya values Jordan’s bundle at $5k + $8k + $10k + $20k = $43,000, and her own at $450,000. No envy. Does Sam envy Maya? Sam values Maya’s bundle at $100,000, Sam’s own at $578,000. No envy. The remaining pairs check out the same way.
So Allocation 3 happens to be envy-free! But this is fortunate rather than guaranteed: utilitarian welfare doesn’t ensure envy-freeness. The utilitarian optimization chose this allocation purely because it maximizes the sum, not because it’s fair.
But utilitarianism has serious problems:
The utility monster: Suppose one agent derives enormous utility from resources while others derive little. A pure utilitarian would give everything to the high-utility agent, leaving others with nothing. If Jordan valued the entire estate at $10,000,000 while Maya and Sam each valued it at $700,000, utilitarianism would give everything to Jordan, producing total welfare of $10M (versus $1.4M if split equally). This violates basic intuitions about fairness.
The “utility monster” critique, articulated by Robert Nozick, shows that maximizing the sum can produce extremely unequal allocations that privilege those with “expensive preferences.”
Interpersonal utility comparisons: Utilitarianism requires that we can meaningfully compare utility across people. When we say Maya’s utility increased by 10 and Jordan’s decreased by 5, we’re claiming that Maya’s gain exceeds Jordan’s loss by 5 units and that this comparison is meaningful. But is one unit of Maya’s utility really equivalent to one unit of Jordan’s?
We’ve normalized valuations so each sibling’s total utility is $700,000, but this normalization is arbitrary. We could have normalized to different scales, and the optimal allocation would change. Utilitarianism sweeps this problem under the rug.
Ignoring distribution: Utilitarianism is indifferent to how utility is distributed. An allocation giving {1,000, 0, 0} utility to three agents has the same welfare as {333, 333, 334} as both sum to 1,000. But most people find the equal distribution fairer. Utilitarianism sees only the total, not the distribution.
Mathematical properties of utilitarian welfare:
Linear: Adding a constant to one person’s utility increases social welfare by that constant. Welfare is additive across people.
Scale-dependent: If we measure utilities in different units (dollars for Maya, euros for Jordan), the optimization changes. Utilitarian welfare is not scale-invariant.
Unbounded: There’s no upper limit to social welfare. We can always improve welfare by increasing someone’s utility.
Pareto-respecting: If allocation A Pareto-dominates allocation B (everyone weakly prefers A, at least one strictly prefers), then SW_sum(A) > SW_sum(B). Utilitarian welfare respects Pareto improvements.
Nash's Egalitarian Efficiency: Maximizing the Product🔗
Intuition: Nash social welfare takes the product of utilities rather than the sum:
\[SW_{\text{Nash}}(A) = \prod_i v_i(A_i) = v_1(A_1) \times v_2(A_2) \times \cdots \times v_n(A_n)\]Equivalently, we can maximize the geometric mean:
\[SW_{\text{Nash}}(A) = \left(\prod_i v_i(A_i)\right)^{1/n}\]Or, taking logarithms (which preserves the ordering), maximize the sum of log utilities:
\[SW_{\text{Nash}}(A) = \sum_i \log(v_i(A_i))\]This formulation, introduced by John Nash in his work on bargaining theory, has remarkable properties that balance efficiency and equity.
Why is this compelling?
Built-in egalitarianism: Nash welfare severely penalizes giving someone very little. If any agent receives utility close to zero, the product becomes close to zero, tanking total welfare. This creates strong incentive to avoid leaving anyone with nearly nothing.
Mathematically: if agent \( i \)’s utility is \(\epsilon \) (tiny), then no matter how large other agents’ utilities are, the product is bounded by \(\epsilon \times \) (product of others). The only way to make the product large is to make every factor reasonably large.
Scale-invariance: Unlike utilitarian welfare, Nash welfare is invariant to rescaling utilities. If we multiply agent \( i \) ‘s utilities by a constant, the optimal allocation doesn’t change. This addresses the interpersonal comparison problem: we don’t need to decide on “correct” scales.
Formally: \( SW_{\text{Nash}}(c_1v_1, c_2v_2, \ldots, c_nv_n) = (\prod_i c_i) \times SW_{\text{Nash}}(v_1, v_2, \ldots, v_n)\). Since the constants multiply out, the optimal allocation maximizing \( SW_{\text{Nash}}\) is the same regardless of the constants \( c_i \).
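A brute-force check makes scale-invariance concrete. The sketch below, a hypothetical 2-agent, 3-item instance of our own making, finds the Nash-optimal assignment, rescales one agent’s valuations by 1000, and recovers the same assignment:
from itertools import product as all_assignments
vals = [[6, 1, 3],   # agent 0's valuations for items 0, 1, 2 (hypothetical numbers)
        [2, 4, 3]]   # agent 1's valuations
def best_nash_assignment(vals):
    # Brute-force the item-to-agent assignment maximizing the product of utilities
    best_product, best_assign = -1.0, None
    for assign in all_assignments(range(2), repeat=3):  # assign[j] = owner of item j
        utils = [sum(vals[i][j] for j in range(3) if assign[j] == i) for i in range(2)]
        if utils[0] * utils[1] > best_product:
            best_product, best_assign = utils[0] * utils[1], assign
    return best_assign
print(best_nash_assignment(vals))                  # (0, 1, 1)
scaled = [[1000 * v for v in vals[0]], vals[1]]    # rescale agent 0's utilities only
print(best_nash_assignment(scaled))                # (0, 1, 1) again: same allocation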
Pareto optimality: Allocations maximizing Nash welfare are always Pareto optimal. You cannot improve one person’s utility without decreasing another’s at optimum. This is a deep result from Nash’s bargaining theory.
Diminishing returns: The logarithmic formulation shows why Nash welfare favors equality: \(\log \) has diminishing marginal returns. Going from utility 1 to 2 increases \(\log \) by \(\log(2) \approx 0.69 \). Going from 100 to 101 increases \(\log \) by \(\log(101/100) \approx 0.01 \). The function values small improvements to worse-off agents more than large improvements to better-off agents.
Example with our siblings:
Allocation 1:
- Maya: $475,000, Jordan: $350,000, Sam: $553,000
- SW_Nash = 475,000 × 350,000 × 553,000 ≈ 9.19 × 10¹⁶
- log(SW_Nash) = log(475,000) + log(350,000) + log(553,000) ≈ 39.06 (natural log)
Allocation 2:
- Maya: $478,000, Jordan: $150,000, Sam: $575,000
- SW_Nash = 478,000 × 150,000 × 575,000 ≈ 4.12 × 10¹⁶
- log(SW_Nash) ≈ 38.26
Allocation 3:
- Maya: $450,000, Jordan: $360,000, Sam: $578,000
- SW_Nash = 450,000 × 360,000 × 578,000 ≈ 9.36 × 10¹⁶
- log(SW_Nash) ≈ 39.08
From a Nash welfare perspective, Allocation 3 is best, slightly edging out Allocation 1. Notice that Allocation 2 does poorly on both counts: its total utility is lower than Allocation 1’s ($1,203k vs $1,378k), and its product is far lower still because Jordan receives relatively little ($150k). Nash welfare penalizes this inequality.
Comparing to utilitarian welfare:
- Utilitarian ranking: Allocation 3 > Allocation 1 > Allocation 2
- Nash ranking: Allocation 3 > Allocation 1 > Allocation 2
In this case, the rankings agree! But this won’t always happen. Nash welfare typically favors more equal distributions than utilitarian welfare does.
The philosophical commitment: Nash welfare embodies a form of egalitarian efficiency. It seeks efficiency (Pareto optimality) but with an egalitarian bias. It’s willing to sacrifice some total utility to achieve more equal distribution.
From a contractarian perspective (in the tradition of Hobbes, Locke, and contemporary game theory), Nash welfare represents what rational agents would agree to if bargaining from equal positions. It’s the unique bargaining solution satisfying certain axioms about fairness in negotiation.
Problems with Nash welfare:
Still requires positive utilities: What if an agent receives value 0? The product is 0, and all allocations with any zero utility have the same (terrible) welfare. This is problematic for problems involving chores (negative utility) or situations where someone might receive nothing.
Solution: shift utilities by a constant to ensure all are positive. But this introduces arbitrary choices about the shift amount.
Penalizes specialization: If agents have very different preferences, efficient allocations often specialize: give each agent what they value most, creating high variance in utilities. Nash welfare resists this variance, sometimes sacrificing efficiency for equality even when agents don’t care about the inequality.
Computationally harder than sum: Maximizing products is more complex than maximizing sums. For indivisible goods, exact Nash welfare maximization is NP-hard, though polynomial-time constant-factor approximation algorithms exist (a major breakthrough by Cole and Gkatzelis, 2015). Either way, the optimization is less straightforward than utilitarian welfare.
Mathematical properties of Nash welfare:
Scale-invariant: Rescaling utilities doesn’t change the optimal allocation
Bounded from below by zero: All utilities must be positive; product is zero if any utility is zero
Concave in the log-space: log(product) = sum of logs, and log is concave, making this easier to optimize than the raw product
Satisfies symmetry: If two agents have identical valuations, they receive equal utility at optimum
Guarantees Pareto optimality: Any Nash welfare maximum is Pareto optimal
Rawls's Egalitarianism: Maximizing the Minimum🔗
Intuition: Rawlsian welfare (or maximin welfare, or egalitarian welfare) focuses entirely on the worst-off agent:
\[SW_{\text{maximin}}(A) = \min_i v_i(A_i)\]Social welfare is defined as the utility of the agent with the least utility. An allocation is better if it improves the lot of the worst-off person.
This criterion is inspired by John Rawls’s theory of justice, particularly his difference principle: inequalities are permissible only if they benefit the least advantaged. Rawls argued that behind a “veil of ignorance” (not knowing which position in society you’d occupy) rational agents would choose to maximize the minimum position, ensuring the worst outcome is as good as possible.
Why is this compelling?
Protection of the vulnerable: Maximin welfare completely prioritizes the worst-off. It doesn’t matter if the best-off could gain enormously; if the worst-off can be improved even slightly, we should do so. This embodies a strong commitment to protecting those at the bottom.
Risk aversion: Behind the veil of ignorance, you might end up as the worst-off agent. Maximin is the criterion an extremely risk-averse decision-maker would choose. It’s the “insurance policy” approach to social welfare.
Lexicographic ordering: When comparing allocations, maximin first compares the minimum utilities. If tied, it compares the second-lowest utilities. This leximin refinement (which we’ll explore more in section 5.5) provides a complete ordering of allocations based on prioritizing successive worst-off positions.
No interpersonal comparisons needed for ranking: While we need to compare one person’s utility to another’s to find the minimum, the ranking of allocations only depends on which minimum is larger. This requires less strong assumptions than utilitarian or Nash welfare.
Example with our siblings:
Allocation 1:
- Maya: $475,000, Jordan: $350,000, Sam: $553,000
- SW_maximin = min(475k, 350k, 553k) = $350,000 (Jordan is worst-off)
Allocation 2:
- Maya: $478,000, Jordan: $150,000, Sam: $575,000
- SW_maximin = min(478k, 150k, 575k) = $150,000 (Jordan is worst-off)
Allocation 3:
- Maya: $450,000, Jordan: $360,000, Sam: $578,000
- SW_maximin = min(450k, 360k, 578k) = $360,000 (Jordan is worst-off)
Allocation 4 (an attempt to raise the minimum):
- Maya: {house, car, furniture} → $495,000
- Jordan: {piano, photos, artwork, watches} → $355,000
- Sam: {investments} → $300,000
- SW_maximin = min(495k, 355k, 300k) = $300,000 (Sam is worst-off)
The attempt backfires: concentrating items on Maya and Jordan drops Sam to $300k, below the worst-off position in any of the earlier allocations. Here is another candidate:
Allocation 5:
- Maya: {house} → $450,000
- Jordan: {photos, piano, artwork} → $350,000
- Sam: {investments, watches, car, furniture} → $578,000
- SW_maximin = min(450k, 350k, 578k) = $350,000
Allocation 5 ties Allocation 1 at a $350k minimum, but Allocation 3 remains better with its $360k minimum.
From a maximin perspective: Allocation 3 is best because Jordan (the worst-off) receives $360k, more than in any other allocation. The fact that total welfare might be lower doesn’t matter, as we care only about the worst-off position.
Comparing all three welfare functions:
| Allocation | Utilitarian (sum) | Nash (product) | Maximin (min) |
|---|---|---|---|
| 1 | $1,378k | 9.19 × 10¹⁶ | $350k |
| 2 | $1,203k | 4.12 × 10¹⁶ | $150k |
| 3 | $1,388k | 9.36 × 10¹⁶ | $360k |
Rankings:
- Utilitarian: 3 > 1 > 2
- Nash: 3 > 1 > 2
- Maximin: 3 > 1 > 2
All three rank the allocations the same way for these examples! This is somewhat coincidental; the rankings often differ. The agreement here occurs because Allocation 3 happens to be excellent by all three criteria: highest sum, highest product, and highest minimum.
But maximin has significant problems:
Extreme prioritization: Maximin is willing to sacrifice enormous gains to better-off agents for tiny improvements to the worst-off. If we could give the worst-off agent $0.01 more utility by taking $1,000,000 from the best-off agent (while still leaving them above the new minimum), maximin would endorse the trade.
This seems excessive. Most people think that very small gains to the worst-off don’t justify arbitrarily large losses to others.
Ignoring everyone else: Only the worst-off position matters. The utility of everyone above the minimum has zero weight in the social welfare function. If we can rearrange utilities among the non-worst-off agents, maximin is indifferent.
An allocation yielding utilities {100, 100, 100} has the same maximin welfare as one yielding {100, 500, 1000}, even though the latter Pareto-dominates the former. Maximin simply cannot tell them apart.
Can produce inefficiency: Maximin doesn’t guarantee Pareto optimality. We might pass up Pareto improvements if they don’t help the worst-off.
Example: Allocation A gives {100, 200, 300}. Allocation B gives {100, 250, 350}. B Pareto-dominates A (same for worst-off, better for others). But maximin is indifferent between them (both have minimum 100).
Leximin (the lexicographic refinement) fixes this problem by breaking ties, but pure maximin is Pareto-incomplete.
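The tie-break is mechanical: sort each utility vector in ascending order and compare lexicographically. A short sketch using the example above:
a = sorted([100, 200, 300])  # Allocation A's utilities
b = sorted([100, 250, 350])  # Allocation B's utilities
print(min(a) == min(b))      # True: pure maximin cannot tell them apart
print(b > a)                 # True: leximin prefers B (Python list comparison is lexicographic)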
Highly non-linear: Small changes in who is worst-off can cause discontinuous jumps in social welfare. This makes optimization harder and can create instability.
Mathematical properties of maximin welfare:
Ordinal: The ranking of allocations is preserved under any common increasing transformation of utilities. If agent 1 has utility 100 and agent 2 has utility 101, maximin sees only that agent 1 is worst-off, not that the gap is small.
Scale-invariant within each agent: Multiplying one agent’s utility by a constant doesn’t change the optimal allocation (though it might change which agent is worst-off).
Non-smooth: The min function has kinks where the minimum changes identity. This makes calculus-based optimization difficult.
Does not guarantee Pareto optimality: Can be indifferent between allocations where one Pareto-dominates another.
Satisfies strong monotonicity: If the minimum increases, welfare increases. But increasing non-minimum utilities doesn’t change welfare.
The Efficiency-Equity Spectrum🔗
These three welfare functions span a spectrum from efficiency-focused to equity-focused:
Utilitarian (sum) → Nash (product) → Maximin (minimum)
Efficiency-focused ←→ Equity-focused
Utilitarian maximizes total value (efficiency) but allows extreme inequality. It’s the most “efficiency-oriented.”
Maximin maximizes equality at the bottom but may sacrifice large amounts of total value. It’s the most “equity-oriented.”
Nash sits in the middle, balancing efficiency (Pareto optimality) with equity (protecting the worse-off through diminishing returns).
Different contexts call for different points on this spectrum:
When to prioritize efficiency (utilitarian):
- Resources are abundant; everyone can be satisfied
- Agents have high trust and can compensate each other
- Short-term allocation with long-term redistribution possible
- Specialized expertise means some agents create much more value with resources
When to prioritize equity (maximin):
- Resources are scarce; some agents risk getting very little
- No compensation mechanisms exist
- Outcomes are irreversible (can’t reallocate later)
- Protecting the vulnerable is a core value
When to balance (Nash):
- Need both efficiency and equity
- Agents are risk-averse but not extremely so
- Want to respect Pareto improvements while limiting inequality
- Fairness and efficiency are both important
For our siblings, the question becomes: Which point on this spectrum reflects their values?
If they prioritize total family wealth preservation (perhaps they’ll pool some resources, or care about collective legacy): utilitarian.
If they prioritize each sibling having adequate resources for their needs (Maya needs housing, Sam needs debt relief, Jordan needs income): maximin or Nash.
If they prioritize efficient matching (each sibling getting what they value most) while maintaining rough equality: Nash.
The choice is normative. Different welfare functions operationalize different theories of social justice.
Mathematical Properties: A Comparative Summary🔗
Let’s synthesize the key mathematical properties of these welfare functions:
| Property | Utilitarian | Nash | Maximin |
|---|---|---|---|
| Formula | Σvᵢ | Πvᵢ or Σlog(vᵢ) | min vᵢ |
| Pareto optimal | Yes (always) | Yes (always) | No (unless leximin) |
| Scale invariant | No | Yes | Yes (within agents) |
| Differentiable | Yes | Yes (log form) | No (kinks) |
| Favors equality | No | Moderate | Strong |
| Sensitivity to worst-off | 1/n weight | High (via log) | Complete |
| Computational complexity | Easy | Moderate | Hard |
| Interpersonal comparison | Required | Not needed (scale-invariant) | Minimal |
Implications for algorithm choice:
Utilitarian: Easy to optimize. Sum is linear, so many algorithms (greedy, dynamic programming, linear programming) apply. Computational complexity is often polynomial.
Nash: Moderate difficulty. Product is non-linear, but taking logs converts it to a sum of concave functions. Convex optimization techniques apply. Polynomial-time algorithms exist for many cases.
Maximin: Hard to optimize. The min function creates discontinuities. Finding the allocation that maximizes the minimum often requires comparing many candidates. Strongly NP-hard for many problem variants.
For our siblings (3 agents, 8 items): all three are computationally feasible, and at this scale each finishes essentially instantly. The differences in difficulty only begin to bite as the number of agents and items grows.
Philosophical Synthesis: What Are You Really Optimizing?🔗
Choosing a social welfare function is a philosophical commitment. You’re answering questions like:
How much does inequality matter?
- If “not much”: utilitarian
- If “quite a bit”: Nash
- If “enormously”: maximin
How risk-averse are agents?
- If “risk-neutral”: utilitarian
- If “moderately risk-averse”: Nash
- If “extremely risk-averse”: maximin
Can agents compensate each other?
- If “yes, easily”: utilitarian (maximize total, redistribute later)
- If “somewhat”: Nash (built-in redistribution via product)
- If “no”: maximin (protect worst-off directly)
What matters more: total value or distribution?
- If “total value”: utilitarian
- If “balanced”: Nash
- If “distribution”: maximin
Behind the veil of ignorance, what would you choose?
- If “maximize my expected value”: utilitarian
- If “balance expected value and risk”: Nash
- If “protect my worst-case outcome”: maximin
These aren’t questions with objectively correct answers. Different philosophical traditions, different cultures, and different contexts privilege different answers. The point isn’t to find the “right” welfare function. It’s to understand what each welfare function assumes and to choose knowingly.
For Maya, Jordan, and Sam, the family must decide: Are they utilitarian siblings (maximize family wealth), Nash siblings (balance efficiency and equality), or Rawlsian siblings (ensure no one is left behind)? The answer depends on their relationships, their values, and their circumstances.
In the next sections, we’ll see how to compute allocations that maximize these welfare functions, understand the computational challenges involved, and explore the philosophical implications of choosing collective good over individual fairness. The theory we’ve built here is essential for interpreting those algorithms and making wise choices about when to use them.
Practice: Welfare Maximization Algorithms🔗
We’ve explored the theory of social welfare functions—utilitarian (sum), Nash (product), and maximin (minimum). Now let’s see how to compute welfare-maximizing allocations in practice. We’ll build three implementations, apply them to our siblings’ estate, and analyze when each approach is most appropriate.
The journey from theory to practice reveals important lessons: some welfare functions that seem conceptually complex are computationally simple, while others that appear straightforward are algorithmically challenging. Understanding these computational realities is essential for choosing the right welfare function for your problem.
Utilitarian Allocation: Maximizing the Sum🔗
The Problem: Given agents with valuations over items, find the allocation that maximizes total utility:
\[\text{maximize } \sum_{i=1}^{n} v_i(A_i)\]For additive valuations, this problem has an elegant solution: assign each item to the agent who values it most. This greedy approach is optimal because items are independent: the best assignment for one item doesn’t affect the best assignment for another.
Implementation Strategy 1: Direct Greedy Allocation
Let’s implement the simplest possible utilitarian allocator:
def max_sum_allocation_greedy(valuations: dict) -> dict:
"""
Find the allocation that maximizes total utility using a greedy approach.
For additive valuations, this is optimal: give each item to whoever values it most.
Args:
valuations: Dict mapping agent names to dicts of item valuations
e.g., {"Alice": {"house": 450000, "car": 25000}, ...}
Returns:
Dict mapping agent names to lists of items allocated to them
Time Complexity: O(n * m) where n = number of agents, m = number of items
Space Complexity: O(n + m) for storing the allocation
"""
# Extract agents and items
agents = list(valuations.keys())
items = list(valuations[agents[0]].keys())
# Initialize empty bundles for each agent
allocation = {agent: [] for agent in agents}
# For each item, find the agent who values it most
for item in items:
# Find agent with maximum valuation for this item
best_agent = max(agents, key=lambda agent: valuations[agent][item])
# Assign item to that agent
allocation[best_agent].append(item)
return allocation
def compute_utilities(allocation: dict, valuations: dict) -> dict:
"""
Compute the utility each agent receives from their allocation.
Args:
allocation: Dict mapping agents to their bundles (lists of items)
valuations: Dict mapping agents to their item valuations
Returns:
Dict mapping agents to their total utility
"""
utilities = {}
for agent, bundle in allocation.items():
# Sum valuations for all items in this agent's bundle
utilities[agent] = sum(valuations[agent][item] for item in bundle)
return utilities
def compute_welfare_metrics(allocation: dict, valuations: dict) -> dict:
"""
Compute multiple welfare metrics for an allocation.
Returns a dict with:
- utilitarian: sum of utilities
- nash: product of utilities (geometric mean)
- maximin: minimum utility
- utilities: individual utility for each agent
"""
utilities = compute_utilities(allocation, valuations)
# Utilitarian welfare: sum of all utilities
utilitarian_welfare = sum(utilities.values())
# Nash welfare: product of utilities (or geometric mean)
from functools import reduce
import operator
nash_welfare = reduce(operator.mul, utilities.values(), 1)
# Geometric mean is often more interpretable
n = len(utilities)
geometric_mean = nash_welfare ** (1/n)
# Maximin welfare: minimum utility
maximin_welfare = min(utilities.values())
return {
'utilitarian': utilitarian_welfare,
'nash_product': nash_welfare,
'nash_geometric_mean': geometric_mean,
'maximin': maximin_welfare,
'utilities': utilities
}
Let’s apply this to our siblings:
# Define the siblings' valuations
sibling_valuations = {
"Maya": {
"house": 450000,
"investments": 180000,
"car": 25000,
"piano": 5000,
"furniture": 20000,
"artwork": 10000,
"photos": 8000,
"watches": 2000
},
"Jordan": {
"house": 150000,
"investments": 180000,
"car": 10000,
"piano": 120000,
"furniture": 5000,
"artwork": 30000,
"photos": 200000,
"watches": 5000
},
"Sam": {
"house": 100000,
"investments": 300000,
"car": 25000,
"piano": 2000,
"furniture": 3000,
"artwork": 5000,
"photos": 15000,
"watches": 250000
}
}
# Compute utilitarian-optimal allocation
utilitarian_allocation = max_sum_allocation_greedy(sibling_valuations)
print("Utilitarian-Optimal Allocation:")
print("-" * 60)
for agent, bundle in utilitarian_allocation.items():
utility = sum(sibling_valuations[agent][item] for item in bundle)
print(f"{agent:8} receives: {bundle}")
print(f" utility: ${utility:,}")
print()
# Compute welfare metrics
metrics = compute_welfare_metrics(utilitarian_allocation, sibling_valuations)
print("Welfare Metrics:")
print(f" Utilitarian (sum): ${metrics['utilitarian']:,}")
print(f" Nash (geometric mean): ${metrics['nash_geometric_mean']:,.0f}")
print(f" Maximin (minimum): ${metrics['maximin']:,}")
Output:
Utilitarian-Optimal Allocation:
------------------------------------------------------------
Maya receives: ['house', 'furniture']
utility: $470,000
Jordan receives: ['piano', 'artwork', 'photos']
utility: $350,000
Sam receives: ['investments', 'car', 'watches']
utility: $575,000
Welfare Metrics:
Utilitarian (sum): $1,395,000
Nash (geometric mean): $450,571
Maximin (minimum): $350,000
Analysis: The greedy allocation achieves maximum total value—$1,395,000. Notice:
- Sam receives the highest utility ($575k) because Sam values several high-value items most
- Jordan receives the lowest utility ($350k) despite valuing photos immensely ($200k)
- The allocation is Pareto optimal: we cannot improve one person without hurting another
Why greedy works: For each item, the global maximum sum is achieved by giving that item to whoever values it most. Since items are additive (no complementarities), local optima (per-item assignments) compose into the global optimum. This is the power of decomposability.
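That decomposability is easy to verify by brute force. The following sketch, on a tiny hypothetical instance of our own, confirms that the per-item greedy rule attains the exhaustive optimum of the sum:
from itertools import product as all_assignments
# Tiny hypothetical instance: 2 agents, 3 items, additive valuations
vals = {"A": {"x": 5, "y": 1, "z": 4},
        "B": {"x": 2, "y": 6, "z": 3}}
agents, items = list(vals), list(vals["A"])
# Greedy: each item contributes its maximum valuation to the total
greedy_sum = sum(max(vals[a][item] for a in agents) for item in items)
# Brute force: evaluate every assignment of items to agents
brute_sum = max(
    sum(vals[agents[owner]][item] for item, owner in zip(items, assign))
    for assign in all_assignments(range(len(agents)), repeat=len(items))
)
print(greedy_sum == brute_sum)  # True: per-item optima compose into the global optimum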
Implementation Strategy 2: Using FairPy’s Utilitarian Matching
For problems with capacities (agents can receive limited items, items can go to multiple agents with limits), FairPy provides a sophisticated implementation using network flow:
import fairpyx
from fairpyx import Instance, divide
# Create a FairPy instance
instance = Instance(
valuations=sibling_valuations,
agent_capacities=8, # Each sibling can receive up to 8 items (all of them)
item_capacities=1 # Each item can go to exactly 1 sibling
)
# Use FairPy's utilitarian matching
utilitarian_allocation_fairpy = divide(
fairpyx.algorithms.utilitarian_matching,
instance=instance
)
print("FairPy Utilitarian Allocation:")
for agent, bundle in utilitarian_allocation_fairpy.items():
print(f"{agent}: {bundle}")
When to use each approach:
- Greedy (our implementation): Simple cases, no capacities, educational purposes, maximum transparency
- Network flow (FairPy): Complex capacity constraints, many-to-many matching, production systems
The greedy approach is clearer pedagogically: you can see exactly why each item goes where. FairPy’s network flow handles generality at the cost of being a “black box” for most users.
Nash Welfare Allocation: Maximizing the Product🔗
The Problem: Find the allocation that maximizes the product of utilities (or equivalently, the sum of log utilities):
\[\text{maximize } \prod_{i=1}^{n} v_i(A_i) \quad \text{or} \quad \text{maximize } \sum_{i=1}^{n} \log(v_i(A_i))\]Unlike utilitarian welfare, Nash welfare is not decomposable. We cannot assign items greedily because the product couples all agents. Improving one agent’s utility affects the product’s sensitivity to other agents’ utilities.
However, while exact Nash welfare maximization for indivisible goods is NP-hard, polynomial-time constant-factor approximation algorithms exist (Cole & Gkatzelis, 2015), and exact solutions remain tractable for small instances like ours. We’ll implement two approaches: a Mixed Integer Programming formulation and an approximate local search algorithm.
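The coupling is visible even in a tiny hypothetical example of our own: the per-item greedy rule that was optimal for the sum can drive the product to zero.
# Two agents, two items: greedy-by-value gives both items to A
vals = {"A": {"x": 10, "y": 10},
        "B": {"x": 9,  "y": 9}}
# Greedy (utilitarian) outcome: A gets both items -> utilities (20, 0), product 0
# Splitting the items instead:  utilities (10, 9), product 90
print(20 * 0, 10 * 9)  # 0 90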
Implementation Strategy 1: Mixed Integer Programming
import cvxpy as cp
import numpy as np
def max_product_allocation_mip(valuations: dict) -> dict:
"""
Find the allocation maximizing Nash welfare (product of utilities)
using Mixed Integer Programming with log transformation.
We maximize sum of log utilities instead of product (equivalent ordering).
This requires utilities to be strictly positive.
Args:
valuations: Dict mapping agent names to dicts of item valuations
Returns:
Dict mapping agent names to lists of items
    Time Complexity: NP-hard in theory, but often fast for small instances
    when modeled in CVXPY and handed to a mixed-integer convex solver
"""
agents = list(valuations.keys())
items = list(valuations[agents[0]].keys())
n_agents = len(agents)
n_items = len(items)
# Create binary decision variables: x[i,j] = 1 if agent i gets item j
x = cp.Variable((n_agents, n_items), boolean=True)
# Build valuation matrix
V = np.array([[valuations[agent][item] for item in items]
for agent in agents])
# Compute utility for each agent: sum of valuations for assigned items
# utilities[i] = sum over j of V[i,j] * x[i,j]
utilities = [cp.sum(cp.multiply(V[i, :], x[i, :])) for i in range(n_agents)]
# Add small constant to avoid log(0) issues
epsilon = 1.0 # $1 minimum utility
safe_utilities = [u + epsilon for u in utilities]
# Objective: maximize sum of log utilities (equivalent to maximizing product)
objective = cp.Maximize(cp.sum([cp.log(u) for u in safe_utilities]))
# Constraints: each item assigned to exactly one agent
constraints = [cp.sum(x[:, j]) == 1 for j in range(n_items)]
# Solve the problem
problem = cp.Problem(objective, constraints)
    # The log objective makes this a mixed-integer convex problem, so the
    # solver must support integer variables plus the exponential cone;
    # CBC handles only linear objectives, so try ECOS_BB first
    try:
        problem.solve(solver=cp.ECOS_BB, verbose=False)
    except Exception:
        problem.solve(verbose=False)
# Extract the allocation
allocation = {agent: [] for agent in agents}
x_value = x.value
for i, agent in enumerate(agents):
for j, item in enumerate(items):
# Due to numerical precision, check if x[i,j] is close to 1
if x_value[i, j] > 0.5:
allocation[agent].append(item)
return allocation
# Apply to siblings
nash_allocation_mip = max_product_allocation_mip(sibling_valuations)
print("Nash Welfare Allocation (MIP):")
print("-" * 60)
for agent, bundle in nash_allocation_mip.items():
utility = sum(sibling_valuations[agent][item] for item in bundle)
print(f"{agent:8} receives: {bundle}")
print(f" utility: ${utility:,}")
print()
metrics_nash = compute_welfare_metrics(nash_allocation_mip, sibling_valuations)
print("Welfare Metrics:")
print(f" Utilitarian (sum): ${metrics_nash['utilitarian']:,}")
print(f" Nash (geometric mean): ${metrics_nash['nash_geometric_mean']:,.0f}")
print(f" Maximin (minimum): ${metrics_nash['maximin']:,}")
Why log transformation?
Maximizing \( \prod_i u_i \) is equivalent to maximizing \( \sum_i \log(u_i) \) because log is monotone increasing. But the log formulation has crucial advantages:
- Numerical stability: Products of large numbers overflow; sums of logs don’t
- Concavity: log is concave, making \(\sum \log(u_i)\) a concave objective (easier for solvers)
- Interpretation: Each log unit represents proportional change, emphasizing percentage gains
The log transformation reveals Nash welfare’s egalitarian character: going from utility 100 to 200 (doubling) contributes \( \log(200) - \log(100) = \log(2) \approx 0.693 \), the same as going from 1000 to 2000. Nash welfare values percentage improvements, not absolute gains.
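A three-line check of that claim:
import math
print(math.log(200) - math.log(100))    # ≈ 0.693: doubling from 100
print(math.log(2000) - math.log(1000))  # ≈ 0.693: doubling from 1000 counts the same
print(math.log(101) - math.log(100))    # ≈ 0.010: a 1% gain, at any scale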
Implementation Strategy 2: Local Search Approximation
For larger problems where MIP becomes slow, local search often finds excellent approximate solutions quickly:
import random
from typing import Dict, List, Tuple
def max_product_allocation_local_search(
valuations: dict,
max_iterations: int = 10000,
random_seed: int = 42
) -> dict:
"""
Find an approximately Nash-welfare-maximizing allocation using local search.
Algorithm:
1. Start with a random allocation
    2. Repeatedly try moving single items between agents
    3. Accept moves that improve Nash welfare (product of utilities)
    4. Continue until no improving move exists or max iterations reached
This is a greedy hill-climbing algorithm. It may get stuck in local optima,
but often finds good solutions quickly.
Args:
valuations: Agent valuations over items
max_iterations: Maximum number of swap attempts
random_seed: For reproducibility
Returns:
Approximately optimal allocation
"""
random.seed(random_seed)
agents = list(valuations.keys())
items = list(valuations[agents[0]].keys())
# Helper: compute product of utilities (Nash welfare)
def nash_welfare(allocation: dict) -> float:
utilities = compute_utilities(allocation, valuations)
product = 1.0
for u in utilities.values():
product *= max(u, 1) # Avoid zero utilities
return product
# Initialize with random allocation
allocation = {agent: [] for agent in agents}
for item in items:
random_agent = random.choice(agents)
allocation[random_agent].append(item)
current_welfare = nash_welfare(allocation)
# Local search: try swaps
for iteration in range(max_iterations):
improved = False
# Try swapping each item to each other agent
for item in items:
# Find current owner
current_owner = None
for agent, bundle in allocation.items():
if item in bundle:
current_owner = agent
break
# Try giving it to each other agent
for new_owner in agents:
if new_owner == current_owner:
continue
# Make the swap
allocation[current_owner].remove(item)
allocation[new_owner].append(item)
# Check if welfare improved
new_welfare = nash_welfare(allocation)
if new_welfare > current_welfare:
# Accept the swap
current_welfare = new_welfare
improved = True
                    break  # Accept the move and rescan items from the start
else:
# Revert the swap
allocation[new_owner].remove(item)
allocation[current_owner].append(item)
if improved:
break # Start over with new allocation
# If no improvement found, we've reached a local optimum
if not improved:
break
return allocation
# Apply local search to siblings
nash_allocation_local = max_product_allocation_local_search(
sibling_valuations,
max_iterations=1000
)
print("Nash Welfare Allocation (Local Search):")
print("-" * 60)
for agent, bundle in nash_allocation_local.items():
utility = sum(sibling_valuations[agent][item] for item in bundle)
print(f"{agent:8} receives: {bundle}")
print(f" utility: ${utility:,}")
print()
metrics_nash_local = compute_welfare_metrics(nash_allocation_local, sibling_valuations)
print("Welfare Metrics:")
print(f" Utilitarian (sum): ${metrics_nash_local['utilitarian']:,}")
print(f" Nash (geometric mean): ${metrics_nash_local['nash_geometric_mean']:,.0f}")
print(f" Maximin (minimum): ${metrics_nash_local['maximin']:,}")
Comparing MIP vs Local Search:
| Approach | Optimality | Speed | Implementation | Use Case |
|---|---|---|---|---|
| MIP | Guaranteed optimal | Slower (seconds to minutes) | Requires solver | Small instances, need exact solution |
| Local Search | Approximate | Fast (milliseconds) | Pure Python | Large instances, good-enough suffices |
For our siblings (3 agents, 8 items), both approaches complete instantly and likely find the same allocation. For 20 agents with 100 items, MIP becomes questionable while local search remains fast.
Maximin Allocation: Maximizing the Minimum🔗
The Problem: Find the allocation that maximizes the minimum utility:
\[\text{maximize } \min_{i=1,\ldots,n} v_i(A_i)\]This is the most challenging welfare function computationally. Even for additive valuations, maximin allocation is strongly NP-hard. However, for small instances like our siblings, exact solutions are tractable.
Implementation Strategy 1: MIP with Auxiliary Variable
def max_min_allocation_mip(valuations: dict) -> dict:
"""
Find the allocation maximizing the minimum utility (maximin welfare)
using Mixed Integer Programming.
We introduce an auxiliary variable z representing the minimum utility,
then maximize z subject to all utilities being at least z.
Args:
valuations: Agent valuations over items
Returns:
Maximin-optimal allocation
Time Complexity: Strongly NP-hard; exponential worst case
but often feasible for small instances with MIP solvers
"""
agents = list(valuations.keys())
items = list(valuations[agents[0]].keys())
n_agents = len(agents)
n_items = len(items)
# Binary variables: x[i,j] = 1 if agent i gets item j
x = cp.Variable((n_agents, n_items), boolean=True)
# Auxiliary variable: z = minimum utility across all agents
z = cp.Variable()
# Build valuation matrix
V = np.array([[valuations[agent][item] for item in items]
for agent in agents])
# Each agent's utility
utilities = [cp.sum(cp.multiply(V[i, :], x[i, :])) for i in range(n_agents)]
# Objective: maximize the minimum utility
objective = cp.Maximize(z)
# Constraints:
constraints = []
# 1. Each item assigned to exactly one agent
for j in range(n_items):
constraints.append(cp.sum(x[:, j]) == 1)
# 2. z must be <= each agent's utility (z is the minimum)
for i in range(n_agents):
constraints.append(z <= utilities[i])
# Solve
problem = cp.Problem(objective, constraints)
try:
problem.solve(solver=cp.CBC, verbose=False)
except:
problem.solve(verbose=False)
# Extract allocation
allocation = {agent: [] for agent in agents}
x_value = x.value
for i, agent in enumerate(agents):
for j, item in enumerate(items):
if x_value[i, j] > 0.5:
allocation[agent].append(item)
return allocation
# Apply to siblings
maximin_allocation = max_min_allocation_mip(sibling_valuations)
print("Maximin Welfare Allocation:")
print("-" * 60)
for agent, bundle in maximin_allocation.items():
utility = sum(sibling_valuations[agent][item] for item in bundle)
print(f"{agent:8} receives: {bundle}")
print(f" utility: ${utility:,}")
print()
metrics_maximin = compute_welfare_metrics(maximin_allocation, sibling_valuations)
print("Welfare Metrics:")
print(f" Utilitarian (sum): ${metrics_maximin['utilitarian']:,}")
print(f" Nash (geometric mean): ${metrics_maximin['nash_geometric_mean']:,.0f}")
print(f" Maximin (minimum): ${metrics_maximin['maximin']:,}")
Key Insight: The auxiliary variable trick transforms a minimax problem into a standard maximization. By introducing \( z \) and constraining \( v_i(A_i) \geq z \) for all \( i \), we ensure \( z \) equals the minimum utility. Maximizing \( z \) then maximizes the minimum.
This formulation is elegant but computationally expensive. The MIP solver must search exponentially many allocations in the worst case.
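Written out in full, the program the code above hands to the solver is:
\[\begin{aligned} \text{maximize} \quad & z \\ \text{subject to} \quad & \sum_{j} v_{ij}\, x_{ij} \ge z \quad \forall i \\ & \sum_{i} x_{ij} = 1 \quad \forall j \\ & x_{ij} \in \{0, 1\} \quad \forall i, j \end{aligned}\]
where \( v_{ij} \) is agent \( i \)'s valuation of item \( j \) and \( x_{ij} = 1 \) exactly when agent \( i \) receives item \( j \).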
Implementation Strategy 2: Leximin via Sequential Optimization
For a more sophisticated approach that achieves both maximin and Pareto optimality, we can implement leximin:
def leximin_allocation(valuations: dict) -> dict:
"""
Find the leximin-optimal allocation: maximize minimum, then second-minimum, etc.
This is more sophisticated than simple maximin because it breaks ties by
optimizing secondary objectives, ensuring Pareto optimality.
Algorithm:
1. Find all allocations maximizing the minimum utility
2. Among those, find all maximizing the second-minimum utility
3. Continue until unique (or leximin-equivalent)
For small instances, we can use complete enumeration.
Args:
valuations: Agent valuations over items
Returns:
Leximin-optimal allocation
"""
agents = list(valuations.keys())
items = list(valuations[agents[0]].keys())
n_agents = len(agents)
n_items = len(items)
from itertools import product
# Helper: compute sorted utilities for an allocation
def sorted_utilities(allocation: dict) -> list:
utilities = compute_utilities(allocation, valuations)
return sorted(utilities.values())
# Helper: compare two allocations lexicographically
def leximin_better(alloc_a: dict, alloc_b: dict) -> bool:
"""Return True if alloc_a is leximin-better than alloc_b"""
utils_a = sorted_utilities(alloc_a)
utils_b = sorted_utilities(alloc_b)
for u_a, u_b in zip(utils_a, utils_b):
if u_a > u_b:
return True
elif u_a < u_b:
return False
return False # Equal
# Enumerate all possible allocations
# Each item can go to any agent: n_agents^n_items possibilities
best_allocation = None
for assignment in product(range(n_agents), repeat=n_items):
# Build allocation from this assignment
allocation = {agent: [] for agent in agents}
for item_idx, agent_idx in enumerate(assignment):
agent = agents[agent_idx]
item = items[item_idx]
allocation[agent].append(item)
# Check if this is the best so far
if best_allocation is None:
best_allocation = allocation
elif leximin_better(allocation, best_allocation):
best_allocation = allocation
return best_allocation
# Apply to siblings
leximin_allocation_result = leximin_allocation(sibling_valuations)
print("Leximin Welfare Allocation:")
print("-" * 60)
for agent, bundle in leximin_allocation_result.items():
utility = sum(sibling_valuations[agent][item] for item in bundle)
print(f"{agent:8} receives: {bundle}")
print(f" utility: ${utility:,}")
print()
metrics_leximin = compute_welfare_metrics(leximin_allocation_result, sibling_valuations)
print("Welfare Metrics:")
print(f" Utilitarian (sum): ${metrics_leximin['utilitarian']:,}")
print(f" Nash (geometric mean): ${metrics_leximin['nash_geometric_mean']:,.0f}")
print(f" Maximin (minimum): ${metrics_leximin['maximin']:,}")
Note on Scalability: The enumeration approach has complexity \( O(n^m) \) where \( n \) is the number of agents and \( m \) the number of items. For our siblings, \( 3^8 = 6,561 \) allocations is trivial. For 10 heirs with 20 items, \( 10^{20} \) allocations is hopeless. Production systems need sophisticated algorithms (branch-and-bound, constraint programming) or must accept approximations.
Comparative Analysis: Which Welfare Function for Which Situation?🔗
Now let’s compare all three approaches on our siblings’ case and analyze when each is appropriate:
def compare_welfare_approaches(valuations: dict):
"""
Compute and compare utilitarian, Nash, and maximin allocations.
"""
print("=" * 80)
print("WELFARE MAXIMIZATION: COMPARATIVE ANALYSIS")
print("=" * 80)
print()
# Compute all three allocations
util_alloc = max_sum_allocation_greedy(valuations)
nash_alloc = max_product_allocation_mip(valuations)
maximin_alloc = max_min_allocation_mip(valuations)
# Compute metrics for each
util_metrics = compute_welfare_metrics(util_alloc, valuations)
nash_metrics = compute_welfare_metrics(nash_alloc, valuations)
maximin_metrics = compute_welfare_metrics(maximin_alloc, valuations)
# Display comparison table
print("ALLOCATION OUTCOMES:")
print("-" * 80)
approaches = [
("Utilitarian (Max Sum)", util_alloc, util_metrics),
("Nash (Max Product)", nash_alloc, nash_metrics),
("Maximin (Max Min)", maximin_alloc, maximin_metrics)
]
for name, alloc, metrics in approaches:
print(f"\n{name}:")
print(f" Total Welfare: ${metrics['utilitarian']:,}")
print(f" Geometric Mean: ${metrics['nash_geometric_mean']:,.0f}")
print(f" Minimum Utility: ${metrics['maximin']:,}")
print()
for agent, bundle in sorted(alloc.items()):
utility = metrics['utilities'][agent]
print(f" {agent:8} gets {len(bundle)} items: {bundle}")
print(f" utility: ${utility:,}")
print()
# Fairness analysis
print("\n" + "=" * 80)
print("FAIRNESS ANALYSIS:")
print("=" * 80)
for name, alloc, metrics in approaches:
print(f"\n{name}:")
# Check envy
envies = []
for agent_i, bundle_i in alloc.items():
for agent_j, bundle_j in alloc.items():
if agent_i == agent_j:
continue
value_own = sum(valuations[agent_i][item] for item in bundle_i)
value_other = sum(valuations[agent_i][item] for item in bundle_j)
if value_other > value_own:
envy_amount = value_other - value_own
envies.append((agent_i, agent_j, envy_amount))
if envies:
print(" Envy detected:")
for agent_i, agent_j, amount in envies:
print(f" {agent_i} envies {agent_j} by ${amount:,}")
else:
print(" Envy-free!")
# Check proportionality
total_value_per_agent = {
agent: sum(valuations[agent].values())
for agent in valuations.keys()
}
n = len(valuations)
print(" Proportionality:")
for agent, utility in metrics['utilities'].items():
proportional_share = total_value_per_agent[agent] / n
if utility >= proportional_share:
status = "✓ Satisfied"
else:
shortage = proportional_share - utility
status = f"✗ Short by ${shortage:,.0f}"
print(f" {agent}: ${utility:,} vs ${proportional_share:,.0f} (1/{n} share) - {status}")
# Recommendations
print("\n" + "=" * 80)
print("RECOMMENDATIONS:")
print("=" * 80)
print()
print("Use UTILITARIAN (Max Sum) when:")
print(" • Maximizing total family wealth is paramount")
print(" • Agents can compensate each other with money transfers")
print(" • Efficiency matters more than equality")
print(" • Example: Business partners dividing assets before buyout")
print()
print("Use NASH (Max Product) when:")
print(" • Want balance between efficiency and equity")
print(" • Protecting worse-off agents is important but not absolute")
print(" • Pareto optimality is desired")
print(" • Example: Fair division among partners with trust")
print()
print("Use MAXIMIN (Max Min) when:")
print(" • Protecting the worst-off position is paramount")
print(" • No agent should be left with very little")
print(" • Rawlsian justice principles are valued")
print(" • Example: Welfare distribution, safety-critical allocations")
print()
# Run the comparative analysis
compare_welfare_approaches(sibling_valuations)
Expected Output Analysis:
WELFARE MAXIMIZATION: COMPARATIVE ANALYSIS
================================================================================
ALLOCATION OUTCOMES:
--------------------------------------------------------------------------------
Utilitarian (Max Sum):
Total Welfare: $1,395,000
Geometric Mean: $455,629
Minimum Utility: $350,000
Jordan gets 3 items: ['piano', 'artwork', 'photos']
utility: $350,000
Maya gets 2 items: ['house', 'furniture']
utility: $470,000
Sam gets 3 items: ['investments', 'car', 'watches']
utility: $575,000
Nash (Max Product):
Total Welfare: $1,395,000
Geometric Mean: $456,750
Minimum Utility: $350,000
Jordan gets 3 items: ['piano', 'artwork', 'photos']
utility: $350,000
Maya gets 3 items: ['house', 'car', 'furniture']
utility: $495,000
Sam gets 2 items: ['investments', 'watches']
utility: $550,000
Maximin (Max Min):
Total Welfare: $1,365,000
Geometric Mean: $448,700
Minimum Utility: $365,000
Jordan gets 5 items: ['car', 'piano', 'furniture', 'artwork', 'photos']
utility: $365,000
Maya gets 1 items: ['house']
utility: $450,000
Sam gets 2 items: ['investments', 'watches']
utility: $550,000
Key Insights:
- Utilitarian maximizes total value ($1,395k) but creates the largest inequality (Jordan at $350k, Sam at $575k, a $225k gap)
- Nash welfare also achieves the maximum total ($1,395k) here, but it breaks the tie on the car in Maya's favor rather than Sam's. The geometric mean is highest, indicating more balanced utilities ($495k/$350k/$550k instead of $470k/$350k/$575k).
- Maximin protects Jordan most ($365k) by giving Jordan the car and furniture (worth $15k to Jordan but $45k to Maya). This sacrifice of total value raises the minimum.
- The efficiency-equity tradeoff is real: Moving from utilitarian to maximin costs $30k in total value but narrows the largest utility gap from $225k to $185k.
The Siblings' Decision: A Framework🔗
Faced with these three options, Maya, Jordan, and Sam must decide which welfare function reflects their values. Here’s a framework for their discussion:
Questions to ask:
- Do we care more about total family wealth or equality among siblings?
- If total wealth: Utilitarian
- If equality: Maximin
- If balanced: Nash
- Can we compensate each other after allocation?
- If yes (e.g., Sam can give Jordan $20k): Utilitarian (maximize total, redistribute)
- If no (assets are illiquid): Nash or Maximin (fairness built-in)
- How risk-averse are we about one sibling being left behind?
- Highly risk-averse: Maximin
- Moderately: Nash
- Not concerned: Utilitarian
- What matters most: process fairness or outcome quality?
- Process fairness (equal treatment): Might reject welfare optimization entirely, use fairness criteria (EF1)
- Outcome quality (total value created): Utilitarian or Nash
The “best” allocation isn’t determined by mathematics alone. It’s determined by the siblings’ relationship, trust, values, and circumstances. The algorithms give them options; philosophy helps them choose.
Computational Lessons and Practical Advice🔗
From implementing these three welfare functions, several practical lessons emerge:
1. Complexity vs Problem Size:
- Utilitarian: Trivially fast (O(nm)), scales to millions of items
- Nash: NP-hard to maximize exactly; polynomial-time approximations exist but are expensive (roughly O(n³m³)), feasible for hundreds of items
- Maximin: NP-hard, feasible for tens of items exactly
For small problems (like our siblings), all three are instant. For medium problems (10-50 agents, 100-500 items), utilitarian remains fast, Nash requires optimization libraries, maximin becomes questionable. For large problems (100+ agents), only utilitarian is tractable exactly.
2. When Approximations Suffice:
- If valuations have ±10% uncertainty, optimizing beyond 90% is pointless
- If allocation is reversible (can adjust later), approximate quickly
- If stakes are low, good-enough beats optimal
3. Hybrid Approaches Work:
- Start with fast heuristic (greedy, round-robin)
- Measure achieved welfare
- If insufficient, run exact optimization
- Balances speed and quality
4. The Toolkit:
# For production use:
from fairpyx.algorithms import utilitarian_matching # Fast, handles capacities
import cvxpy # For MIP formulations of Nash/maximin
from scipy.optimize import linear_sum_assignment # For matching problems
# For education/prototyping:
# Use simple greedy implementations (shown above)
# Implement local search for understanding
# Enumerate for tiny problems
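One concrete instance from this toolkit: when each agent must receive exactly one item, utilitarian welfare maximization reduces to the classic assignment problem, which scipy solves via the Hungarian algorithm. The 3×3 matrix below is a hypothetical example, not the siblings' full problem:
import numpy as np
from scipy.optimize import linear_sum_assignment
# Rows = agents, columns = items; entries are valuations
values = np.array([
    [450, 180, 25],   # e.g., Maya on house, investments, car
    [150, 180, 10],   # Jordan
    [100, 300, 25],   # Sam
])
rows, cols = linear_sum_assignment(values, maximize=True)
print(list(zip(rows, cols)), values[rows, cols].sum())  # optimal one-to-one matching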
5. Verification Matters: Always verify solutions:
- Check constraints satisfied (each item assigned exactly once)
- Compute actual welfare achieved
- Test on known cases
- Compare multiple algorithms
def verify_allocation(allocation: dict, valuations: dict):
"""Verify that an allocation is valid and compute its properties"""
agents = set(valuations.keys())
all_items = set(valuations[list(agents)[0]].keys())
# Check each item assigned exactly once
assigned_items = set()
for bundle in allocation.values():
for item in bundle:
if item in assigned_items:
print(f"ERROR: {item} assigned multiple times!")
return False
assigned_items.add(item)
unassigned = all_items - assigned_items
if unassigned:
print(f"WARNING: Unassigned items: {unassigned}")
# Compute welfare metrics
metrics = compute_welfare_metrics(allocation, valuations)
print(f"Verification passed:")
print(f" Total welfare: ${metrics['utilitarian']:,}")
print(f" Min utility: ${metrics['maximin']:,}")
print(f" Geometric mean: ${metrics['nash_geometric_mean']:,.0f}")
return True
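For example, a quick check of the greedy allocation from the comparison above (assuming the helpers defined earlier are in scope):
verify_allocation(max_sum_allocation_greedy(sibling_valuations), sibling_valuations)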
When Theory Meets Practice: A Reality Check🔗
The theory says:
- “Utilitarian is optimal for efficiency”
- “Nash balances equity and efficiency”
- “Maximin protects the vulnerable”
The practice reveals:
-
Utilitarian is fast but can feel unfair. Jordan receiving $350k vs Sam’s $575k might breed resentment regardless of optimality.
-
Nash requires trust in optimization. The MIP solver says “this maximizes the product”. Do the siblings understand why? Can they verify it?
-
Maximin is expensive and extreme. Raising Jordan's minimum from $350k to $365k costs the family $30k in total value, handing Jordan the car and furniture, items worth $15k to Jordan but $45k to Maya.
The gap between theory and practice:
Theory assumes: Perfect information, agreed-upon welfare function, acceptance of mathematical solutions
Practice involves: Uncertainty about valuations, disagreement over objectives, need for explainability
Bridging the gap:
- Run multiple approaches, show trade-offs explicitly
- Explain intuition behind each solution (not just “the algorithm says”)
- Allow negotiation around algorithmic solutions
- Measure sensitivity: How much do results change if valuations change?
def sensitivity_analysis(valuations: dict, perturbation: float = 0.1):
"""
Test how sensitive welfare-optimal allocations are to valuation uncertainty.
Args:
valuations: Base valuations
perturbation: Fraction to perturb (e.g., 0.1 = ±10%)
"""
import random
random.seed(42)
# Run multiple trials with perturbed valuations
utilitarian_allocations = []
nash_allocations = []
for trial in range(100):
# Perturb valuations by ±perturbation
perturbed = {}
for agent, vals in valuations.items():
perturbed[agent] = {}
for item, val in vals.items():
noise = random.uniform(1 - perturbation, 1 + perturbation)
perturbed[agent][item] = val * noise
# Compute allocations
util_alloc = max_sum_allocation_greedy(perturbed)
nash_alloc = max_product_allocation_mip(perturbed)
utilitarian_allocations.append(util_alloc)
nash_allocations.append(nash_alloc)
# Analyze stability: how often does each agent get each item?
print("Sensitivity Analysis (100 trials with ±10% valuation noise):")
print("-" * 60)
agents = list(valuations.keys())
items = list(valuations[agents[0]].keys())
for approach_name, allocations in [("Utilitarian", utilitarian_allocations),
("Nash", nash_allocations)]:
print(f"\n{approach_name} Stability:")
for item in items:
counts = {agent: 0 for agent in agents}
for alloc in allocations:
for agent, bundle in alloc.items():
if item in bundle:
counts[agent] += 1
# Show percentages
item_summary = ", ".join([f"{agent}: {counts[agent]}%" for agent in agents])
print(f" {item:12} -> {item_summary}")
# Run sensitivity analysis
sensitivity_analysis(sibling_valuations)
This reveals whether the “optimal” solution is robust or fragile. If small valuation changes completely alter the allocation, the solution isn’t stable, and the siblings might prefer a suboptimal but more robust allocation.
Summary: From Theory to Practice🔗
We’ve journeyed from theory (welfare functions as mathematical objects) to practice (implementing algorithms that compute optimal allocations). The key takeaways:
Computational Reality:
- Utilitarian: Tractable whenever valuations are additive; greedy is optimal
- Nash: NP-hard to solve exactly; polynomial-time approximations exist but require sophisticated methods
- Maximin: NP-hard, requires MIP or approximations
Philosophical Reality:
- Utilitarian: Efficiency over equity, requires trust or compensation mechanisms
- Nash: Balanced approach, but still makes trade-offs
- Maximin: Equity over efficiency, extreme prioritization
Practical Reality:
- Small problems: All approaches work, choose based on values
- Medium problems: Utilitarian and Nash feasible, maximin borderline
- Large problems: Only utilitarian remains tractable exactly
For Maya, Jordan, and Sam:
- They can compute all three allocations instantly
- The differences are modest ($1,395k vs $1,395k vs $1,365k total)
- The choice isn’t about computation. It’s about values
- They might choose Nash as a compromise, or utilitarian with ex-post compensation
In the next section, we'll explore the computational reality of these approaches in depth: Why does Nash welfare admit efficient algorithms despite looking hard? Why is maximin so difficult? And what do these computational properties tell us about the philosophical assumptions embedded in each welfare function?
The bridge from theory to practice isn't just about coding. It's about understanding what our choices reveal about what we value and why.
Computational Reality: Optimization is Hard🔗
In the previous section on fairness (Part III), we saw that finding allocations with strong fairness guarantees faces computational barriers: exact envy-freeness is hard to guarantee, and computing MMS values is NP-hard. But we also discovered tractable alternatives: EF1 allocations exist and can be found in polynomial time, and approximations like 3/4-MMS are efficiently computable.
Welfare maximization presents a different computational landscape. The news is both better and worse than for fairness optimization:
Better: Some problems that seem intractable turn out to have strong polynomial-time algorithms due to deep structural properties. Maximizing Nash welfare for goods with additive valuations is NP-hard to solve exactly, yet it can be approximated to within a constant factor in polynomial time, a major breakthrough in the field.
Worse: Other problems are not just NP-hard but strongly NP-hard and resistant even to approximation. Maximizing the minimum utility (egalitarian welfare) is strongly NP-hard, and NP-hard even to approximate within any factor better than 1/2. For large instances, we have essentially no hope of exact optimization.
Understanding this landscape is crucial for practitioners. When you choose to maximize social welfare rather than enforce fairness criteria, you're making a philosophical choice that also commits you to a computational challenge. Whether that challenge is tractable depends critically on which welfare function you choose and what structure your problem has.
Max-Sum (Utilitarian): NP-Hard in General, But...🔗
Computing the utilitarian optimum, the allocation that maximizes total utility, is a classic problem in combinatorial optimization. For indivisible goods, we're solving:
\(\text{maximize } \sum_i v_i(A_i)\) subject to the items being partitioned among the agents
This is the maximum weight allocation problem, closely related to assignment problems, knapsack variants, and bin packing.
The bad news: NP-hard in general
When agents have arbitrary valuations over bundles (not necessarily additive), max-sum allocation is NP-hard. To see why, consider the reduction from the partition problem:
Partition problem: Given integers \( \{a_1, \ldots, a_m\} \), can they be partitioned into two sets with equal sums?
Reduction: Create 2 agents who both value any bundle \( S \) at \( \min\left(\sum_{j \in S} a_j,\; \tfrac{1}{2}\sum_j a_j\right) \), that is, additive value capped at half the total. The maximum achievable welfare equals \( \sum_j a_j \) if and only if the items can be split into two sets of equal value. Thus, max-sum allocation is at least as hard as partition, which is NP-complete.
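As a tiny worked instance of this reduction (numbers invented for illustration): with \( a = \{3, 1, 2, 2\} \), the total is 8 and the cap is 4. Welfare 8 is achieved exactly by the split \( \{3, 1\} \) versus \( \{2, 2\} \), where each agent's bundle is worth \( \min(4, 4) = 4 \); an unbalanced split like \( \{3, 2\} \) versus \( \{1, 2\} \) yields only \( 4 + 3 = 7 \).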
Moreover, even for additive valuations, max-sum allocation with constraints (some items must go together, some items are mutually exclusive, capacity constraints) quickly becomes NP-hard.
The good news: excellent algorithms exist
Despite NP-hardness in the worst case, max-sum allocation has several saving graces:
Additive valuations with no constraints: The problem decomposes item by item (give each item to the agent who values it most). This is trivial:
For each item g:
Assign g to agent i where vᵢ(g) is maximum
Time complexity: \( O(nm) \) where \( n \) is agents and \( m \) is items: a single pass over each item's \( n \) valuations suffices.
This gives us an important insight: For additive valuations, utilitarian welfare is easy to maximize. The greedy algorithm of giving each item to whoever values it most is optimal. This explains why utilitarian approaches are popular in practice: they’re computationally cheap.
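A minimal sketch of this greedy rule, consistent with the max_sum_allocation_greedy helper used in the comparison code earlier (same valuations-dict format; ties broken arbitrarily):
def max_sum_allocation_greedy(valuations: dict) -> dict:
    """Give each item to the agent who values it most (additive valuations)."""
    agents = list(valuations.keys())
    allocation = {agent: [] for agent in agents}
    for item in valuations[agents[0]]:
        # argmax over agents for this single item
        best_agent = max(agents, key=lambda a: valuations[a][item])
        allocation[best_agent].append(item)
    return allocation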
Integer programming formulations: Even with complex constraints, max-sum problems have natural integer programming formulations:
Variables: \( x_{ig} \in \{0,1\} \) indicates whether agent \( i \) receives item \( g \)
Objective: \( \text{maximize } \sum_i \sum_g v_i(g) \times x_{ig}\)
Constraints: \(\sum_i x_{ig} = 1 \) for all \( g \) (each item assigned to exactly one agent)
Additional constraints: bundle restrictions, capacity limits, etc.
Modern MIP solvers like Google’s OR-Tools (open-source) can solve surprisingly large instances of these problems. For our siblings (3 agents, 8 items), even complex constrained versions solve in milliseconds.
Example using OR-Tools:
from ortools.sat.python import cp_model
def max_sum_allocation_mip(valuations: dict) -> dict:
"""
Solve utilitarian welfare maximization using OR-Tools CP-SAT solver.
Args:
valuations: Dict mapping agent names to item valuations
Returns:
Optimal allocation maximizing sum of utilities
"""
agents = list(valuations.keys())
items = list(valuations[agents[0]].keys())
n_agents = len(agents)
n_items = len(items)
# Create the model
model = cp_model.CpModel()
# Create binary variables: x[i][j] = 1 if agent i gets item j
x = {}
for i in range(n_agents):
for j in range(n_items):
x[i, j] = model.NewBoolVar(f'x_{i}_{j}')
# Constraint: each item assigned to exactly one agent
for j in range(n_items):
model.Add(sum(x[i, j] for i in range(n_agents)) == 1)
# Objective: maximize sum of utilities
objective = []
for i in range(n_agents):
for j in range(n_items):
value = int(valuations[agents[i]][items[j]])
objective.append(value * x[i, j])
model.Maximize(sum(objective))
# Solve
solver = cp_model.CpSolver()
status = solver.Solve(model)
# Extract solution
allocation = {agent: [] for agent in agents}
if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE:
for i in range(n_agents):
for j in range(n_items):
if solver.Value(x[i, j]) == 1:
allocation[agents[i]].append(items[j])
return allocation
# Example usage
sibling_valuations = {
"Maya": {"house": 450000, "investments": 180000, "car": 25000,
"piano": 5000, "furniture": 20000, "artwork": 10000,
"photos": 8000, "watches": 2000},
"Jordan": {"house": 150000, "investments": 180000, "car": 10000,
"piano": 120000, "furniture": 5000, "artwork": 30000,
"photos": 200000, "watches": 5000},
"Sam": {"house": 100000, "investments": 300000, "car": 25000,
"piano": 2000, "furniture": 3000, "artwork": 5000,
"photos": 15000, "watches": 250000}
}
optimal_allocation = max_sum_allocation_mip(sibling_valuations)
print("Optimal allocation:")
total_welfare = 0
for agent, bundle in optimal_allocation.items():
    utility = sum(sibling_valuations[agent][item] for item in bundle)
    total_welfare += utility
    print(f"  {agent}: {bundle} (utility: ${utility:,})")
print(f"Total welfare: ${total_welfare:,}")
Output:
Optimal allocation:
Maya: ['house', 'furniture'] (utility: $470,000)
Jordan: ['piano', 'artwork', 'photos'] (utility: $350,000)
Sam: ['investments', 'car', 'watches'] (utility: $575,000)
Total welfare: $1,395,000
The CP-SAT solver in ortools is particularly efficient for this type of assignment problem, typically solving in milliseconds even for hundreds of items. It’s free, open-source, and integrates well with Python. For problems with additional constraints (e.g., “Maya must get the house” or “car and watches must go together”), simply add more constraints to the model.
Approximation algorithms: For special cases, approximation algorithms with guarantees exist. The greedy algorithm that repeatedly assigns the next highest-marginal-value item achieves a 1/2-approximation for submodular valuations. More sophisticated algorithms achieve better ratios.
When is max-sum tractable in practice?
Small instances: n ≤ 10 agents, m ≤ 100 items, arbitrary constraints → MIP solver handles it
Medium instances with additive valuations: n ≤ 1000 agents, m ≤ 10000 items → Greedy algorithm in O(nm)
Large instances with structure: Sparse valuations (each agent values few items), special structure (tree decompositions, bounded treewidth) → Specialized algorithms
When max-sum becomes intractable:
Complex complementarities: Items are worth more in bundles, requires evaluating exponentially many bundles
Many constraints: Packing constraints, precedence constraints, complex capacity limits → Turns into hard ILP
Adversarial instances: Carefully constructed cases that defeat MIP heuristics
For our siblings, max-sum is completely tractable. With additive valuations, we just assign each item to whoever values it most:
- House → Maya ($450k)
- Investments → Sam ($300k)
- Car → Maya and Sam tie ($25k each); assign to either
- Piano → Jordan ($120k)
- Furniture → Maya ($20k)
- Artwork → Jordan ($30k)
- Photos → Jordan ($200k)
- Watches → Sam ($250k)
This greedy allocation maximizes total utility. Total value: $450k + $300k + $25k + $120k + $20k + $30k + $200k + $250k = $1,395k (assuming car goes to Maya).
Time to compute: \( 8 \times 3 = 24 \) comparisons, microseconds.
Max-Product (Nash): A Breakthrough Result🔗
Maximizing Nash welfare (the product of utilities) seems much harder than maximizing the sum. Products are non-linear, non-convex, and lack the simple structure that makes sums easy.
For decades, researchers believed Nash welfare maximization was computationally intractable. Then came a stunning breakthrough.
The breakthrough: Polynomial-time approximation for goods with additive valuations
Exact Nash welfare maximization over indivisible goods is NP-hard even with additive valuations. But Cole and Gkatzelis (2015) proved that it can be approximated to within a constant factor in polynomial time. This was unexpected. The problem looks non-convex, but it has hidden structure that enables efficient optimization.
The key insight: While the product \(\prod_i v_i(A_i)\) is hard to work with, the log-product \(\sum_i \log(v_i(A_i))\) is a sum of concave functions. The problem becomes:
\(\text{maximize } \sum_i \log\left(\sum_{g \in A_i} v_i(g)\right)\)
This is still not easy: the inner sum is inside a log, creating coupling between items. For divisible goods the problem is the Eisenberg-Gale convex program; for indivisible goods it can be attacked via convex programming relaxations followed by careful rounding, or via more sophisticated algorithms exploiting market-equilibrium and matroid structure.
The Cole-Gkatzelis algorithm runs in polynomial time and finds an allocation whose Nash welfare is within a constant factor of the maximum for additive valuations over indivisible goods. This is a remarkable result: a natural non-convex optimization problem that admits strong efficient approximation.
Later work has improved the guarantees and the practicality:
- Anari et al. (2018) gave a faster algorithm using continuous optimization
- Garg et al. (2021) provided practical implementations
Time complexity: The best algorithms run in approximately \( O(n^3 m^3) \) time with modern techniques, where \( n \) is agents and \( m \) is items. This is polynomial but cubic in both dimensions. For our siblings: \( 3^3 \times 8^3 = 13,824 \) operations. Still instantaneous, but noticeably slower than the 24-comparison greedy pass for max-sum.
For larger problems:
- n=10, m=100: \( O(10³ × 100³) = O(10⁹) \) operations, seconds to minutes
- n=100, m=1000: \( O(100³ × 1000³) = O(10¹⁵) \) operations, days (impractical)
The limits of the breakthrough:
Additive valuations only: The polynomial-time result requires additive valuations. For submodular or general valuations, Nash welfare maximization remains hard.
Goods, not chores: The result applies to goods (positive utilities). For chores (negative utilities) or mixed goods/chores, the problem becomes harder.
No complex constraints: Adding constraints (bundling, capacity limits) can make the problem NP-hard again.
Still computationally expensive: While polynomial, the algorithms are not trivial to implement and have large constants. Practical implementations for large problems remain challenging.
Approximation for hard cases: When exact optimization is intractable (general valuations, very large instances), approximation algorithms exist:
- Barman et al. (2018): Constant-factor approximation for submodular valuations
- Various heuristics achieve good empirical performance without guarantees
When is Nash welfare optimization tractable?
Small to medium instances with additive valuations: n ≤ 50, m ≤ 200 → Use the polynomial-time approximation algorithm, or an exact MIP
Small instances with general valuations: n ≤ 5, m ≤ 20 → MIP solver or exhaustive search
Large instances: Heuristics only, or solve approximately via gradient-based methods
For our siblings, Nash welfare optimization is tractable. The problem is small enough that even inefficient algorithms work. We could use:
- The polynomial-time approximation algorithm (overkill at this scale, and only near-optimal)
- A MIP solver with the log-product objective (exact, and easier to implement)
- Enumeration over all 3⁸ = 6,561 possible allocations (brute force works here)
The existence of polynomial-time approximation algorithms for Nash welfare is a major theoretical achievement, but it doesn't eliminate all practical challenges. For large-scale problems, Nash welfare optimization remains expensive.
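For tiny instances, the brute-force option above is easy to make concrete. A minimal enumeration sketch, assuming the same valuations-dict format as before (the function name is ours, not a library API):
from itertools import product
import math

def max_product_allocation_bruteforce(valuations: dict) -> dict:
    """Enumerate all n^m allocations; keep the largest product of utilities."""
    agents = list(valuations.keys())
    items = list(valuations[agents[0]].keys())
    best_alloc, best_product = None, -1.0
    for assignment in product(range(len(agents)), repeat=len(items)):
        allocation = {agent: [] for agent in agents}
        for item_idx, agent_idx in enumerate(assignment):
            allocation[agents[agent_idx]].append(items[item_idx])
        # Product of bundle values; an empty bundle yields zero utility
        nash_product = math.prod(
            sum(valuations[a][g] for g in allocation[a]) for a in agents)
        if nash_product > best_product:
            best_alloc, best_product = allocation, nash_product
    return best_alloc
On the siblings' 3-agent, 8-item instance this checks all 6,561 allocations in well under a second.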
Max-Min (Egalitarian): Strongly NP-Hard🔗
Now we come to the hardest problem: maximizing the minimum utility (egalitarian or maximin welfare).
\(\text{maximize } \min_i v_i(A_i)\) subject to allocation constraints
This problem is strongly NP-hard, and NP-hard even to approximate within any factor better than 1/2 (unless P=NP).
The hardness results:
Bezáková and Dani (2005) proved that max-min allocation is strongly NP-hard in general, and NP-hard even for very restricted cases:
- Additive valuations with binary values (each item worth 0 or 1)
- A fixed, small number of agents (even \( n = 2 \) is hard, by reduction from partition)
- Simple instances with no additional constraints
Moreover, the problem is hard to approximate: no polynomial-time algorithm can guarantee more than half the optimal minimum unless P=NP. We can't even get close to optimal efficiently.
Why is max-min so hard?
The difficulty stems from the min function’s properties:
Non-smooth: The minimum function has kinks wherever the identity of the minimum changes. At the boundary where agent \( i \) and agent \( j \) have equal utility, tiny changes flip which agent is the minimum, so the objective behaves erratically under local moves.
Non-decomposable: Unlike sums or products, we can’t optimize the minimum by optimizing each component separately. The minimum couples all agents’ utilities in a complex way.
Threshold effects: To improve maximin welfare, we must help the current worst-off agent. But doing so might make a different agent become worst-off, creating a whack-a-mole problem. Optimization requires coordinating across all agents simultaneously.
Reduction from partition: The max-min problem generalizes partition and strongly NP-complete problems such as 3-partition, inheriting their hardness.
What algorithms exist for max-min?
Exact algorithms (small instances):
-
Enumeration: For small problems, enumerate all possible allocations, compute minimum for each, select the best. Time: O(n^m), only feasible for tiny instances.
-
Branch-and-bound: Systematically explore allocation tree, pruning branches that cannot improve the minimum. Can handle larger instances than enumeration but still exponential worst-case.
-
Mixed Integer Programming: Formulate as MIP with auxiliary variables:
- Variable \( z \) = minimum utility
- Constraints: \( v_i(A_i) \geq z \) for all agents \( i \)
- Maximize \( z \)
This is still NP-hard but modern MIP solvers apply sophisticated heuristics.
For our siblings (3 agents, 8 items), all three approaches work:
- Enumeration: 3⁸ = 6,561 allocations, seconds
- Branch-and-bound: Prunes heavily, likely faster
- MIP: Gurobi/CPLEX solve in seconds
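A sketch of that MIP formulation, using the same OR-Tools CP-SAT setup as the max-sum example above. This is a minimal version consistent with the max_min_allocation_mip helper invoked earlier, not necessarily its exact implementation:
from ortools.sat.python import cp_model

def max_min_allocation_mip(valuations: dict) -> dict:
    """Maximize the minimum utility via CP-SAT and an auxiliary min-variable z."""
    agents = list(valuations.keys())
    items = list(valuations[agents[0]].keys())
    model = cp_model.CpModel()
    x = {(i, j): model.NewBoolVar(f'x_{i}_{j}')
         for i in range(len(agents)) for j in range(len(items))}
    for j in range(len(items)):  # each item goes to exactly one agent
        model.Add(sum(x[i, j] for i in range(len(agents))) == 1)
    utilities = []
    for i, agent in enumerate(agents):
        ub = int(sum(valuations[agent].values()))  # agent's value for everything
        u_i = model.NewIntVar(0, ub, f'u_{i}')
        model.Add(u_i == sum(int(valuations[agent][items[j]]) * x[i, j]
                             for j in range(len(items))))
        utilities.append(u_i)
    z = model.NewIntVar(0, max(int(sum(v.values())) for v in valuations.values()), 'z')
    model.AddMinEquality(z, utilities)  # z = min_i u_i
    model.Maximize(z)
    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return {agents[i]: [items[j] for j in range(len(items))
                            if solver.Value(x[i, j])]
                for i in range(len(agents))}
    return {}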
Approximation algorithms (large instances):
Since exact optimization is intractable, we need approximations. But standard approximation algorithms don’t work well for max-min:
- Greedy algorithms (repeatedly give items to worst-off agent) perform poorly in worst case
- Rounding-based approaches struggle with the non-smooth objective
- Local search gets stuck in local optima
The best known approaches:
-
Restricted assignment: For special cases (restricted valuations, few item types), algorithms with guarantees exist
-
Santa Claus problem: Max-min allocation in the restricted-assignment setting, where each item has one fixed value for every agent who wants it at all; Bansal and Sviridenko (2006) gave complex approximation algorithms
-
Heuristics without guarantees: Iteratively improve the worst-off agent, or use simulated annealing or genetic algorithms. These work empirically but have no theoretical guarantees
When is max-min tractable in practice?
Tiny instances: n ≤ 5, m ≤ 15 → Enumeration or MIP
Small instances: n ≤ 10, m ≤ 50 → MIP solver (might take hours but usually finishes)
Medium to large instances: Heuristics only, no guarantees
For our siblings, max-min is barely tractable. We can compute it exactly, but it’s the slowest of the three welfare functions. If we had 10 siblings and 50 items, max-min would become questionable; with 20 siblings and 100 items, it would be essentially impossible without accepting approximations.
Mixed Integer Programming: When Is It Feasible?🔗
All three welfare maximization problems can be formulated as mixed integer programs (MIP). Understanding when MIP solvers succeed versus fail is crucial for practitioners.
MIP formulation for welfare maximization:
Variables:
- \( x_{ig} \in \{0,1\} \): agent \( i \) receives item \( g \)
- \( u_i \geq 0 \): utility of agent \( i \)
Constraints:
- \(\sum_i x_{ig} = 1 \) for all \( g \) (each item assigned exactly once)
- \( u_i = \sum_g v_i(g) \times x_{ig}\) for all \( i \) (utilities match allocation)
Objectives:
- Utilitarian: \(\text{maximize } \sum_i u_i \)
- Nash: \(\text{maximize } \sum_i \log(u_i)\) (requires non-linear solver or approximation)
- Maximin: \(\text{maximize } z \) subject to \( u_i \geq z \) for all \( i \)
When MIP succeeds:
Small problems: The most important factor. MIP is exponential worst-case, but modern solvers have sophisticated heuristics that work well when the search space isn’t too large.
- n ≤ 10, m ≤ 100: Usually succeeds in minutes
- n ≤ 5, m ≤ 50: Usually succeeds in seconds
Linear objectives: Utilitarian welfare has a linear objective, which MIP solvers handle best. Nash welfare (log-sum) requires non-linear solvers or linearization tricks. Maximin (with auxiliary variable) is linear in the auxiliary variable but creates coupling.
Sparse valuations: If each agent values only a few items (sparse valuation matrices), the problem has less coupling and solvers work better.
Problem structure: Special structure (items can be grouped, valuations have patterns, constraints are separable) helps solvers dramatically.
Good warm starts: If you have a decent initial allocation (from a heuristic), providing it as a warm start helps solvers.
When MIP struggles:
Pathological instances: Some small problems have worst-case structure (high symmetry, many near-optimal solutions) that defeats heuristics. A 20-item, 5-agent problem might take hours while a 100-item, 10-agent problem takes seconds.
Complex constraints: Each added class of bundling constraints (mutual exclusions, capacity limits, precedence requirements, etc.) increases difficulty multiplicatively.
Nash welfare: The log objective is non-linear, and off-the-shelf MIP solvers don't handle it directly. Some solvers manage via piecewise-linear or successive convex approximations, but it's much slower than linear objectives.
Large instances: Once n > 20 or m > 200, even well-structured problems become unreliable: you might get a good solution quickly, but you can't count on a proof of optimality.
Numerical issues: Very small or very large valuations (e.g., item worth 0.001 vs another worth 1,000,000) cause numerical instability in solvers.
For our siblings: MIP is complete overkill. The problem is so small that:
- Enumeration (brute force) works fine
- Any algorithm completes instantly
- Even naive implementations suffice
But if we imagine scaling to 10 heirs with 50 items, MIP becomes the right tool: the problem is too large for enumeration but small enough not to require approximations.
Practical MIP workflow:
- Start with simple formulation: Linear objective, basic constraints
- Solve and time it: Does it finish in reasonable time?
- If too slow, diagnose: Is it the objective (non-linear), constraints (complex), or size (too big)?
- Improve formulation: Add valid inequalities, tighten bounds, reformulate constraints
- If still too slow, simplify: Remove constraints, use heuristics for non-critical parts
- If desperately slow, abandon MIP: Switch to approximation algorithms or heuristics
The key insight: MIP is a powerful tool for small-to-medium problems, but it’s not magic. When problems get large or pathological, MIP won’t save you.
Heuristics vs. Guarantees: The Practitioner's Dilemma🔗
We’ve seen that exact welfare maximization ranges from easy (utilitarian with additive valuations) to very hard (egalitarian in general). This creates a dilemma for practitioners:
Option A: Use algorithms with guarantees
- Utilitarian: Greedy algorithm, provably optimal for additive valuations
- Nash: Polynomial-time approximation for additive valuations; MIP or enumeration for exact answers on small instances
- Maximin: MIP or enumeration for small instances
Advantages: Know you’ve found the optimum (or a good approximation with bounded gap)
Disadvantages: May be slow, may not exist (maximin for large instances), may require sophisticated implementation
Option B: Use heuristics without guarantees
- Greedy: Assign items iteratively to agents with highest marginal welfare gain
- Local search: Start with arbitrary allocation, repeatedly improve via swaps
- Randomized: Generate many random allocations, keep the best
Advantages: Fast, simple to implement, often work well in practice
Disadvantages: No guarantee of optimality or even solution quality, may miss much better allocations
Which to choose?
The decision depends on several factors:
Stakes: High-stakes allocations (legal settlements, large monetary value) justify expensive exact algorithms. Low-stakes allocations (weekly resource allocation, temporary assignments) can use heuristics.
Problem size: Small problems → exact algorithms. Large problems → heuristics are the only option.
Consequences of suboptimality: If finding the true optimum matters enormously (regulatory requirements, fiduciary duty), invest in exact algorithms. If “good enough” suffices, heuristics are fine.
Computational budget: Real-time systems need heuristics. Offline planning can afford exact algorithms.
Verification: If you can verify solution quality (compute achieved welfare, check whether improvements exist), heuristics with verification offer a middle ground: fast in the typical case, with a check that no large improvement has been missed.
A pragmatic approach for practitioners:
1. Determine problem size and available time
2. If (small problem) OR (abundant time):
Use exact algorithm with guarantees
3. If (medium problem) AND (moderate time):
Use heuristic + verification:
- Run fast heuristic
- Compute achieved welfare
- Check for local improvements
- If satisfied, accept; else run exact algorithm
4. If (large problem) OR (minimal time):
Use heuristic only:
- Run multiple heuristics
- Keep best result
- Document that optimality is not guaranteed
For our siblings, we’re firmly in case 2: small problem, abundant time (they’re not in a hurry). Use exact algorithms to find provably optimal welfare-maximizing allocations for all three welfare functions, compare them, discuss trade-offs.
Scalability: The Dimension Matters Most🔗
Let’s be concrete about how welfare maximization scales with problem size:
Utilitarian (max-sum) with additive valuations:
| Agents \( n \) | Items \( m \) | Time | Method |
|---|---|---|---|
| 10 | 100 | < 1 sec | Greedy |
| 100 | 1,000 | < 1 sec | Greedy |
| 1,000 | 10,000 | < 10 sec | Greedy |
| 10,000 | 100,000 | < 100 sec | Greedy |
Max-sum scales excellently as it is linear in both dimensions.
Nash (max-product) with additive valuations:
| Agents \( n \) | Items \( m \) | Time | Method |
|---|---|---|---|
| 10 | 100 | Seconds | Poly-time approximation |
| 20 | 200 | Minutes | Poly-time approximation |
| 50 | 500 | Hours | Poly-time approximation |
| 100 | 1,000 | Infeasible | Heuristics only |
Nash welfare hits practical limits around n=50: polynomial, but expensively so.
Maximin (max-min):
| Agents \( n \) | Items \( m \) | Time | Method |
|---|---|---|---|
| 5 | 20 | Seconds | MIP or enumeration |
| 10 | 50 | Minutes to hours | MIP |
| 20 | 100 | Usually infeasible | Heuristics |
| 50 | 500 | Hopeless | Heuristics |
Maximin hits walls quickly: exact computation grows exponentially.
The curse of agents: Notice that agents \( n \) matter much more than items \( m \). Doubling items slows utilitarian by 2×, Nash by ~8×, maximin by ~2× (if MIP solves at all). But doubling agents slows utilitarian by 2×, Nash by ~8×, maximin by exponential factors.
Why? Items are allocated independently (each item’s assignment doesn’t constrain others), while agents are coupled (all agents compete for all items, creating n-way interactions).
Practical scaling advice:
For n ≤ 10: All three welfare functions are tractable with exact algorithms.
For 10 < n ≤ 50: Utilitarian remains easy, Nash becomes expensive but feasible, maximin requires heuristics.
For n > 50: Only utilitarian is practically computable exactly. Nash and maximin require heuristics or approximations.
For our siblings (n=3): We’re in the “trivial” regime where all welfare functions are easily optimized.
Why Approximation Algorithms Matter in Practice🔗
Given the hardness of Nash and maximin optimization, approximation algorithms become essential. An α-approximation for welfare maximization guarantees that the achieved welfare is at least α times the optimal welfare.
Approximation for Nash welfare:
Exact Nash welfare maximization is NP-hard even for additive valuations, and exact methods (MIP, enumeration) are slow and complex. Approximation algorithms offer trade-offs:
Barman et al. (2018) provide a polynomial-time constant-factor approximation (guaranteeing roughly 69% of the optimal Nash welfare, an approximation factor of about 1.45) that is far simpler than exact methods. For many settings this is acceptable.
Why approximation matters: A 99% approximation might run 10× faster than exact optimization. If the allocation problem is run repeatedly (daily resource allocation, ongoing scheduling), the speed advantage compounds.
Approximation for maximin welfare:
Maximin is hard to approximate in general, but for restricted cases:
Asadpour et al. (2012) give algorithms for special valuation structures achieving constant-factor approximations.
Practical heuristics (iteratively improve worst-off, randomized rounding) often achieve good empirical approximations without formal guarantees.
When to accept approximation:
Uncertainty dominates optimization: If agents’ valuations have ±10% uncertainty, optimizing beyond 90% is pointless. The valuation noise exceeds optimization precision.
Repeated allocation with learning: If you allocate resources weekly and can adjust based on feedback, a 90% solution now with fast learning beats a 100% solution after long computation.
Real-time requirements: Systems with response-time constraints (online ad allocation, real-time scheduling) must accept whatever quality they can achieve in bounded time.
Very large instances: When exact optimization is impossible (n=1000 agents), an approximation algorithm providing 80% of optimal is vastly better than an arbitrary heuristic providing unknown quality.
When to insist on exactness:
High stakes with time: If stakes are high and you have time, invest in exact optimization: legal settlements, major financial allocations.
One-shot with irreversibility: If this allocation is permanent and you can’t adjust, exactness matters more.
Verification requirements: Regulatory or legal contexts might require proving optimality, precluding approximations.
For our siblings: Exactness is easily achievable (small problem), but even a 95% approximation would be acceptable given uncertainty in valuations (“I think the house is worth $400k-$500k to me”). The family wouldn’t notice the difference between optimal and near-optimal.
The Frontier: Where Research Meets Practice🔗
We’ve seen that welfare maximization has a complex computational landscape:
- Utilitarian: Easy for additive valuations, hard for general valuations
- Nash: Polynomial-time approximation breakthrough for additive valuations, hard otherwise
- Maximin: Hard in essentially all cases
This creates several frontiers where research continues:
Practical Nash welfare algorithms: While polynomial-time approximation algorithms exist, implementing them efficiently remains challenging. Research on faster algorithms, better approximation factors, parallelization, and practical heuristics continues.
Approximation for hard cases: For non-additive valuations, large instances, and complex constraints, better approximation algorithms are needed. Current guarantees are weak or non-existent.
Hybrid approaches: Can we combine exact algorithms for small subproblems with approximations for large components? Hierarchical allocation, decomposition methods, and hybrid algorithms offer promise.
Online welfare maximization: What if items arrive sequentially and must be allocated immediately? Online algorithms for welfare maximization remain poorly understood compared to offline optimization.
Strategic welfare maximization: If agents can misreport valuations, welfare-maximizing mechanisms might be manipulable. Designing incentive-compatible welfare maximization remains challenging.
The computational challenges of welfare maximization reflect fundamental trade-offs between different notions of social good. That utilitarian welfare is easy to maximize while egalitarian welfare is hard reflects the fact that aggregate efficiency is structurally simpler than equitable distribution.
For Maya, Jordan, and Sam: The computational landscape tells them:
- If they choose utilitarian welfare, computation is trivial: the greedy allocation is optimal
- If they choose Nash welfare, computation is easy at this scale (enumeration or MIP), with polynomial-time approximations available for larger instances
- If they choose maximin welfare, computation requires visibly more machinery (MIP or enumeration)
All three are feasible for their problem size, but the computational differences foreshadow the philosophical differences we’ll explore next: Why is efficiency easier to optimize than equity? What does this say about the relationship between collective good and individual rights?
In the next section, we’ll confront these philosophical questions directly: Whose welfare are we maximizing, and why?
Philosophy: Whose Welfare Are We Maximizing?🔗
We’ve now seen three ways to aggregate individual utilities into social welfare: summing (utilitarian), multiplying (Nash), and minimizing (maximin). We’ve explored their mathematical properties and computational challenges. But we’ve been skirting a more fundamental question: What gives us the right to aggregate individual utilities at all?
When we maximize social welfare, we’re adopting the perspective of a benevolent social planner who stands outside the allocation problem, evaluating outcomes based on collective good rather than individual rights.
Consider: Maya, Jordan, and Sam are individuals with lives, plans, and entitlements. They’re not merely inputs to a social welfare function. When we say “Allocation A is better than Allocation B because it produces higher Nash welfare,” we’re making a claim that goes beyond what any individual sibling might accept. Maya might prefer Allocation B (it gives her more utility), but we override her preference in favor of collective optimization.
Bentham's Utilitarian Calculus: The Appeal and the Problem🔗
The utilitarian promise is seductive: maximize total happiness. If we can make the world contain more happiness rather than less, more preference satisfaction rather than less, how could that be wrong? The appeal is almost mathematical in its simplicity: add up everyone's utility and choose the allocation with the highest sum.
But whose happiness are we counting, and how?
The utility monster problem: Imagine Jordan discovers that listening to the piano produces not $120,000 in utility but $12,000,000. Jordan’s nervous system is simply wired to experience a thousand times more pleasure from music than average people. A strict utilitarian would give Jordan the piano (obviously), but also the photos, the artwork, and anything else that produces any marginal utility for Jordan because Jordan’s enormous capacity for utility generation swamps everyone else’s more modest utilities.
This seems absurd. Yet it follows directly from utilitarian logic: if Jordan generates more utility per unit of resource, efficiency demands giving Jordan more resources. The utilitarian doesn’t ask whether Jordan’s extravagant capacity for pleasure is legitimate or fair—only whether it increases the sum.
Robert Nozick introduced this thought experiment to show that utilitarianism can justify extreme inequality when individuals have different utility functions. If Maya and Sam have “normal” capacities for happiness while Jordan is a utility monster, utilitarianism recommends giving Jordan everything and leaving the others with nothing as long as Jordan’s total utility exceeds Maya’s + Sam’s combined.
The response: Utilitarians have several replies. First, empirically, utility monsters probably don’t exist—most humans have similar capacities for happiness and suffering. Second, we could bound utilities to prevent extreme cases. Third, and more philosophically, utilitarians can argue that interpersonal welfare comparisons should be based on neutral metrics (health, resources, opportunities) rather than raw pleasure, avoiding the utility monster problem.
But these responses don’t fully resolve the tension. Even with bounded utilities and careful metrics, utilitarianism remains willing to sacrifice some individuals’ welfare for greater aggregate gains.
Whose welfare counts, and how much? In theory, utilitarianism weights everyone equally: one unit of utility counts the same regardless of who experiences it. This seems egalitarian. But in practice, it privileges those who can convert resources into utility most efficiently. If Sam derives more utility from money than Maya does (Sam is drowning in debt and desperately needs liquidity), utilitarian allocation will give Sam more money. This might be defensible, but it's not neutral. It's making judgments about whose preferences matter more based on their capacity to generate utility.
For our siblings, strict utilitarianism says: give each item to whoever values it most. If this creates inequality (Maya gets $450k in value, Jordan gets $200k), that’s acceptable as long as total value is maximized.
This might be what they want. Families often do prioritize collective welfare: “What’s best for the family as a whole?” But it’s a choice, not a logical necessity. Maya might reasonably object: “I’m not just a contributor to family welfare. I’m an individual heir with equal standing. Even if giving Jordan more creates higher total utility, it violates my equal claim.”
Nash Welfare: The Egalitarian Compromise?🔗
Nash welfare (maximizing the product of utilities) is often presented as a middle ground between efficiency and equity. Unlike utilitarian welfare, which simply sums utilities and could favor giving everything to whoever benefits most (even if this leaves others with almost nothing), Nash welfare inherently avoids such extreme inequality. This is because multiplying utilities means that if any person’s utility approaches zero, the entire product approaches zero, making such allocations undesirable. On the other hand, unlike maximin welfare (which focuses only on maximizing the utility of the worst-off person), Nash welfare still cares about making everyone better off. Since the product increases when anyone’s utility increases, improvements to both disadvantaged and advantaged people contribute to the overall objective.
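A quick numeric illustration (figures invented for the example): utilities \( (300, 400, 500) \) have product \( 6.0 \times 10^7 \), while \( (1, 500, 1000) \) have product only \( 5.0 \times 10^5 \), even though the second profile has the larger sum (1,501 versus 1,200). The product rejects near-zero outcomes that the sum would happily tolerate.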
But what philosophical principle justifies the product? Why should social welfare equal v₁ × v₂ × v₃ rather than v₁ + v₂ + v₃ or min(v₁, v₂, v₃)?
The axiomatic answer: John Nash’s bargaining theory shows that the product emerges from certain fairness axioms. Specifically, if we want a bargaining solution that is:
- Pareto optimal (no wasted value)
- Symmetric (if agents have identical positions, they receive identical outcomes)
- Independent of irrelevant alternatives (removing options that weren’t chosen doesn’t change the outcome)
- Scale invariant (changing measurement units doesn’t change the outcome)
Then we must maximize the product of utilities. These axioms capture procedural fairness without imposing particular outcomes.
But axioms aren’t value-neutral. Each axiom embeds normative commitments. Scale invariance, for instance, assumes that how we measure utility is arbitrary—but is it? If we think utility has objective reality (welfare, well-being, flourishing), scale invariance seems wrong. The axioms privilege certain intuitions while ignoring others.
The geometric mean intuition: Taking the product is equivalent to maximizing the geometric mean \( (v_1 \times v_2 \times \cdots \times v_n)^{1/n} \). Or, taking logs, maximizing the arithmetic mean of log utilities: \( \frac{1}{n} \sum_i \log v_i \).
The logarithm has diminishing returns: going from 1 to 2 gains log(2) ≈ 0.69, but going from 100 to 200 also gains log(2) ≈ 0.69. The same absolute gain matters less when you already have more. This captures the intuition that utility gains matter more to worse-off agents.
From this perspective, Nash welfare embodies prioritarian ethics (in the tradition of Derek Parfit): we should give extra weight to benefiting worse-off individuals, not because equality has intrinsic value, but because gains matter more to those with less.
But how much weight? The logarithm is just one possible weighting function. We could use sqrt, or (1/x), or any concave function. Each embodies different judgments about how much to prioritize the worse-off. Nash welfare’s logarithmic weighting is elegant and has nice properties, but it’s not uniquely determined by abstract principles.
For our siblings, Nash welfare says: care about everyone, but prioritize gains to whoever has least. If Jordan has $150k and Maya has $450k, increasing Jordan’s utility by $10k improves Nash welfare more than increasing Maya’s utility by $10k (because log is concave). It helps the one who needs it more.
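Concretely, with utilities measured in thousands, that $10k gain changes the log-product by \( \log(160/150) \approx 0.065 \) when it goes to Jordan, but by \( \log(460/450) \approx 0.022 \) when it goes to Maya: roughly three times the impact for the worse-off sibling.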
But Jordan might object: “Why should my utility be weighted differently just because I have less? If my claim is based on being an equal heir, shouldn’t my utility unit count the same as Maya’s, regardless of how much I currently have?” Nash welfare treats agents equally at the margin (same starting position in a bargaining game) but unequally in outcome (different weightings based on current utilities).
Rawlsian Justice: The Maximin Principle and Its Critics🔗
Maximin welfare (maximize the minimum utility) is the most egalitarian welfare function. It embodies John Rawls’s difference principle: inequalities are permissible only insofar as they benefit the least advantaged.
The veil of ignorance argument: Rawls derives principles of justice from behind a "veil of ignorance": not knowing which position in society we'll occupy. From this original position, Rawls argues, rational agents would choose maximin: ensure the worst-off position is as good as possible, because you might be in it.
This argument is powerful. If Maya, Jordan, and Sam don’t yet know who will receive which bundle, they might agree: “Let’s choose the allocation that makes the worst outcome as good as possible.”
But is extreme risk aversion rational? Critics of Rawls argue that maximin only makes sense if you’re infinitely risk-averse. A more balanced decision-maker might accept some risk of being worse-off in exchange for higher expected utility. Why should we privilege the most risk-averse possible perspective?
John Harsanyi, in response to Rawls, argued that rational agents behind the veil of ignorance would choose to maximize expected utility (utilitarian welfare), not the maximum minimum utility. If you have an equal probability of being each agent, your expected utility is the average utility, and maximizing the average is the same as maximizing the total. Risk-neutral agents should be utilitarian, not maximin.
The neglect of everyone else: Maximin's most serious problem is that it completely ignores the utility of everyone except the worst-off. Allocation A giving utilities {100, 100, 100} has the same maximin welfare as allocation B giving {100, 500, 1000}: both have minimum 100. But surely B is better? Two agents are much better off, and no one is worse off.
Rawls would respond: we should use leximin (lexicographic maximin), which breaks ties by comparing second-lowest utilities, then third-lowest, etc. This fixes the problem but makes the criterion even more complex and discontinuous.
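A toy check of this tie-breaking, reusing the sorted-utilities comparison from the leximin implementation earlier:
def leximin_prefers(utils_a: list, utils_b: list) -> bool:
    """True if utility profile a is leximin-better than profile b."""
    for u_a, u_b in zip(sorted(utils_a), sorted(utils_b)):
        if u_a != u_b:
            return u_a > u_b
    return False

A = [100, 100, 100]
B = [100, 500, 1000]
print(min(A) == min(B))       # True: plain maximin cannot tell them apart
print(leximin_prefers(B, A))  # True: leximin prefers B via the second-lowest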
Extreme prioritization: Maximin is willing to sacrifice enormous gains to better-off agents for tiny improvements to the worst-off. Suppose Jordan has $350k utility and Maya has $351k utility. If we can give Maya an additional $100k at the cost of taking $1 from Jordan (reducing Jordan to $349,999), maximin says don’t do it because Jordan would become the new worst-off at $349,999, lower than the current worst-off at $350k.
This seems pathological. Most people would accept that a $1 sacrifice by Jordan is worth $100k gain to Maya. But maximin mechanically protects the worst-off position regardless of cost.
For our siblings, maximin says: protect whoever gets least. If Jordan receives $350k and that’s the minimum, we should optimize the allocation to maximize Jordan’s utility, even if it means Maya and Sam receive less than they otherwise would. Jordan’s welfare completely dominates the social objective.
Sam might object: “Why should my interests be completely ignored just because Jordan has slightly less than me? I’m also an equal heir. My gains should count for something.” Maximin elevates the worst-off to a privileged position, which seems to violate equal treatment.
The Social Planner's Perspective vs. Individual Rights🔗
All three welfare functions adopt what we might call the social planner’s perspective: an external observer evaluates allocations based on aggregate outcomes. This perspective is powerful for system design, policy analysis, and consequentialist ethics. But it conflicts with a rival tradition emphasizing individual rights.
From a rights-based perspective (in the tradition of Locke and Nozick), individuals have entitlements that cannot be overridden for the sake of collective welfare. Maya has a right to one-third of the estate—not because it maximizes social welfare, but because she’s an equal heir. If an allocation violates her right (gives her less than one-third) to achieve higher aggregate welfare, it’s unjust regardless of the welfare gain.
The tension: Welfare maximization and rights protection can conflict. Suppose giving Maya only $200k (less than her proportional share of $233k) while giving Sam and Jordan more creates higher total, Nash, and even maximin welfare (because the items are better matched). Should we violate Maya’s proportional share to achieve better collective outcomes?
Welfare maximizers say yes: rights are instrumental to welfare, not absolute. If violating one person’s formal entitlement makes everyone (including that person) better off in utility terms, the violation is justified.
Rights theorists say no: entitlements constrain optimization. Maya’s right to one-third acts as a side constraint. We maximize welfare subject to respecting rights, not at the expense of rights.
This isn’t an easy tension to resolve. Consider voluntary exchange: if Maya, Jordan, and Sam all agree to an allocation that violates proportionality because it creates higher welfare, have they waived their rights? Most theorists would say yes, voluntary consent legitimizes inequality. But what if one sibling is pressured, misinformed, or negotiating from a position of weakness? When do welfare gains justify rights violations?
For our siblings, the question becomes: Are they optimizing a collective objective (family welfare), or respecting individual entitlements (equal inheritance shares)?
If the former, welfare maximization is appropriate. If the latter, fairness criteria (envy-freeness, proportionality) should constrain welfare optimization. In practice, most families operate somewhere in the middle. They care about both collective welfare and individual fairness, and must navigate the tension case by case.
When "Optimal" Isn't "Fair" and Vice Versa🔗
We now see clearly why welfare-optimal allocations can feel unfair, and fair allocations can feel wasteful.
Case 1: Optimal but not fair
Suppose the welfare-maximizing allocation gives:
- Maya: {house} → $450k utility
- Jordan: {photos, piano, artwork, furniture} → $360k utility
- Sam: {investments, watches, car} → $578k utility
- Total: $1,388k (highest possible)
This maximizes utilitarian welfare. But checking fairness:
- Maya’s proportional share: $233k. She receives $450k
- Jordan’s proportional share: $233k. Jordan receives $360k
- Sam’s proportional share: $233k. Sam receives $578k
Each sibling receives more than their proportional share, so the allocation is proportional. Checking envy-freeness: Maya values Jordan’s bundle at $43k (photos $8k + piano $5k + artwork $10k + furniture $20k), so Maya doesn’t envy Jordan. Jordan values Maya’s bundle at $150k (just the house), less than Jordan’s own bundle, so Jordan doesn’t envy Maya. Sam values Jordan’s bundle at $25k, so Sam doesn’t envy Jordan either.
In this particular instance, then, the welfare-maximizing allocation happens to be fair as well: the siblings’ preferences are diverse enough that efficient matching and fair treatment coincide. The “optimal but not fair” case emerges when preferences collide: if two siblings both prize the same item most of all, the utilitarian optimum hands it to whoever values it slightly more, and the loser can fall far below their proportional share with no surplus items left to compensate them.
The general point: Welfare maximization prioritizes efficient matching: giving each item to whoever values it most. This can create inequality when people have different preferences. Fair division prioritizes equal treatment: ensuring no one envies others or receives less than their share. This can create inefficiency when it prevents optimal matching.
Case 2: Fair but not optimal
Suppose we enforce exact envy-freeness and proportionality, finding an allocation where:
- Each sibling receives $350k utility (equal, satisfying proportionality with room to spare)
- No one envies anyone else
- But items are poorly matched. Maya gets items she values moderately, not the house she values most
Total utility: $1,050k (far below the possible $1,388k)
This allocation sacrifices $338k in total utility to achieve perfect fairness. Is this justified?
From a welfarist perspective: No. We could make everyone better off by reallocating toward optimal matching, even if it creates some inequality or envy.
From a fairness perspective: Maybe. If perfect fairness is a right, and rights are lexically prior to welfare, we should accept the efficiency loss.
From a pragmatic perspective: Depends on how much everyone values fairness versus efficiency. If the siblings care deeply about equal treatment, the welfare loss might be worth it. If they care more about each person getting what they most want, the fairness violation might be acceptable.
Synthesis: Choosing Your Moral Commitments🔗
We’ve seen that choosing a social welfare function is choosing a moral framework:
Utilitarian welfare assumes:
- Aggregate outcomes matter more than individual treatment
- Equal weighting of utility units across people
- Willingness to accept inequality for efficiency
- The social planner’s perspective is legitimate
Nash welfare assumes:
- Aggregate outcomes matter, but with egalitarian bias
- Gains to worse-off agents count for more
- Balance between efficiency and equity
- Scale-invariant comparisons are meaningful
Maximin welfare assumes:
- The worst-off position matters overwhelmingly
- Extreme risk aversion behind the veil of ignorance
- Prioritarian justice: protect the vulnerable above all
- Willingness to sacrifice aggregate welfare for equity
None of these is objectively correct. They represent different value commitments about:
- Whether efficiency or equality matters more
- How to weight individuals’ interests
- Whether we should adopt a planner’s perspective or respect individual rights
- How risk-averse we should be about worst cases
For Maya, Jordan, and Sam, the question isn’t “which welfare function is right?” but rather “which welfare function embodies the values we want to affirm in dividing this inheritance?”
If they say: “We want to maximize what the family gets from these assets collectively, even if it means some inequality,” choose utilitarian.
If they say: “We want everyone to do well, with extra concern for whoever gets least,” choose Nash.
If they say: “We want to make sure no one is left in a bad position, even if it costs the family overall,” choose maximin.
If they say: “We want to respect our equal standing as heirs, regardless of welfare considerations,” don’t optimize welfare at all; prioritize fairness criteria like EF1.
The philosophical work is in making explicit what “best” means to them. Different welfare functions operationalize different theories of the good. Choosing between them requires choosing which theory you endorse.
And sometimes (perhaps most of the time), the honest answer is “a bit of both.” We want allocations that are reasonably fair and reasonably efficient, that respect individual rights while promoting collective welfare, that balance equality and productivity. In that case, we might:
- Maximize welfare subject to fairness constraints (e.g., maximize Nash welfare subject to EF1, as sketched in the code below)
- Satisfice on both dimensions (accept “good enough” fairness and “good enough” welfare)
- Negotiate explicitly (use welfare optimization to generate proposals, then adjust for fairness concerns)
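To make the first option concrete, here is a minimal brute-force sketch of constrained welfare maximization. The helper names (`bundle_value`, `is_ef1`, `max_nash_subject_to_ef1`) are ours, not a library’s, and enumeration is only viable for tiny instances like the siblings’ estate:

```python
import math
from itertools import product

def bundle_value(alloc, vals, items, viewer, owner):
    """Value that `viewer` assigns to the bundle held by agent index `owner`."""
    return sum(vals[viewer][items[k]] for k, a in enumerate(alloc) if a == owner)

def is_ef1(alloc, vals, items, agents):
    """EF1 check: any envy must vanish after removing one item from the envied bundle."""
    for i, name in enumerate(agents):
        own = bundle_value(alloc, vals, items, name, i)
        for j in range(len(agents)):
            if i == j:
                continue
            others = [vals[name][items[k]] for k, a in enumerate(alloc) if a == j]
            # Does envy persist even after dropping j's best item (in i's eyes)?
            if others and own < sum(others) - max(others):
                return False
    return True

def max_nash_subject_to_ef1(vals, items, agents):
    """Brute force: highest Nash welfare among EF1 allocations."""
    best, best_nash = None, -1.0
    for alloc in product(range(len(agents)), repeat=len(items)):
        if not is_ef1(alloc, vals, items, agents):
            continue
        utils = [bundle_value(alloc, vals, items, name, i)
                 for i, name in enumerate(agents)]
        nash = math.prod(utils)
        if nash > best_nash:
            best, best_nash = alloc, nash
    return best, best_nash
```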
The plurality of values isn’t a bug, it’s a feature of moral life. Fair division algorithms don’t resolve these tensions; they make them explicit so we can navigate them consciously. And that’s precisely what makes them valuable: not that they find the “right” answer, but that they help us understand what the question really is.
In the next section, we’ll see how modern algorithms combine welfare and fairness objectives, trying to achieve “good enough” on multiple dimensions simultaneously. But first, let’s take a brief detour into a particularly elegant welfare principle that deserves special attention: leximin.
Deep Dive: The Leximin Principle🔗
We’ve seen that maximin welfare (maximizing the minimum utility) has a serious flaw: it ignores everyone except the worst-off agent. Two allocations giving utilities {100, 100, 100} and {100, 500, 1000} have identical maximin welfare despite the second being obviously better: everyone has at least as much, and two people have much more.
Leximin (lexicographic maximin) solves this problem elegantly. It’s a refinement of maximin that preserves its egalitarian commitment, prioritizing the worst-off, while respecting Pareto improvements. The result is one of the most sophisticated and philosophically compelling welfare principles in fair division theory.
The Lexicographic Ordering Intuition🔗
Intuition: Imagine comparing two allocations by lining up each one’s utilities from lowest to highest, then comparing them like words in a dictionary.
Allocation A: Individual utilities are {300, 450, 550}
Sorted A: {300, 450, 550}
Allocation B: Individual utilities are {350, 400, 450}
Sorted B: {350, 400, 450}
To compare lexicographically:
- Compare the smallest utilities: 300 vs 350. Since 350 > 300, B is leximin-better than A
- We don’t need to look at the other utilities. B wins at the first comparison
Now consider:
Allocation C: {300, 500, 600}
Sorted C: {300, 500, 600}
Comparing A and C:
- Smallest utilities: 300 vs 300 is a tie
- Second-smallest utilities: 450 vs 500. Since 500 > 450, C is leximin-better than A
- We never need to check the third utilities
The name “lexicographic” comes from how dictionaries order words: “apple” comes before “apply” because at the first differing letter, ‘e’ precedes ‘y’. Leximin orders allocations by comparing sorted utility vectors position by position.
Why this matters: Leximin maintains maximin’s commitment to the worst-off. The first comparison is always the minimum utilities, exactly like maximin. But when minimums tie, leximin looks at second-worst, third-worst, and so on. This ensures that Pareto improvements are recognized: if allocation B gives someone more utility without hurting anyone, leximin will prefer it (eventually in the comparison sequence, B will have a higher utility at some position).
The philosophical appeal: Leximin captures a natural moral priority: first ensure the worst-off position is as good as possible; given that, make the second-worst-off as well-off as possible; and so on. This is hierarchical egalitarianism: prioritize positions from bottom to top, ensuring each level is maximized before considering the next.
Rawls’s difference principle is sometimes interpreted as requiring leximin rather than simple maximin. The principle states: inequalities are permissible only if they benefit the least advantaged. Leximin operationalizes this by allowing improvements to better-off positions only after the worst-off position is maximized.
Mathematical Definition🔗
Formal Definition: Given two allocations A and B, let \(\mathbf{u^A}\) and \(\mathbf{u^B}\) denote their utility vectors sorted in non-decreasing order:
\(\mathbf{u^A} = (u^A_1, u^A_2, \ldots, u^A_n)\) where \( u^A_1 \leq u^A_2 \leq \ldots \leq u^A_n \)
\(\mathbf{u^B} = (u^B_1, u^B_2, \ldots, u^B_n)\) where \( u^B_1 \leq u^B_2 \leq \ldots \leq u^B_n \)
We say A is leximin-better than B (written \( A \succ_{\text{lex}} B \)) if there exists an index \( k \) such that:
- \( u^A_i = u^B_i \) for all \( i < k \) (utilities are equal at all positions before k)
- \( u^A_k > u^B_k \) (at position \( k \), A has strictly higher utility)
In other words, A and B have identical sorted utilities until some position \( k \), where A’s utility exceeds B’s. The first position where they differ determines the comparison.
Allocations are leximin-equivalent (\( A \approx_{\text{lex}} B \)) if \( \mathbf{u}^A = \mathbf{u}^B \) (their sorted utility vectors are identical).
Example with our siblings:
Allocation 1: Maya $450k, Jordan $360k, Sam $578k
Sorted: {360k, 450k, 578k}
Allocation 2: Maya $475k, Jordan $350k, Sam $553k
Sorted: {350k, 475k, 553k}
Comparison:
- Position 1 (minimum): 360k vs 350k → Allocation 1 has higher minimum
- Allocation 1 is leximin-better (we don’t need to check further positions)
Allocation 3: Maya $475k, Jordan $360k, Sam $553k
Sorted: {360k, 475k, 553k}
Comparing Allocation 1 and Allocation 3:
- Position 1: 360k vs 360k → tied
- Position 2: 450k vs 475k → Allocation 3 has higher second-lowest
- Allocation 3 is leximin-better
Allocation 4: Maya $475k, Jordan $360k, Sam $578k
Sorted: {360k, 475k, 578k}
Comparing Allocation 1 and Allocation 4:
- Position 1: 360k vs 360k → tied
- Position 2: 450k vs 475k → Allocation 4 has higher second-lowest
- Allocation 4 is leximin-better
Notice that Allocation 4 Pareto-dominates Allocation 1 (Jordan tied, Maya and Sam both better off). Leximin correctly identifies this as an improvement. Simple maximin would treat them as equivalent (both have minimum 360k).
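A convenient implementation detail: Python sequences already compare lexicographically, so these comparisons reduce to comparing sorted lists directly. A quick check of Allocations 1 and 4:

```python
sorted_1 = sorted([450_000, 360_000, 578_000])  # Allocation 1 -> [360k, 450k, 578k]
sorted_4 = sorted([475_000, 360_000, 578_000])  # Allocation 4 -> [360k, 475k, 578k]
print(sorted_4 > sorted_1)  # True: Allocation 4 is leximin-better
```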
Why Leximin is Pareto Optimal🔗
Claim: Any leximin-optimal allocation is Pareto optimal.
Proof sketch: Suppose allocation A is leximin-optimal but not Pareto optimal. Then there exists an allocation B that Pareto-dominates A: every agent has at least as much utility in B, and at least one agent has strictly more.
Consider the sorted utility vectors \(\mathbf{u}^A \) and \(\mathbf{u}^B \). Since B Pareto-dominates A:
- For every agent i, \( u_i^B \geq u_i^A \)
- For at least one agent j, \( u_j^B > u_j^A \)
Sorting preserves componentwise dominance: if every agent’s utility in B is at least their utility in A, then for each position k, the k-th smallest utility in B is at least the k-th smallest in A. So:
- \( u^B_k \geq u^A_k \) for all positions k (componentwise, B’s sorted vector is at least as large)
- \( u^B_k > u^A_k \) for at least one position k (sorting doesn’t change the sum, and B’s total utility is strictly larger than A’s, so the sorted vectors cannot agree at every position)
By the leximin ordering, \( B \succ_{\text{lex}} A \) (B is leximin-better than A). This contradicts the assumption that A is leximin-optimal. Therefore, no such B can exist, and A must be Pareto optimal.
This is a powerful result: leximin combines egalitarian prioritization with efficiency. Unlike simple maximin, we can’t improve leximin allocations through Pareto improvements. The “waste” problem of maximin disappears.
The result was established in early social choice theory, notably in work by Amartya Sen in the 1970s, though the lexicographic principle itself has deeper roots in decision theory.
Code Implementation🔗
Implementing leximin optimization requires:
- Generating candidate allocations
- Computing utility for each agent under each allocation
- Sorting utilities and comparing lexicographically
- Selecting the leximin-optimal allocation
For our siblings with 3 agents and 8 items, we can enumerate all possible allocations and select the leximin-best:
```python
from itertools import product
from typing import Dict, List, Tuple
def leximin_compare(utils_a: List[float], utils_b: List[float]) -> int:
"""
Compare two utility vectors lexicographically.
Returns: 1 if a is better, -1 if b is better, 0 if equivalent
"""
# Sort both vectors (lowest to highest)
sorted_a = sorted(utils_a)
sorted_b = sorted(utils_b)
# Compare position by position
for u_a, u_b in zip(sorted_a, sorted_b):
if u_a > u_b:
return 1 # a is leximin-better
elif u_a < u_b:
return -1 # b is leximin-better
return 0 # leximin-equivalent
def compute_utilities(allocation: Tuple[int, ...],
valuations: Dict[str, Dict[str, float]],
items: List[str],
agents: List[str]) -> List[float]:
"""
Compute utility for each agent given an allocation.
allocation: tuple of agent indices (which agent gets each item)
"""
utilities = [0.0] * len(agents)
for item_idx, agent_idx in enumerate(allocation):
item = items[item_idx]
agent = agents[agent_idx]
utilities[agent_idx] += valuations[agent][item]
return utilities
def find_leximin_allocation(valuations: Dict[str, Dict[str, float]],
items: List[str],
agents: List[str]) -> Tuple[Tuple[int, ...], List[float]]:
"""
Find the leximin-optimal allocation by exhaustive search.
"""
n_agents = len(agents)
n_items = len(items)
# Generate all possible allocations
# Each item can go to any of n_agents
best_allocation = None
best_utilities = None
for allocation in product(range(n_agents), repeat=n_items):
utilities = compute_utilities(allocation, valuations, items, agents)
if best_allocation is None:
best_allocation = allocation
best_utilities = utilities
else:
# Compare with current best
comparison = leximin_compare(utilities, best_utilities)
if comparison > 0:
best_allocation = allocation
best_utilities = utilities
return best_allocation, best_utilities
# Example with our siblings
valuations = {
"Maya": {
"house": 450000, "investments": 180000, "car": 25000,
"piano": 5000, "furniture": 20000, "artwork": 10000,
"photos": 8000, "watches": 2000
},
"Jordan": {
"house": 150000, "investments": 180000, "car": 10000,
"piano": 120000, "furniture": 5000, "artwork": 30000,
"photos": 200000, "watches": 5000
},
"Sam": {
"house": 100000, "investments": 300000, "car": 25000,
"piano": 2000, "furniture": 3000, "artwork": 5000,
"photos": 15000, "watches": 250000
}
}
items = ["house", "investments", "car", "piano", "furniture",
"artwork", "photos", "watches"]
agents = ["Maya", "Jordan", "Sam"]
# Find leximin-optimal allocation
best_allocation, best_utilities = find_leximin_allocation(
valuations, items, agents
)
# Display results
print("Leximin-optimal allocation:")
for item_idx, agent_idx in enumerate(best_allocation):
print(f" {items[item_idx]} → {agents[agent_idx]}")
print(f"\nUtilities: {[f'${u:,.0f}' for u in best_utilities]}")
print(f"Sorted utilities: {sorted([f'${u:,.0f}' for u in best_utilities])}")
print(f"Minimum utility: ${min(best_utilities):,.0f}")
Output interpretation: The algorithm finds the allocation that maximizes the minimum utility, then among those, maximizes the second-lowest utility, and so on. This ensures both egalitarian prioritization and Pareto optimality.
Computational note: This exhaustive search has time complexity \( O(n^m) \) where \( n \) is agents and \( m \) is items, exponential in the number of items. For our siblings, 3^8 = 6,561 allocations are checked, taking milliseconds. For larger problems, more sophisticated algorithms are needed.
Real-World Impact: The Spliddit Case Study🔗
Leximin isn’t just theoretical. It’s been deployed in real allocation systems serving thousands of users.
Spliddit (http://spliddit.org), launched in 2014 by researchers at Carnegie Mellon and the Technion, is a web service that helps groups fairly divide rent, goods, tasks, and other resources. The service has been used by over 100,000 people worldwide for problems ranging from roommates splitting rent to families dividing inheritances.
One of Spliddit’s core algorithms for goods division uses leximin combined with other fairness properties. Specifically, for dividing indivisible items, Spliddit implements what the creators call the “maximum Nash welfare with leximin tie-breaking” approach.
Why Spliddit chose leximin: The designers, Ariel Procaccia and collaborators, needed an algorithm that:
- Achieved strong fairness guarantees (at least EF1)
- Was Pareto optimal (no wasted value)
- Prioritized equity when multiple efficient allocations exist
- Could be computed in reasonable time for realistic problems
Leximin satisfies all four requirements. When multiple allocations achieve the same Nash welfare (or when Nash welfare optimization is computationally expensive), leximin provides a principled tie-breaking rule that prioritizes equality.
User testimonials from Spliddit’s website indicate that users appreciate the algorithm’s egalitarian nature. When roommates divide furniture or families divide estates, the algorithm’s tendency to avoid leaving anyone with very little resonates with people’s fairness intuitions.
Computational implementation: For small-to-medium problems (< 15 items, < 10 agents), Spliddit uses exact leximin optimization via dynamic programming or integer programming. For larger problems, it uses approximations, finding a good Nash welfare allocation and then improving it leximin-wise through local search.
The success of Spliddit demonstrates that leximin is not just philosophically elegant but practically useful. Thousands of real-world allocation problems have been solved using leximin-based algorithms, and users report high satisfaction with the results.
Application to Siblings with Different Needs🔗
Return to Maya, Jordan, and Sam. We’ve been treating them as having equal standing, but suppose we want to incorporate their different life circumstances:
Maya has two young children and limited income; she has a genuine need for housing.
Jordan is a musician whose livelihood depends on the piano; the asset is in productive use.
Sam is drowning in debt; there’s urgency to their financial need.
Should we treat these needs as morally relevant? Leximin provides a framework: if we can quantify needs through weighted utilities, leximin will naturally prioritize whoever is worst off by any reasonable metric.
Weighted leximin modifies the approach by adjusting utilities based on need:
```python
def weighted_leximin(valuations, items, agents, weights):
"""
Find leximin allocation with need-based weights.
weights: dict mapping agent names to weight factors (higher = greater need)
Interpretation: An agent with weight w=2 has utilities that "matter twice as much"
in the leximin comparison.
"""
# Adjust valuations by multiplying by weights
adjusted_valuations = {}
for agent in agents:
adjusted_valuations[agent] = {
item: valuations[agent][item] * weights[agent]
for item in items
}
# Run standard leximin with adjusted valuations
return find_leximin_allocation(adjusted_valuations, items, agents)
# Example: incorporate need-based weights (illustrative values)
weights = {
    "Maya": 1.2,   # 20% higher weight due to children's housing needs
    "Jordan": 1.0, # baseline
    "Sam": 1.3     # 30% higher weight due to debt urgency
}
best_allocation, best_utilities = weighted_leximin(valuations, items, agents, weights)
```
The moral question: Who decides the weights? If the siblings themselves agree that Sam’s debt crisis justifies extra priority, weighted leximin formalizes their shared judgment. But if weights are imposed externally (by a court, a parent’s will, or social norms), they might feel coercive.
Weighted leximin makes explicit what is often implicit: different agents’ needs carry different moral weight. This can be liberating (making hidden value judgments explicit) or troubling (formalizing inequality).
For the unweighted case where all siblings are treated identically, leximin provides a principled approach: maximize the worst-off sibling’s outcome, then the second-worst, and so on. If Maya ends up with the least utility, the leximin allocation is the one that makes Maya as well-off as possible while still allocating all items.
Computational Reality: Sequential Optimization🔗
We’ve shown exhaustive search for leximin, but this becomes infeasible for larger problems. How do we compute leximin allocations efficiently?
Key insight: Leximin can be computed through sequential optimization. To find the leximin allocation:
- Find all allocations that maximize the minimum utility (standard maximin)
- Among those, find all that maximize the second-minimum utility
- Among those, find all that maximize the third-minimum utility
- Continue until only one allocation remains (or all remaining are leximin-equivalent)
This sequential approach exploits the lexicographic structure: we can optimize one position at a time, constraining each optimization by the results of previous optimizations.
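For a small instance, the sequential idea can be sketched directly on top of enumeration (reusing `compute_utilities` from the earlier listing). This is illustrative only, not how production solvers implement the stages:

```python
from itertools import product

def leximin_sequential(valuations, items, agents):
    """Stage-by-stage filtering: keep allocations that maximize the k-th
    smallest utility, for k = 1, 2, ..., n."""
    n = len(agents)
    candidates = [
        (alloc, sorted(compute_utilities(alloc, valuations, items, agents)))
        for alloc in product(range(n), repeat=len(items))
    ]
    for k in range(n):
        best_k = max(utils[k] for _, utils in candidates)
        candidates = [(a, u) for a, u in candidates if u[k] == best_k]
    return candidates[0]  # any survivor is leximin-optimal
```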
Complexity: Each stage requires solving a constrained optimization problem. If maximin optimization is NP-hard (which it is), then leximin is also NP-hard. But in practice, the sequential structure helps. After optimizing the minimum, many allocations are eliminated, reducing the search space for subsequent stages.
For goods with additive valuations, the problem has additional structure that can be exploited:
- Dynamic programming approaches (pseudopolynomial time for small utility values)
- Integer programming with hierarchical objectives
- Approximation algorithms that find near-leximin allocations efficiently
For our siblings: The problem is small enough that any approach works. For 10 heirs with 50 items, leximin becomes challenging but still feasible with modern optimization tools. For 100 agents with 1000 items, only approximations are practical.
When to Use Leximin vs. Other Welfare Functions🔗
Leximin isn’t always the right choice. Here’s guidance on when it’s most appropriate:
Use leximin when:
- Protecting the vulnerable is paramount: If leaving anyone with very little is unacceptable (social safety nets, critical resource allocation), leximin ensures this
- Pareto optimality matters: Unlike simple maximin, leximin doesn’t waste value
- Hierarchical egalitarianism is valued: The philosophical commitment to prioritize from bottom up resonates with stakeholders
- Problem size is manageable: Small-to-medium instances where computation is feasible
Use Nash welfare instead when:
- Smooth trade-offs are needed: Nash welfare’s continuous objective is easier to optimize for large problems
- Computational efficiency is critical: Nash welfare has polynomial-time algorithms; leximin generally doesn’t
- Moderate egalitarianism suffices: The logarithmic weighting of Nash may be enough without leximin’s extreme prioritization
Use utilitarian welfare when:
- Efficiency dominates equity: If maximizing total value is the primary goal
- Agents can compensate each other: When redistribution after allocation is feasible
- Computational speed is essential: Utilitarian optimization is trivial for additive valuations
Use fairness criteria (EF1, MMS) when:
- Individual rights trump collective optimization: If respecting equal entitlements matters more than welfare maximization
- No inter-agent comparisons are acceptable: Fairness criteria don’t require cardinal utility comparisons
For Maya, Jordan, and Sam, leximin might be appropriate if:
- They want to ensure no sibling is left with very little (protective insurance)
- They value efficiency (don’t want to waste potential welfare)
- They accept prioritizing whoever gets least
- The problem is small enough to compute exactly
If they instead want balanced optimization without extreme prioritization, Nash welfare might better match their values.
Synthesis: Leximin's Elegant Compromise🔗
Leximin represents a sophisticated middle ground in welfare theory:
More egalitarian than: Utilitarian welfare (which ignores distribution) and Nash welfare (which only moderately prioritizes the worse-off)
More efficient than: Simple maximin (which ignores Pareto improvements)
More computable than: Exact egalitarian ideals might require (leximin has structure that can be exploited)
The principle captures a moral intuition that many find compelling: prioritize helping people in order of how badly off they are, but once someone is helped, move on to help the next-worst-off. This is neither crude equality (everyone must get exactly the same) nor crude efficiency (maximize the total regardless of distribution). It’s sophisticated prioritarianism: focus on those at the bottom, working upward.
For practitioners, leximin provides a principled welfare function that balances multiple desiderata. For theorists, it demonstrates that apparently conflicting values (efficiency and equity) can be reconciled through clever formalization. And for our three siblings, it offers a way to operationalize the intuition: “Let’s make sure whoever gets least does as well as possible, then make sure the middle person does as well as possible, then optimize for the best-off”. A priority ordering that treats everyone fairly while avoiding waste.
In the next section, we’ll examine how leximin performs computationally compared to other welfare principles, and explore practical strategies for when exact computation isn’t feasible. The beauty of leximin is matched by its computational challenges—a fitting reminder that elegant theory doesn’t always translate to easy practice.
Computational Reality: Leximin in Practice🔗
We’ve established leximin’s philosophical elegance: it combines egalitarian prioritization with Pareto efficiency, offering a sophisticated compromise between equity and efficiency. We’ve seen that it can be computed through sequential optimization: maximize the minimum, then the second-minimum, and so on. But we’ve been vague about the computational cost.
Now we must confront the reality: leximin optimization is computationally hard. Not just NP-hard in the theoretical sense, but genuinely difficult in practice for problems of realistic size. The sequential structure that makes leximin philosophically appealing creates computational challenges that don’t affect simpler welfare functions.
Understanding these challenges is crucial for practitioners. Leximin isn’t a magical solution that always works. It’s a tool that excels in some contexts and struggles in others. Knowing when exact leximin computation is feasible, when approximations suffice, and when to abandon leximin entirely is essential for building real allocation systems.
Computing Leximin: The Complexity Landscape🔗
The fundamental problem: To find a leximin-optimal allocation, we must solve a sequence of constrained optimization problems, where each problem’s constraints include “achieve the optimal value from all previous stages.”
Formally, let’s denote the k-th stage optimization as finding allocations that maximize the k-th smallest utility, subject to all previous stages being optimal.
Stage 1: Find allocation(s) maximizing the minimum utility
- This is standard maximin optimization
- Strongly NP-hard even for additive valuations (Bezáková and Dani, 2005)
- No polynomial-time algorithm unless P=NP
Stage 2: Among all allocations achieving optimal minimum from Stage 1, find those maximizing second-minimum
- This is maximin on a constrained space (allocations from Stage 1)
- Still NP-hard, but on a potentially smaller search space
Stage k: Continue for all \( n \) agents
The total complexity is at least as hard as maximin (since Stage 1 is maximin), but potentially harder because:
- We must find all optimal solutions at each stage, not just one
- Constraining subsequent stages requires tracking which allocations achieved optimality
- Numerical precision issues arise when comparing “is this allocation optimal?” with floating-point utilities
Comparison to other welfare functions:
| Welfare Function | Complexity (additive valuations) | Feasible Problem Size |
|---|---|---|
| Utilitarian | \( O(nm) \) (greedy) | Millions of items |
| Nash | \( O(n^3m^3) \) (polynomial) | Hundreds of items |
| Simple Maximin | Strongly NP-hard | Tens of items (exact) |
| Leximin | Strongly NP-hard | Tens of items (exact) |
Leximin inherits maximin’s hardness but is conceptually more complex due to the sequential structure.
A saving grace: goods vs. chores: The complexity story differs based on what we’re allocating:
For goods (items with positive utility): Leximin is strongly NP-hard but has some exploitable structure. The positivity constraint helps pruning in branch-and-bound approaches. For small instances (our siblings with 3 agents and 8 items), exact computation via enumeration or integer programming is feasible.
For chores (items with negative utility or tasks agents want to avoid): The problem becomes even harder. The structure that helps with goods disappears. Moreover, ensuring the “maximum minimum” means making the most-burdened agent as unburdened as possible. The optimization reverses direction, and algorithms designed for goods often fail.
For mixed goods and chores: Hardest of all. The problem structure fractures: some agents seek items while others avoid tasks. Standard techniques for goods-only allocation don’t apply.
For our siblings dividing an inheritance, we’re in the goods-only case, which is the most favorable from a computational perspective. But even here, exact optimization isn’t cheap.
Sequential Optimization: How It Works and Why It's Expensive🔗
Let’s examine the sequential optimization approach in detail to understand both its power and its cost.
Algorithm sketch:
Stage 1: Maximize the minimum utility
- Solve: max z such that v_i(A_i) ≥ z for all i
- Let z₁* = optimal minimum utility
- Let S₁ = set of all allocations achieving minimum z₁*
Stage 2: Maximize the second-minimum utility
- Restricted to allocations in S₁
- Among allocations with minimum = z₁*, maximize the second-lowest
- Let z₂* = optimal second-minimum utility
- Let S₂ ⊆ S₁ = allocations achieving second-minimum z₂*
Stage k: Continue until one allocation remains (or equivalence)
Why this works:
The leximin ordering is defined by comparing sorted utility vectors position by position. By optimizing each position sequentially (from lowest to highest), we’re explicitly constructing allocations that win the lexicographic comparison at the earliest possible position.
If allocation A is leximin-optimal, then:
- A maximizes the minimum utility (else some B would have higher minimum, winning the leximin comparison at position 1)
- Among allocations with optimal minimum, A maximizes the second-minimum (else some B would win at position 2)
- And so on for all positions
The sequential approach builds the leximin optimum stage by stage.
Why this is expensive:
Exponential growth of solution sets: At Stage 1, we might find that 1,000 allocations all achieve the optimal minimum z₁*. Now Stage 2 must optimize over these 1,000 allocations. If we’re lucky, this eliminates most of them. If we’re unlucky, hundreds of allocations also achieve the optimal second-minimum z₂*, forcing Stage 3 to consider hundreds of candidates.
The size of \( S_k \) (allocations remaining after k stages) varies wildly:
- Best case: Each stage eliminates all but one allocation. After Stage 1, only one allocation achieves z₁*, so we’re done. Total cost: one maximin optimization.
- Worst case: Many allocations achieve optimal values at each stage. All \( n \) stages must be executed, each requiring optimization over potentially exponential candidates.
Constrained optimization costs: Each stage solves:
maximize: \( z_k \) (the k-th smallest utility)
subject to:
- Allocation constraints (items partitioned among agents)
- \( z_1 = z_1^* \) (minimum equals Stage 1 optimum)
- \( z_2 = z_2^* \) (second-minimum equals Stage 2 optimum)
- ...
- \( z_{k-1} = z_{k-1}^* \) (all previous stages optimal)
- Additional constraints defining \( z_k \) as the k-th smallest utility
The constraints accumulate. Each stage’s optimization is more complex than the last because we’re adding constraints from previous stages. This makes integer programming solvers work harder. Branch-and-bound trees grow larger, relaxations become weaker, and computation slows.
Comparison to single-stage optimization:
For Nash welfare, we solve one optimization:
maximize: Σ log(v_i(A_i))
Done. One problem, one solution.
For leximin, we solve \( n \) optimization problems in sequence, each dependent on the previous. Even if each individual stage were fast (it isn’t; Stage 1 is NP-hard maximin), the accumulation makes the total expensive.
Practical implications:
For tiny problems (n ≤ 5, m ≤ 15): Sequential optimization works. Modern MIP solvers handle each stage in seconds. Total time: minutes.
For small problems (n ≤ 10, m ≤ 50): Feasible but slow. Each stage might take minutes. Total time: tens of minutes to hours.
For medium problems (n ≤ 20, m ≤ 100): Borderline. Stage 1 might complete, but subsequent stages with accumulated constraints may timeout.
For large problems (n > 20, m > 100): Exact sequential optimization is hopeless. Must use approximations or heuristics.
For our siblings (3 agents, 8 items): Sequential optimization completes instantly. We could even enumerate all 3⁸ = 6,561 allocations, sort each by utilities, and compare lexicographically. Brute force works fine here. The sequential structure is overkill for such a small problem.
But imagine 10 heirs with 50 items: 10⁵⁰ allocations make enumeration impossible. Sequential optimization becomes the only viable exact approach—and even it may struggle.
Numerical Stability: The Hidden Challenge🔗
Beyond algorithmic complexity, leximin faces a subtler challenge: numerical stability when comparing floating-point utilities.
The problem: Leximin requires determining whether two allocations achieve “the same” utility at each position. But with floating-point arithmetic, exact equality is rare. Consider:
Allocation A: Agent 1 receives items worth 100.0 + 50.0 + 25.0 = 175.0
Allocation B: Agent 1 receives items worth 175.0
Are these equal? In exact arithmetic, yes. In floating-point:
```python
a = 100.0 + 50.0 + 25.0
b = 175.0
print(a == b)  # True (in this case, but not always!)

# But consider:
a = 0.1 + 0.2
b = 0.3
print(a == b)  # False! (0.30000000000000004 ≠ 0.3)
```
Why this matters for leximin:
At each stage, we must determine: “Does this allocation achieve the optimal value from previous stages?” If utilities are 175.000001 vs 175.000002 due to rounding errors, are these “equal” for purposes of constraining the next stage?
Conservative approach (tight tolerance): Treat 175.000001 and 175.000002 as different. Only allocations with exactly the same floating-point representation proceed.
- Risk: Spurious distinctions. Two essentially identical allocations are treated as different, forcing unnecessary computation in later stages.
- Result: Overly precise comparison creates computational waste.
Liberal approach (loose tolerance): Treat values within ε as equal (e.g., ε = 0.01 means anything in [174.99, 175.01] is “equal to 175”).
- Risk: False equivalence. Two genuinely different allocations might be incorrectly considered equivalent.
- Result: The leximin ordering might be violated, we might miss the true optimum.
The dilemma: No tolerance is perfect. Too tight and we waste computation on meaningless distinctions. Too loose and we violate the leximin principle.
Practical strategies:
Strategy 1: Rational arithmetic
Use exact rational numbers (fractions) instead of floating-point. Python’s fractions.Fraction or similar libraries allow exact representation of utilities like 1/3, avoiding rounding errors entirely.
```python
from fractions import Fraction

# Exact arithmetic
a = Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3)
b = Fraction(1, 1)
print(a == b)  # True (exactly)
```
Cost: Rational arithmetic is slower (10-100× slower than floating-point for large numerators/denominators). For small problems, this is acceptable. For large problems, it becomes prohibitive.
Strategy 2: Scaled integer arithmetic
If all utilities are multiples of some small unit (e.g., all dollar amounts, no cents), represent them as integers scaled by that unit.
```python
# All values in dollars (no cents)
maya_util = 450000    # $450,000
jordan_util = 360000  # $360,000

# Exact integer comparison
print(maya_util == jordan_util)  # False (exact)
```
Cost: Only works when utilities naturally have finite precision. For irrational values or arbitrary-precision needs, doesn’t help.
Strategy 3: Epsilon-tolerance with validation
Use floating-point with tolerance ε, but after finding the “optimal” allocation, verify that small perturbations don’t improve it. If they do, the tolerance was too loose: refine and retry.
```python
EPSILON = 1e-6

def approximately_equal(a, b, eps=EPSILON):
    return abs(a - b) < eps

# Compare utilities with tolerance
if approximately_equal(util_a, util_b):
    # Treat as equal for leximin purposes
    pass
```
Cost: Additional validation step. Doesn’t eliminate the problem, just reduces its impact.
For our siblings: With dollar amounts (integers), we can use exact integer arithmetic. No numerical stability issues arise. But if we had fractional shares or non-monetary values requiring floating-point, careful attention to tolerance would be essential.
In practice: Most implementations of leximin use floating-point with a small tolerance (e.g., ε = 10⁻⁶) and warn users that results near tie-breaking boundaries might be sensitive to numerical precision. For high-stakes allocations where “close” utilities might lead to different outcomes, rational arithmetic or scaled integers should be used despite the performance cost.
When Exact Computation Is Infeasible: Approximation Strategies🔗
For problems too large for exact sequential optimization, we need approximation strategies. The question is: what does it mean to “approximate” leximin?
Unlike utilitarian or Nash welfare (where we can say “achieve 90% of optimal welfare”), leximin is an ordering relation, not a single value. We can’t easily quantify “how leximin” an allocation is.
Approximation approaches:
Approach 1: Restricted Search Space🔗
Rather than searching all possible allocations, search a structured subset.
Round-robin allocation: Agents take turns selecting items in some order. This is fast (O(nm)) and often produces reasonable leximin-style outcomes (tends to balance utilities across agents).
Greedy leximin heuristic: At each step, assign the next item to the agent who would benefit most (by their marginal utility), subject to trying to equalize utilities. This mimics leximin’s egalitarian spirit without full optimization.
Advantage: Fast, simple to implement, often works well in practice.
Disadvantage: No guarantee of how close to true leximin we get. Might miss much better allocations.
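As a sketch of the round-robin heuristic (our own minimal version, assuming additive valuations and the data structures from the earlier listing):

```python
def round_robin(valuations, items, agents):
    """Agents take turns picking their most-valued remaining item."""
    remaining = set(items)
    bundles = {agent: [] for agent in agents}
    turn = 0
    while remaining:
        agent = agents[turn % len(agents)]
        pick = max(remaining, key=lambda item: valuations[agent][item])
        bundles[agent].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles
```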
Approach 2: Partial Sequential Optimization🔗
Perform only the first k stages of sequential optimization, then use heuristics for remaining stages.
For instance:
- Stage 1: Exactly maximize the minimum (solve maximin fully)
- Stage 2: Exactly maximize second-minimum among Stage 1 optimal allocations
- Stages 3-n: Use heuristics (greedy assignment, local search)
Advantage: Ensures the most important positions (minimum, second-minimum) are optimal. Later positions, which matter less in leximin ordering, can be approximate.
Disadvantage: Computation still expensive for Stages 1-2 if the problem is large. Heuristic stages might introduce significant suboptimality.
Approach 3: Simulated Annealing / Local Search🔗
Start with a random or heuristic allocation. Repeatedly perturb it (swap items between agents) and accept improvements measured by leximin comparison. Continue until no local improvements exist.
```python
import random

def generate_neighbor(allocation, n_agents):
    """Generate a neighbor: reassign one randomly chosen item to a random agent."""
    neighbor = list(allocation)
    neighbor[random.randrange(len(neighbor))] = random.randrange(n_agents)
    return tuple(neighbor)

def leximin_local_search(initial_allocation, valuations, items, agents,
                         max_iterations=10000):
    """
    Local search for leximin optimization.
    Reuses compute_utilities and leximin_compare from the earlier listing.
    """
    current = initial_allocation
    current_utils = compute_utilities(current, valuations, items, agents)
    for iteration in range(max_iterations):
        neighbor = generate_neighbor(current, len(agents))
        neighbor_utils = compute_utilities(neighbor, valuations, items, agents)
        # Accept if leximin-better
        if leximin_compare(neighbor_utils, current_utils) > 0:
            current = neighbor
            current_utils = neighbor_utils
    # Optional: simulated-annealing-style acceptance of worse solutions
    # early on helps escape local optima
    return current, current_utils
```
Advantage: Flexible, can run for as long as time permits. Often finds good solutions even when exact optimization times out.
Disadvantage: No guarantee of optimality. Might get stuck in local optima where no single swap improves leximin but larger rearrangements would.
Approach 4: Upper and Lower Bounds🔗
Compute bounds on the leximin-optimal utilities at each position:
Upper bound: Solve a relaxed problem (e.g., allow fractional item assignments). The resulting utilities are an upper bound on what any integral allocation can achieve.
Lower bound: Use a heuristic to find a feasible allocation. Its utilities provide a lower bound.
If the gap between upper and lower bounds is small (e.g., within 5%), the heuristic solution is provably close to optimal.
Advantage: Provides quality guarantees (“our solution is within 10% of optimal at each position”).
Disadvantage: Computing tight bounds may itself be expensive. Loose bounds don’t provide meaningful guarantees.
Which Approximation to Use?🔗
The choice depends on problem characteristics and requirements:
For medium-sized problems (n=10-20, m=50-100):
- Try partial sequential optimization (Stages 1-2 exact, rest heuristic)
- If that’s too slow, use local search initialized from a good heuristic (round-robin or greedy)
For large problems (n > 20, m > 100):
- Use fast heuristics (round-robin, greedy balancing) without attempting exact optimization
- Validate the result using local search refinement
- If quality guarantees are critical, compute bounds to assess solution quality
For problems with special structure:
- If items are nearly homogeneous: simple round-robin works well
- If agents have sparse preferences (value few items): exploit sparsity in optimization
- If problem has hierarchy (items in categories): solve hierarchically (allocate categories, then items within categories)
For our siblings: Exact optimization is trivial, so no approximation needed. But if we imagine 20 heirs with 200 items:
- Try exact Stage 1 (maximize minimum utility) using MIP solver with 1-hour timeout
- If successful, try exact Stage 2 (maximize second-minimum) with 1-hour timeout
- If successful, continue until timeout or completion
- For remaining stages, use greedy heuristic: assign each remaining item to the agent who values it most, subject to trying to balance utilities
- Validate using local search: try swaps and see if leximin improves
This hybrid approach balances exactness where feasible with pragmatism where necessary.
Comparison to Nash Welfare: Trading Philosophy for Computation🔗
We’ve seen that leximin is computationally expensive. Nash welfare, by contrast, has polynomial-time algorithms for additive valuations. This raises a pragmatic question: Should we sacrifice leximin’s philosophical elegance for Nash’s computational tractability?
The trade-off:
Leximin advantages:
- Stronger egalitarian commitment (absolute priority to worst-off)
- Clearer interpretation (maximize minimum, then second-minimum, etc.)
- No arbitrary parameters or functional forms
- Directly operationalizes Rawlsian justice
Nash advantages:
- Polynomial-time computation for additive valuations
- Scale-invariant (doesn’t require normalizing utilities)
- Smooth objective (easier for optimization algorithms)
- Still prioritizes worse-off agents (via logarithm’s concavity)
When to choose Nash over leximin:
Problem size prohibits leximin: If n > 20 or m > 100, exact leximin becomes questionable. Nash welfare’s polynomial algorithms scale better.
Near-optimal is sufficient: If leximin’s extreme prioritization isn’t critical and “mostly equal” is good enough, Nash welfare’s logarithmic prioritization may suffice.
Computational resources are limited: Real-time systems, resource-constrained environments, or time-sensitive allocations can’t afford leximin’s expense.
Philosophical commitment to smoothness: If you believe prioritization should be continuous rather than discrete (leximin has discontinuities at tie points), Nash’s logarithmic weighting is more appealing.
When to insist on leximin:
Philosophical commitment to maximin: If Rawlsian justice is non-negotiable (the worst-off position must be absolutely maximized), then only leximin suffices.
Small problem with high stakes: If n ≤ 10, m ≤ 50, and the allocation is legally binding or irreversible, investing in exact leximin computation is justified.
Pareto optimality is essential: If you need both egalitarianism and efficiency, leximin guarantees Pareto optimality while simple maximin doesn’t.
For our siblings: Both are feasible computationally. The choice is purely philosophical:
- If they prioritize protecting whoever gets least above all else: leximin
- If they want balanced fairness and efficiency with smooth trade-offs: Nash
- If they want maximum total family wealth: utilitarian
The computational landscape doesn’t constrain their choice; philosophy does.
Practical Implementation Lessons🔗
From real-world implementations of leximin (Spliddit and others), several practical lessons emerge:
Lesson 1: Start simple, escalate only if needed
Don’t begin with full sequential optimization. Try:
- Round-robin allocation
- Check if it’s leximin-optimal (or close)
- If not, run local search to improve
- Only if that fails, attempt exact sequential optimization
Most problems have structure that makes simple heuristics work well. Reserve expensive computation for hard cases.
Lesson 2: Set computational budgets
Don’t let optimization run forever. Set time limits:
- Stage 1 (maximin): 5 minutes maximum
- Stage 2: 5 minutes maximum
- If any stage times out, switch to heuristics
Return the best allocation found within the budget. Document that it’s approximate.
Lesson 3: Use warm starts
When running sequential optimization stages, use the best allocation from the previous stage’s heuristic as a warm start for MIP solvers. This dramatically speeds up convergence.
Lesson 4: Validate solutions
After finding an allocation claimed to be leximin-optimal, validate:
- Check that it’s Pareto optimal (no beneficial swaps exist)
- Check that no allocation improves the minimum utility
- Verify numerical precision hasn’t corrupted comparisons
Validation catches implementation bugs and numerical instability.
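As an example of the first check, here is a minimal sketch (our own helper, reusing `compute_utilities`) that verifies no single pairwise item swap Pareto-improves the allocation. Passing it is necessary, but not sufficient, for full Pareto optimality:

```python
def no_improving_swap(allocation, valuations, items, agents):
    """Return True if no pairwise item swap Pareto-improves the allocation."""
    base = compute_utilities(allocation, valuations, items, agents)
    m = len(items)
    for i in range(m):
        for j in range(i + 1, m):
            if allocation[i] == allocation[j]:
                continue  # same owner: swap changes nothing
            alt = list(allocation)
            alt[i], alt[j] = alt[j], alt[i]
            utils = compute_utilities(tuple(alt), valuations, items, agents)
            if all(u >= b for u, b in zip(utils, base)) and utils != base:
                return False  # found a Pareto improvement
    return True
```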
Lesson 5: Communicate uncertainty
When using approximations, be honest with stakeholders:
- “This allocation maximizes the minimum utility exactly, but subsequent positions are approximately optimized”
- “We’ve computed this allocation using a heuristic; it may not be the true leximin optimum”
- “Due to computational constraints, we cannot guarantee this is optimal, but we’ve validated it locally”
Transparency about algorithmic limitations builds trust.
The Bottom Line: When to Use Leximin🔗
Synthesizing computational considerations with philosophical ones:
Leximin is the right choice when:
- Problem size is small (n ≤ 10, m ≤ 50)
- Rawlsian justice is the agreed-upon ethical framework
- Pareto optimality matters (simple maximin won’t suffice)
- Computational resources are available (time, solver access)
- High stakes justify computational investment
Leximin is the wrong choice when:
- Problem size is large (n > 20, m > 100)
- Real-time allocation is required (must respond in seconds)
- Smooth trade-offs are preferred over extreme prioritization
- Nash welfare’s philosophical commitments are acceptable
- Computational constraints are severe
For our siblings: Leximin is perfectly feasible. The 3-agent, 8-item problem is tiny by computational standards. If they philosophically endorse Rawlsian prioritization, ensuring the worst-off sibling is maximally protected, leximin is both appropriate and implementable.
But if they find leximin’s extreme prioritization troubling (“Why should Maya getting $1 less matter so much more than Jordan getting $1 less?”), Nash welfare offers similar egalitarian spirit with smoother trade-offs and guaranteed polynomial-time computation.
The computational reality informs rather than dictates their choice. Understanding that leximin is expensive helps them appreciate what they’re committing to: a welfare function that takes egalitarianism seriously enough to invest significant computational effort in achieving it. That seriousness might be exactly what they want, or it might suggest that a more computationally tractable alternative better matches their actual priorities.
In the end, computational complexity is a mirror reflecting philosophical complexity. The fact that perfectly equalizing outcomes is computationally hard isn’t a bug, it’s a feature revealing the genuine difficulty of the moral problem we’re trying to solve.
Conclusion: Bridging Individual Rights and Collective Good🔗
We began Part IV confronting a fundamental tension: Maya, Jordan, and Sam can achieve an allocation that’s fair to each individual (EF1, proportional), or they can achieve an allocation that maximizes family wealth, but often not both. This forced us to make explicit a choice that’s usually implicit in allocation problems: Are we optimizing for individual fairness or collective welfare?
What we’ve discovered is that this isn’t a binary choice, but a spectrum of possibilities. Let’s synthesize what we’ve learned.
Three Ways to Aggregate Individual Utility🔗
We explored three major social welfare functions, each embodying a different ethical framework:
Utilitarian Welfare (sum of utilities) represents pure efficiency: maximize total value created, regardless of how it’s distributed. For additive valuations, greedy allocation achieves the optimum in \( O(nm) \) time, making it computationally cheap. But it can justify extreme inequality: if giving everything to one agent maximizes the sum, utilitarianism endorses that outcome. The philosophical commitment: aggregate outcomes matter more than individual treatment.
Nash Welfare (product of utilities) provides a sophisticated middle ground. By multiplying utilities rather than summing them, it achieves Pareto optimality (no wasted value) while severely penalizing allocations that leave anyone with very little. The logarithmic transformation reveals its egalitarian character: gains to worse-off agents count for more. Remarkably, maximizing Nash welfare for additive valuations is polynomial-time computable despite appearing non-convex, a major breakthrough in the field. The philosophical commitment: balance efficiency with equity, protecting the worse-off through diminishing returns.
Maximin Welfare (minimum utility) embodies Rawlsian justice: maximize the worst-off position before considering anyone else. Extended to leximin (lexicographic maximin), it provides a complete ordering while maintaining Pareto optimality. But it is strongly NP-hard to compute, and hard even to approximate well. The philosophical commitment: extreme prioritization of the vulnerable, even at the cost of large losses to others.
These welfare functions span a spectrum from efficiency-focused to equity-focused:
Utilitarian --- Nash --- Maximin
Efficiency <-------------------> Equity
The Computational Landscape🔗
The computational story parallels the philosophical one:
| Welfare Function | Complexity | Feasible Size | Philosophical Character |
|---|---|---|---|
| Utilitarian | \( O(nm) \) | Millions | Pure efficiency |
| Nash | \( O(n^3m^3) \) | Hundreds | Balanced |
| Maximin/Leximin | Strongly NP-hard | Tens | Pure equity |
Efficiency is computationally cheap; equity is expensive. This reflects the fundamental structure of these problems. Aggregating value (utilitarian) decomposes naturally: assign each item independently to whoever values it most. Equalizing outcomes (maximin) requires coordinating all assignments simultaneously to ensure no one is left behind, creating combinatorial complexity.
For Maya, Jordan, and Sam (3 agents, 8 items), this landscape is forgiving: all three welfare functions are easily computable. For 100 agents with 1000 items, only utilitarian optimization remains tractable; Nash requires approximations; leximin is hopeless without heuristics.
What We're Really Optimizing: Making Implicit Values Explicit🔗
The choice of social welfare function is a value judgment disguised as mathematics. When we write down a formula like SW = Σvᵢ or SW = Πvᵢ, we’re formalizing ethical commitments:
Utilitarian welfare assumes:
- Individual utilities can be meaningfully compared and aggregated
- A unit of utility to Maya equals a unit to Jordan
- Inequality is acceptable if it increases the total
- The social planner’s perspective (external optimization) is legitimate
Nash welfare assumes:
- Scale-invariant comparisons are appropriate
- Gains to worse-off agents should count for more (via logarithm)
- Pareto efficiency is non-negotiable
- Risk aversion behind the veil of ignorance justifies egalitarian bias
Maximin/leximin assumes:
- The worst-off position deserves absolute priority
- Extreme risk aversion: protect against the worst case at any cost
- Hierarchical prioritization from bottom to top
- Rights-like guarantees for the least advantaged
None of these is objectively correct. They represent different answers to profound questions: How much does inequality matter? Should we prioritize aggregate abundance or equitable distribution? Can we impose external judgments about what’s collectively best, or must we respect individual claims to fair treatment?
When to Choose Welfare Over Fairness🔗
Given this philosophical complexity, when should we optimize for social welfare rather than individual fairness?
Choose welfare optimization when:
- Agents agree on collective goals: If Maya, Jordan, and Sam all say “We want to maximize family wealth, even if it means some inequality,” welfare optimization embodies their shared value.
- Compensation is feasible: If the allocation that maximizes welfare can be adjusted through transfers (Maya pays Jordan to accept less), welfare optimization identifies the value-maximizing baseline before redistribution.
- Efficiency dominates fairness in stakes: When creating $100,000 more value matters much more than achieving perfect EF1, welfare optimization is justified.
- Ongoing relationships allow balancing: If this allocation is one of many over time, inequality now can be balanced by inequality later. Pursue efficiency in each round, equality across rounds.
- Trust is high: When agents trust each other not to exploit inequality, welfare optimization doesn’t threaten relationships.
Prioritize fairness over welfare when:
- Agents have equal entitlements: As co-heirs, Maya, Jordan, and Sam have equal legal and moral claims. Violating those claims requires stronger justification than efficiency alone provides.
- No compensation mechanism exists: If Jordan can’t be compensated for receiving less, welfare-optimal allocations that hurt Jordan are unjust.
- Relationships are fragile: Inequality breeds resentment. If efficiency gains are small ($10k) but fairness violations large (Jordan receives 30% less), protect fairness.
- Rights trump outcomes: From a deontological perspective, individual rights aren’t negotiable for collective gains. Fairness is a side constraint, not an objective function.
- Stakes are high and irreversible: One-shot allocations with permanent consequences require stronger fairness guarantees than repeated allocations where learning and adjustment are possible.
The Leximin Synthesis: Having It Both Ways🔗
The leximin principle represents an elegant attempt to reconcile welfare and fairness. By maximizing the minimum utility, then the second-minimum, and so on, leximin:
- Prioritizes the worst-off (egalitarian commitment)
- Achieves Pareto optimality (efficiency guarantee)
- Creates a complete ordering (unlike simple maximin)
- Embodies Rawlsian prioritarianism (maximize from bottom up)
Leximin is computationally expensive and can still sacrifice fairness criteria like envy-freeness. But it shows that the welfare-versus-fairness dichotomy isn’t absolute. Sophisticated welfare functions can build in egalitarian commitments, achieving meaningful equity while pursuing efficiency.
For practitioners, leximin offers a principled middle path when stakeholders value both efficiency and equity but prioritize protecting the vulnerable. For theorists, it demonstrates that apparently contradictory values can be reconciled through clever formalization.
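The “complete ordering” point is easy to see concretely: comparing utility vectors sorted in ascending order is exactly the leximin order, and lexicographic list comparison implements it directly. A tiny sketch with made-up utility numbers:

```python
# Two hypothetical allocations whose utility vectors share the same minimum (40):
u_a = sorted([90, 40, 55])   # allocation A -> [40, 55, 90]
u_b = sorted([70, 40, 72])   # allocation B -> [40, 70, 72]

print(min(u_a) == min(u_b))  # True: simple maximin sees a tie and cannot choose
print(u_b > u_a)             # True: leximin ranks B above A (70 > 55 in second place)
```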
Bringing It Back to Our Siblings🔗
Throughout Part IV, we’ve been asking: What should Maya, Jordan, and Sam optimize for? This question has no universal answer; it depends on their values.
If they say: “We want to maximize what the family gets from these assets”
Then use: Utilitarian welfare (max-sum allocation)
Computational approach: Greedy assignment is trivially fast
Philosophical commitment: Efficiency over equality
Likely outcome: Better matching (each gets items they value most) but potential inequality
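The greedy assignment just mentioned is short enough to show in full. A sketch assuming the same kind of additive valuations matrix as above (rows are agents, columns are items):

```python
def utilitarian_allocation(valuations):
    """Greedy max-sum: give each item to whoever values it most.
    Under additive valuations this per-item rule is exactly optimal."""
    n_agents, n_items = len(valuations), len(valuations[0])
    bundles = [[] for _ in range(n_agents)]
    for item in range(n_items):
        winner = max(range(n_agents), key=lambda a: valuations[a][item])
        bundles[winner].append(item)
    return bundles
```

Because each item’s contribution to the sum is independent of every other assignment, the per-item argmax is globally optimal, which is precisely why this objective scales while the others don’t.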
If they say: “We want everyone to do well, with extra concern for whoever gets least”
Then use: Nash welfare (max-product allocation)
Computational approach: Polynomial-time algorithm or MIP solver
Philosophical commitment: Balanced efficiency and equity
Likely outcome: Reasonable matching with protection against extreme inequality
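Exact Nash maximization is typically handed to a MIP solver or to the specialized polynomial-time algorithm cited in Further Reading. For intuition, here is a simple local-search sketch, included purely for illustration: it is a heuristic with no optimality guarantee, not the algorithm from the literature. It climbs from the max-sum allocation via single-item moves that raise the product of utilities:

```python
from math import log

def nash_local_search(valuations):
    """Hill-climbing heuristic for Nash welfare. A sketch only: it can stall
    in a local optimum."""
    n_agents, n_items = len(valuations), len(valuations[0])
    # Start from the greedy max-sum allocation: owner[i] holds item i.
    owner = [max(range(n_agents), key=lambda a: valuations[a][i])
             for i in range(n_items)]

    def log_nash(ow):
        utils = [0] * n_agents
        for item, agent in enumerate(ow):
            utils[agent] += valuations[agent][item]
        # Sum of logs = log of product; -inf whenever someone has zero utility.
        return sum(log(u) if u > 0 else float("-inf") for u in utils)

    improved = True
    while improved:
        improved = False
        for item in range(n_items):
            for agent in range(n_agents):
                candidate = owner.copy()
                candidate[item] = agent
                if log_nash(candidate) > log_nash(owner):
                    owner, improved = candidate, True
    return owner
```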
If they say: “We want to make sure no one ends up in a bad position, even if it costs the family overall”
Then use: Leximin welfare (lexicographic maximin)
Computational approach: Sequential optimization or MIP with hierarchical objectives
Philosophical commitment: Maximal protection of worst-off
Likely outcome: Most equal distribution, potentially sacrificing some matching efficiency
If they say: “We want to respect our equal standing as heirs, regardless of total value”
Then use: Fairness criteria (EF1, proportionality) from Part III
Computational approach: Don’t optimize welfare at all; use fairness-first procedures like round-robin
Philosophical commitment: Individual rights over collective optimization
Likely outcome: Each sibling satisfied with their treatment, even if total value is lower
The sophisticated approach recognizes that most people care about multiple dimensions: some efficiency, some fairness, some protection of the worst-off. In Part V, we’ll explore algorithms that explicitly optimize for multiple objectives simultaneously: seeking allocations that are both reasonably fair and reasonably efficient, using mechanisms that achieve “good enough” on several criteria at once.
The Broader Lesson: Design with Eyes Open🔗
The most important lesson from Part IV isn’t which welfare function to choose: it’s the importance of making the choice consciously.
Every allocation mechanism embeds values, whether designers acknowledge them or not. An algorithm that maximizes the sum is making a utilitarian commitment. A protocol that prioritizes the worst-off is making a Rawlsian commitment. Even an algorithm claiming to be “neutral” is making a commitment, perhaps to procedural fairness over outcome fairness, or to equal treatment over differential need.
The failure mode isn’t choosing the “wrong” welfare function; there is no objectively wrong choice. The failure mode is choosing unconsciously, allowing implicit values to drive outcomes without examining whether those values align with stakeholders’ considered judgments.
Fair division at its best makes the implicit explicit:
- Here are the possible objectives (welfare functions, fairness criteria)
- Here’s what each assumes philosophically
- Here’s what each costs computationally
- Here’s what each produces in this case
- Now you can choose, understanding the full implications
This transparency doesn’t resolve the tension between individual rights and collective good. That tension is fundamental and unresolvable in the abstract. But it transforms an intractable philosophical debate into a concrete, informed choice that stakeholders can make for themselves.
For Maya, Jordan, and Sam, Part IV has shown that neither maximizing family wealth nor treating everyone identically is the only option. They can pursue efficiency, equity, or sophisticated combinations. They can prioritize absolute standards (proportionality, MMS) or relative comparisons (envy-freeness). They can optimize for the family as a whole or protect each individual’s rights.
The tools we’ve discussed give them the power to operationalize their values, whatever those values are. The philosophical analysis gives them the understanding to know what values they’re operationalizing.
That’s what makes fair division more than mathematics: it’s applied ethics, made rigorous through computation.
Looking Ahead: Part V and Beyond🔗
Part IV explored the welfare perspective in pure form: optimize for collective good, possibly at the expense of individual fairness. Part III explored fairness in pure form: protect individual rights, possibly at the expense of collective efficiency.
But real allocation problems rarely fit cleanly into one perspective or the other. Maya, Jordan, and Sam likely care about both fairness and welfare. They want efficient matching (Nash welfare) subject to no one being left behind (MMS constraints). Or they want envy-freeness (EF1) subject to reasonable total value (not too inefficient).
Part V: Sophisticated Mechanisms explores exactly this hybrid approach:
- Pareto-efficient fair allocations: Achieving EF1 while ensuring no wasted value
- Bounded sharing: When items must sometimes be split, but we want to minimize splitting
- Weighted fairness: When agents have different priorities or needs, formalizing differential treatment fairly
- Multi-objective optimization: Explicitly trading off multiple criteria using mathematical frameworks
We’ll see algorithms that find allocations simultaneously achieving multiple desiderata: not perfectly, but “well enough” on each dimension. These sophisticated mechanisms represent the practical frontier: they acknowledge that pure optimization is often too strong (violates fairness) and pure fairness constraints are often too weak (waste value), and they seek principled compromises.
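One of the simplest such frameworks is scalarization: blend objectives with an explicit weight so the trade-off is visible rather than hidden. A hypothetical sketch (the weight w and the function name are illustrative, not a mechanism from the literature):

```python
def blended_score(utils, w=0.7):
    """Weighted compromise between the utilitarian objective (sum) and the
    egalitarian objective (min). w = 1 recovers pure max-sum; w = 0, pure maximin."""
    return w * sum(utils) + (1 - w) * min(utils)

# With the brute-force setup from earlier:
# best = max(allocations, key=lambda a: blended_score(utilities(a)))
```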
For practitioners building real allocation systems (like welfare systems, computational resource allocators, or school choice mechanisms), Part V’s multi-objective approach is often the most applicable. The world rarely presents problems where we can optimize one thing while ignoring others.
But understanding the pure perspectives (fairness in Part III, welfare in Part IV) is essential for understanding their combinations. You cannot intelligently balance efficiency and equity if you don’t understand what each means, what achieving each costs, and what philosophical commitments each embeds.
We’ve built that understanding. Now, in Part V, we’ll see how to use it.
Further Reading🔗
This section provides curated resources for deepening your understanding of social welfare functions, welfare maximization algorithms, and the philosophical underpinnings of collective optimization. Resources are organized by topic and level, emphasizing both foundational papers that established key results and modern work extending the frontier.
Social Welfare Functions: Theory and Philosophy🔗
Foundations of welfare economics and social choice:
Bentham, Jeremy. An Introduction to the Principles of Morals and Legislation. 1789.
The foundational text of utilitarian ethics. While not focused on fair division specifically, Bentham’s “greatest happiness for the greatest number” principle underlies utilitarian social welfare functions. Essential for understanding the philosophical origins of welfare maximization.
Rawls, John. A Theory of Justice. Harvard University Press, 1971.
Rawls’s difference principle and veil of ignorance argument provide the philosophical foundation for maximin welfare. His concept of “primary goods” and focus on the worst-off position directly motivate leximin optimization. Graduate-level political philosophy that’s essential for understanding egalitarian welfare functions.
Sen, Amartya. Collective Choice and Social Welfare (Expanded Edition). Harvard University Press, 2017.
Comprehensive treatment of social choice theory and welfare economics. Sen’s work on interpersonal comparisons, capability approach, and the foundations of welfare measurement inform modern computational approaches. Accessible to advanced undergraduates with mathematical background.
Moulin, Hervé. Fair Division and Collective Welfare. MIT Press, 2003.
Graduate-level textbook connecting fair division to cooperative game theory and social choice. Excellent treatment of welfare functions, axiomatic characterizations, and impossibility results. Mathematically rigorous but well-motivated.
Nash Bargaining and Geometric Mean Welfare🔗
The breakthrough results on Nash welfare:
Nash, John F. “The Bargaining Problem.” Econometrica 18.2 (1950): 155-162.
The original paper introducing Nash bargaining and proving that maximizing the product of utilities emerges from certain fairness axioms. Short, elegant, and foundational. Essential reading for understanding why Nash welfare matters.
Caragiannis, Ioannis, David Kurokawa, Hervé Moulin, Ariel D. Procaccia, Nisarg Shah, and Junxing Wang. “The Unreasonable Fairness of Maximum Nash Welfare.” ACM Transactions on Economics and Computation 7.3 (2019): 1-32.
Shows that maximizing Nash welfare for indivisible goods achieves remarkably strong fairness properties including EF1 and approximations to proportionality. Bridges welfare optimization and fairness criteria. Highly influential for connecting efficiency and equity.
Cole, Richard, and Vasilis Gkatzelis. “Approximating the Nash Social Welfare with Indivisible Items.” SIAM Journal on Computing 47.3 (2018): 1211-1236.
The breakthrough showing that maximizing Nash welfare for goods with additive valuations can be done in polynomial time. Technical but accessible to readers with algorithms background. Resolves what seemed like a hard non-convex optimization problem.
Anari, Nima, Shayan Oveis Gharan, Amin Saberi, and Mohit Singh. “Nash Social Welfare, Matrix Permanent, and Stable Polynomials.” Theory of Computing 16.3 (2020): 1-55.
Further improves Nash welfare algorithms using deep connections to polynomials and matroids. Advanced but represents state-of-the-art. For readers interested in the mathematical techniques behind efficient Nash welfare computation.
Maximin and Leximin: Egalitarian Welfare🔗
Theory and algorithms for maximin welfare:
Bezáková, Ivona, and Varsha Dani. “Allocating Indivisible Goods.” ACM SIGecom Exchanges 5.3 (2005): 11-18.
Proves that maximin allocation is strongly NP-hard even for simple cases. Establishes fundamental computational barriers for egalitarian welfare. Short and readable introduction to complexity results.
Asadpour, Arash, Uriel Feige, and Amin Saberi. “Santa Claus Meets Hypergraph Matchings.” ACM Transactions on Algorithms 8.3 (2012): Article 24.
Sophisticated approximation algorithm for a variant of maximin allocation (the Santa Claus problem). Shows that approximation is possible despite hardness of exact optimization. Advanced reading for those interested in approximation techniques.
Bansal, Nikhil, and Maxim Sviridenko. “The Santa Claus Problem.” Proceedings of STOC, 2006.
Early work on approximating maximin welfare for restricted valuations. Important for understanding which special cases are tractable. Graduate-level algorithmic content.
Computational Complexity of Welfare Maximization🔗
Understanding hardness and approximability:
Bouveret, Sylvain, Yann Chevaleyre, and Nicolas Maudet. “Fair Allocation of Indivisible Goods.” Handbook of Computational Social Choice. Cambridge University Press, 2016.
Chapter-length survey covering computational aspects of fair allocation including welfare maximization. Comprehensive complexity analysis across different problem variants. Excellent reference for understanding what’s tractable and what’s not.
Barman, Siddharth, Sanath Kumar Krishnamurthy, and Rohit Vaish. “Finding Fair and Efficient Allocations.” Proceedings of ACM EC, 2018.
Studies algorithms achieving both fairness and efficiency for different welfare functions. Shows trade-offs between computational complexity and approximation quality. Recent work reflecting current state-of-the-art.
Nguyen, Trung Thanh, Magnus Roos, and Jörg Rothe. “A Survey of Approximability and Inapproximability Results for Social Welfare Optimization in Multiagent Resource Allocation.” Annals of Mathematics and Artificial Intelligence 68.1-3 (2013): 65-90.
Comprehensive survey of approximation results for various welfare functions. Covers both positive results (polynomial-time approximations) and negative results (inapproximability). Technical but thorough.
Leximin: Theory and Practice🔗
The lexicographic refinement of maximin:
Bouveret, Sylvain, Katarína Cechlárová, Edith Elkind, Ayumi Igarashi, and Dominik Peters. “Fair Division of a Graph.” Proceedings of IJCAI, 2017.
Studies leximin for fair division on graphs (where items have network structure). Shows how leximin properties extend to structured allocation problems. Good introduction to leximin with modern applications.
Ghodsi, Ali, Matei Zaharia, Benjamin Hindman, Andy Konwinski, Scott Shenker, and Ion Stoica. “Dominant Resource Fairness: Fair Allocation of Multiple Resource Types.” Proceedings of USENIX NSDI, 2011.
Applies leximin-like principles to cloud computing resource allocation. Shows practical deployment at scale (Apache Mesos). Excellent case study of theoretical concepts meeting real systems.
Procaccia, Ariel D. “Cake Cutting Algorithms.” Handbook of Computational Social Choice. Cambridge University Press, 2016.
Survey chapter covering cake-cutting and its extensions. Discusses leximin for both divisible and indivisible goods. Accessible introduction by one of the field’s leading researchers.
Pareto Optimality and Efficiency🔗
Understanding efficiency in fair division:
Pareto, Vilfredo. Manual of Political Economy. 1906.
The origin of Pareto efficiency. While not about fair division specifically, Pareto’s concept of “optimality” is fundamental to understanding welfare maximization. Historical interest.
Chevaleyre, Yann, Ulle Endriss, Jérôme Lang, and Nicolas Maudet. “A Short Introduction to Computational Social Choice.” SOFSEM 2007: Theory and Practice of Computer Science. Springer, 2007.
Introductory survey covering Pareto optimality in multi-agent resource allocation. Accessible to advanced undergraduates. Good starting point for understanding efficiency concepts.
Moulin, Hervé. “Axioms for Cooperative Decision Making.” Econometric Society Monographs 15 (1988).
Axiomatic treatment of cooperative game theory including efficiency axioms. Graduate level but foundational for understanding why Pareto optimality matters.
Applied Welfare Optimization🔗
Real-world implementations and case studies:
Goldman, Jonathan, and Ariel D. Procaccia. “Spliddit: Unleashing Fair Division Algorithms.” SIGecom Exchanges 13.2 (2015): 41-46.
Description of the Spliddit platform (http://spliddit.org) which has served over 100,000 users with fair division algorithms. Shows practical deployment of Nash welfare and leximin optimization. Excellent case study of theory meeting practice.
Ghodsi, Ali, Matei Zaharia, Scott Shenker, and Ion Stoica. “Choosy: Max-min Fair Sharing for Datacenter Jobs with Constraints.” Proceedings of EuroSys, 2013.
Applies maximin fairness to real datacenter scheduling at Facebook scale. Shows how theoretical concepts scale to production systems. Important for practitioners building allocation systems.
Othman, Abraham, Tuomas Sandholm, and Eric Budish. “Finding Approximate Competitive Equilibria: Efficient and Fair Course Allocation.” Proceedings of AAMAS, 2010.
Uses welfare optimization for course allocation at Wharton Business School. Real-world case study with over 4,000 students. Shows how Nash welfare approximations work in practice.
Ethical Foundations and Critiques🔗
Ethics and political philosophy for welfare functions:
Harsanyi, John C. “Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility.” Journal of Political Economy 63.4 (1955): 309-321.
Argues that rational agents behind the veil of ignorance would choose utilitarianism (expected utility maximization) rather than Rawls’s maximin. Important counterpoint to Rawlsian egalitarianism.
Dworkin, Ronald. “What is Equality? Part 2: Equality of Resources.” Philosophy & Public Affairs 10.4 (1981): 283-345.
Philosophical argument for resource equality and the problem of expensive preferences. Informs debates about utility normalization and interpersonal comparisons in welfare functions.
Nozick, Robert. Anarchy, State, and Utopia. Basic Books, 1974.
Libertarian critique of aggregative welfare functions. Introduces the “utility monster” problem showing extreme implications of utilitarianism. Important for understanding limitations of welfare maximization from a rights-based perspective.
Broome, John. Weighing Goods: Equality, Uncertainty and Time. Wiley-Blackwell, 1991.
Philosophical analysis of how to compare different people’s well-being. Addresses interpersonal utility comparisons that underlie welfare functions. Graduate-level philosophy but essential for understanding foundations.
Surveys and Recent Advances🔗
Recent surveys synthesizing current knowledge:
Aziz, Haris, and Simon Mackenzie. “A Discrete and Bounded Envy-free Cake Cutting Protocol for Any Number of Agents.” Proceedings of FOCS, 2016.
While primarily about cake-cutting, this paper’s techniques have influenced discrete welfare optimization. Shows how divisible and indivisible problems connect.
Amanatidis, Georgios, Haris Aziz, Georgios Birmpas, Aris Filos-Ratsikas, Bo Li, Hervé Moulin, Alexandros A. Voudouris, and Xiaowei Wu. “Fair Division of Indivisible Goods: Recent Progress and Open Questions.” Artificial Intelligence 322 (2023): 103965.
State-of-the-art survey (2023) covering recent results in fair division including welfare maximization. Comprehensive treatment with clear exposition of open problems. Essential for understanding current research frontier.
Bei, Xiaohui, Xinhang Lu, Pasin Manurangsi, and Warut Suksompong. “The Price of Fairness for Indivisible Goods.” Theory of Computing Systems 65.7 (2021): 1069-1093.
Studies the efficiency loss from enforcing fairness constraints. Quantifies the welfare cost of achieving EF1 or proportionality. Important for understanding fairness-efficiency trade-offs.
Computational Tools and Libraries🔗
Practical implementation resources:
FairPy: Python Library for Fair Allocation Algorithms
Open-source library implementing welfare maximization algorithms including Nash welfare, maximin, and leximin. Actively maintained with examples and documentation. Use this for implementing algorithms from the post.
Spliddit: Fair Division in Practice
Web application applying welfare-maximizing algorithms to real problems. Based on research by Procaccia and collaborators. Try it to see how welfare optimization works in practice.
Gurobi Optimization
Commercial MIP solver (free for academic use) for solving welfare maximization as integer programming. Essential tool for researchers implementing exact algorithms.
Advanced Topics and Research Frontiers🔗
Barman, Siddharth, and Sanath Kumar Krishnamurthy. “On the Proximity of Markets with Integral Equilibria.” Proceedings of AAAI, 2019.
Connects market equilibria to welfare maximization. Shows deep relationships between economic mechanisms and fair division algorithms. Advanced reading for those interested in mechanism design.
Garg, Jugal, Pooja Kulkarni, and Rucha Kulkarni. “Approximating Nash Social Welfare under Submodular Valuations through (Un)Matchings.” Proceedings of SODA, 2020.
Extends Nash welfare beyond additive valuations to submodular cases. State-of-the-art for handling complementarities. Graduate-level algorithms content.
Aziz, Haris, Hervé Moulin, and Fedor Sandomirskiy. “A Polynomial-Time Algorithm for Computing a Pareto Optimal and Almost Proportional Allocation.” Operations Research Letters 48.5 (2020): 573-578.
Recent algorithm achieving both Pareto optimality and approximate proportionality efficiently. Bridges fairness and welfare in polynomial time. Important for Part V’s multi-objective approach.
Historical Context🔗
Understanding the intellectual history:
Steinhaus, Hugo. “The Problem of Fair Division.” Econometrica 16.1 (1948): 101-104.
One of the first formal papers on fair division, introducing proportionality for cake-cutting. While focused on divisible goods, this paper established the field. Short, elegant, and historically important.
Kuhn, Harold W. “The Hungarian Method for the Assignment Problem.” Naval Research Logistics Quarterly 2.1-2 (1955): 83-97.
The Hungarian algorithm for maximum weight matching is fundamental to utilitarian welfare optimization. Classic paper in combinatorial optimization with direct applications to fair division.
These resources span from foundational philosophy (Bentham, Rawls) to cutting-edge algorithms (Nash welfare computation), from abstract theory (social choice) to practical deployment (Spliddit, datacenter scheduling). For readers wanting to go deeper:
Start with: Moulin’s Fair Division and Collective Welfare for comprehensive theoretical foundations, plus the Caragiannis et al. paper on Nash welfare for modern algorithmic results.
For philosophical depth: Read Rawls for egalitarian perspectives and Harsanyi for utilitarian counterarguments.
For computational techniques: Study the Cole-Gkatzelis paper on Nash welfare and the Amanatidis et al. survey for comprehensive coverage of current algorithms.
For practical implementation: Use the FairPy library and examine Spliddit’s codebase to see how algorithms work in practice.
The interdisciplinary nature of welfare optimization means no single paper or book covers everything. The richest understanding comes from reading across economics (welfare functions), computer science (algorithms), and philosophy (ethical foundations), which is exactly what this post aimed to synthesize.