BusinessMath Quarterly Series
16 min read
Part 31 of 12-Week BusinessMath Series
BusinessMath provides more than ten optimization algorithms. Which should you use? The answer depends on your problem's size, its constraints (equality, inequality, or none), and whether analytic gradients are available.
Choosing the wrong algorithm can mean no solution at all, slow convergence, or getting stuck in a local optimum.
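To see why a local optimum is a real risk, here is a one-dimensional sketch in plain Swift (no BusinessMath needed): the same gradient descent, started in two different basins, converges to two different minima, and only one is global. The function and learning rate are illustrative.

```swift
import Foundation

// f has two local minima; gradient descent finds whichever basin it starts in.
func f(_ x: Double) -> Double { x*x*x*x - 3*x*x + x }
func df(_ x: Double) -> Double { 4*x*x*x - 6*x + 1 }

func gradientDescent(from x0: Double, rate: Double = 0.01, steps: Int = 5_000) -> Double {
    var x = x0
    for _ in 0..<steps { x -= rate * df(x) }
    return x
}

let fromRight = gradientDescent(from: 1.0)   // converges near x ≈ 1.13
let fromLeft  = gradientDescent(from: -1.0)  // converges near x ≈ -1.30
print(f(fromRight), f(fromLeft))             // the left minimum is lower
```

Starting at x = 1, the search never discovers the deeper minimum near x ≈ -1.3; that is exactly the failure mode multi-start and adaptive strategies guard against.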
BusinessMath’s AdaptiveOptimizer analyzes your problem and automatically selects the best algorithm. It considers problem characteristics, tries multiple methods in parallel, and returns the best result.
Business Problem: Optimize portfolio allocation without worrying about algorithm details.
import BusinessMath
import Foundation
let assets: [String] = ["US Stocks", "Intl Stocks", "Bonds", "Real Estate"]
let expectedReturns = VectorN([0.10, 0.12, 0.04, 0.09])
let riskFreeRate = 0.03
// Covariance matrix (variances on diagonal, covariances off-diagonal)
let covarianceMatrix = [
[0.0400, 0.0150, 0.0020, 0.0180], // US Stocks
[0.0150, 0.0625, 0.0015, 0.0200], // Intl Stocks
[0.0020, 0.0015, 0.0036, 0.0010], // Bonds
[0.0180, 0.0200, 0.0010, 0.0400] // Real Estate
]
// Define your optimization problem
let portfolioObjective: @Sendable (VectorN<Double>) -> Double = { weights in
    // Minimize negative Sharpe ratio (so the optimizer maximizes it)
    let expectedReturn = weights.dot(expectedReturns)
    var variance = 0.0
    for i in 0..<weights.dimension {
        for j in 0..<weights.dimension {
            variance += weights[i] * weights[j] * covarianceMatrix[i][j]
        }
    }
    return -(expectedReturn - riskFreeRate) / sqrt(variance)
}
// Constraints: fully invested (weights sum to 1) and long-only
let budgetConstraint = MultivariateConstraint<VectorN<Double>>.budgetConstraint
let longOnlyConstraints = MultivariateConstraint<VectorN<Double>>.nonNegativity(dimension: assets.count)
let constraints: [MultivariateConstraint<VectorN<Double>>] = [budgetConstraint] + longOnlyConstraints
// Let AdaptiveOptimizer choose the algorithm
let adaptive = AdaptiveOptimizer<VectorN<Double>>()
do {
let result = try adaptive.optimize(
objective: portfolioObjective,
initialGuess: VectorN.equalWeights(dimension: assets.count),
constraints: constraints
)
print("Optimal Portfolio:")
for (asset, weight) in zip(assets, result.solution.toArray()) {
if weight > 0.01 {
print(" \(asset): \(weight.percent())")
}
}
print("\nOptimization Details:")
print(" Algorithm Used: \(result.algorithmUsed)")
print(" Selection Reason: \(result.selectionReason)")
print(" Iterations: \(result.iterations)")
print(" Sharpe Ratio: \((-result.objectiveValue).number())")
} catch {
print("Optimization failed: \(error)")
}
Pattern: Run the same algorithm from multiple starting points in parallel to find global optima.
import BusinessMath
import Foundation
// Use ParallelOptimizer for problems with multiple local minima
let parallelOptimizer = ParallelOptimizer<VectorN<Double>>(
algorithm: .inequality, // Use inequality-constrained optimizer
numberOfStarts: 20, // Try 20 different starting points
maxIterations: 1000,
tolerance: 1e-6
)
// Define search region for starting points
let searchRegion = (
lower: VectorN(repeating: 0.0, count: 4),
upper: VectorN(repeating: 1.0, count: 4)
)
// Run optimization in parallel (async/await)
let parallelResult = try await parallelOptimizer.optimize(
objective: portfolioObjective,
searchRegion: searchRegion,
constraints: constraints
)
print("Best solution found across \(parallelResult.allResults.count) attempts")
print("Success rate: \(parallelResult.successRate.percent())")
print("Objective value: \(parallelResult.objectiveValue.number())")
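Multi-start in miniature: the idea ParallelOptimizer automates can be sketched in a few lines of plain Swift — run one local search from several starting points and keep the best result (illustrative two-minima objective):

```swift
import Foundation

func f(_ x: Double) -> Double { x*x*x*x - 3*x*x + x }   // two local minima
func df(_ x: Double) -> Double { 4*x*x*x - 6*x + 1 }

func descend(from x0: Double) -> Double {
    var x = x0
    for _ in 0..<5_000 { x -= 0.01 * df(x) }
    return x
}

// Several starting points across the search region; the best result wins.
let candidates = stride(from: -2.0, through: 2.0, by: 0.5).map(descend)
let best = candidates.min { f($0) < f($1) }!
print(best)  // ≈ -1.30, the global minimum
```

With enough well-spread starts, at least one lands in the global basin — the same reasoning behind `numberOfStarts: 20` above.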
Pattern: Analyze problem structure to choose algorithm.
AdaptiveOptimizer uses a decision tree to select the best algorithm:
// AdaptiveOptimizer's actual selection logic:
// Rule 1: Inequality constraints? → InequalityOptimizer (penalty-barrier method)
if hasInequalityConstraints {
// Use interior-point penalty-barrier method
return .inequality
}
// Rule 2: Equality constraints only? → ConstrainedOptimizer (augmented Lagrangian)
else if hasEqualityConstraints {
// Use augmented Lagrangian method
return .constrained
}
// Rule 3: Large unconstrained problem (>100 variables)? → Gradient Descent
else if problemSize > 100 {
// Memory-efficient gradient descent with adaptive learning rate
return .gradientDescent
}
// Rule 4: Prefer accuracy + small problem (<10 vars)? → Newton-Raphson
else if preferAccuracy && problemSize < 10 {
// Full Newton method with Hessian for quadratic convergence
return .newtonRaphson
}
// Rule 5: Very small problem (≤5 vars)? → Newton-Raphson
else if problemSize <= 5 {
// Newton-Raphson for fast convergence
return .newtonRaphson
}
// Default: Gradient Descent (best balance)
else {
return .gradientDescent
}
// Use analyzeProblem() to see what will be selected:
let adaptive = AdaptiveOptimizer<VectorN<Double>>()
let analysis = adaptive.analyzeProblem(
initialGuess: VectorN(repeating: 0.25, count: 4),
constraints: constraints,
hasGradient: false
)
print("Problem size: \(analysis.size)")
print("Has constraints: \(analysis.hasConstraints)")
print("Has inequalities: \(analysis.hasInequalities)")
print("Recommended: \(analysis.recommendedAlgorithm)")
print("Reason: \(analysis.reason)")
Pattern: Control adaptive selection with preferences.
// Prefer speed: Uses higher learning rates and simpler algorithms
let fastOptimizer = AdaptiveOptimizer<VectorN<Double>>(
preferSpeed: true,
maxIterations: 500,
tolerance: 1e-4 // Looser tolerance for faster convergence
)
// Prefer accuracy: Uses Newton-Raphson for small problems
let accurateOptimizer = AdaptiveOptimizer<VectorN<Double>>(
preferAccuracy: true,
maxIterations: 2000,
tolerance: 1e-8 // Tighter tolerance for precise results
)
// Example: Portfolio optimization with accuracy preference
let result = try accurateOptimizer.optimize(
objective: portfolioObjective,
initialGuess: VectorN.equalWeights(dimension: 4),
constraints: constraints
)
print("With preferAccuracy=true:")
print(" Algorithm: \(result.algorithmUsed)")
print(" Reason: \(result.selectionReason)")
print(" Iterations: \(result.iterations)")
print(" Converged: \(result.converged)")
// Compare with default settings
let defaultResult = try AdaptiveOptimizer<VectorN<Double>>().optimize(
objective: portfolioObjective,
initialGuess: VectorN.equalWeights(dimension: 4),
constraints: constraints
)
print("\nWith default settings:")
print(" Algorithm: \(defaultResult.algorithmUsed)")
print(" Reason: \(defaultResult.selectionReason)")
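The speed/accuracy trade-off is largely a tolerance trade-off. A toy gradient descent on f(x) = x² shows how the iteration count grows as the stopping tolerance tightens (learning rate 0.1, start at x = 10; illustrative numbers):

```swift
import Foundation

// Count gradient-descent iterations until |f'(x)| drops below tol.
func iterations(toTolerance tol: Double) -> Int {
    var x = 10.0
    var iters = 0
    while abs(2 * x) > tol {   // f(x) = x², so f'(x) = 2x
        x -= 0.1 * (2 * x)
        iters += 1
    }
    return iters
}

let loose = iterations(toTolerance: 1e-4)   // preferSpeed-style tolerance
let tight = iterations(toTolerance: 1e-8)   // preferAccuracy-style tolerance
print(loose, tight)
```

Each extra decimal digit of tolerance costs a roughly constant number of extra iterations here; on ill-conditioned problems the cost grows much faster, which is why the preferSpeed configuration pairs with the looser 1e-4 tolerance.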
Has Inequality Constraints?
├─ YES → InequalityOptimizer (penalty-barrier method)
│
└─ NO → Has Equality Constraints?
├─ YES → ConstrainedOptimizer (augmented Lagrangian)
│
└─ NO (Unconstrained) → Problem Size?
├─ > 100 variables → Gradient Descent (memory-efficient)
│
├─ ≤ 5 variables → Newton-Raphson (fast convergence)
│
├─ < 10 variables + preferAccuracy → Newton-Raphson
│
└─ Default → Gradient Descent (best balance)
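The decision tree above boils down to a handful of ordered checks. Here it is encoded as a standalone function — a sketch of the selection logic described in this post, not the library's internal code:

```swift
// Algorithm cases mirror the optimizers named in the tree.
enum Algorithm { case inequality, constrained, gradientDescent, newtonRaphson }

func selectAlgorithm(size: Int,
                     hasEquality: Bool,
                     hasInequality: Bool,
                     preferAccuracy: Bool = false) -> Algorithm {
    if hasInequality { return .inequality }                   // Rule 1: penalty-barrier
    if hasEquality { return .constrained }                    // Rule 2: augmented Lagrangian
    if size > 100 { return .gradientDescent }                 // Rule 3: memory-efficient
    if preferAccuracy && size < 10 { return .newtonRaphson }  // Rule 4: quadratic convergence
    if size <= 5 { return .newtonRaphson }                    // Rule 5: fast for tiny problems
    return .gradientDescent                                   // Default: best balance
}

// The portfolio problem (4 vars, budget equality, long-only inequalities) hits Rule 1.
print(selectAlgorithm(size: 4, hasEquality: true, hasInequality: true))
```

Note the ordering matters: inequality constraints dominate everything else, so the portfolio problem never reaches the size-based rules.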
import Foundation
// Compare different optimizers on the same problem
struct OptimizerComparison {
let objective: (VectorN<Double>) -> Double
let initialGuess: VectorN<Double>
let constraints: [MultivariateConstraint<VectorN<Double>>]
func compare() throws {
print("Optimizer Performance Comparison")
print("═══════════════════════════════════════════════")
// Test 1: Gradient Descent
let startGD = Date()
let gdOptimizer = MultivariateGradientDescent<VectorN<Double>>(
learningRate: 0.01,
maxIterations: 1000,
tolerance: 1e-6
)
let gdResult = try gdOptimizer.minimize(
function: objective,
gradient: { try numericalGradient(objective, at: $0) },
initialGuess: initialGuess
)
let gdTime = Date().timeIntervalSince(startGD)
print("Gradient Descent:")
print(" Value: \(gdResult.value.number(4))")
print(" Time: \(gdTime.number(2))s")
print(" Iterations: \(gdResult.iterations)")
// Test 2: Newton-Raphson (if problem is small)
// NOTE: Commented out because it will likely crash if run in a playground. To understand when and how to use Newton-Raphson, check out our [Newton-Raphson Guide](../05-fri-newton-raphson-guide)
// if initialGuess.dimension <= 10 {
// let startNR = Date()
// let nrOptimizer = MultivariateNewtonRaphson<VectorN<Double>>(
// maxIterations: 1000,
// tolerance: 1e-6
// )
// let nrResult = try nrOptimizer.minimize(
// function: objective,
// gradient: { try numericalGradient(objective, at: $0) },
// hessian: { try numericalHessian(objective, at: $0) },
// initialGuess: initialGuess
// )
// let nrTime = Date().timeIntervalSince(startNR)
//
// print("\nNewton-Raphson:")
// print(" Value: \(nrResult.value.number(4))")
// print(" Time: \(nrTime.number(2))s")
// print(" Iterations: \(nrResult.iterations)")
// }
// Test 3: Adaptive (let it choose)
let startAdaptive = Date()
let adaptiveOptimizer = AdaptiveOptimizer<VectorN<Double>>()
let adaptiveResult = try adaptiveOptimizer.optimize(
objective: objective,
initialGuess: initialGuess,
constraints: constraints
)
let adaptiveTime = Date().timeIntervalSince(startAdaptive)
print("\nAdaptive Optimizer:")
print(" Algorithm chosen: \(adaptiveResult.algorithmUsed)")
print(" Value: \(adaptiveResult.objectiveValue.number(4))")
print(" Time: \(adaptiveTime.number(2))s")
print(" Iterations: \(adaptiveResult.iterations)")
}
}
// Run comparison
let comparison = OptimizerComparison(
objective: portfolioObjective,
initialGuess: VectorN.equalWeights(dimension: 4),
constraints: constraints
)
try comparison.compare()
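The comparison above leans on a `numericalGradient` helper. If you want to see what such a helper does (or need one outside BusinessMath), a central-difference version takes only a few lines — a sketch, not the library's implementation:

```swift
import Foundation

// Central-difference gradient: df/dx_i ≈ (f(x + h·e_i) - f(x - h·e_i)) / 2h
func centralDifferenceGradient(_ f: ([Double]) -> Double,
                               at x: [Double],
                               h: Double = 1e-6) -> [Double] {
    var grad = [Double](repeating: 0, count: x.count)
    for i in 0..<x.count {
        var plus = x, minus = x
        plus[i] += h
        minus[i] -= h
        grad[i] = (f(plus) - f(minus)) / (2 * h)
    }
    return grad
}

let g = centralDifferenceGradient({ v in v[0] * v[0] + 3 * v[1] }, at: [2.0, 5.0])
print(g)  // ≈ [4.0, 3.0]
```

Central differences cost 2n function evaluations per gradient, which is one reason gradient-based methods slow down as problem size grows when no analytic gradient is supplied.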
Company: National manufacturer with 12 facilities, 8 products, and 40 distribution centers.
Challenge: Minimize total costs (production + shipping) subject to capacity and demand constraints.
Problem Characteristics:
Algorithm Selection Process:
import BusinessMath
import Foundation
// Problem dimensions
let numFacilities = 12
let numProducts = 8
let numVariables = numFacilities * numProducts // 96 variables
// Cost structure ($/unit for each facility-product combination)
// Lower costs for specialized facilities, higher for general purpose
// Illustrative cost/capacity/demand figures — replace with real facility data
let productionCosts = (0..<numVariables).map { i in 8.0 + Double(i % 8) }   // $8–$15/unit
let facilityCapacities = (0..<numFacilities).map { _ in 10_000.0 }          // units/month
let productDemands = (0..<numProducts).map { _ in 9_000.0 }                 // units/month
let volumeDiscountThreshold = 5_000.0
let volumeDiscountRate = 0.85   // 15% discount on units past the threshold

let totalCostObjective: @Sendable (VectorN<Double>) -> Double = { production in
    var totalCost = 0.0
    // Production costs with volume discounts
    for i in 0..<numVariables {
        let quantity = production[i]
        let baseCost = productionCosts[i] * quantity
        if quantity > volumeDiscountThreshold {
            let discountedAmount = quantity - volumeDiscountThreshold
            totalCost += productionCosts[i] * volumeDiscountThreshold
            totalCost += productionCosts[i] * volumeDiscountRate * discountedAmount
        } else {
            totalCost += baseCost
        }
    }
    return totalCost
}
// Current production (starting point)
// Distribute demand equally across facilities initially
let currentProduction = VectorN<Double>((0..<numVariables).map { i in
    productDemands[i % numProducts] / Double(numFacilities)
})
// Capacity: each facility's total output must not exceed its capacity.
// Demand: each product's total output must meet its demand.
// Assumes an .inequality(_:) factory producing g(x) ≥ 0 constraints.
var capacityConstraints: [MultivariateConstraint<VectorN<Double>>] = []
for facility in 0..<numFacilities {
    capacityConstraints.append(.inequality { production in
        var total = 0.0
        for product in 0..<numProducts {
            total += production[facility * numProducts + product]
        }
        return facilityCapacities[facility] - total
    })
}
var demandConstraints: [MultivariateConstraint<VectorN<Double>>] = []
for product in 0..<numProducts {
    demandConstraints.append(.inequality { production in
        var total = 0.0
        for facility in 0..<numFacilities {
            total += production[facility * numProducts + product]
        }
        return total - productDemands[product]
    })
}
let nonNegativityConstraints = MultivariateConstraint<VectorN<Double>>.nonNegativity(dimension: numVariables)
let allConstraints = capacityConstraints + demandConstraints + nonNegativityConstraints
// Let AdaptiveOptimizer analyze and choose
do {
print(String(repeating: "=", count: 70))
print("SUPPLY CHAIN OPTIMIZATION: MULTI-FACILITY PRODUCTION")
print(String(repeating: "=", count: 70))
print("Facilities: \(numFacilities)")
print("Products: \(numProducts)")
print("Variables: \(numVariables)")
print("Total demand: \(productDemands.reduce(0, +).number(0)) units/month")
print("Total capacity: \(facilityCapacities.reduce(0, +).number(0)) units/month")
print()
let supplyChainOptimizer = AdaptiveOptimizer<VectorN<Double>>(
maxIterations: 2000,
tolerance: 1e-5
)
// First, analyze what algorithm will be selected
let analysis = supplyChainOptimizer.analyzeProblem(
initialGuess: currentProduction,
constraints: allConstraints,
hasGradient: false
)
print("Problem Analysis:")
print(" Size: \(analysis.size) variables")
print(" Constraints: \(analysis.hasConstraints)")
print(" Inequalities: \(analysis.hasInequalities)")
print(" Recommended: \(analysis.recommendedAlgorithm)")
print(" Reason: \(analysis.reason)")
print()
// Run optimization
let startTime = Date()
let supplyChainResult = try supplyChainOptimizer.optimize(
objective: totalCostObjective,
initialGuess: currentProduction,
constraints: allConstraints
)
let elapsedTime = Date().timeIntervalSince(startTime)
print("Supply Chain Optimization Results:")
print(" Algorithm Selected: \(supplyChainResult.algorithmUsed)")
print(" Total Cost: \(supplyChainResult.objectiveValue.currency())")
print(" Time: \(elapsedTime.number())s")
print(" Iterations: \(supplyChainResult.iterations)")
print(" Converged: \(supplyChainResult.converged)")
// Calculate cost savings vs initial
let initialCost = totalCostObjective(currentProduction)
let savings = initialCost - supplyChainResult.objectiveValue
let savingsPercent = (savings / initialCost)
print("\nCost Savings:")
print(" Initial cost: \(initialCost.currency())")
print(" Optimized cost: \(supplyChainResult.objectiveValue.currency())")
print(" Savings: \(savings.currency()) (\(savingsPercent.percent(1)))")
// Show production summary
var facilitiesUsed = 0
for facility in 0..<numFacilities {
    var facilityTotal = 0.0
    for product in 0..<numProducts {
        facilityTotal += supplyChainResult.solution[facility * numProducts + product]
    }
    if facilityTotal > 1.0 {
        facilitiesUsed += 1
    }
}
print("\nProduction Summary:")
print(" Active facilities: \(facilitiesUsed)/\(numFacilities)")
print(" Total units produced: \(supplyChainResult.solution.sum.number(0))")
} catch {
print("Optimization failed: \(error)")
}
AdaptiveOptimizer Analysis:
Results:
import BusinessMath
import Foundation
// MARK: - Basic Portfolio Optimization with AdaptiveOptimizer
let assets = ["US Stocks", "Intl Stocks", "Bonds", "Real Estate"]
let expectedReturns = VectorN([0.10, 0.12, 0.04, 0.09])
let riskFreeRate = 0.03
// Covariance matrix (variances on diagonal, covariances off-diagonal)
let covarianceMatrix = [
[0.0400, 0.0150, 0.0020, 0.0180], // US Stocks
[0.0150, 0.0625, 0.0015, 0.0200], // Intl Stocks
[0.0020, 0.0015, 0.0036, 0.0010], // Bonds
[0.0180, 0.0200, 0.0010, 0.0400] // Real Estate
]
// Define optimization problem - maximize Sharpe ratio
let portfolioObjective: @Sendable (VectorN<Double>) -> Double = { weights in
    // Minimize negative Sharpe ratio (so the optimizer maximizes it)
    let expectedReturn = weights.dot(expectedReturns)
    var variance = 0.0
    for i in 0..<weights.dimension {
        for j in 0..<weights.dimension {
            variance += weights[i] * weights[j] * covarianceMatrix[i][j]
        }
    }
    return -(expectedReturn - riskFreeRate) / sqrt(variance)
}
// Constraints: fully invested (weights sum to 1) and long-only
let budgetConstraint = MultivariateConstraint<VectorN<Double>>.budgetConstraint
let longOnlyConstraints = MultivariateConstraint<VectorN<Double>>.nonNegativity(dimension: assets.count)
let constraints: [MultivariateConstraint<VectorN<Double>>] = [budgetConstraint] + longOnlyConstraints
// Let AdaptiveOptimizer choose the algorithm
let adaptive = AdaptiveOptimizer<VectorN<Double>>()
do {
// First, analyze what algorithm will be selected
let analysis = adaptive.analyzeProblem(
initialGuess: VectorN.equalWeights(dimension: assets.count),
constraints: constraints,
hasGradient: false
)
print("Problem Analysis:")
print(" Size: \(analysis.size) variables")
print(" Has constraints: \(analysis.hasConstraints)")
print(" Has inequalities: \(analysis.hasInequalities)")
print(" Recommended: \(analysis.recommendedAlgorithm)")
print(" Reason: \(analysis.reason)")
print()
// Run optimization
let result = try adaptive.optimize(
objective: portfolioObjective,
initialGuess: VectorN.equalWeights(dimension: assets.count),
constraints: constraints
)
print("Optimal Portfolio:")
for (asset, weight) in zip(assets, result.solution.toArray()) {
if weight > 0.01 {
print(" \(asset): \(weight.percent())")
}
}
print("\nOptimization Details:")
print(" Algorithm Used: \(result.algorithmUsed)")
print(" Selection Reason: \(result.selectionReason)")
print(" Iterations: \(result.iterations)")
print(" Converged: \(result.converged)")
print(" Sharpe Ratio: \((-result.objectiveValue).number())")
// Calculate portfolio metrics
let optimalReturn = result.solution.dot(expectedReturns)
var optimalVariance = 0.0
for i in 0..<assets.count {
    for j in 0..<assets.count {
        optimalVariance += result.solution[i] * result.solution[j] * covarianceMatrix[i][j]
    }
}
print("  Expected Return: \(optimalReturn.percent())")
print("  Volatility: \(sqrt(optimalVariance).percent())")
} catch {
    print("Optimization failed: \(error)")
}
// MARK: - Preference Control
// Prefer speed: Looser tolerance, simpler algorithms
let fastOptimizer = AdaptiveOptimizer<VectorN<Double>>(
preferSpeed: true,
maxIterations: 500,
tolerance: 1e-4
)
do {
let fastResult = try fastOptimizer.optimize(
objective: portfolioObjective,
initialGuess: VectorN.equalWeights(dimension: assets.count),
constraints: constraints
)
print("\nWith preferSpeed=true:")
print(" Algorithm: \(fastResult.algorithmUsed)")
print(" Iterations: \(fastResult.iterations)")
print(" Sharpe Ratio: \((-fastResult.objectiveValue).number())")
} catch {
print("Fast optimization failed: \(error)")
}
// Prefer accuracy: Tighter tolerance, uses Newton when possible
let accurateOptimizer = AdaptiveOptimizer<VectorN<Double>>(
preferAccuracy: true,
maxIterations: 2000,
tolerance: 1e-8
)
do {
let accurateResult = try accurateOptimizer.optimize(
objective: portfolioObjective,
initialGuess: VectorN.equalWeights(dimension: assets.count),
constraints: constraints
)
print("\nWith preferAccuracy=true:")
print(" Algorithm: \(accurateResult.algorithmUsed)")
print(" Iterations: \(accurateResult.iterations)")
print(" Sharpe Ratio: \((-accurateResult.objectiveValue).number())")
} catch {
print("Accurate optimization failed: \(error)")
}
// MARK: - Testing Decision Tree with Different Problem Sizes
print("\n" + String(repeating: "=", count: 60))
print("TESTING DECISION TREE")
print(String(repeating: "=", count: 60))
// Small unconstrained problem (≤5 variables) → Newton-Raphson
let smallObjective: (VectorN<Double>) -> Double = { x in
(x[0] - 1)*(x[0] - 1) + (x[1] - 2)*(x[1] - 2) + (x[2] - 3)*(x[2] - 3)
}
let smallAnalysis = AdaptiveOptimizer<VectorN<Double>>().analyzeProblem(
initialGuess: VectorN([0.0, 0.0, 0.0]),
constraints: [],
hasGradient: false
)
print("\nSmall unconstrained (3 variables):")
print(" Recommended: \(smallAnalysis.recommendedAlgorithm)")
print(" Reason: \(smallAnalysis.reason)")
// Large unconstrained problem (>100 variables) → Gradient Descent
let largeAnalysis = AdaptiveOptimizer<VectorN<Double>>().analyzeProblem(
initialGuess: VectorN(repeating: 0.0, count: 150),
constraints: [],
hasGradient: false
)
print("\nLarge unconstrained (150 variables):")
print(" Recommended: \(largeAnalysis.recommendedAlgorithm)")
print(" Reason: \(largeAnalysis.reason)")
// Problem with inequality constraints → InequalityOptimizer
let inequalityAnalysis = AdaptiveOptimizer<VectorN<Double>>().analyzeProblem(
initialGuess: VectorN.equalWeights(dimension: assets.count),
constraints: constraints, // Has inequalities (long-only)
hasGradient: false
)
print("\nWith inequality constraints:")
print(" Recommended: \(inequalityAnalysis.recommendedAlgorithm)")
print(" Reason: \(inequalityAnalysis.reason)")
// Problem with only equality constraints → ConstrainedOptimizer
let equalityOnly = [MultivariateConstraint<VectorN<Double>>.budgetConstraint]
let equalityAnalysis = AdaptiveOptimizer<VectorN<Double>>().analyzeProblem(
initialGuess: VectorN.equalWeights(dimension: assets.count),
constraints: equalityOnly,
hasGradient: false
)
print("\nWith only equality constraints:")
print(" Recommended: \(equalityAnalysis.recommendedAlgorithm)")
print(" Reason: \(equalityAnalysis.reason)")
print("\n" + String(repeating: "=", count: 60))
print("✓ AdaptiveOptimizer automatically selects the best algorithm")
print(" based on problem characteristics!")
print(String(repeating: "=", count: 60))
// Use ParallelOptimizer for problems with multiple local minima
let parallelOptimizer = ParallelOptimizer<VectorN<Double>>(
algorithm: .inequality, // Use inequality-constrained optimizer
numberOfStarts: 20, // Try 20 different starting points
maxIterations: 1000,
tolerance: 1e-6
)
// Define search region for starting points
let searchRegion = (
lower: VectorN(repeating: 0.0, count: assets.count),
upper: VectorN(repeating: 1.0, count: assets.count)
)
// Run optimization in parallel (async/await)
let parallelResult = try await parallelOptimizer.optimize(
objective: portfolioObjective,
searchRegion: searchRegion,
constraints: constraints
)
print("Best solution found across \(parallelResult.allResults.count) attempts")
print("Success rate: \(parallelResult.successRate.percent())")
print("Objective value: \(parallelResult.objectiveValue.number())")
do {
// Compare different optimizers on the same problem
struct OptimizerComparison {
let objective: (VectorN<Double>) -> Double
let initialGuess: VectorN<Double>
let constraints: [MultivariateConstraint<VectorN<Double>>]
func compare() throws {
print("Optimizer Performance Comparison")
print("═══════════════════════════════════════════════")
// Test 1: Gradient Descent
let startGD = Date()
let gdOptimizer = MultivariateGradientDescent<VectorN<Double>>(
learningRate: 0.01,
maxIterations: 1000,
tolerance: 1e-6
)
let gdResult = try gdOptimizer.minimize(
function: objective,
gradient: { try numericalGradient(objective, at: $0) },
initialGuess: initialGuess
)
let gdTime = Date().timeIntervalSince(startGD)
print("Gradient Descent:")
print(" Value: \(gdResult.value.number(4))")
print(" Time: \(gdTime.number(2))s")
print(" Iterations: \(gdResult.iterations)")
// Test 2: Newton-Raphson (if problem is small; intentionally commented out)
// if initialGuess.dimension <= 10 {
// let startNR = Date()
// let nrOptimizer = MultivariateNewtonRaphson<VectorN<Double>>(
// maxIterations: 1000,
// tolerance: 1e-6
// )
// let nrResult = try nrOptimizer.minimize(
// function: objective,
// gradient: { try numericalGradient(objective, at: $0) },
// hessian: { try numericalHessian(objective, at: $0) },
// initialGuess: initialGuess
// )
// let nrTime = Date().timeIntervalSince(startNR)
//
// print("\nNewton-Raphson:")
// print(" Value: \(nrResult.value.number(4))")
// print(" Time: \(nrTime.number(2))s")
// print(" Iterations: \(nrResult.iterations)")
// }
// Test 3: Adaptive (let it choose)
let startAdaptive = Date()
let adaptiveOptimizer = AdaptiveOptimizer<VectorN<Double>>()
let adaptiveResult = try adaptiveOptimizer.optimize(
objective: objective,
initialGuess: initialGuess,
constraints: constraints
)
let adaptiveTime = Date().timeIntervalSince(startAdaptive)
print("\nAdaptive Optimizer:")
print(" Algorithm chosen: \(adaptiveResult.algorithmUsed)")
print(" Value: \(adaptiveResult.objectiveValue.number(4))")
print(" Time: \(adaptiveTime.number(2))s")
print(" Iterations: \(adaptiveResult.iterations)")
}
}
// Run comparison
let comparison = OptimizerComparison(
objective: portfolioObjective,
initialGuess: VectorN.equalWeights(dimension: 4),
constraints: constraints
)
try comparison.compare()
} catch let error as BusinessMathError {
    print("ERROR:\n\t\(error.localizedDescription)")
} catch {
    print("ERROR:\n\t\(error)")
}
// MARK: - Real-World Application
// Problem dimensions
let numFacilities = 12
let numProducts = 8
let numVariables = numFacilities * numProducts // 96 variables
// Cost structure ($/unit for each facility-product combination)
// Lower costs for specialized facilities, higher for general purpose
// Illustrative cost/capacity/demand figures — replace with real facility data
let productionCosts = (0..<numVariables).map { i in 8.0 + Double(i % 8) }   // $8–$15/unit
let facilityCapacities = (0..<numFacilities).map { _ in 10_000.0 }          // units/month
let productDemands = (0..<numProducts).map { _ in 9_000.0 }                 // units/month
let volumeDiscountThreshold = 5_000.0
let volumeDiscountRate = 0.85   // 15% discount on units past the threshold

let totalCostObjective: @Sendable (VectorN<Double>) -> Double = { production in
    var totalCost = 0.0
    // Production costs with volume discounts
    for i in 0..<numVariables {
        let quantity = production[i]
        let baseCost = productionCosts[i] * quantity
        if quantity > volumeDiscountThreshold {
            let discountedAmount = quantity - volumeDiscountThreshold
            totalCost += productionCosts[i] * volumeDiscountThreshold
            totalCost += productionCosts[i] * volumeDiscountRate * discountedAmount
        } else {
            totalCost += baseCost
        }
    }
    return totalCost
}
// Current production (starting point)
// Distribute demand equally across facilities initially
let currentProduction = VectorN<Double>((0..<numVariables).map { i in
    productDemands[i % numProducts] / Double(numFacilities)
})
// Capacity: each facility's total output must not exceed its capacity.
// Demand: each product's total output must meet its demand.
// Assumes an .inequality(_:) factory producing g(x) ≥ 0 constraints.
var capacityConstraints: [MultivariateConstraint<VectorN<Double>>] = []
for facility in 0..<numFacilities {
    capacityConstraints.append(.inequality { production in
        var total = 0.0
        for product in 0..<numProducts {
            total += production[facility * numProducts + product]
        }
        return facilityCapacities[facility] - total
    })
}
var demandConstraints: [MultivariateConstraint<VectorN<Double>>] = []
for product in 0..<numProducts {
    demandConstraints.append(.inequality { production in
        var total = 0.0
        for facility in 0..<numFacilities {
            total += production[facility * numProducts + product]
        }
        return total - productDemands[product]
    })
}
let nonNegativityConstraints = MultivariateConstraint<VectorN<Double>>.nonNegativity(dimension: numVariables)
let allConstraints = capacityConstraints + demandConstraints + nonNegativityConstraints
// Let AdaptiveOptimizer analyze and choose
do {
print(String(repeating: "=", count: 70))
print("SUPPLY CHAIN OPTIMIZATION: MULTI-FACILITY PRODUCTION")
print(String(repeating: "=", count: 70))
print("Facilities: \(numFacilities)")
print("Products: \(numProducts)")
print("Variables: \(numVariables)")
print("Total demand: \(productDemands.reduce(0, +).number(0)) units/month")
print("Total capacity: \(facilityCapacities.reduce(0, +).number(0)) units/month")
print()
let supplyChainOptimizer = AdaptiveOptimizer<VectorN<Double>>(
maxIterations: 2000,
tolerance: 1e-5
)
// First, analyze what algorithm will be selected
let analysis = supplyChainOptimizer.analyzeProblem(
initialGuess: currentProduction,
constraints: allConstraints,
hasGradient: false
)
print("Problem Analysis:")
print(" Size: \(analysis.size) variables")
print(" Constraints: \(analysis.hasConstraints)")
print(" Inequalities: \(analysis.hasInequalities)")
print(" Recommended: \(analysis.recommendedAlgorithm)")
print(" Reason: \(analysis.reason)")
print()
// Run optimization
let startTime = Date()
let supplyChainResult = try supplyChainOptimizer.optimize(
objective: totalCostObjective,
initialGuess: currentProduction,
constraints: allConstraints
)
let elapsedTime = Date().timeIntervalSince(startTime)
print("Supply Chain Optimization Results:")
print(" Algorithm Selected: \(supplyChainResult.algorithmUsed)")
print(" Total Cost: \(supplyChainResult.objectiveValue.currency())")
print(" Time: \(elapsedTime.number())s")
print(" Iterations: \(supplyChainResult.iterations)")
print(" Converged: \(supplyChainResult.converged)")
// Calculate cost savings vs initial
let initialCost = totalCostObjective(currentProduction)
let savings = initialCost - supplyChainResult.objectiveValue
let savingsPercent = (savings / initialCost)
print("\nCost Savings:")
print(" Initial cost: \(initialCost.currency())")
print(" Optimized cost: \(supplyChainResult.objectiveValue.currency())")
print(" Savings: \(savings.currency()) (\(savingsPercent.percent(1)))")
// Show production summary
var facilitiesUsed = 0
for facility in 0..<numFacilities {
    var facilityTotal = 0.0
    for product in 0..<numProducts {
        facilityTotal += supplyChainResult.solution[facility * numProducts + product]
    }
    if facilityTotal > 1.0 {
        facilitiesUsed += 1
    }
}
print("\nProduction Summary:")
print(" Active facilities: \(facilitiesUsed)/\(numFacilities)")
print(" Total units produced: \(supplyChainResult.solution.sum.number(0))")
} catch {
print("Optimization failed: \(error)")
}
→ Full API Reference: BusinessMath Docs – Adaptive Selection Guide
Tomorrow: We’ll conclude Week 9 with Parallel Optimization, using multiple CPU cores to speed up large-scale problems.
Next Week: Week 10 explores Performance Benchmarking and advanced algorithms (L-BFGS, Conjugate Gradient, Simulated Annealing).
Series: [Week 9 of 12] | Topic: [Part 5 - Business Applications] | Case Studies: [4/6 Complete]
Topics Covered: Adaptive algorithms • Algorithm selection • Performance profiling • Multi-algorithm racing • Problem analysis
Playgrounds: [Week 1-9 available] • [Next: Parallel optimization]
Tagged with: businessmath, swift, optimization, adaptive-algorithms, algorithm-selection, performance, automation