BusinessMath Development Journey
Development Journey Series
In the previous post, we discussed how the Master Plan serves as the project’s memory across sessions. But the master plan only answers “what to build next”—it doesn’t answer how to build it consistently.
After a few weeks of BusinessMath development, I had a different problem: pattern drift.
Session to session, small choices drifted: sometimes `guard` statements for validation, sometimes `if !condition`; parameter names varied (`rate` vs. `r` vs. `discountRate`). Each individual choice made sense in isolation. But across 200+ tests and 11 topic areas, the inconsistency was creating friction.
Without explicit standards, every decision becomes a mini research project. AI doesn’t remember past decisions, so it defaults to whatever seems reasonable right now.
Create living standards documents that serve as the project’s consistency engine.
We developed three core documents:

- **CODING_RULES.md**: code structure and style standards
- **DOCC_GUIDELINES.md**: documentation requirements
- **TEST_DRIVEN_DEVELOPMENT.md**: testing patterns
These aren’t heavyweight “process manuals”—they’re quick-reference guides that answer common questions in seconds.
The Problem It Solves: “How should I structure this code?”
```markdown
# Coding Rules for BusinessMath Library

## 1. Generic Programming
- Use `<T: BinaryFloatingPoint>` for all numeric functions
- Enables flexibility across Float, Double, Float16, etc.

## 2. Function Signatures
- Public API: All user-facing functions marked `public`
- Descriptive parameter labels
- Default parameters for common cases

## 3. Guard Clauses & Validation
- Use `guard` for input validation
- Return sensible defaults for empty inputs (e.g., `T(0)`)
- Throw errors for truly invalid cases

## 4. Formatting Rules
- NEVER use String(format:) for number formatting
- ALWAYS use Swift's formatted() API
- Respect user locales automatically
```
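Rules 1–4 combine naturally in practice. A minimal sketch (`presentValue` is an illustrative function invented for this example, not necessarily part of BusinessMath's API):

```swift
import Foundation

// Rule 1: generic over BinaryFloatingPoint (works for Float, Double, Float16).
// Rule 2: public, descriptive labels, a default parameter for the common case.
public func presentValue<T: BinaryFloatingPoint>(
    futureValue: T,
    rate: T,
    periods: Int = 1
) -> T {
    // Rule 3: guard for validation, sensible default for degenerate input.
    guard periods > 0 else { return futureValue }
    var discountFactor: T = 1
    for _ in 0..<periods { discountFactor *= (1 + rate) }
    return futureValue / discountFactor
}

// Rule 4: formatted() rather than String(format:).
let pv = presentValue(futureValue: 1000.0, rate: 0.05, periods: 3)
print(pv.formatted(.number.precision(.fractionLength(2)))) // "863.84" in an en_US locale
```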
Early in the project, we used C-style formatting:
```swift
// Week 2 code
let output = String(format: "%.2f", value)
```
This created problems: the output ignored the user's locale (decimal separators, grouping), and formatting behavior varied from call site to call site.
We established a rule:
```swift
// RULE: Never use String(format:)
// ALWAYS use formatted() API

// Correct approach
let output = value.formatted(.number.precision(.fractionLength(2)))
```
Before the rule: 30 minutes per session debating formatting approaches.
After the rule: 0 minutes. “Check CODING_RULES.md. Use formatted().”
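The payoff is easy to demonstrate. A short sketch contrasting the two APIs (locales are pinned explicitly here to make the difference visible; `formatted()` normally picks up the user's locale automatically):

```swift
import Foundation

let value = 1234.5678

// String(format:) ignores locale unless one is passed explicitly.
let cStyle = String(format: "%.2f", value)
print(cStyle) // "1234.57" everywhere

// formatted() localizes separators automatically.
let us = value.formatted(.number.precision(.fractionLength(2)).locale(Locale(identifier: "en_US")))
let de = value.formatted(.number.precision(.fractionLength(2)).locale(Locale(identifier: "de_DE")))
print(us) // "1,234.57"
print(de) // "1.234,57"
```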
When starting a session:
“Read CODING_RULES.md. Implement the IRR function following these standards.”
AI responds:
“Using a `BinaryFloatingPoint` generic constraint, `guard` for validation, and Swift’s formatted() API as specified in CODING_RULES.md.”
Result: Consistent code on first try.
Week 10, implementing a new feature:
```swift
// AI's first attempt
let result = String(format: "%.4f", value)
```
My review:
“This violates CODING_RULES.md section 4. Use formatted() API.”
AI immediately corrects:
```swift
let result = value.formatted(.number.precision(.fractionLength(4)))
```
Without the documented rule, I’d have to re-explain why every single time.
The string formatting rule exists because we spent 2 hours debugging locale issues in Week 2. The rule captures that lesson so it’s never repeated.
The Problem It Solves: “How should I document this API?”
```markdown
# DocC Documentation Guidelines

## Required for All Public APIs
1. Brief one-line summary
2. Detailed explanation including:
   - What problem it solves
   - How it works (if non-obvious)
   - When to use it
3. Parameter documentation
4. Return value documentation
5. Throws documentation (if applicable)
6. Usage example
7. Mathematical formula (for math functions)
8. Excel equivalent (if applicable)
9. See Also links

## Documentation Template

///
/// Brief one-line summary.
///
/// Detailed explanation...
///
/// - Parameters:
///   - param1: Description with valid ranges
/// - Returns: Description of return value and guarantees
/// - Throws: Specific errors and when they occur
///
/// ## Usage Example
///
///     let result = function(param: value)
///     // Output: expected result
///
/// ## Mathematical Formula
/// [LaTeX or ASCII math notation]
///
/// - SeeAlso:
///   - ``RelatedType``
///   - ``relatedFunction(_:)``
```
Week 4, documenting the NPV function. First attempt:
```swift
/// NPV = sum of (cash flow / (1 + rate)^period)
```
Problems: the symbols were undefined, the formula had no structure, and the notation varied from function to function.
After establishing guidelines:
```swift
/// ## Mathematical Formula
/// NPV is calculated as:
///
///     NPV = Σ (CFₜ / (1 + r)ᵗ)
///
/// where:
/// - CFₜ = cash flow at time t
/// - r = discount rate
/// - t = time period
```
Result: Consistent, readable mathematical notation across all 200+ documented functions.
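The documented formula maps line by line onto code. A minimal sketch (assuming, as this post's examples do, an `npv` free function generic over `BinaryFloatingPoint`, with the first cash flow at t = 0):

```swift
/// NPV = Σ (CFₜ / (1 + r)ᵗ), with t starting at 0 for the first cash flow.
func npv<T: BinaryFloatingPoint>(discountRate r: T, cashFlows: [T]) -> T {
    guard !cashFlows.isEmpty else { return T(0) } // sensible default for empty input
    var discountFactor: T = 1 // (1 + r)^t, running product
    var total: T = 0
    for cf in cashFlows {
        total += cf / discountFactor
        discountFactor *= (1 + r)
    }
    return total
}

// Sanity check: lending 100 and receiving 110 one period later is NPV-neutral at 10%.
print(npv(discountRate: 0.10, cashFlows: [-100.0, 110.0])) // prints a value within 1e-9 of zero
```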
Writing DocC comments before implementation forced clarification:
Question: “What errors can calculateIRR throw?”
DocC forces answer:
```swift
/// - Throws: `FinancialError.convergenceFailure` if calculation
///   does not converge within `maxIterations`.
///   `FinancialError.invalidInput` if cash flows array is empty.
```
Now I know exactly what to implement.
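Those Throws lines imply a concrete error type. A minimal sketch of what they suggest (the case names come from the documentation above; the enum shape itself is an assumption):

```swift
enum FinancialError: Error, Equatable {
    /// Iterative solver exceeded `maxIterations` without converging.
    case convergenceFailure
    /// Input failed validation (e.g., empty cash flows array).
    case invalidInput
}
```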
The guidelines require runnable examples:
```swift
/// ## Usage Example
///
///     let cashFlows = [-1000.0, 300.0, 400.0, 500.0]
///     let irr = try calculateIRR(cashFlows: cashFlows)
///     print(irr.formatted(.percent)) // Output: 12.5%
///
```
Rule: Every example must run successfully in a playground.
We manually verified all of the documented examples to make sure we had correct values and an ergonomic approach for users.
This caught incorrect expected values, awkward call sites, and missing error declarations (`throws`).

Week 15, adding async versions of functions. The template ensures consistent documentation:
```swift
/// [Async version follows same structure as sync version]
/// - Same brief summary
/// - Same parameter docs
/// - Added: Concurrency section
/// - Same usage examples (with await)
```
Without guidelines: 15 different documentation styles for 15 async functions.
With guidelines: Perfect consistency.
The Problem It Solves: “How should I test this function?”
```markdown
# Test-Driven Development Standards

## Test Structure (Swift Testing)
- Use `@Test` attribute with descriptive names
- Use `@Suite` to group related tests
- Use `#expect` for assertions
- Use parameterized tests for multiple scenarios

## Test Organization
Tests mirror source structure:

    Tests/BusinessMathTests/
    ├── Time Series Tests/
    │   ├── PeriodTests.swift
    │   └── TVM Tests/
    │       └── NPVTests.swift

## RED-GREEN-REFACTOR Cycle
1. RED: Write failing test
2. GREEN: Minimal implementation to pass
3. REFACTOR: Improve code quality (tests still pass)

## Deterministic Testing for Random Functions
**Always use seeded random number generators**
```

```swift
@Test("Monte Carlo with seed is deterministic")
func testDeterministic() {
    let seed: UInt64 = 12345
    let result1 = runSimulation(trials: 10000, seed: seed)
    let result2 = runSimulation(trials: 10000, seed: seed)
    #expect(result1 == result2) // Must be identical
}
```
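Swift's standard library offers no seedable generator (`SystemRandomNumberGenerator` cannot be seeded), so this pattern needs a small deterministic one. A sketch using the SplitMix64 algorithm (a stand-in; BusinessMath's actual generator may differ):

```swift
// SplitMix64: tiny, fast, and fully determined by its seed.
struct SeededGenerator: RandomNumberGenerator {
    private var state: UInt64
    init(seed: UInt64) { self.state = seed }
    mutating func next() -> UInt64 {
        state &+= 0x9E3779B97F4A7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
        z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
        return z ^ (z >> 31)
    }
}

// Same seed, same sequence — the property the deterministic test relies on.
var g1 = SeededGenerator(seed: 12345)
var g2 = SeededGenerator(seed: 12345)
print(g1.next() == g2.next()) // true
```

Any API that accepts a `using:` generator parameter (e.g. `Double.random(in:using:)`) becomes reproducible with it.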
Week 6, implementing Monte Carlo simulations. First test:
```swift
@Test("Monte Carlo converges to expected value")
func testConvergence() {
    let result = runSimulation(trials: 10000)
    #expect(abs(result.mean - 100.0) < 1.0)
}
```
Problem: Flaky test. Sometimes passed, sometimes failed (randomness).
After establishing the rule:
```swift
@Test("Monte Carlo with seed converges to expected value")
func testConvergence() {
    let seed: UInt64 = 12345
    let result = runSimulation(trials: 10000, seed: seed)
    #expect(abs(result.mean - 100.023) < 0.001) // Exact value
}
```
Result: 100% reliable tests. CI never flakes.
The RED-GREEN-REFACTOR rule means tests are written before code:
```swift
// STEP 1: Write test (RED)
@Test("IRR calculates correctly")
func testIRR() throws {
    let cashFlows = [-1000.0, 300.0, 400.0, 500.0]
    let result = try calculateIRR(cashFlows: cashFlows)
    #expect(abs(result - 0.125) < 0.001) // 12.5%
}
// ❌ Test fails: calculateIRR doesn't exist yet

// STEP 2: Implement function (GREEN)
public func calculateIRR<T: BinaryFloatingPoint>(cashFlows: [T]) throws -> T {
    // ... implementation ...
}
// ✅ Test passes

// STEP 3: Refactor (tests still pass)
// Extract validation logic, improve performance, etc.
// ✅ Tests still pass after refactoring
```
The test specifies behavior before implementation exists.
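For a first GREEN step, any root-finder over NPV will do. A bisection sketch (deliberately simpler than the Newton-style iteration a production library would likely use; the error enum here is a local stand-in):

```swift
import Foundation

enum IRRError: Error { case invalidInput, convergenceFailure }

/// Finds the rate r where NPV(r) crosses zero, by bisection over [lower, upper].
func calculateIRR(cashFlows: [Double],
                  lower: Double = -0.99,
                  upper: Double = 10.0,
                  tolerance: Double = 1e-9,
                  maxIterations: Int = 200) throws -> Double {
    guard !cashFlows.isEmpty else { throw IRRError.invalidInput }
    func npv(_ r: Double) -> Double {
        cashFlows.enumerated().reduce(0) { $0 + $1.element / pow(1 + r, Double($1.offset)) }
    }
    var lo = lower, hi = upper
    // Bisection requires a sign change across the bracket.
    guard npv(lo) * npv(hi) < 0 else { throw IRRError.convergenceFailure }
    for _ in 0..<maxIterations {
        let mid = (lo + hi) / 2
        let v = npv(mid)
        if abs(v) < tolerance { return mid }
        if npv(lo) * v < 0 { hi = mid } else { lo = mid }
    }
    throw IRRError.convergenceFailure
}
```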
Instead of:
```swift
@Test("NPV at 5%") func npv5() { /* ... */ }
@Test("NPV at 10%") func npv10() { /* ... */ }
@Test("NPV at 15%") func npv15() { /* ... */ }
```
Use parameterized tests:
```swift
@Test("NPV at multiple discount rates",
      arguments: [
          (rate: 0.05, expected: 297.59),
          (rate: 0.10, expected: 146.87),
          (rate: 0.15, expected: 20.42)
      ])
func multipleRates(rate: Double, expected: Double) {
    let cashFlows = [-1000.0, 300.0, 300.0, 300.0, 300.0]
    let result = npv(discountRate: rate, cashFlows: cashFlows)
    #expect(abs(result - expected) < 0.01)
}
```
Result: 3 test cases, 10 lines of code instead of 30.
These three documents form a complete system:
```
┌─────────────────────────────────────────────┐
│               MASTER_PLAN.md                │
│            "What to build next"             │
└─────────────────────┬───────────────────────┘
                      │
         ┌────────────┴────────────┐
         │                         │
    ┌────▼──────┐         ┌────────▼───────┐
    │  CODING   │         │  TEST_DRIVEN   │
    │  RULES    │◄────────┤  DEVELOPMENT   │
    └────┬──────┘         └────────┬───────┘
         │                         │
    ┌────▼─────────────────────────▼───────┐
    │          DOCC_GUIDELINES.md          │
    │        "How to document it"          │
    └──────────────────────────────────────┘
```
Master Plan: “Implement Statistical Distributions (Topic 2)”
Test-Driven Development: “Write tests for normalCDF first, then implement”
Coding Rules: “Use `BinaryFloatingPoint` generics, guard clauses, and the formatted() API”
DocC Guidelines: “Document with formula, example, Excel equivalent, and See Also links”
Result: Consistent, high-quality implementation on the first try.
Each document is 200-500 lines—scannable in 60 seconds.
Anti-pattern: 50-page “Software Development Manual” that nobody reads.
Better: “Check CODING_RULES.md section 3 for guard clause patterns.”
Week 2: CODING_RULES.md has 5 rules.
Week 10: CODING_RULES.md has 15 rules.
Week 20: CODING_RULES.md has 25 rules.
As we discovered patterns that worked, we documented them. As we hit issues, we added rules to prevent recurrence.
Unwritten rule: “We prefer functional patterns.” The result: AI reached for `reduce` even when a loop was clearer.

Written rule: “Prefer functional patterns (reduce, map) where readable. Use loops when clarity demands it.”

Lesson: Make implicit standards explicit.
Week 15, reviewing code: without standards, reviews kept relitigating the same style choices; with standards, reviews focused on logic and correctness.
The master plan answers “what to build.” The standards documents answer “how to build it consistently.” Without standards, every session renegotiates the same decisions; with standards, each decision is made once, written down, and reused.
Key Takeaway: Create quick-reference standards documents. Start with 5-10 rules. Evolve as you discover what matters.
For your next project:
Don’t try to write comprehensive standards on day 1. Start with the five to ten rules you already know matter.

When you decide something important, write it down immediately, along with the rationale.

Create copy-paste templates for the artifacts you produce repeatedly: documentation comments, test suites, rule entries.
When working with AI:
“Read CODING_RULES.md. Implement calculateXIRR following these standards.”
Not:
“Implement calculateXIRR. Oh, and use generics. And guard clauses. And formatted(). And…”
Made a mistake this session? Add a rule to prevent it next time.
Example: Week 5, forgot to handle empty array in mean() function. Added rule: “Always validate array input with guard.”
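That rule, applied to the function that prompted it (a sketch; the real `mean` signature in BusinessMath may differ):

```swift
func mean<T: BinaryFloatingPoint>(_ values: [T]) -> T {
    // Rule: always validate array input with guard; empty input gets a sensible default.
    guard !values.isEmpty else { return T(0) }
    return values.reduce(T(0), +) / T(values.count)
}

print(mean([1.0, 2.0, 3.0])) // 2.0
print(mean([Double]()))      // 0.0
```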
### CODING_RULES.md Template

````markdown
# Coding Rules for [Project Name]

**Updated**: [Date]

## MUST (Non-Negotiable)
1. [Critical rule with rationale]

   ```swift
   // Example
   ```
````
### DOCC_GUIDELINES.md Template
```markdown
# Documentation Guidelines
## Required Sections
1. Brief summary
2. Detailed explanation
3. Parameters/Returns/Throws
4. Usage example
5. See Also
## Template
///
/// [Brief one-line summary]
///
/// [Detailed explanation]
///
/// - Parameters:
/// - param: [Description]
/// - Returns: [Description]
///
/// ## Usage Example
/// ```swift
/// [Runnable code]
/// ```
```

### TEST_DRIVEN_DEVELOPMENT.md Template

````markdown
# Testing Standards

## Test Structure

```swift
@Suite("[Topic] Tests")
struct TopicTests {
    @Test("[What this tests]")
    func descriptiveName() {
        // Arrange
        // Act
        // Assert with #expect
    }
}
```
````

---
## See It In Action
BusinessMath's standards documents:
- **CODING_RULES.md**: 25 rules developed over 20 weeks
- **DOCC_GUIDELINES.md**: Complete documentation template with 9 required sections
- **TEST_DRIVEN_DEVELOPMENT.md**: Testing patterns for deterministic behavior
**Results**:
- 200+ functions with consistent style
- 100% documentation coverage
- 250+ tests with 0 flaky tests
- Code reviews focus on logic, not style
---
## Discussion
**Questions to consider**:
1. How detailed should your standards be?
2. When do you add a new rule vs. accepting variation?
3. How do you balance flexibility with consistency?
**Share your experience**: Do you maintain coding standards documents? What works for your team?
---
**Series Progress**:
- Week: 3/12
- Posts Published: 10.5/~48
- Methodology Posts: 4/12
- Practices Covered: Test-First, Documentation as Design, Master Planning, **Standards Documents**
- Standards Established: Coding Rules, DocC Guidelines, Testing Patterns
---
**Related Posts**:
- **Previous**: [The Master Plan: Organizing Complexity](#) - How to maintain project context
- **Next**: [Case Study #2: Capital Equipment Decision](#) - Standards documents in action
- **See Also**: [Building with Claude: A Reflection](#) - Full methodology overview
Tagged with: ai-collaboration, coding-standards, documentation, testing, development-journey