The Supporting Cast: Coding Rules, DocC Guidelines, and Testing Standards

BusinessMath Development Journey


Development Journey Series


The Context

In the previous post, we discussed how the Master Plan serves as the project’s memory across sessions. But the master plan only answers “what to build next”—it doesn’t answer how to build it consistently.

After a few weeks of BusinessMath development, I had a different problem: pattern drift.

  • Week 1: Functions used guard statements for validation
  • Week 3: Some functions started using early returns with if !condition
  • Week 5: Parameter naming became inconsistent (rate vs. r vs. discountRate)
  • Week 7: DocC comments had three different documentation styles

Each individual choice made sense in isolation. But across 200+ tests and 11 topic areas, the inconsistency was creating friction:

  • “Wait, did we decide to use external parameter labels?”
  • “Should this throw an error or return zero for empty input?”
  • “What’s the format for DocC mathematical formulas?”

Without explicit standards, every decision becomes a mini research project. AI doesn’t remember past decisions, so it defaults to whatever seems reasonable right now.


The Solution

Create living standards documents that serve as the project’s consistency engine.

We developed three core documents:

  1. CODING_RULES.md - How to write code
  2. DOCC_GUIDELINES.md - How to document APIs
  3. TEST_DRIVEN_DEVELOPMENT.md - How to test code

These aren’t heavyweight “process manuals”—they’re quick-reference guides that answer common questions in seconds.


Document 1: Coding Rules

The Problem It Solves: “How should I structure this code?”

What It Contains

# Coding Rules for BusinessMath Library

## 1. Generic Programming
- Use generic constraints (e.g. `<T: BinaryFloatingPoint>`) for all numeric functions
- Enables flexibility across Float, Double, Float16, etc.

## 2. Function Signatures
- Public API: All user-facing functions marked `public`
- Descriptive parameter labels
- Default parameters for common cases

## 3. Guard Clauses & Validation
- Use `guard` for input validation
- Return sensible defaults for empty inputs (e.g., `T(0)`)
- Throw errors for truly invalid cases

## 4. Formatting Rules
- NEVER use String(format:) for number formatting
- ALWAYS use Swift's formatted() API
- Respect user locales automatically
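
A minimal sketch of how rules 1-3 combine, using a hypothetical `mean()` helper (not necessarily BusinessMath's actual signature):

```swift
// Rule 1: generic over BinaryFloatingPoint (works for Float, Double, Float16)
// Rule 3: guard-validate input; return a sensible default for the empty case
func mean<T: BinaryFloatingPoint>(_ values: [T]) -> T {
    guard !values.isEmpty else { return T(0) }
    return values.reduce(T(0), +) / T(values.count)
}
```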

Real Example: The String Formatting Rule

Early in the project, we used C-style formatting:

// Week 2 code
let output = String(format: "%.2f", value)

This created problems:

  • Doesn’t respect user locales
  • Breaks with non-decimal numeric types
  • Error-prone format strings

We established a rule:

// RULE: Never use String(format:)
// ALWAYS use formatted() API

// Correct approach
let output = value.formatted(.number.precision(.fractionLength(2)))

Before the rule: 30 minutes per session debating formatting approaches.

After the rule: 0 minutes. “Check CODING_RULES.md. Use formatted().”


Why This Worked

1. AI Can Follow Rules It Can Read

When starting a session:

“Read CODING_RULES.md. Implement the IRR function following these standards.”

AI responds:

“Using a generic constraint, guard statements for validation, and Swift’s formatted() API as specified in CODING_RULES.md.”

Result: Consistent code on first try.

2. Rules Prevent Regression

Week 10, implementing a new feature:

// AI's first attempt
let result = String(format: "%.4f", value)

My review:

“This violates CODING_RULES.md section 4. Use formatted() API.”

AI immediately corrects:

let result = value.formatted(.number.precision(.fractionLength(4)))

Without the documented rule, I’d have to re-explain why every single time.

3. Rules Capture Hard-Won Lessons

The string formatting rule exists because we spent 2 hours debugging locale issues in Week 2. The rule captures that lesson so it’s never repeated.


Document 2: DocC Guidelines

The Problem It Solves: “How should I document this API?”

What It Contains

# DocC Documentation Guidelines

## Required for All Public APIs

1. Brief one-line summary
2. Detailed explanation including:
   - What problem it solves
   - How it works (if non-obvious)
   - When to use it
3. Parameter documentation
4. Return value documentation
5. Throws documentation (if applicable)
6. Usage example
7. Mathematical formula (for math functions)
8. Excel equivalent (if applicable)
9. See Also links

## Documentation Template

///
/// Brief one-line summary.
///
/// Detailed explanation...
///
/// - Parameters:
///   - param1: Description with valid ranges
/// - Returns: Description of return value and guarantees
/// - Throws: Specific errors and when they occur
///
/// ## Usage Example
/// 
/// let result = function(param: value)
/// // Output: expected result
/// 
///
/// ## Mathematical Formula
/// [LaTeX or ASCII math notation]
///
/// - SeeAlso:
///   - ``RelatedType``
///   - ``relatedFunction(_:)``

Real Example: The Formula Format

Week 4, documenting the NPV function. First attempt:

/// NPV = sum of (cash flow / (1 + rate)^period)

Problems:

  • Unclear notation
  • No variable definitions
  • Doesn’t render well in DocC

After establishing guidelines:

/// ## Mathematical Formula
/// NPV is calculated as:
/// 
/// NPV = Σ (CFₜ / (1 + r)ᵗ)
/// 
/// where:
/// - CFₜ = cash flow at time t
/// - r = discount rate
/// - t = time period

Result: Consistent, readable mathematical notation across all 200+ documented functions.
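
The documented formula translates almost line-for-line into code. A sketch using a hypothetical `npv` helper (BusinessMath's actual API may differ):

```swift
/// NPV = Σ (CFₜ / (1 + r)ᵗ), where CF₀ at t = 0 is undiscounted.
func npv(discountRate r: Double, cashFlows: [Double]) -> Double {
    guard !cashFlows.isEmpty else { return 0 }
    var total = 0.0
    var discountFactor = 1.0  // (1 + r)^t, starting at t = 0
    for cashFlow in cashFlows {
        total += cashFlow / discountFactor
        discountFactor *= 1 + r
    }
    return total
}
```

At a 0% discount rate the result is simply the sum of the cash flows, which makes the helper easy to sanity-check.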


Why This Worked

1. Documentation as Design Tool

Writing DocC comments before implementation forced clarification:

Question: “What errors can calculateIRR throw?”

DocC forces answer:

/// - Throws: `FinancialError.convergenceFailure` if calculation
///   does not converge within `maxIterations`.
///   `FinancialError.invalidInput` if cash flows array is empty.

Now I know exactly what to implement.
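
The error cases named in the comment map directly onto an error type. A hypothetical sketch of what that enum might look like:

```swift
// Hypothetical error type implied by the DocC comment above
enum FinancialError: Error, Equatable {
    case convergenceFailure(iterations: Int)  // solver failed to converge
    case invalidInput(String)                 // e.g. empty cash flows array
}
```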

2. Examples Must Compile

The guidelines require runnable examples:

/// ## Usage Example
/// 
/// let cashFlows = [-1000.0, 300.0, 400.0, 500.0]
/// let irr = try calculateIRR(cashFlows: cashFlows)
/// print(irr.formatted(.percent))  // Output: 12.5%
/// 

Rule: Every example must run successfully in a playground.

We manually verified all of the documented examples to make sure we had correct values and an ergonomic approach for users.

This caught:

  • API design issues (awkward to use → redesign)
  • Missing error handling (forgot to mark throws)
  • Incorrect output claims (example output didn’t match reality)

3. Prevents Documentation Drift

Week 15, adding async versions of functions. The template ensures consistent documentation:

/// [Async version follows same structure as sync version]
/// - Same brief summary
/// - Same parameter docs
/// - Added: Concurrency section
/// - Same usage examples (with await)

Without guidelines: 15 different documentation styles for 15 async functions.

With guidelines: Perfect consistency.


Document 3: Test-Driven Development Standards

The Problem It Solves: “How should I test this function?”

What It Contains

# Test-Driven Development Standards

## Test Structure (Swift Testing)

- Use `@Test` attribute with descriptive names
- Use `@Suite` to group related tests
- Use `#expect` for assertions
- Use parameterized tests for multiple scenarios

## Test Organization

Tests mirror source structure:

Tests/BusinessMathTests/
├── Time Series Tests/
│   ├── PeriodTests.swift
│   └── TVM Tests/
│       └── NPVTests.swift


## RED-GREEN-REFACTOR Cycle

1. RED: Write failing test
2. GREEN: Minimal implementation to pass
3. REFACTOR: Improve code quality (tests still pass)

## Deterministic Testing for Random Functions

**Always use seeded random number generators**


@Test("Monte Carlo with seed is deterministic")
func testDeterministic() {
    let seed: UInt64 = 12345
    let result1 = runSimulation(trials: 10000, seed: seed)
    let result2 = runSimulation(trials: 10000, seed: seed)
    #expect(result1 == result2)  // Must be identical
}
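
Swift's default `SystemRandomNumberGenerator` cannot be seeded, so a deterministic simulation needs a custom `RandomNumberGenerator`. One common choice is SplitMix64 (a sketch; BusinessMath's actual generator may differ):

```swift
// SplitMix64: tiny, fast, seedable PRNG suitable for reproducible tests
struct SeededGenerator: RandomNumberGenerator {
    private var state: UInt64
    init(seed: UInt64) { state = seed }
    mutating func next() -> UInt64 {
        state &+= 0x9E3779B97F4A7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
        z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
        return z ^ (z >> 31)
    }
}
```

Pass it anywhere Swift accepts a generator, e.g. `Double.random(in: 0..<1, using: &generator)`; the same seed always yields the same sequence.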

Real Example: The Deterministic Testing Rule

Week 6, implementing Monte Carlo simulations. First test:

@Test("Monte Carlo converges to expected value")
func testConvergence() {
    let result = runSimulation(trials: 10000)
    #expect(abs(result.mean - 100.0) < 1.0)
}

Problem: Flaky test. Sometimes passed, sometimes failed (randomness).

After establishing the rule:

@Test("Monte Carlo with seed converges to expected value")
func testConvergence() {
    let seed: UInt64 = 12345
    let result = runSimulation(trials: 10000, seed: seed)
    #expect(abs(result.mean - 100.023) < 0.001)  // Exact value
}

Result: 100% reliable tests. CI never flakes.


Why This Worked

1. Tests as Specifications

The RED-GREEN-REFACTOR rule means tests are written before code:

// STEP 1: Write test (RED)
@Test("IRR calculates correctly")
func testIRR() {
    let cashFlows = [-1000.0, 300.0, 400.0, 500.0]
    let result = try calculateIRR(cashFlows: cashFlows)
    #expect(abs(result - 0.125) < 0.001)  // 12.5%
}
// ❌ Test fails: calculateIRR doesn't exist yet

// STEP 2: Implement function (GREEN)
public func calculateIRR<T: BinaryFloatingPoint>(cashFlows: [T]) throws -> T {
    // ... implementation ...
}
// ✅ Test passes

// STEP 3: Refactor (tests still pass)
// Extract validation logic, improve performance, etc.
// ✅ Tests still pass after refactoring

The test specifies behavior before implementation exists.

2. Parameterized Tests Prevent Duplication

Instead of:

@Test("NPV at 5%") func npv5() { /* ... */ }
@Test("NPV at 10%") func npv10() { /* ... */ }
@Test("NPV at 15%") func npv15() { /* ... */ }

Use parameterized tests:

@Test("NPV at multiple discount rates",
      arguments: [
          (rate: 0.05, expected: 297.59),
          (rate: 0.10, expected: 146.87),
          (rate: 0.15, expected: 20.42)
      ])
func multipleRates(rate: Double, expected: Double) {
    let cashFlows = [-1000.0, 300.0, 300.0, 300.0, 300.0]
    let result = npv(discountRate: rate, cashFlows: cashFlows)
    #expect(abs(result - expected) < 0.01)
}

Result: 3 test cases, 10 lines of code instead of 30.


The Triad Working Together

These three documents form a complete system:

┌─────────────────────────────────────────────┐
│          MASTER_PLAN.md                     │
│   "What to build next"                      │
└─────────────────────┬───────────────────────┘
                      │
         ┌────────────┴────────────┐
         │                         │
    ┌────▼──────┐         ┌────────▼───────┐
    │  CODING   │         │  TEST_DRIVEN   │
    │  RULES    │◄────────┤  DEVELOPMENT   │
    └────┬──────┘         └────────┬───────┘
         │                         │
         │                         │
    ┌────▼─────────────────────────▼───────┐
    │       DOCC_GUIDELINES.md              │
    │   "How to document it"                │
    └───────────────────────────────────────┘

Master Plan: “Implement Statistical Distributions (Topic 2)”

Test-Driven Development: “Write tests for normalCDF first, then implement”

Coding Rules: “Use <T: BinaryFloatingPoint> generics, guard clauses, and the formatted() API”

DocC Guidelines: “Document with formula, example, Excel equivalent, and See Also links”

Result: Consistent, high-quality implementation on the first try.


What Worked

1. Quick Reference Beats Long Documents

Each document is 200-500 lines—scannable in 60 seconds.

Anti-pattern: 50-page “Software Development Manual” that nobody reads.

Better: “Check CODING_RULES.md section 3 for guard clause patterns.”

2. Living Documents That Evolve

  • Week 2: CODING_RULES.md has 5 rules
  • Week 10: CODING_RULES.md has 15 rules
  • Week 20: CODING_RULES.md has 25 rules

As we discovered patterns that worked, we documented them. As we hit issues, we added rules to prevent recurrence.

3. AI Follows Written Rules Reliably

Unwritten rule: “We prefer functional patterns.”

  • AI interpretation: Uses reduce even when a loop is clearer.

Written rule: “Prefer functional patterns (reduce, map) where readable. Use loops when clarity demands it.”

  • AI gets it right every time.

Lesson: Make implicit standards explicit.

4. Standards Prevent “Why Did We Do It This Way?” Debates

Week 15, reviewing code:

Without standards:

  • “Should we use String(format:) here?”
  • “I don’t remember why we decided against it…”
  • 30 minutes lost to research and re-debate

With standards:

  • “Check CODING_RULES.md—String(format:) is forbidden, use formatted().”
  • 0 minutes lost

The Insight

The master plan answers “what to build.” The standards documents answer “how to build it consistently.”

Without standards:

  • Every decision is re-litigated
  • Patterns drift across sessions
  • AI generates inconsistent code
  • Code reviews become re-teaching sessions

With standards:

  • Decisions are made once, documented, and followed
  • Consistency across 200+ functions
  • AI generates correct code on first attempt
  • Code reviews verify adherence to documented standards

Key Takeaway: Create quick-reference standards documents. Start with 5-10 rules. Evolve as you discover what matters.


How to Apply This

For your next project:

1. Start Small

Don’t try to write comprehensive standards on day 1. Start with:

  • 3 coding rules that matter most
  • 1 documentation template
  • 1 testing pattern

2. Document Decisions As You Make Them

When you decide something important:

  • Add it to the relevant document immediately
  • Include the “why” (so you don’t forget)
  • Show an example

3. Use Templates

Create copy-paste templates for:

  • Function documentation
  • Test structure
  • Common patterns

4. Reference Documents in Prompts

When working with AI:

“Read CODING_RULES.md. Implement calculateXIRR following these standards.”

Not:

“Implement calculateXIRR. Oh, and use generics. And guard clauses. And formatted(). And…”

5. Update After Mistakes

Made a mistake this session? Add a rule to prevent it next time.

Example: Week 5, forgot to handle empty array in mean() function. Added rule: “Always validate array input with guard.”


Template Starter Pack

CODING_RULES.md Template

# Coding Rules for [Project Name]

**Updated**: [Date]

## MUST (Non-Negotiable)

1. [Critical rule with rationale]
   // Example

## SHOULD (Strong Preference)

1. [Preferred pattern]
   // Example

## CONSIDER (Suggestions)

1. [Optional guideline]

DOCC_GUIDELINES.md Template

# Documentation Guidelines

## Required Sections

1. Brief summary
2. Detailed explanation
3. Parameters/Returns/Throws
4. Usage example
5. See Also

## Template

///
/// [Brief one-line summary]
///
/// [Detailed explanation]
///
/// - Parameters:
///   - param: [Description]
/// - Returns: [Description]
///
/// ## Usage Example
/// ```swift
/// [Runnable code]
/// ```

TEST_DRIVEN_DEVELOPMENT.md Template

# Testing Standards

## Test Structure

@Suite("[Topic] Tests")
struct TopicTests {
    @Test("[What this tests]")
    func descriptiveName() {
        // Arrange
        // Act
        // Assert with #expect
    }
}

## RED-GREEN-REFACTOR

  1. Write failing test (RED)
  2. Minimal implementation (GREEN)
  3. Improve quality (REFACTOR)

---

## See It In Action

BusinessMath's standards documents:
- **CODING_RULES.md**: 25 rules developed over 20 weeks
- **DOCC_GUIDELINES.md**: Complete documentation template with 9 required sections
- **TEST_DRIVEN_DEVELOPMENT.md**: Testing patterns for deterministic behavior

**Results**:
- 200+ functions with consistent style
- 100% documentation coverage
- 250+ tests with 0 flaky tests
- Code reviews focus on logic, not style

---

## Discussion

**Questions to consider**:
1. How detailed should your standards be?
2. When do you add a new rule vs. accepting variation?
3. How do you balance flexibility with consistency?

**Share your experience**: Do you maintain coding standards documents? What works for your team?

---

**Series Progress**:
- Week: 3/12
- Posts Published: 10.5/~48
- Methodology Posts: 4/12
- Practices Covered: Test-First, Documentation as Design, Master Planning, **Standards Documents**
- Standards Established: Coding Rules, DocC Guidelines, Testing Patterns

---

**Related Posts**:
- **Previous**: [The Master Plan: Organizing Complexity](#) - How to maintain project context
- **Next**: [Case Study #2: Capital Equipment Decision](#) - Standards documents in action
- **See Also**: [Building with Claude: A Reflection](#) - Full methodology overview


Tagged with: ai-collaboration, coding-standards, documentation, testing, development-journey