Decision Engines

Decision infrastructure at scale — without building an internal team.

High-end decision engine work — TCO models, scenario analysis, automated data pipelines, and evaluation frameworks that make complex decisions repeatable. For organizations that need production-grade decision infrastructure without building an internal team.

See what I build

What I build

Production-grade systems that make complex decisions repeatable and auditable.

  • TCO and techno-economic analysis engines
  • Scenario modeling and option comparison frameworks
  • Data validation pipelines and quality automation (105+ tests, 99.7% accuracy)
  • Regulatory compliance and EPR modeling (fee schedules, map overlays)
  • Three-layer automation (Trigger→Logic→Data) and service architecture (Product, Pricing, Sync)
  • Sheets-first data flows, bidirectional sync, portable database patterns
  • SOP frameworks and information hierarchy for scaling operations

Jump to a type:

TCO & techno-economic engines

Models that validate business cases in policy and commercial strategy. Built for enterprise clients; adaptable to your domain.

In plain terms: You get a single place that answers: "What does this really cost?" and "How does Option A compare to Option B over time?" — so you can choose and justify with numbers, not gut feel.

What it is & when to use it

A TCO (total cost of ownership) or TEA (techno-economic analysis) engine turns all the pieces — upfront investment, ongoing costs, logistics, fees, regulations — into one comparable picture. Same assumptions, same method, every time. Used for procurement ("which supplier?"), site selection ("which location?"), product design ("which material?"), and policy or advocacy ("what's the real cost of compliance?").

What you get

  • A structured cost model (CapEx, OpEx, logistics, fees) with every assumption written down and easy to change
  • Sensitivity and break-even views so you see which inputs actually move the outcome
  • Hooks to real data where it exists: logistics APIs, EPR fee schedules, manufacturing or ERP outputs
  • Auditable runs: same inputs → same outputs, so you can reproduce and explain any number
  • Outputs you can use in the room: spreadsheets, reports, or an API for dashboards and tools

Under the hood

Built as "sheets-first" when stakeholders need to touch the assumptions; we add Python + dbt when refresh frequency, complexity, or integrations demand it. DuckDB (or Snowflake/BigQuery at scale) keeps query logic in SQL with versioned transforms. Streamlit or a thin API layer exposes interactive what-ifs without turning the model into a black box. Tradeoff: maximum transparency and tweakability vs. a single "one-click" report — we bias toward transparency so commercial and policy decisions stay defensible.
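As an illustrative sketch (all figures, field names, and options below are hypothetical, not client data), the core of such an engine is just a documented set of assumptions plus a deterministic cost function — which is what makes runs auditable and sensitivity checks cheap:

```python
from dataclasses import dataclass, replace

@dataclass
class CostAssumptions:
    """Every assumption is a named field: written down, easy to change, easy to audit."""
    capex: float              # upfront investment
    opex_per_year: float      # recurring operating cost
    logistics_per_unit: float
    units_per_year: int
    epr_fee_per_unit: float   # regulatory fee (e.g. from an EPR schedule)
    horizon_years: int = 5
    discount_rate: float = 0.08

def total_cost_of_ownership(a: CostAssumptions) -> float:
    """Discounted total cost over the horizon: same inputs -> same output."""
    yearly = (a.opex_per_year
              + a.units_per_year * (a.logistics_per_unit + a.epr_fee_per_unit))
    discounted = sum(yearly / (1 + a.discount_rate) ** t
                     for t in range(1, a.horizon_years + 1))
    return a.capex + discounted

option_a = CostAssumptions(capex=500_000, opex_per_year=80_000,
                           logistics_per_unit=0.40, units_per_year=1_000_000,
                           epr_fee_per_unit=0.05)
option_b = CostAssumptions(capex=200_000, opex_per_year=120_000,
                           logistics_per_unit=0.55, units_per_year=1_000_000,
                           epr_fee_per_unit=0.05)

print(f"Option A TCO: {total_cost_of_ownership(option_a):,.0f}")
print(f"Option B TCO: {total_cost_of_ownership(option_b):,.0f}")

# Sensitivity: bump one input and see how much the outcome moves.
bumped = replace(option_a, logistics_per_unit=option_a.logistics_per_unit * 1.10)
print(f"Option A TCO (+10% logistics): {total_cost_of_ownership(bumped):,.0f}")
```

In a real engagement the assumptions live in a spreadsheet or versioned table and the function in SQL or Python, but the principle is the same: assumptions are data, the method is code, and any number can be reproduced.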

Tech & scale

Python/Streamlit, dbt, DuckDB; 75+ SKUs and multi-facility scenarios in production.

Scenario & option comparison

Frameworks that make complex tradeoffs repeatable — same inputs, same criteria, auditable outputs.

In plain terms: Instead of "we'll decide in a meeting," you get a clear process: same options, same rules, same scoring — so everyone sees how you got from inputs to a recommendation.

What it is & when to use it

Scenario and option-comparison engines make multi-criteria decisions repeatable. You define the options, the criteria (and optionally their weights), and the scoring rules. Every run uses the same logic, so you get comparable rankings and can run "what if we change the weights?" or "what if we add an option?" Used for vendor selection, technology or build-vs-buy choices, site or product prioritization, and any decision where stakeholders need to see how options stack up under the same rules.

What you get

  • Documented criteria and optional weights, versioned so you can see what changed and when
  • Structured option profiles so every alternative is scored on the same dimensions (no hidden factors)
  • Scores or rankings with full traceability: which inputs and rules produced this result
  • Scenario variants (e.g. base / optimistic / conservative) so you can stress-test and present ranges
  • Exportable comparison tables and one-pagers for leadership, boards, or audits

Under the hood

The value is in the structure, not the tool. We often start in spreadsheets (Google Sheets or Excel) with clear tabs: options, criteria, weights, scores. When the number of options or the frequency of runs grows, we move logic into a small app or script so the methodology is codified and versioned. We avoid "magic" scoring — every point is traceable to a criterion and a rule. That makes the output defensible and easy to explain, which matters more than squeezing out a few extra percentage points of "optimality."
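A minimal sketch of that structure, with hypothetical vendors, criteria, and weights (the real framework would be versioned and agreed with stakeholders first):

```python
# Criteria, weights, and option profiles are all explicit and versionable.
CRITERIA_WEIGHTS = {"cost": 0.3, "reliability": 0.5, "support": 0.2}

OPTIONS = {
    # Every option is scored on the same dimensions (0-10): no hidden factors.
    "Vendor A": {"cost": 7, "reliability": 9, "support": 6},
    "Vendor B": {"cost": 9, "reliability": 6, "support": 8},
}

def score(option_name: str, weights: dict[str, float]) -> float:
    """Weighted sum: every point traces back to one criterion and one weight."""
    profile = OPTIONS[option_name]
    return sum(weights[c] * profile[c] for c in weights)

ranking = sorted(OPTIONS, key=lambda o: score(o, CRITERIA_WEIGHTS), reverse=True)
for name in ranking:
    print(f"{name}: {score(name, CRITERIA_WEIGHTS):.2f}")

# Scenario variant: "what if we re-weight toward cost?"
cost_heavy = {"cost": 0.6, "reliability": 0.3, "support": 0.1}
print("Cost-heavy winner:", max(OPTIONS, key=lambda o: score(o, cost_heavy)))
```

Changing the weights is a scenario, not an argument: you can show leadership exactly which weighting flips the recommendation and why.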

Tech & scale

Spreadsheets, low-code, or custom apps; design favors clarity and auditability over black-box optimization.

Data pipelines & validation

Automated quality and consistency so decisions run on data you can trust. 105+ quality tests, regulatory and EPR-ready.

In plain terms: Your decisions are only as good as the numbers going in. This is the plumbing that checks, cleans, and documents the data so you can trust it and explain it.

What it is & when to use it

A decision engine that runs on bad or inconsistent data will produce bad or inconsistent decisions. Data pipeline and validation work makes sure the numbers feeding your TCO, scenario, or evaluation logic are correct, consistent, and fit for purpose. That means automated checks (schema, ranges, relationships, business rules), rules to normalize messy real-world data (e.g. material names, units), and alignment with regulation (EPR fee schedules, material codes, claim rules) where it applies. You get a single "source of truth" with clear lineage: what was changed, by which rule, and why.

What you get

  • Automated data quality tests (schema, ranges, referential integrity, business rules) with pass/fail and alerts so bad data is caught before it reaches decisions
  • Domain normalization (material names, units, categories) so downstream logic sees consistent values, not 10 spellings of the same thing
  • Regulatory readiness: EPR fee schedules, map overlays, claim validation so compliance is built into the pipeline, not bolted on later
  • Documented lineage and transforms (e.g. dbt models) so every change is traceable and reviewable
  • Metrics you can report: e.g. 99.7% accuracy, 105+ tests — so data health is a number, not a hope

Under the hood

We use dbt for transform logic and lineage: all transforms in SQL (or Python where needed), versioned and testable. Tests are first-class — schema, uniqueness, not-null, custom business rules — and run on every change. DuckDB fits fast, portable analytics and moderate scale; we step up to Snowflake or BigQuery when volume or concurrency demands it. Normalization is often a mix of lookup tables and rule-based steps so domain experts can validate and tune. The goal is "data you can defend": same inputs and code always produce the same outputs, and you can point to the exact test or rule that validated a number.
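A toy illustration of the pattern (in production this lives in dbt tests and maintained lookup tables; the rows, rules, and thresholds here are invented):

```python
# Hypothetical rows feeding a decision model; field names are illustrative.
ROWS = [
    {"sku": "SKU-001", "material": "PET",  "weight_g": 12.5},
    {"sku": "SKU-002", "material": "pet",  "weight_g": 11.0},  # normalization target
    {"sku": "SKU-003", "material": "HDPE", "weight_g": -3.0},  # range violation
]

KNOWN_MATERIALS = {"PET", "HDPE", "PP"}          # lookup table experts can maintain
NORMALIZE = {"pet": "PET", "hdpe": "HDPE"}       # rule-based normalization step

def normalize(row):
    row = dict(row)
    row["material"] = NORMALIZE.get(row["material"], row["material"])
    return row

# Each test is a named rule, so a failure points at exactly one check.
TESTS = {
    "weight_in_range": lambda r: 0 < r["weight_g"] < 10_000,
    "material_known":  lambda r: r["material"] in KNOWN_MATERIALS,
    "sku_not_null":    lambda r: bool(r.get("sku")),
}

def run_tests(rows):
    failures = []
    for row in map(normalize, rows):
        for name, check in TESTS.items():
            if not check(row):
                failures.append((row["sku"], name))
    return failures

print(run_tests(ROWS))  # [('SKU-003', 'weight_in_range')]
```

The dbt versions of these checks (schema, not-null, accepted values, custom rules) run on every change, so "99.7% accuracy" is a measured output of the test suite, not an estimate.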

Tech & scale

dbt, DuckDB / Snowflake / BigQuery, Python; 105+ tests, 99.7% accuracy in production.

Service architecture & automation

Three-layer (Trigger→Logic→Data), BFF + microservices, sheets-first flows, bidirectional sync. Vendor-agnostic, cloud-agnostic.

In plain terms: Your "engine" has to plug into CRMs, spreadsheets, and APIs without becoming a tangled mess. This is the design that keeps triggers, business logic, and data in clear layers so you can change one without breaking the rest.

What it is & when to use it

Decision logic rarely lives alone. It sits between CRMs, ERPs, spreadsheets, and external APIs. Service architecture and automation define how something starts (a schedule, a webhook, a button), what runs (pricing, capacity, compliance), and where it reads and writes data — in a way that stays maintainable and scalable. You get a repeatable "engine" that fits into existing tools and can grow from pilot to production without a full rewrite. Used when you need consistent calculations, sync between systems, or automation that multiple teams and systems can rely on.

What you get

  • Three-layer design: Trigger (what starts a run) → Logic (business rules, calculations) → Data (sources and sinks), so you can swap or extend any layer without rewriting the others
  • Clear service boundaries (e.g. Product, Pricing, Sync) so each piece has one job and can evolve on its own
  • Sheets-first or hybrid flows: spreadsheets as source of truth where it makes sense, with sync and validation so systems and humans stay aligned
  • Bidirectional sync (e.g. CRM ↔ internal systems) with explicit conflict handling and ownership rules
  • Vendor- and cloud-agnostic patterns so you're not locked into a single provider or stack

Under the hood

Trigger→Logic→Data keeps "when it runs," "what it does," and "where data lives" separate. That makes it easier to add new triggers (e.g. a new API or UI) or new data sources without touching core logic. We use a BFF or small microservices (Product for nesting/capacity, Pricing for tiers, Sync for CRM/folders) so each domain has a clear owner and deployment path. Sheets-first means we treat Google Sheets or Excel as the system of record when that's where the business lives; we add sync, validation, and versioning so downstream services get clean, timely data. Proven on logistics at 1K–100M unit scale, geospatial port detection, EPR map overlays, and CRM↔internal bidirectional sync with conflict resolution.
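A minimal sketch of the three-layer separation, with a hypothetical in-memory store standing in for Sheets or a database (names and numbers are illustrative only):

```python
# --- Data layer: where values are read and written. Swap this class for a
#     Sheets client or a database adapter without touching the other layers.
class SheetStore:
    def __init__(self):
        self._rows = {"SKU-001": {"cost": 4.20}}
    def read(self, sku):
        return self._rows[sku]
    def write(self, sku, row):
        self._rows[sku] = row

# --- Logic layer: pure business rules; knows nothing about triggers or storage.
def reprice(row, margin=0.25):
    return {**row, "price": round(row["cost"] * (1 + margin), 2)}

# --- Trigger layer: what starts a run. Here a direct call; in production this
#     could be a cron schedule, a webhook, or a button in a sheet.
def on_trigger(store, sku):
    store.write(sku, reprice(store.read(sku)))

store = SheetStore()
on_trigger(store, "SKU-001")
print(store.read("SKU-001"))  # cost plus margin, computed by the logic layer
```

Because each layer has one job, adding a second trigger (say, a nightly schedule) or a second data sink (say, a CRM field) is a new adapter, not a rewrite of the pricing rules.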

Tech & scale

1K–100M unit scale; two-legged logistics, EPR map overlays, CRM integration in production.

Evaluation frameworks

From tacit judgment to explicit criteria. SOP frameworks, information hierarchy. Decision infrastructure that scales.

In plain terms: Right now "how we decide" lives in someone's head. You get it written down: same criteria, same steps, so the next person (or the next year) can decide the same way.

What it is & when to use it

Many important decisions still run on experience and judgment — which works until you need consistency, onboarding, or an audit trail. Evaluation frameworks turn that tacit knowledge into explicit criteria, steps, and an information hierarchy: what's required, what's optional, what's for context. The same kind of decision can then be made the same way by different people and over time. Used for vendor evaluation, go/no-go gates, prioritization, and scaling operations without losing quality or losing the "why" behind decisions.

What you get

  • Explicit criteria and (where it helps) weights, documented and agreed with stakeholders so there's no ambiguity about what "good" means
  • SOP-style steps: "how we decide" written down so it's teachable and repeatable
  • Information hierarchy: required vs. optional vs. context-only inputs so people know what to gather and what to skip
  • Templates and checklists so new team members can run the same process without reinventing it
  • Lightweight governance: who owns the criteria, how often they're reviewed, and how exceptions are handled

Under the hood

We start with the decision, not the tool. Criteria and steps are captured in docs and spreadsheets first; that keeps stakeholders close and makes it easy to iterate. Once the framework is stable and used repeatedly, we codify it: simple apps (e.g. Streamlit, internal tools) or workflow steps that enforce the same criteria and produce the same kind of output. The technical layer is there to enforce consistency and reduce manual work — not to replace judgment where it matters. Versioning and change control apply to the criteria and weights so you can see how "how we decide" evolved over time.
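A toy sketch of what a codified go/no-go gate with an information hierarchy can look like (criteria, weights, and thresholds here are invented for illustration):

```python
# Information hierarchy: required vs. optional inputs, written down.
REQUIRED = {"budget_fit", "security_review"}
OPTIONAL = {"reference_calls"}

CRITERIA = {
    # criterion: (weight, pass threshold) -- documented and versioned with the SOP
    "budget_fit":      (0.5, 6),
    "security_review": (0.3, 7),
    "reference_calls": (0.2, 5),
}

def evaluate(scores: dict[str, int]) -> str:
    """Same criteria, same steps: any reviewer reaches the same verdict."""
    missing = REQUIRED - scores.keys()
    if missing:
        return f"incomplete: gather {sorted(missing)} before deciding"
    # Hard gates first: any required criterion below threshold is a no-go.
    for name in REQUIRED:
        _, threshold = CRITERIA[name]
        if scores[name] < threshold:
            return f"no-go: {name} below threshold"
    return "go"

print(evaluate({"budget_fit": 8}))                        # incomplete
print(evaluate({"budget_fit": 8, "security_review": 5}))  # no-go
print(evaluate({"budget_fit": 8, "security_review": 9}))  # go
```

The point is not the code: it is that the criteria, thresholds, and required inputs are explicit artifacts that can be reviewed, versioned, and handed to the next person.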

Tech & scale

Often starts in docs and spreadsheets; codified into apps or workflows once the framework is stable.

Is a decision engine right for you?

Book a short discovery call. We'll map your decision problem and whether a built system is the answer.

No commitment. We define the problem and options together.