NuroPicks.com


2026-04-22 · NuroPicks Team · buildinpublic · clv · shap · engineering · record

CLV + SHAP Ship Day

The methodology page on nuropicks.com has described a six-stage pipeline since April 16th. As of today, four of the six stages are actually live — not aspirational, not stub data, not stage-2-of-a-GTM-deck. Live. You can click every claim.

This post is the engineering story of what shipped on 2026-04-22 and why each piece matters for the "why should I trust this track record?" question.

The problem

Two weeks ago, the /record page on nuropicks.com showed aggregate metrics that were either zero (real picks hadn't started flowing yet) or placeholders (the pipeline was in stub mode). The SHAP "why" story only rendered on one surface — NFL-draft trade-grade props. Closing-line value was hinted at but never actually computed. The weekly archive table existed in the schema, but nothing wrote to it.

That's a pretty common trajectory for AI betting tools. Ship the aggregate story first, then backfill the infrastructure over a year. We decided to close the loop in a single day instead.

What shipped (in dependency order)

1. Immutable-record foundation (migration 0017, shipped 2026-04-21)

Every picks row gets locked_at = NOW() on insert, and a Postgres trigger rejects any UPDATE or DELETE that touches a locked column. Only grading fields — result, graded_at, closing_odds — stay mutable. DELETE on a locked row throws. This is the foundation every later claim sits on: if rows can't be secretly edited after the fact, the ledger is auditable by construction.
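The real enforcement lives in a PL/pgSQL trigger, but the rule it encodes is small enough to paraphrase. Here is a sketch of that check in JavaScript — column names come from the post; everything else is illustrative, not the actual migration:

```javascript
// Columns that stay mutable after a pick is locked (grading fields only).
const MUTABLE_AFTER_LOCK = new Set(["result", "graded_at", "closing_odds"]);

// Paraphrase of the UPDATE guard: unlocked rows are freely editable;
// locked rows may only change grading fields.
function assertUpdateAllowed(row, changedColumns) {
  if (row.locked_at == null) return;
  for (const col of changedColumns) {
    if (!MUTABLE_AFTER_LOCK.has(col)) {
      throw new Error(`pick is locked; column "${col}" is immutable`);
    }
  }
}

// Paraphrase of the DELETE guard: a locked row can never be removed.
function assertDeleteAllowed(row) {
  if (row.locked_at != null) {
    throw new Error("pick is locked; DELETE rejected");
  }
}
```

The point of doing this in the database rather than application code is that no code path — not even a future bug — can bypass it.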

2. Closing-line capture (migration 0020 + service, shipped 2026-04-22)

Before today, picks.closing_odds was always NULL because nothing populated it, so the aggregate CLV number on /record stayed blank for every window.

The capture service runs every two minutes, scans for locked picks whose games have just started, matches the event in the in-process odds-feed cache by (sport_key, event_id) — both added as new columns in migration 0020 — and writes closing_odds + closing_at. The same math (decimal-at-post / decimal-at-close - 1) that /record already had can now actually return a number instead of null.
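The cache lookup is the interesting bit: the service keys the in-process odds-feed cache on the composite (sport_key, event_id) that migration 0020 added. A minimal sketch — the cache shape, field names, and return value here are assumptions, not the actual service:

```javascript
// In-process cache populated by the odds-feed poller (hypothetical shape).
const closeCache = new Map();
const cacheKey = (sportKey, eventId) => `${sportKey}:${eventId}`;

// Given a locked pick whose game just started, look up its closing price.
// Returns null on a cache miss so the caller can retry on the next tick.
function captureClose(pick) {
  const event = closeCache.get(cacheKey(pick.sport_key, pick.event_id));
  if (!event) return null;
  return { closing_odds: event.odds, closing_at: event.fetchedAt };
}
```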

We pick the first book that carries the market at close, the same selection rule the publisher uses at post time. That matters: CLV only means anything if you're comparing a post-price and a close-price from the same book.
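The math itself is one line. A sketch, assuming both prices are decimal odds from the same book (the function name is ours):

```javascript
// Closing-line value: decimal odds at post vs. decimal odds at close.
// Positive means the posted price beat the closing line.
function clv(decimalAtPost, decimalAtClose) {
  if (decimalAtClose == null) return null; // closing line not captured yet
  return decimalAtPost / decimalAtClose - 1;
}

clv(2.10, 2.00); // ≈ 0.05, i.e. posted at 5% longer odds than the close
```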

3. Generalized SHAP explainer (shipped 2026-04-22)

The second half of the day's work. We defined a canonical schema for picks.shap_top:

{
  "base_prob": 0.5238,
  "model_prob": 0.5810,
  "features": [
    { "key": "rest_diff", "label": "Rest advantage", "value": 0.028, "detail": "LAL +2 days rest" },
    { "key": "injury_report", "label": "Injury report", "value": -0.014, "detail": "BOS starter probable" },
    ...
  ]
}

One explainer module is imported by every surface that shows a pick: the Discord publisher embed, the /record Recent Edges panel, the /record/[id] permalink, the per-pick OG image, and the /api/record/[id] JSON response. It falls back to the legacy why_edge prose string when shap_top is null, so historical picks stay readable.
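The fallback contract can be sketched against the canonical schema above. This is illustrative — the function name and output format are ours, not the actual module's:

```javascript
// Render a compact one-liner from shap_top, or fall back to legacy prose.
// Field names (shap_top, features, label, value, why_edge) follow the
// canonical schema; values are probability deltas, shown as points.
function shapOneLiner(pick, topN = 2) {
  if (!pick.shap_top) return pick.why_edge ?? ""; // legacy picks stay readable
  return [...pick.shap_top.features]
    .sort((a, b) => Math.abs(b.value) - Math.abs(a.value))
    .slice(0, topN)
    .map((f) => `${f.label} ${f.value >= 0 ? "+" : ""}${(f.value * 100).toFixed(1)}pp`)
    .join(" · ");
}
```

Because every surface imports the same function, the Discord embed, the permalink, and the OG image can never disagree about why a pick was made.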

The pipeline stub was upgraded to emit the canonical shape too, so the day a real XGBoost model comes online there's zero rewrite — the stub's placeholders become real drivers with the same fields.

4. Permalink + share infrastructure (shipped 2026-04-22)

Every pick now has a stable shareable URL. nuropicks.com/record/123 renders the full receipt: game, pick, odds, result badge, stake, closing line, computed CLV %, top-4 SHAP drivers with tooltips for detail, legacy narrative block (Edge / Signal / Context / Risk), and an audit-trail footer noting the immutability trigger.

The permalink carries:

  • A dedicated per-pick OG image (/record/[id]/opengraph-image.tsx) so pasting the URL into Discord, X, or LinkedIn renders a card with the pick content, not the generic site card
  • Article + BreadcrumbList JSON-LD structured data for Google rich results
  • A JSON sibling at /api/record/[id] returning the same shape a third-party verifier would need to audit us
  • A sitemap entry (the 500 most recent per-pick URLs get exposed to crawlers)
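For a sense of what the structured-data piece looks like, here is a hypothetical shape for the BreadcrumbList JSON-LD a /record/[id] page might emit. Property names follow schema.org; the URLs follow the post; everything else is an assumption:

```javascript
// Build BreadcrumbList JSON-LD for a per-pick permalink (sketch).
function breadcrumbJsonLd(pickId) {
  return {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: [
      { "@type": "ListItem", position: 1, name: "Record", item: "https://nuropicks.com/record" },
      { "@type": "ListItem", position: 2, name: `Pick #${pickId}`, item: `https://nuropicks.com/record/${pickId}` },
    ],
  };
}
```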

The Discord publisher embed was updated to include a [Full receipt →](...) markdown link per pick so readers jump straight from the Discord message to the detail page.

5. RSS 2.0 feed (shipped 2026-04-22)

/record/rss.xml — 50 most recent picks, each with its SHAP one-liner in the description. Consumable by Feedly, NetNewsWire, Inoreader, downstream aggregators. The <link rel="alternate"> in the /record metadata means browsers and feed readers autodiscover it.
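A minimal sketch of one feed entry, with the SHAP one-liner in the description as the post describes. The input field names are assumptions, and escaping and formatting are simplified relative to any real feed builder:

```javascript
// Render a single RSS 2.0 <item> for a pick (sketch; field names assumed).
function rssItem(pick) {
  const esc = (s) => s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  const url = `https://nuropicks.com/record/${pick.id}`;
  return [
    "<item>",
    `  <title>${esc(pick.title)}</title>`,
    `  <link>${url}</link>`,
    `  <guid isPermaLink="true">${url}</guid>`,
    `  <description>${esc(pick.shapOneLiner)}</description>`,
    `  <pubDate>${new Date(pick.lockedAt).toUTCString()}</pubDate>`,
    "</item>",
  ].join("\n");
}
```

The permalink doubles as the `<guid>`, so feed readers deduplicate on the same URL a human would audit.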

Why this matters: sharp bettors distrust walled-garden content. An RSS feed says "you don't need an account to audit us."

6. Weekly snapshot freezer (shipped 2026-04-22)

The weekly_snapshots table has existed since migration 0017 and /record has been querying it for weeks. Nothing ever wrote to it, so the Weekly Archive panel showed "No weekly snapshots frozen yet" forever.

The freezer runs hourly. Each tick, it computes the previous ISO week (Monday 00:00 ET → Sunday 23:59:59 ET) and attempts to INSERT ... ON CONFLICT DO NOTHING. The table has UNIQUE(week_start) plus an append-only trigger, so after the first successful write in a given week, every subsequent hourly tick that week is a no-op. First actual freeze lands next Sunday.
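The "previous ISO week" computation is the only subtle part of each tick. A sketch in UTC for brevity — the real freezer pins the window to ET, and the function name is ours:

```javascript
// Previous ISO week bounds for a given date:
// Monday 00:00:00 through Sunday 23:59:59 of the week before `now`.
function previousIsoWeek(now) {
  const day = now.getUTCDay();            // 0 = Sunday … 6 = Saturday
  const sinceMonday = (day + 6) % 7;      // days since this week's Monday
  const thisMonday = new Date(Date.UTC(
    now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() - sinceMonday));
  const weekStart = new Date(thisMonday.getTime() - 7 * 24 * 3600 * 1000);
  const weekEnd = new Date(thisMonday.getTime() - 1000); // prior Sunday 23:59:59
  return { weekStart, weekEnd };
}
```

Combined with UNIQUE(week_start) and ON CONFLICT DO NOTHING, running this hourly is safe: the computation is deterministic, so 168 ticks per week all target the same row and only the first insert sticks.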

There's also a scripts/backfill-weekly-snapshots.js one-shot for pre-freezing historical weeks — useful if picks from prior weeks are already in the DB.

What this means for the pitch

Every NuroPicks sales sentence about "tracked," "auditable," "transparent," or "we show the model's work" is now backed by something you can click on without an account:

  • nuropicks.com/record — live ledger
  • nuropicks.com/record/1 (or any id) — full pick receipt
  • nuropicks.com/record/rss.xml — subscribable feed
  • nuropicks.com/api/record — aggregate JSON
  • nuropicks.com/api/record/1 — per-pick JSON
  • nuropicks.com/sitemap.xml — crawlable index

If we were hiding anything, this surface area would be a terrible place to do it. That's the point.

What's left

Two methodology stages remain. Stage 3 replaces the pipeline stub with a real XGBoost ensemble trained on historical data; Stage 6 is the daily retrain + shadow-model evaluation loop. Stage 3 is the bigger lift because it needs multi-season data ingestion and walk-forward cross-validation infrastructure. Stage 6 depends on Stage 3.

Both are on the roadmap. Neither blocks anything shipped today; the surfaces we shipped render the canonical SHAP schema regardless of whether the values come from a stub or a trained model. When the real model ships, the receipts on every existing surface turn from placeholder drivers into production drivers with no UI changes.

Every methodology claim on nuropicks.com now comes with a link next to it. That's the only honest way to ship an "AI-powered" product.

21+ only · Not financial advice · 1-800-GAMBLER