Builder Brief
TL;DR

Designed, architected, and shipped a solo discovery product that turns community noise into evidence-backed product briefs. Built it upstream of where the rest of the category competes.

Builder Brief is a web product built end to end by one person. It ingests demand signals from Reddit, Hacker News, Indie Hackers, and a curated set of operator newsletters, enriches them through an LLM pipeline, and surfaces structured product briefs that tell builders which problems are actually worth having ideas about.

The product started as a personal pattern-matching tool I was using in my own hunt for opportunities. After watching the schema mature into something more useful than most tools in the category, I rebuilt the infrastructure for productization, designed a preview-paid split that maps to how builders actually think, and shipped it section by section using a session-based development methodology that made solo-shipping at this scope feasible.

Impact

7
Live Sources Monitored
302
Briefs in Feed
$4.99
Per Brief Download
Timeline
2025 — present
Role
Solo founder, designer, engineer
Live Product
Open ↗
🎯Strategy

What drove the build

AI-assisted building has made shipping software dramatically cheaper, and that shift has moved the bottleneck. It is no longer hard to build. It is hard to know what to build. Most tools in the category solve the downstream half of that problem (refining, validating, or documenting an idea you already have). The upstream half (which problems are worth having ideas about at all) was largely unclaimed.

GummySearch shut down in late 2025, vacating shelf space. Rocket raised $15M to own the downstream side of the same lifecycle. The category was clearly re-forming, and the upstream gap was the strategic target. The positioning line that fell out of it (we don't give you ideas, we give you evidence-backed problems worth validating next) shaped every product decision that followed.

Goals, Frustrations, and Audience

Goals

  • Surface problems worth building, backed by real demand evidence, not generic idea lists
  • Keep LLM enrichment economically defensible through staged filtering and dedup
  • Ship a product experience clean enough that the multi-layer pipeline stays invisible
  • Reach a self-serve, pay-per-brief revenue model with no friction on browsing

User Frustrations

  • The signal-to-noise ratio in builder communities is brutal at scale
  • Manually pattern-matching across Reddit, HN, and newsletters takes hours and produces inconsistent results
  • Most idea tools live downstream and assume you already know what to build
  • Monthly subscriptions for tools you use occasionally feel like a tax on curiosity
  • Generic "100 startup ideas" lists are easy to find and impossible to act on

Personas

  • Indie builders who can already ship and want better bets for their next weekend project
  • Tech-adjacent founders mid-pivot who need evidence before committing to a build cycle
  • Me, originally. I built it for myself first, which kept the framing honest
🧪Process

How it actually played out

01

Started as a manual practice

The product hypothesis was not abstract. I had been pattern-matching across Reddit, Hacker News, and Upwork manually for months, looking for product opportunities. After a few weeks the practice was producing signal but no structure. Posts piled up. Patterns surfaced and disappeared. I could not track which problems were recurring, which were noisy one-offs, and which were worth a build cycle.

The first version was a Notion database with an n8n workflow piping Reddit posts into it, enriched by an LLM. It worked, but only for me, and only because I was the one curating it. A few things then happened in parallel: the schema was already more useful than most products in the space, LLM enrichment was cheap enough at scale to plausibly serve more than one user, and the category was visibly re-forming around me. So I productized it.

02

Pivoted infrastructure for productization

The original stack was local Docker, n8n, and a Notion database. Workflows ran on my laptop. Data lived in Notion. There was no auth, no billing, no real UI. That setup had worked for prototyping but it could not become a product.

I rebuilt the foundation around Firestore (storage + real-time feed), Firebase Auth (Google sign-in), a Node + tsx worker on Railway (ingestion), and a Next.js app on Vercel (the surface). Claude Haiku via the Anthropic SDK replaced the original OpenAI call after a side-by-side quality comparison. The infrastructure pivot was not a performance decision. It was a product decision. The product could not exist as a product until the infrastructure stopped requiring my attention to stay alive.

  • Hosted document storage over relational, prioritizing fast iteration on schema and real-time feed updates
  • URL-normalized dedup runs before LLM enrichment to keep cost-per-source predictable as volume grows
  • Per-user state lives on the brief itself, simplifying the UI surface for saved and downloaded briefs
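The URL-normalized dedup step above can be sketched roughly like this. The function names and the exact set of stripped parameters are illustrative assumptions, not the product's actual code; the point is that the dedup key is computed before any LLM call is made:

```typescript
// Hypothetical sketch: normalize a signal URL into a stable dedup key so
// cross-posts and re-shares of the same link collapse before enrichment.
function normalizeUrl(raw: string): string {
  const url = new URL(raw);
  // Tracking params never change the target, so they never change the key
  const tracking = ["utm_source", "utm_medium", "utm_campaign", "ref", "fbclid"];
  for (const param of tracking) url.searchParams.delete(param);
  url.hash = "";                                        // drop fragments
  url.hostname = url.hostname.replace(/^www\./, "");    // fold www variants
  // Drop trailing slashes so /post and /post/ dedup together
  const path = url.pathname.replace(/\/+$/, "") || "/";
  const query = url.searchParams.toString();
  return `${url.protocol}//${url.hostname}${path}${query ? "?" + query : ""}`;
}

// The key, not the raw URL, is what gets checked against previously seen signals
function dedupKey(signal: { url: string }): string {
  return normalizeUrl(signal.url);
}
```

Anything that survives this check is what actually incurs enrichment cost, which is why the normalization runs first.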
03

Designed the preview-paid split

The single most important product decision was where to draw the line between free preview and paid artifact. Subscription was off the table early. Builders use a tool like this occasionally, not daily, and a $29-per-month bill that gets ignored creates churn no product improvement can solve.

The split that emerged: browse unlimited briefs for free (problem, solution shape, demand score, confidence percentage, effort estimate, competitor names visible). Pay $4.99 per download for the analysis (full validation evidence with source quotes, competitive breakdown with pricing and gaps, monetization strategy, risk analysis, and an AI builder prompt ready to paste). The gating is visible, never sneaky. Muted lines like "Full signal analysis included in download" sit exactly where the paywall lives.
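The split described above could be modeled as two TypeScript shapes, with the preview as a strict projection of the paid artifact rather than a separate document. Field names here approximate the fields listed in the prose; the real schema may differ:

```typescript
// Illustrative sketch of the preview-paid field split. Field names are
// approximations from the case study, not the shipped schema.
interface BriefPreview {
  problem: string;
  solutionShape: string;
  demandScore: number;        // 1-5, visible free
  confidencePct: number;
  effortEstimate: string;
  competitorNames: string[];  // names only; pricing and gaps are gated
}

interface BriefPaid extends BriefPreview {
  validationEvidence: string[];   // source quotes
  competitiveBreakdown: { name: string; pricing: string; gaps: string[] }[];
  monetizationStrategy: string;
  riskAnalysis: string;
  aiBuilderPrompt: string;
}

// The preview is derived from the paid brief, so the two can never drift apart
function toPreview(brief: BriefPaid): BriefPreview {
  const { problem, solutionShape, demandScore, confidencePct, effortEstimate, competitorNames } = brief;
  return { problem, solutionShape, demandScore, confidencePct, effortEstimate, competitorNames };
}
```

Deriving the preview as a projection is one way to keep the gating "visible, never sneaky": the free surface is exactly the paid artifact minus the gated fields.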

04

Shipped section by section

Solo-building at this scope is only feasible with a workflow that compounds. Architecture and product decisions happen in Claude.ai, where I can think through tradeoffs and keep context. Execution happens in Claude Code CLI inside VS Code. A Notion master context document serves as the persistent handoff between sessions, since CLI sessions do not carry memory forward.

Work is structured into scoped sessions, each one a tightly bounded Claude Code prompt with explicit guardrails on which files it can touch. Eleven sessions shipped to date, each scoped tightly enough to reason about and large enough to ship something visible. The methodology is itself part of the product story, because it is what made the scope reachable solo.

What I learned

1

The preview is the pricing model

Preview design is not a marketing tactic. It is a design problem. It requires knowing exactly what a user needs to see to form a conviction, and exactly what they should still have to pay for to act on that conviction.

Get that split wrong in either direction and the product collapses. Show too little and the preview feels useless. Show too much and the paid artifact feels redundant. The version that shipped (problem, solution, scores, and competitor names visible; full validation evidence, competitive analysis, monetization strategy, risk analysis, and AI builder prompt paid) holds up because every gated field is the kind of thing a builder would re-research themselves rather than guess at.

2

The brief is the product, not the pipeline

It would have been easy to design Builder Brief as a tool that exposed its own machinery. Source filters, ingestion logs, enrichment scoring breakdowns, raw signal feeds. None of that is what builders actually want.

The brief is the product. The pipeline is the cost of producing the brief reliably. Once that ordering was clear, every subsequent UI decision got easier. The brief modal became the primary surface. The table view became a secondary index. The "Research My Idea" flow returns the same brief shape, so the artifact stays consistent whether the source is the public feed or a user's own idea. The interface should not look like a system. It should look like an answer.

3

Calibration is the difference between a brief and a horoscope

The first enrichment prompt produced pleasant, undifferentiated briefs. Everything looked like a "high signal" opportunity because the model was anchored toward optimism. A 4 out of 5 demand score meant nothing because nothing ever scored a 2.

The rewrite enforces explicit scoring rules that require the full 1-to-5 range, including low scores. It insists on evidence-backed fields: competitors must include pricing tiers and specific gaps, validation signals must quote or closely paraphrase source language, monetization strategy must include a concrete price range and acquisition angle. The result is briefs where a 5 means something and a 2 is allowed to exist. That calibration is what makes the preview-to-download conversion defensible at all.
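A minimal sketch of what enforcing those calibration rules could look like at validation time, after enrichment returns. The interface and rule names are hypothetical; the actual enrichment prompt and schema are not shown here:

```typescript
// Hypothetical post-enrichment validator for the calibration rules described
// above: scores must sit in the full 1-5 range, and evidence fields must be
// concretely populated rather than left as pleasant generalities.
interface EnrichedBrief {
  demandScore: number;                                            // integer, 1-5
  competitors: { name: string; pricingTier: string; gap: string }[];
  validationSignals: string[];   // quoted or closely paraphrased source language
  priceRange: string;            // concrete, e.g. "$5-$15/mo"
}

function isCalibrated(brief: EnrichedBrief): boolean {
  const scoreOk =
    Number.isInteger(brief.demandScore) &&
    brief.demandScore >= 1 &&
    brief.demandScore <= 5;
  // Every competitor entry must carry a pricing tier and a specific gap
  const competitorsOk = brief.competitors.every(
    c => c.pricingTier.length > 0 && c.gap.length > 0,
  );
  const evidenceOk = brief.validationSignals.length > 0;
  return scoreOk && competitorsOk && evidenceOk && brief.priceRange.length > 0;
}
```

A validator like this cannot force the model to score honestly on its own, but it does reject briefs that skip the evidence the prompt demands, which keeps the "a 2 is allowed to exist" rule testable.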

💡Solution

What I shipped: Builder Brief

A focused, opinionated discovery product that does one thing well.

  • Seven curated ingestion sources spanning Reddit, Hacker News, Indie Hackers, and a set of operator newsletters chosen for builder-specific signal density
  • A scheduled worker on Railway runs the pipeline and writes structured briefs to a hosted document store
  • A Next.js + Tailwind v4 web app on Vercel surfaces the feed with real-time updates, filters, and per-brief preview modals
  • A neo-brutalist visual system (navy, yellow, red, off-white; thick borders; offset shadows; bold uppercase tracking) deliberately differentiated from default SaaS aesthetics
  • Pay-per-download checkout via Stripe at $4.99 per brief (no subscription)
  • A "Research My Idea" feature lets users vet their own ideas through the same enrichment pipeline, stored privately
  • A "My Briefs" page with two tabs (Saved and My Uploads) and a corner-ribbon treatment marking downloaded briefs
Builder Brief landing page

Idea Feed — sidebar filters and brief cards

Brief detail modal — preview-paid split

My Briefs — Saved tab

My Briefs — My Uploads tab

Research My Idea — URL input

Research My Idea — blocked URL, paste text fallback

Research My Idea — text paste mode

Research My Idea — enriched result brief

How the pipeline works under the hood

The visible product is one surface. The system underneath is what makes the briefs trustworthy enough to charge for. The pipeline filters aggressively before enrichment so cost stays predictable as source volume grows, and the schema forces the LLM to produce briefs in a consistent shape that the UI can render against.

The decisions that mattered most:

  • Scheduled ingestion across all sources runs unattended; the user never sees a refresh button or a stale feed
  • Dedup runs before any LLM call, because LLM enrichment is the cost-critical step (cross-posts and re-shares would otherwise multiply spend)
  • A calibrated enrichment prompt with explicit scoring rules and required evidence fields — the difference between a brief that means something and a horoscope
  • A structured schema designed so every field has a designated UI surface (modal, card, PDF, or paid-only) — the schema is what makes the preview-paid split clean
  • Real-time updates so a newly enriched brief appears in the feed without a reload
  • Server-side auth gating on the paid PDF route, with payment confirmation as the unlock signal

The interface is the product. The pipeline is the cost of making the interface trustworthy.
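The stage ordering above (dedup strictly before the LLM call, persistence after) can be sketched as a worker loop. All function names here (fetchSignals, enrichWithLLM, saveBrief) are hypothetical stand-ins, not the real worker's API:

```typescript
// Illustrative worker loop: ingest -> dedup -> enrich -> persist.
// Dedup runs BEFORE enrichment because the LLM call is the cost center.
interface Signal { url: string; title: string; source: string }

async function runPipeline(
  fetchSignals: () => Promise<Signal[]>,
  seenKeys: Set<string>,
  enrichWithLLM: (s: Signal) => Promise<object>,
  saveBrief: (brief: object) => Promise<void>,
): Promise<number> {
  const signals = await fetchSignals();
  let enriched = 0;
  for (const signal of signals) {
    const key = signal.url.toLowerCase();  // stand-in for full URL normalization
    if (seenKeys.has(key)) continue;       // duplicate: skip, spend nothing
    seenKeys.add(key);
    const brief = await enrichWithLLM(signal);  // the only cost-critical step
    await saveBrief(brief);                     // real-time feed picks this up
    enriched++;
  }
  return enriched;
}
```

The return value (briefs enriched per run) is the kind of number a cost-per-brief metric can be built on.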

Outcome

What I'm measuring

Pipeline health, preview-to-download conversion, and direct early-user feedback.

  • Pipeline health: seven sources running unattended on a six-hour cron, dedup catching repeated cross-posts, enrichment cost per brief holding stable across volume increases
  • Preview-to-download conversion: the rate at which a brief view leads to a $4.99 download. This is the core monetization signal; if the preview is calibrated right, conversion is the leading indicator of trust in the artifact
  • Pattern strength distribution: a healthy feed shows a real range of demand scores, not a distribution pinned at 4 of 5. The distribution is itself a signal of whether the enrichment prompt is calibrated honestly
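A distribution check like the one described could be as simple as a histogram plus a pinned-feed flag. This is an illustrative sketch with an assumed 80% threshold, not the product's actual instrumentation:

```typescript
// Count demand scores across the feed and flag a feed where one value
// dominates, which would suggest the enrichment prompt has drifted back
// toward optimism. Threshold is an illustrative assumption.
function scoreHistogram(scores: number[]): Record<number, number> {
  const hist: Record<number, number> = { 1: 0, 2: 0, 3: 0, 4: 0, 5: 0 };
  for (const s of scores) hist[s] = (hist[s] ?? 0) + 1;
  return hist;
}

function looksPinned(scores: number[], threshold = 0.8): boolean {
  if (scores.length === 0) return false;
  const hist = scoreHistogram(scores);
  // Pinned if any single score accounts for more than `threshold` of the feed
  return Object.values(hist).some(count => count / scores.length > threshold);
}
```

Run periodically over recent briefs, a check like this turns "the prompt is calibrated honestly" from a one-time claim into a monitored invariant.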

What early users have told me

"The competitor section is the part I look at first."

"I want to know which sources a brief came from before I read it."

"$4.99 is fine. I would never pay $29 a month for this."

"The preview tells me enough to skip the bad ones without paying."

These came from direct conversations, not surveys. The signal is consistent: builders trust the gating, want more transparency on provenance, and validate the pay-per-brief model over subscription.

Results and reflection

Builder Brief is shipped and live, but pre-scale. The honest version: the pipeline produces briefs that are good enough that I use them in my own decision-making, the visual system holds up against the category default, and the preview-paid split reads cleanly to early users who have not asked what the $4.99 is for. The unit economics work on paper. The next test is whether they hold under acquisition pressure.

The lesson I will carry into every future build is that infrastructure is a product decision. The Docker-and-Notion version of this product could not have shipped. The Firestore-Vercel-Railway version did. That gap was not technical, it was strategic. Designers who can make those calls are increasingly the ones who get to ship 0-to-1 products without waiting for an engineering team to agree.

The other lesson is about what AI tooling actually compresses. Claude Code and Claude.ai compressed the execution timeline dramatically. They did not compress the judgment timeline. Every meaningful decision in this project was mine. The tooling made execution tractable. The decisions still had to be right.

What's next

  • Close remaining PDF polish issues (whitespace, truncation, orphaned pages) and lock down Firestore security rules with rate limiting on the enrichment and download routes
  • Add pre-filtering before LLM enrichment as source volume grows, so cost-per-brief stays in band
  • Run the first paid acquisition test to learn the real preview-to-download rate in the wild, not in conversation
  • Define the path from "shipped product" to "category-defining product": whoever owns what a build-ready brief means at scale gets to shape the upstream space

GummySearch shut down. Rocket raised $15M to own the downstream side. The upstream space is still forming, and the next chapter is about whether Builder Brief earns the right to define it.

Want to know how any specific piece works? Get in touch.