Meir Dick

Resume Forge: The AI Career Tool That Knows Your Whole Story

8 min read


Part of the #shipped-with-ai series — one build per week, documented honestly.


Resume Forge builds a persistent experience library, then uses AI to bridge the gap between who you are and what each role demands.

Resume Forge doesn’t write your resume. It helps you decide what to say. The problem is that most people don’t have a way to surface what they’ve actually done — the specific wins, the real context, the thing that makes them the right person for this role. A single-shot AI resume just inherits that gap and dresses it up.

Resume Forge is a different process. Every role, project, and accomplishment lives in your experience library. Tag skills, attach evidence, and let AI surface exactly the right experience for each application. Your career history becomes structured and searchable. Paste a job posting and Resume Forge deconstructs it into must-haves, nice-to-haves, and culture signals — then generates several takes on each resume section to tell your best story.


How it works

Step 1: Build your Experience Library

You start by being interviewed by AI. Not a form — a conversation designed to elicit real experiences, with the kind of specificity that doesn’t come out when you’re staring at a blank text field.

The output gets parsed into modular, reusable components: skills, accomplishments, projects, education. Each tagged, each searchable. The key design decision is discreteness — pieces you can recombine per role, not a monolithic document you rewrite from scratch every time. You can also import an existing resume to seed the library, or build via voice interview.
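That discreteness can be sketched as a data model. This is a hypothetical Python stand-in (the real app is Laravel/PHP, and the field names here are illustrative) showing why tagged, modular items beat a monolithic document: any slice of the library can be pulled per role.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a "discrete, recombinable" experience library:
# each item is a small tagged unit, not a section of one big document.
@dataclass
class ExperienceItem:
    kind: str                 # "role" | "project" | "accomplishment" | "education"
    title: str
    summary: str
    skills: list[str] = field(default_factory=list)

def find_by_skill(library: list[ExperienceItem], skill: str) -> list[ExperienceItem]:
    """Return every library item tagged with the given skill."""
    return [item for item in library if skill in item.skills]

library = [
    ExperienceItem("role", "Backend Engineer", "Built billing services", ["php", "laravel"]),
    ExperienceItem("project", "ETL pipeline", "Nightly data sync", ["python"]),
]
```

Because each item carries its own tags, assembling a role-specific resume becomes a query over the library rather than a rewrite.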

Step 2: Inject a job posting

Paste the posting. Resume Forge builds an idealized candidate profile — not just a keyword list, but a model of what the hiring manager actually wants. It pairs this with company research, pulled automatically.

Step 3: Gap analysis

The system compares the idealized candidate against your library. Where do you fall short? Where do you have transferable experience that could be reframed? AI surfaces suggestions — you decide what to do with them.
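At its core, the comparison is a set operation over skills and signals. A minimal Python sketch of the idea (the actual analysis is LLM-driven and richer than this; the sets below are made-up examples):

```python
def gap_analysis(required: set[str], candidate: set[str]) -> dict[str, set[str]]:
    """Compare an idealized candidate profile against the user's library."""
    return {
        "covered": required & candidate,   # what you can claim directly
        "missing": required - candidate,   # where reframing or honesty is needed
    }

# Illustrative inputs — not real profile data.
required = {"laravel", "redis", "queues", "kubernetes"}
candidate = {"laravel", "redis", "queues", "docker"}
report = gap_analysis(required, candidate)
```

The "missing" bucket is where the reframing suggestions come in — e.g. docker experience positioned against a kubernetes requirement.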

Step 4: Human-in-the-loop resume builder

This is where most tools skip a step. Resume Forge doesn’t auto-generate a resume from your library — it puts you in the loop. You choose which experiences to surface and how to weight them. AI helps you say it right. The judgment stays with you.

Only then does it generate the resume and cover letter.

The output is specific because the process is structured — not because AI guessed well.

Step 5: Career Chat — AI as your coach

The experience library doesn’t just feed the resume builder. It powers a coaching chatbot that knows your actual career — not a generic career-advice bot. Ask it why you’re stalling in interviews, how to position a career pivot, whether your experience maps to a specific type of role. It has context: your full library, your gap analyses, your application history. The advice is grounded in your data, not boilerplate.

This is the part most AI tools miss. Generating a document is the end of their product. Coaching you through the broader decision is where the real leverage is.


Under the hood

This section is for the technically curious — what’s actually going on inside the app.

Queue architecture

Every AI call is async. There’s no “wait for the API” inside a request lifecycle. Resume generation, job analysis, interview processing — all dispatched to Laravel queues, backed by the database queue driver in development and Redis in production. The front end polls for job completion via Inertia. This is essential for anything involving LLMs: you can’t predict latency, and blocking the UI on a 15-second OpenAI call is a reliability disaster.
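The dispatch-then-poll pattern, reduced to its skeleton in Python (a stand-in for Laravel queues plus Inertia polling; the in-memory `JOBS` dict and all names here are illustrative, not the app's API):

```python
import uuid

# In-memory stand-in for a queue backend (Redis/database in the real app).
JOBS: dict[str, dict] = {}

def dispatch(payload: dict) -> str:
    """Enqueue work and return a job id immediately — never block the request."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "queued", "payload": payload, "result": None}
    return job_id

def worker_run(job_id: str) -> None:
    """A queue worker picks the job up later, outside any request lifecycle."""
    job = JOBS[job_id]
    job["result"] = f"resume for {job['payload']['role']}"  # stand-in for the LLM call
    job["status"] = "done"

def poll(job_id: str) -> dict:
    """What the front end hits on an interval until status == 'done'."""
    job = JOBS[job_id]
    return {"status": job["status"], "result": job["result"]}
```

The request that triggers generation only ever pays for `dispatch`; unpredictable LLM latency lands on the worker, and the UI just watches `poll`.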

Resume/job parsing

When you paste a job description or upload an existing resume, it goes through a structured extraction pipeline. The raw text hits a parsing prompt that returns a typed JSON schema — job requirements decomposed into must-haves, nice-to-haves, culture signals, implicit signals. Resumes get parsed into the experience library schema: roles, accomplishments, skills, education, each as discrete objects. The schema is enforced via PHP DTOs and validated before persistence.
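The validate-before-persist step is the part worth sketching: never trust raw model output. A minimal Python version of the pattern (the real app enforces this with PHP DTOs; keys and the function name are illustrative):

```python
import json
from dataclasses import dataclass

@dataclass
class JobRequirements:
    must_haves: list[str]
    nice_to_haves: list[str]
    culture_signals: list[str]

def parse_job_posting(raw_llm_output: str) -> JobRequirements:
    """Validate the model's JSON against the expected schema before persisting."""
    data = json.loads(raw_llm_output)
    for key in ("must_haves", "nice_to_haves", "culture_signals"):
        if not isinstance(data.get(key), list):
            raise ValueError(f"schema violation: {key!r} must be a list")
    return JobRequirements(
        must_haves=data["must_haves"],
        nice_to_haves=data["nice_to_haves"],
        culture_signals=data["culture_signals"],
    )
```

A schema violation fails loudly at the boundary instead of leaking malformed data into the experience library.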

Payment metering

Credits are the currency. Each AI operation has a defined cost, deducted atomically from the user’s credit ledger via a database transaction. The implementation guards against two failure modes: double-deduction under concurrent requests (handled with pessimistic locking) and crediting a failed operation (deduction only commits when the AI call succeeds). BYOK mode (bring your own API key) bypasses metering entirely for power users. The free tier grants credits on signup via Polar webhooks.
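The two guarantees — no double-deduction, no charge for a failed call — can be shown language-agnostically. In this Python sketch an in-process lock stands in for the database transaction with pessimistic locking that the app actually uses; class and method names are illustrative:

```python
import threading

class CreditLedger:
    """Stand-in for a per-user credit balance row."""

    def __init__(self, balance: int):
        self.balance = balance
        self._lock = threading.Lock()

    def charge(self, cost: int, ai_call) -> str:
        with self._lock:             # pessimistic lock: one charge at a time
            if self.balance < cost:
                raise RuntimeError("insufficient credits")
            result = ai_call()       # if this raises, nothing is deducted
            self.balance -= cost     # "commit" only after the call succeeds
            return result
```

Ordering matters: the deduction happens strictly after the AI call returns, so a failed operation leaves the balance untouched, and the lock serializes concurrent charges against the same ledger.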

PDF generation

The most brittle part of the stack. PDF layout is handled server-side — HTML-to-PDF via a headless renderer, with a custom template system that maps resume sections to structured layouts. In practice, PDF generation across real-world content is unreliable: long entries overflow, fonts behave differently across environments, and edge cases are endless. The gap is real. The fallback is DOCX download, which degrades gracefully — structured content, portable format, user can apply their own template.
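The graceful-degradation logic is simple to state: attempt the brittle path, fall back to the portable one. A hedged Python sketch (the renderers are injected placeholders, not the app's real functions):

```python
def export_resume(html: str, render_pdf, render_docx) -> tuple[str, bytes]:
    """Try the brittle PDF path first; degrade gracefully to DOCX on failure."""
    try:
        return ("pdf", render_pdf(html))
    except Exception:
        # Overflowing content, font issues, renderer crashes all land here.
        return ("docx", render_docx(html))
```

The user always gets a document; which format they get depends on whether their content hit one of the PDF edge cases.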

AI chatbot with injected context

The career chat doesn’t use a generic system prompt. At conversation start, it serializes the user’s experience library — roles, accomplishments, skills, gap analyses, active applications — and injects it as context into the system message. The model “knows” your career before the first message. Conversation history is persisted per-session; the context window gets trimmed intelligently when it grows too large. The coaching persona is defined in the system prompt with explicit constraints: it should challenge assumptions, not just validate them.
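Two of those mechanics — serializing the library into the system message, and trimming history oldest-first — can be sketched in a few lines of Python (formats and names here are illustrative; the real prompt structure isn't shown in this post):

```python
def build_system_message(library: dict[str, list[str]], persona: str) -> str:
    """Serialize the user's career data into the system prompt."""
    facts = "; ".join(f"{key}: {', '.join(values)}" for key, values in library.items())
    return f"{persona}\nUser career context: {facts}"

def trim_history(messages: list[str], max_chars: int) -> list[str]:
    """Drop the oldest turns first until the history fits the budget."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):       # walk newest -> oldest
        if total + len(msg) > max_chars:
            break
        kept.append(msg)
        total += len(msg)
    return list(reversed(kept))          # restore chronological order
```

A character budget is the crudest possible trimming policy — a production version would count tokens and could summarize dropped turns instead of discarding them — but the shape is the same: the newest context survives.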

Try it → resume-forge.laravel.cloud
Open source → github.com/meirdick/career-forge


⏱️ The Clock: 48 hours across 2 sprints


⚖️ The Reality: How close is it to production?

Before shipping anything I run a structured production-readiness eval — I built a reusable skill for it, Laravel-focused. It covers auth hardening, queue resilience, billing edge cases, PII handling, error monitoring, and infra costs. I ran it before and after both sprints.

skills.sh/meirdick/laravel-prod-ready/laravel-prod-ready

Resume Forge passed more of it than a typical 48-hour build.

What’s solid:

  • Auth is Laravel Fortify — 2FA, email verification, secure defaults out of the box
  • All AI calls run async via queues — no timeouts, no blocking the UI
  • Payments via Polar with a proper credit ledger, BYOK mode, and a free tier
  • Deployed on Laravel Cloud
  • Error monitoring and observability via Laravel Nightwatch — real-time exception tracking, query performance, job failure alerts

What’s still open:

  • Load testing — haven’t stress tested at real user volume yet
  • PII audit — career data is sensitive; this needs a proper pass before scale
  • PDF generation — this is genuinely hard. Content overflow, font inconsistencies, edge cases that only surface with real user data. The DOCX fallback is there for a reason, and I expect it to do real work in early production
  • Real user edge cases — always a gap until actual users hit it

This is a known list, not an unknown one. That distinction matters.

Verdict: shippable. Monitor heavily for the first wave of users.


🧠 The Lessons

Building a secure credit system for the first time

Polar was new to me — this is my third paid app, and the previous two used Stripe. The learning curve wasn’t the payment flow itself; it was building a reliable credit system on top of it.

The hard parts: atomic transactions so credits never get deducted on a failed call, race condition handling so concurrent requests don’t double-deduct, and failed charge recovery that doesn’t corrupt the ledger. I got it right. It took longer than I expected — which is usually a sign it was worth the time.

Why I made it open source

The transparency isn’t accidental — it’s the whole point.

I built a resume tool. If I ever use it in a job search, I’d want the project itself to be evidence of my skills, not just a line on a CV. The natural thing to do is link from the application straight to the code and the write-up — let the work speak. That’s why the transparency page exists: not as a disclaimer, but as a portfolio artifact. Anyone evaluating me can trace the decisions, see the architecture, read the reasoning. The code is part of the story.

Making it open source is a direct expression of that. Hiding the implementation would be a contradiction — I’m building a tool about transparency and human-in-the-loop judgment. The code should be readable.


🛠️ The Stack

The base is the same as every project I build: Laravel + React Starter Kit. Secure auth defaults, TypeScript, Inertia v2, Tailwind v4. I never solve auth from scratch — the security surface is too important and the defaults here are good.

The practical advantage of a consistent base: Claude (via Laravel Boost) has strong context on this stack and writes to it fluently from day one. The codebase is on GitHub — you can see the structural decisions directly.

Production:

  • Laravel Cloud — zero-config deployment, auto-scaling, managed infrastructure. No ops overhead for a solo build
  • Laravel Nightwatch — observability layer. Exception tracking, slow query alerts, job failure monitoring. The gap between “it works in dev” and “it works under real users” is where most early-stage apps fail; Nightwatch closes it