Refactor your tech career
for the AI era.
Personalized AI-fluency curriculum. A mentor that knows your progress. Real capstone projects you can put on your resume. The fastest path from "I should learn AI" to "I ship AI in my role."
In May 2026, AI fluency is the dividing line.
The engineer who can ship a working agent in two days is being paid more than the one who can't — sometimes 40% more. The platform engineer who built the company's AI gateway is now running the AI platform team. The PM who can credibly spec an eval is the one leading the AI initiative.
The gap between AI-fluent and AI-aware isn't closing. It's compounding.
But the path to that fluency is broken. Generic AI courses teach a little of everything, badly. Tutorials and blog posts are everywhere, but the half-life of AI best practice has dropped to weeks. What was sharp in February is mid by May.
Refactor is built around the only thing that actually matters: becoming the most AI-fluent person at your company in the seat you already hold.
Not a researcher. Not a generalist. The person whose role is exactly what yours is — who can build and ship and review AI work in your stack, in your meetings, in your code reviews.
We pick your role. We meet your skills where they actually are. We update the curriculum monthly so what you learn this week is what shipped this week.
You don't refactor your career once. You refactor it continuously. Refactor is the platform that makes that possible.
Built for the role you're already in
Tech work doesn't look like it did 18 months ago.
If your day-to-day still feels familiar, it's because the change is happening around you faster than your habits are catching up. These are the shifts our curriculum is calibrated to.
Browser-using and computer-using agents become mainstream. Voice becomes the primary interface for many apps. Persistent memory and personalization get serious. Small fine-tuned models eat the long tail of inference. AI cost engineering becomes a named discipline. Refactor's curriculum updates monthly so you're learning where the puck is going, not where it was.
An AI mentor that actually knows you.
From "I should learn AI" to "I'm running the AI initiative."
A small selection of what's happened in the last six months.
"Six weeks of Refactor and I shipped my first agent into production. I'm now the AI lead on the payments team. The mentor caught gaps I didn't know I had — eval coverage was the thing that pushed me from prototype to production."
"The mock interviews are uncomfortably accurate. I got feedback that mirrored exactly what I heard from a real interviewer at my next-level job a week later. The platform-engineering track is the only curriculum I've found that takes the infra side of AI seriously."
"I went from being the PM that asks questions in AI meetings to the one running them. Refactor doesn't teach AI in the abstract — it teaches AI as something you ship through a real product process, with real evals and real risk classification."
Simple, monthly, cancel anytime.
Start free. Pay when the curriculum starts paying you back.
Free
- ✓ Adaptive skill assessment
- ✓ First module of any track
- ✓ Limited AI mentor (10 messages/day)
- ✓ Community access

Pro
- ✓ All tracks (SWE, DevOps, PM)
- ✓ Unlimited AI mentor
- ✓ Mock interviews (text + voice)
- ✓ Capstone projects
- ✓ Public portfolio
- ✓ Monthly curriculum updates
- ✓ Priority support

Team
- ✓ Everything in Pro
- ✓ Admin dashboard
- ✓ Curriculum calibrated to your JD
- ✓ Cohort tooling
- ✓ Skills reporting & exports
- ✓ SSO + SCIM
- ✓ Dedicated CSM
Annual plans save 20%. Education and non-profit discounts available.
Common questions
Who is Refactor for?
Mid- to senior-level tech professionals — engineers, platform/DevOps, PMs — who already know their craft and need to layer AI fluency on top. If you're brand new to tech entirely, this isn't the right starting point. If you've shipped real software and feel like AI is slipping past you, you're exactly who we built it for.
How is this different from a Coursera or Udemy course?
Three things. First, it's role-specific — engineers, platform people, and PMs all see different curricula because the AI fluency they need is different. Second, the AI mentor knows your code, your skill gaps, and your career goal — it's not a generic chatbot. Third, the curriculum is updated monthly because in AI, anything older than six months is outdated.
How current is the curriculum?
Updated monthly. The current version covers MCP, reasoning models, agent orchestration, the EU AI Act, eval-driven development, prompt caching, and AI cost engineering. We publish a public changelog so you can see what changed and why.
What if I'm a complete beginner with AI?
Module 1 of every track assumes zero AI background — it covers how modern LLMs actually work, capabilities, limits, and where the field is. The skill assessment will route you correctly: if you've already mastered the foundations, you skip them. If not, you start there.
Can my company sponsor or buy this for me?
Yes. The Team plan includes admin dashboards, skills reporting, and the option to calibrate the curriculum to your specific job descriptions. We have a one-pager you can forward to L&D — request it here.
Will this actually help me get hired?
The capstone is a real, shippable project that becomes the centerpiece of your portfolio. Mock interviews are calibrated against current role expectations at top companies. We don't make hiring guarantees — that's not a thing — but the structure is built so the work you do here is the work you can show in interviews.
Do you cover [specific topic, like fine-tuning, voice agents, EU AI Act]?
Probably yes. Fine-tuning, distillation, voice agents, computer use, MCP, EU AI Act, NIST AI RMF, prompt injection defense, and AI cost engineering are all covered in role-appropriate depth. See the full curriculum changelog for what shipped this month.
Refactor takes 10 minutes to start.
Find your gaps. Build something real. Ship it on your resume.
You're building a RAG system over 50,000 internal docs. Retrieval keeps surfacing irrelevant chunks even when queries are clearly worded.
Which is most likely the first thing to investigate?
Welcome back, Neha.
Curriculum
Updated monthly · Calibrated to May 2026 reality + the bets we're making on 2027–2028.
From AI-assisted coding to building, evaluating, and shipping agentic systems in production. Calibrated for senior+ engineers who already ship code and need to layer AI fluency on top. Updated for May 2026 — covers MCP, reasoning models, agent orchestration, eval-driven development, and the production patterns the top AI-native companies are using right now.
Build the infrastructure that lets your company ship AI safely, cheaply, and reliably. Covers the full LLMOps stack — gateways, observability, FinOps for AI, prompt injection defense, EU AI Act technical compliance, and AI for operations itself. Calibrated for senior platform engineers, SREs, and infra leads who need to own AI infrastructure.
Inference vs training economics. GPU landscape today (H100/H200/B200/MI300). Model serving primitives. Inference engines (vLLM, TGI, TensorRT-LLM, SGLang). Throughput vs latency tradeoffs. Quantization (FP8, INT4, ternary).
Cloud platforms compared (Bedrock, Vertex, Together, Fireworks, Replicate). Self-hosting tradeoffs. Multi-region deployment. Auto-scaling for spiky AI traffic. Cold-start mitigation. Edge deployment for latency-critical apps.
Model gateways (LiteLLM, Portkey, OpenRouter). Prompt management & versioning. Vector DB ops at scale (pgvector, Turbopuffer, Pinecone). Observability platforms (Langfuse, Helicone, Braintrust). Eval pipelines as CI. Prompt CI/CD. Secrets management for AI.
Token economics in 2026. Prompt caching strategies. Semantic caching. Model routing (cheap → expensive). Batch processing & async pipelines. Cost monitoring & alerts. Budget enforcement. FinOps for AI is becoming a named discipline — be the person who runs it.
OpenTelemetry GenAI & OpenLLMetry. Tracing AI calls end-to-end. Eval-as-monitor patterns. Drift detection in production. Incident response for AI systems. SLOs for non-deterministic systems. Postmortems with LLM-specific factors. Chaos engineering for agents.
OWASP LLM Top 10 (2026 update). Prompt injection defense at the gateway. PII redaction & DLP for AI traffic. Secrets in prompts. EU AI Act technical requirements (now in force). ISO 42001 / NIST AI RMF implementation. Audit trails for AI decisions. Red-teaming AI systems.
LLM-powered log analysis. Incident summarization with AI. Runbook agents. AI-assisted on-call rotations. ChatOps with AI. Auto-remediation patterns (and when not to). Code review agents for ops PRs. The platform team itself becomes AI-augmented.
Internal AI gateway architecture. Self-service AI for product teams. Governance & quotas. Multi-tenant isolation. Cost attribution by team/product. Platform metrics & adoption. Evangelizing internal platforms — being the person who builds the company's AI infrastructure is a career-defining move.
Become the PM who can credibly scope, ship, and measure AI features. Covers AI literacy without hand-waving, designing AI experiences that earn user trust, eval-driven product development, EU AI Act compliance, and AI-native product strategy. Calibrated for product managers at growth-stage and enterprise companies who need to be the AI lead in the room.
How LLMs actually work — no hand-waving. Capability map for May 2026. Reasoning vs non-reasoning models (when to use which). Cost-latency-quality triangle. Open vs closed model decisions. Multimodal capabilities & UX. Where AI is still bad (and getting better fast).
Identifying real AI-fit problems vs AI feature theater. Opportunity sizing for AI features. User research with AI in the loop. Build vs buy vs orchestrate. Distinguishing AI-native problems from AI-augmented ones. Telling demos apart from products.
UX patterns that work in 2026: copilots, autocomplete, agents, ambient AI. Trust & transparency. Designing for failure modes (hallucinations, edge cases). Confidence indicators. Human-AI handoffs. Voice and multimodal UX. Personalization without creepiness.
Writing AI feature specs that engineers can actually build. Eval-driven product development. Dataset curation & labeling as a PM responsibility. A/B testing AI features. Shipping iteratively (canary, staged rollout). Working with ML engineers vs AI engineers.
Offline evals vs online metrics. Building golden datasets as a PM. LLM-as-judge for product metrics. Quality gates for shipping AI. Detecting drift in production. North-star metrics for AI products. The cost-quality-latency triangle in product decisions.
Moats in the AI era (data, distribution, workflow). Choosing model providers. Open vs closed source decisions. Fine-tuning decisions for PMs. Vertical vs horizontal AI products. Pricing AI features. Defensibility analysis. Competitive intelligence with AI.
EU AI Act for PMs (in force as of 2026). Risk classification for AI features. Bias & fairness audits. Privacy by design. AI red-teaming as PM responsibility. Transparency requirements. Customer trust & disclosure. The PM, not the legal team, owns the risk register.
2026–2030 product landscape. Agentic vs feature-based products. Persistent memory & context. Voice as primary interface. AI organizations: structuring teams. Career strategy for PMs in the AI era. Reading the next 3 years and betting accordingly.
When to use tool calling vs. structured outputs
Both let you constrain what the model returns, but they solve different problems. Structured outputs force a JSON shape. Tool calling lets the model decide when to invoke a function and which one.
Use tool calling when the model needs to take action (call a payment API, query a database, hit a search index). Use structured outputs when you just need clean data extraction.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// The tool schema tells the model what a refund call looks like
// and which fields it must supply before invoking it.
const tools = [{
  name: "refund_payment",
  description: "Issue a refund for a charge",
  input_schema: {
    type: "object",
    properties: {
      charge_id: { type: "string" },
      amount_cents: { type: "integer" }
    },
    required: ["charge_id"]
  }
}];

const response = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  tools,
  messages: [{ role: "user", content: query }]
});
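The structured-outputs side can be sketched with the same machinery: with the Anthropic API, pinning `tool_choice` to a single tool turns tool calling into pure JSON extraction. The `record_invoice` schema and its fields below are hypothetical, and the helper only shows where the structured payload lands in the response:

```typescript
// A forced-tool request: tool_choice pins the model to one tool,
// so the "call" is really just schema-validated JSON extraction.
// (record_invoice and its fields are illustrative.)
const request = {
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  tools: [{
    name: "record_invoice",
    description: "Record the fields extracted from an invoice",
    input_schema: {
      type: "object",
      properties: {
        vendor: { type: "string" },
        total_cents: { type: "integer" }
      },
      required: ["vendor", "total_cents"]
    }
  }],
  tool_choice: { type: "tool", name: "record_invoice" },
  messages: [{ role: "user", content: "Extract the fields from this invoice: ..." }]
};

// The structured payload comes back as a tool_use content block.
type ContentBlock = { type: string; input?: unknown };

function extractToolInput(content: ContentBlock[]): unknown {
  const block = content.find((b) => b.type === "tool_use");
  if (!block) throw new Error("model did not return structured output");
  return block.input;
}
```

Pass `request` to `client.messages.create(...)` and feed `response.content` to `extractToolInput`. If the model should be allowed to answer in free text too, drop the forced `tool_choice` and branch on the block type instead.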
Refund-routing agent
Build a function that takes a customer message and routes it to one of three actions: issue_refund, flag_for_review, or request_more_info.
- Use Anthropic SDK with tool calling
- Handle refunds within 30 days only
- Flag suspicious patterns to human review
- Pass all 5 test cases
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const tools = [
  {
    name: "issue_refund",
    description: "Issue a full or partial refund",
    input_schema: {
      type: "object",
      properties: {
        charge_id: { type: "string" },
        reason: { type: "string" }
      },
      required: ["charge_id", "reason"]
    }
  },
  // TODO: add flag_for_review and request_more_info
];

export async function routeRefund(message: string) {
  // TODO: call client.messages.create with tools
}
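Once the model has picked a tool, the routing itself is ordinary control flow. A minimal dispatcher over the three action names in the spec might look like this — the return strings are stand-ins for real handlers, not part of the exercise solution:

```typescript
// Map the tool the model chose to a concrete action.
// Names come from the exercise spec; bodies are placeholder stubs.
type ToolUse = { name: string; input: Record<string, unknown> };

function dispatch(tool: ToolUse): string {
  switch (tool.name) {
    case "issue_refund":
      return `refund issued for charge ${String(tool.input.charge_id)}`;
    case "flag_for_review":
      return "escalated to a human reviewer";
    case "request_more_info":
      return "asked the customer for more detail";
    default:
      // Fail loudly on an unexpected tool instead of guessing.
      throw new Error(`unknown tool: ${tool.name}`);
  }
}
```

Keeping the dispatch pure like this also makes the "pass all 5 test cases" requirement easy: the model call and the side effects can be tested separately.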
Mock interview
Calibrated to your target role: Senior SWE · AI-fluent
Walk me through how you'd architect this — and how you'd evaluate whether it's safe enough to ship.
Ship a customer-support agent
Real codebase · Real evals · Real artifact for your portfolio
Neha K.
Senior Software Engineer · Refactoring for AI
Backend engineer with 7 years building payments infra. Currently leveling up on production AI — RAG systems, agent loops, and eval rigor. Open to AI-engineering roles at growth-stage companies.
A production-grade support agent over a knowledge base of 2,400 docs. Handles refund routing with guardrails and includes an eval harness over 100 golden cases.