/figma-loop
Iterative Figma-to-implementation pixel-perfect verification loop. Use when implementing or refining UI from Figma designs. Drills on screenshots, comparing Figma vs implementation, fixing one thing at a time until 3 consecutive checks pass. Covers figma iteration, pixel perfect, design verification, ui drilling, figma comparison. NOT for: fetching Figma specs only (use figma-workflow docs), creating new components from scratch without a reference design.
$ golems-cli skills install figma-loop
Drill on Figma designs until pixel-perfect. Compare screenshots, fix one thing at a time, repeat until 3 consecutive checks pass with no changes needed.
When to Use
- Implementing UI from Figma designs
- Refining existing UI to match Figma
- Verifying a component matches its design spec
- User says "make it match Figma" or "pixel-perfect"
Prerequisites
At least one of these Figma MCP tools must be available:
| Tool | When Available |
|---|---|
| mcp__figma__get_screenshot | Figma desktop app is open |
| mcp__figma-remote__get_screenshot | Always (uses API with fileKey) |
Plus browser automation for implementation screenshots:
| Tool | Purpose |
|---|---|
| mcp__claude-in-chrome__computer | Screenshot of running app |
| mcp__claude-in-chrome__navigate | Navigate to the right page |
| mcp__claude-in-chrome__resize_window | Match Figma viewport size |
Or for React Native:
| Tool | Purpose |
|---|---|
| Simulator screenshot | xcrun simctl io booted screenshot /tmp/screen.png |
| Device build | npx expo run:ios --device |
CLI Helper
Track iteration progress with check.sh:
SCRIPT="$HOME/.claude/commands/golem-powers/figma-loop/scripts/check.sh"
$SCRIPT init "WelcomeScreen" "95:72" # Start session
$SCRIPT pass # Record passing check (increments counter)
$SCRIPT fail "spacing off by 8px" # Record fail (resets counter to 0)
$SCRIPT status # Show progress (X/3 passes)
Quick Actions
| What you want to do | Workflow |
|---|---|
| Full iteration loop (start to finish) | workflows/iterate.md |
| Set up tracking file | workflows/setup.md |
| Just do a single comparison check | workflows/check.md |
The 3-Check Rule
A component is only "done" when 3 consecutive checks pass with zero changes needed.
Check 1: Fix spacing → FAIL (made change) → counter resets to 0
Check 2: Fix color → FAIL (made change) → counter resets to 0
Check 3: All good → PASS → counter = 1
Check 4: All good → PASS → counter = 2
Check 5: All good → PASS → counter = 3 → DONE
Why 3? One pass could be lucky. Two could miss something. Three consecutive passes with fresh eyes each time gives confidence.
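The counter behavior above can be sketched in plain shell. This is a minimal local re-implementation of the pass/fail logic for illustration only, not the real check.sh:

```shell
#!/bin/sh
# Sketch of the 3-check counter: a pass increments, any fix resets to 0.
counter=0
record() {
  if [ "$1" = "pass" ]; then
    counter=$((counter + 1))
  else
    counter=0
  fi
  if [ "$counter" -ge 3 ]; then
    echo "DONE"
  else
    echo "$counter/3"
  fi
}

record fail   # made a change -> 0/3
record pass   # 1/3
record pass   # 2/3
record fail   # any change resets -> 0/3
record pass   # 1/3
record pass   # 2/3
record pass   # DONE
```

Note that the fourth check resets the counter even though two passes were already banked; only three consecutive clean checks finish the loop.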
Check Criteria
For each element, verify ALL of these:
| Category | What to Check |
|---|---|
| Position | Top, left, right, bottom, center alignment |
| Size | Width, height, padding, margin |
| Colors | Background, text, border, shadow |
| Typography | Font size, weight, line height, letter spacing |
| Spacing | Gaps between elements, internal padding |
| Order | RTL consideration (first in DOM = RIGHT visually) |
| Icons | Which icon, size, color, position relative to text |
| States | Default, hover, pressed, disabled, focused |
| Radius | Border radius on all corners |
Common Figma-to-Tailwind Mappings
gap-[16px] → gap-4
p-[8px] → p-2
p-[12px] → p-3
p-[16px] → p-4
p-[24px] → p-6
rounded-[8px] → rounded-lg
rounded-[12px] → rounded-xl
rounded-[16px] → rounded-2xl
rounded-[24px] → rounded-3xl
text-[14px] → text-sm
text-[16px] → text-base
text-[18px] → text-lg
text-[20px] → text-xl
text-[24px] → text-2xl
text-[28px] → keep as text-[28px] (no default scale match; text-3xl is 30px)
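When sweeping a codebase for arbitrary values, the font-size mappings above can be expressed as a lookup. A sketch in shell, covering only the sizes listed (anything else stays arbitrary):

```shell
#!/bin/sh
# Map a Figma font-size in px to the default Tailwind text class.
# Sizes outside the default scale are kept as arbitrary values.
text_class() {
  case "$1" in
    14) echo "text-sm" ;;
    16) echo "text-base" ;;
    18) echo "text-lg" ;;
    20) echo "text-xl" ;;
    24) echo "text-2xl" ;;
    *)  echo "text-[$1px]" ;;  # no exact match, e.g. 28px (text-3xl is 30px)
  esac
}

text_class 16   # text-base
text_class 28   # text-[28px]
```

The same pattern extends to the padding, gap, and radius mappings; falling through to the arbitrary value is safer than snapping to the nearest scale step, which would change the rendered size.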
RTL Quick Reference for Figma
| Visual Position (RTL) | DOM Order | Tailwind |
|---|---|---|
| RIGHT | First | items-start, justify-start |
| LEFT | Last | items-end, justify-end |
Button icons in RTL:
// Icon on LEFT visually (after text in RTL)
<Button rightIcon={<Phone />}>Call</Button>
// Icon on RIGHT visually (before text in RTL)
<Button leftIcon={<Phone />}>Call</Button>
Anti-Patterns
| Don't | Do Instead |
|---|---|
| Fix 5 things at once | Fix ONE thing, re-screenshot, verify |
| Skip checks when "it looks close" | Always do formal screenshot comparison |
| Hardcode pixel values | Use Tailwind scale or CSS vars |
| Ignore RTL | Verify element order matches RTL expectations |
| Guess colors | Use exact hex from Figma design context |
| Stop after 1 passing check | Need 3 CONSECUTIVE passes |
Best Pass Rate: 77% (Opus 4.6)
Assertions: 13 (3 models tested)
Avg Cost / Run: $0.3683 (across models)
Fastest (p50): 3.6s (Haiku 4.5)
Behavior Evals
Phase 2 baseline — skill quality on Claude
Behavior Baseline
| Assertion | Opus 4.6 | Sonnet 4.6 | Haiku 4.5 | Consensus |
|---|---|---|---|---|
| takes-figma-screenshot-first | 3/3 | | | |
| takes-implementation-screenshot | 1/3 | | | |
| compares-element-by-element | 3/3 | | | |
| fixes-one-thing-at-a-time | 3/3 | | | |
| enforces-three-consecutive-passes | 1/3 | | | |
| resets-counter-on-any-change | 2/3 | | | |
| refuses-to-skip-three-check-rule | 2/3 | | | |
| explains-why-three-checks | 2/3 | | | |
| continues-iteration-loop | 2/3 | | | |
| fixes-only-one-issue | 0/3 | | | |
| explains-interaction-risk | 2/3 | | | |
| retakes-screenshots-after-fix | 2/3 | | | |
| resets-pass-counter | 2/3 | | | |
Token Usage
Cost per Run
| Model | Input Tokens | Output Tokens | Cost / Run | Cost / 1K Runs |
|---|---|---|---|---|
| Opus 4.6 | 9,212 | 12,399 | $1.0681 | $1068.10 |
| Sonnet 4.6 | 1,782 | 1,995 | $0.0353 | $35.30 |
| Haiku 4.5 | 906 | 1,021 | $0.0015 | $1.50 |
Response Time
| Model | p50 | p95 | Overhead |
|---|---|---|---|
| Opus 4.6 | 10.2s | 14.4s | +42% |
| Sonnet 4.6 | 5.6s | 9.3s | +65% |
| Haiku 4.5 | 3.6s | 6.2s | +74% |
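The "Avg Cost / Run" figure above is the unweighted mean of the three per-model costs, which can be checked with a one-liner:

```shell
#!/bin/sh
# Mean of the three per-model costs from the table above.
awk 'BEGIN { printf "%.4f\n", (1.0681 + 0.0353 + 0.0015) / 3 }'
# prints 0.3683
```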
Last evaluated: 2026-03-12 · Data is generated from skill assertions (real cross-model benchmarks coming soon)
Changelog entries are derived from eval runs and skill version updates. Full cascading changelog (Phase 4D) coming soon.
Best Pass Rate: 77% · Assertions: 13 · Models Tested: 3 · Evals Run: 3
- Initial release to Golems skill library
- 13 assertions across 3 eval scenarios
- 3 workflows included: setup, check, iterate