Prompt A/B Evaluator
Runs two prompt variants against a fixed test set, scores each output with a rubric LLM, and reports which variant wins (and why).
Install this skill
A Claude skill is a skill.md file with YAML frontmatter and a markdown body.
Drop the file into your tool of choice, or pick a different format if you use Cursor, Windsurf, Copilot, or something else.
mkdir -p ~/.claude/skills/prompt-ab-evaluator \
&& curl -L https://aidiveforge.com/skill/prompt-ab-evaluator.skill-md \
-o ~/.claude/skills/prompt-ab-evaluator/skill.md
Save to ~/.claude/skills/prompt-ab-evaluator/skill.md
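The loop this skill describes (run each variant over every test case, score the outputs against a rubric, aggregate, and declare a winner) can be sketched in Python. This is a minimal illustration, not the skill's actual interface: the names `run_prompt`, `score_with_rubric`, and `evaluate` are hypothetical, and the rubric-LLM judge is stubbed with a trivial length heuristic in place of a real model call.

```python
from statistics import mean

def run_prompt(prompt_template: str, case: str) -> str:
    # Placeholder for a real model call; here we just fill the template.
    return prompt_template.format(input=case)

def score_with_rubric(output: str) -> float:
    # Placeholder for the rubric-LLM judge. A real judge would return a
    # score plus a short justification; this stub favors longer outputs.
    return min(len(output) / 100, 1.0)

def evaluate(variant_a: str, variant_b: str, test_set: list[str]) -> dict:
    scores = {"A": [], "B": []}
    for case in test_set:
        # Both variants see the same fixed test case, so scores are comparable.
        scores["A"].append(score_with_rubric(run_prompt(variant_a, case)))
        scores["B"].append(score_with_rubric(run_prompt(variant_b, case)))
    means = {k: mean(v) for k, v in scores.items()}
    winner = max(means, key=means.get)
    return {"winner": winner, "mean_scores": means}

result = evaluate(
    "Summarize: {input}",
    "Summarize the following text in two sentences, citing key facts: {input}",
    ["The quick brown fox jumps over the lazy dog."],
)
print(result["winner"])
```

With a real judge in place of the stub, you would also collect the judge's justifications per case, which is where the "and why" part of the verdict comes from.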