
writing-skills

Use when creating new skills, editing existing skills, or verifying skills work before deployment

v1.0.0

Installation

CLI install (recommended)

claw install oss-writing-skills

Requires the CLAW CLI

Manual download

Download the ZIP file and extract it into your skills directory

Download ZIP (oss-writing-skills-v1.0.0.zip)

Trigger command

/writing-skills

Usage Guide

Writing Skills

Overview

Writing skills IS Test-Driven Development applied to process documentation.

Personal skills live in agent-specific directories (~/.claude/skills for Claude Code, ~/.agents/skills/ for Codex)

You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (the documentation), watch tests pass (agents comply), and refactor (close loopholes).

Core principle: If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.

REQUIRED BACKGROUND: You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.

Official guidance: For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.

What is a Skill?

A skill is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.

Skills are: Reusable techniques, patterns, tools, reference guides

Skills are NOT: Narratives about how you solved a problem once

TDD Mapping for Skills

| TDD Concept | Skill Creation |
|-------------|----------------|
| Test case | Pressure scenario with subagent |
| Production code | Skill document (SKILL.md) |
| Test fails (RED) | Agent violates rule without skill (baseline) |
| Test passes (GREEN) | Agent complies with skill present |
| Refactor | Close loopholes while maintaining compliance |
| Write test first | Run baseline scenario BEFORE writing skill |
| Watch it fail | Document exact rationalizations agent uses |
| Minimal code | Write skill addressing those specific violations |
| Watch it pass | Verify agent now complies |
| Refactor cycle | Find new rationalizations → plug → re-verify |

The entire skill creation process follows RED-GREEN-REFACTOR.

When to Create a Skill

Create when:

  • Technique wasn't intuitively obvious to you
  • You'd reference this again across projects
  • Pattern applies broadly (not project-specific)
  • Others would benefit

Don't create for:

  • One-off solutions
  • Standard practices well-documented elsewhere
  • Project-specific conventions (put in CLAUDE.md)
  • Mechanical constraints (if it's enforceable with regex/validation, automate it—save documentation for judgment calls)

Skill Types

Technique

Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)

Pattern

Way of thinking about problems (flatten-with-flags, test-invariants)

Reference

API docs, syntax guides, tool documentation (office docs)

Directory Structure

skills/
  skill-name/
    SKILL.md              # Main reference (required)
    supporting-file.*     # Only if needed

Flat namespace - all skills in one searchable namespace

Separate files for:

  1. Heavy reference (100+ lines) - API docs, comprehensive syntax
  2. Reusable tools - Scripts, utilities, templates

Keep inline:

  • Principles and concepts
  • Code patterns (< 50 lines)
  • Everything else
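As an illustrative sketch only (the function name and template contents are assumptions, not part of any official tooling), the flat layout above can be scaffolded in Python:

```python
from pathlib import Path

# Hypothetical starter content; real skills fill in description and overview.
SKILL_TEMPLATE = """---
name: {name}
description: Use when [specific triggering conditions and symptoms]
---

# {name}

## Overview
"""

def scaffold_skill(root: str, name: str) -> Path:
    """Create the minimal flat layout: skills/<name>/SKILL.md and nothing else."""
    skill_dir = Path(root) / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.exists():  # never clobber an existing skill
        skill_md.write_text(SKILL_TEMPLATE.format(name=name))
    return skill_md
```

Supporting files get added later, and only if the skill genuinely needs them.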

SKILL.md Structure

Frontmatter (YAML):

  • Two required fields: name and description (see agentskills.io/specification for all supported fields)
  • Max 1024 characters total
  • name: Use letters, numbers, and hyphens only (no parentheses, special chars)
  • description: Third-person, describes ONLY when to use (NOT what it does)
    • Start with "Use when..." to focus on triggering conditions
    • Include specific symptoms, situations, and contexts
    • NEVER summarize the skill's process or workflow (see CSO section for why)
    • Keep under 500 characters if possible
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms]
---

# Skill Name

## Overview
What is this? Core principle in 1-2 sentences.

## When to Use
[Small inline flowchart IF decision non-obvious]

Bullet list with SYMPTOMS and use cases
When NOT to use

## Core Pattern (for techniques/patterns)
Before/after code comparison

## Quick Reference
Table or bullets for scanning common operations

## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools

## Common Mistakes
What goes wrong + fixes

## Real-World Impact (optional)
Concrete results
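The frontmatter constraints above are mechanical enough to check automatically. A rough sketch of such a check (the parsing is deliberately naive — a real YAML parser would be more robust; all names here are placeholders):

```python
import re

def check_frontmatter(skill_md_text: str) -> list[str]:
    """Flag violations of the frontmatter rules. Returns a list of problems."""
    problems = []
    m = re.match(r"^---\n(.*?)\n---\n", skill_md_text, re.DOTALL)
    if not m:
        return ["missing YAML frontmatter block"]
    block = m.group(1)
    if len(block) > 1024:
        problems.append("frontmatter exceeds 1024 characters")
    # Naive key: value parsing; assumes one field per line.
    fields = {}
    for line in block.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip()
    name = fields.get("name", "")
    desc = fields.get("description", "")
    if not name:
        problems.append("missing required field: name")
    elif not re.fullmatch(r"[A-Za-z0-9-]+", name):
        problems.append("name must use only letters, numbers, hyphens")
    if not desc:
        problems.append("missing required field: description")
    else:
        if not desc.lower().startswith("use when"):
            problems.append('description should start with "Use when..."')
        if len(desc) > 500:
            problems.append("description over 500 characters; tighten it")
    return problems
```

Run it over a SKILL.md before the subagent tests — it catches the mechanical mistakes so testing can focus on judgment calls.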

Claude Search Optimization (CSO)

Critical for discovery: Future Claude needs to FIND your skill

1. Rich Description Field

Purpose: Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"

Format: Start with "Use when..." to focus on triggering conditions

CRITICAL: Description = When to Use, NOT What the Skill Does

The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.

Why this matters: Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused Claude to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).

When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), Claude correctly read the flowchart and followed the two-stage review process.

The trap: Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.

# ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
description: Use when executing plans - dispatches subagent per task with code review between tasks

# ❌ BAD: Too much process detail
description: Use for TDD - write test first, watch it fail, write minimal code, refactor

# ✅ GOOD: Just triggering conditions, no workflow summary
description: Use when executing implementation plans with independent tasks in the current session

# ✅ GOOD: Triggering conditions only
description: Use when implementing any feature or bugfix, before writing implementation code

Content:

  • Use concrete triggers, symptoms, and situations that signal this skill applies
  • Describe the problem (race conditions, inconsistent behavior) not language-specific symptoms (setTimeout, sleep)
  • Keep triggers technology-agnostic unless the skill itself is technology-specific
  • If skill is technology-specific, make that explicit in the trigger
  • Write in third person (injected into system prompt)
  • NEVER summarize the skill's process or workflow
# ❌ BAD: Too abstract, vague, doesn't include when to use
description: For async testing

# ❌ BAD: First person
description: I can help you with async tests when they're flaky

# ❌ BAD: Mentions technology but skill isn't specific to it
description: Use when tests use setTimeout/sleep and are flaky

# ✅ GOOD: Starts with "Use when", describes problem, no workflow
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently

# ✅ GOOD: Technology-specific skill with explicit trigger
description: Use when using React Router and handling authentication redirects

2. Keyword Coverage

Use words Claude would search for:

  • Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
  • Symptoms: "flaky", "hanging", "zombie", "pollution"
  • Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
  • Tools: Actual commands, library names, file types

3. Descriptive Naming

Use active voice, verb-first:

  • creating-skills not skill-creation
  • condition-based-waiting not async-test-helpers

4. Token Efficiency (Critical)

Problem: getting-started and frequently-referenced skills load into EVERY conversation. Every token counts.

Target word counts:

  • getting-started workflows: <150 words each
  • Frequently-loaded skills: <200 words total
  • Other skills: <500 words (still be concise)

Techniques:

Move details to tool help:

# ❌ BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N

# ✅ GOOD: Reference --help
search-conversations supports multiple modes and filters. Run --help for details.

Use cross-references:

# ❌ BAD: Repeat workflow details
When searching, dispatch subagent with template...
[20 lines of repeated instructions]

# ✅ GOOD: Reference other skill
Always use subagents (50-100x context savings). REQUIRED: Use [other-skill-name] for workflow.

Compress examples:

# ❌ BAD: Verbose example (42 words)
your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search past conversations for React Router authentication patterns.
[Dispatch subagent with search query: "React Router authentication error handling 401"]

# ✅ GOOD: Minimal example (20 words)
Partner: "How did we handle auth errors in React Router?"
You: Searching...
[Dispatch subagent → synthesis]

Eliminate redundancy:

  • Don't repeat what's in cross-referenced skills
  • Don't explain what's obvious from command
  • Don't include multiple examples of the same pattern

Verification:

wc -w skills/path/SKILL.md
# getting-started workflows: aim for <150 each
# Other frequently-loaded: aim for <200 total
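The same budget check can be scripted across a skills tree — a rough sketch, assuming the flat `skills/*/SKILL.md` layout shown earlier (the function name and threshold default are placeholders):

```python
from pathlib import Path

def over_budget(skills_root: str, limit: int = 500) -> list[tuple[str, int]]:
    """List (path, word count) for every SKILL.md whose word count exceeds limit."""
    offenders = []
    for skill_md in sorted(Path(skills_root).glob("*/SKILL.md")):
        words = len(skill_md.read_text().split())  # same count as `wc -w`
        if words > limit:
            offenders.append((str(skill_md), words))
    return offenders
```

Tighten the limit for frequently-loaded skills (200) and getting-started workflows (150) per the targets above.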

Name by what you DO or core insight:

  • condition-based-waiting > async-test-helpers
  • using-skills not skill-usage
  • flatten-with-flags > data-structure-refactoring
  • root-cause-tracing > debugging-techniques

Gerunds (-ing) work well for processes:

  • creating-skills, testing-skills, debugging-with-logs
  • Active, describes the action you're taking

5. Cross-Referencing Other Skills

When writing documentation that references other skills:

Use skill name only, with explicit requirement markers:

  • ✅ Good: **REQUIRED SUB-SKILL:** Use superpowers:test-driven-development
  • ✅ Good: **REQUIRED BACKGROUND:** You MUST understand superpowers:systematic-debugging
  • ❌ Bad: See skills/testing/test-driven-development (unclear if required)
  • ❌ Bad: @skills/testing/test-driven-development/SKILL.md (force-loads, burns context)

Why no @ links: @ syntax force-loads files immediately, consuming 200k+ tokens of context before you need them.
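A review-time check for accidental force-loads can be sketched as a simple pattern scan (the `@skills/` pattern and function name are assumptions for illustration):

```python
import re

def find_force_loads(text: str) -> list[str]:
    """Find @skills/... links, which would force-load the referenced file."""
    return re.findall(r"@skills/[\w./-]+", text)
```

Anything it returns should be rewritten as a plain skill-name reference with an explicit requirement marker.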

Flowchart Usage

digraph when_flowchart {
    "Need to show information?" [shape=diamond];
    "Decision where I might go wrong?" [shape=diamond];
    "Use markdown" [shape=box];
    "Small inline flowchart" [shape=box];

    "Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
    "Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
    "Decision where I might go wrong?" -> "Use markdown" [label="no"];
}

Use flowcharts ONLY for:

  • Non-obvious decision points
  • Process loops where you might stop too early
  • "When to use A vs B" decisions

Never use flowcharts for:

  • Reference material → Tables, lists
  • Code examples → Markdown blocks
  • Linear instructions → Numbered lists
  • Labels without semantic meaning (step1, helper2)

See @graphviz-conventions.dot for graphviz style rules.

Visualizing for your human partner: Use render-graphs.js in this directory to render a skill's flowcharts to SVG:

./render-graphs.js ../some-skill           # Each diagram separately
./render-graphs.js ../some-skill --combine # All diagrams in one SVG

Code Examples

One excellent example beats many mediocre ones

Choose most relevant language:

  • Testing techniques → TypeScript/JavaScript
  • System debugging → Shell/Python
  • Data processing → Python

A good example is:

  • Complete and runnable
  • Well-commented explaining WHY
  • From real scenario
  • Shows the pattern clearly
  • Ready to adapt (not a generic template)

Don't:

  • Implement in 5+ languages
  • Create fill-in-the-blank templates
  • Write contrived examples

You're good at porting - one great example is enough.

File Organization

Self-Contained Skill

defense-in-depth/
  SKILL.md    # Everything inline

When: All content fits, no heavy reference needed

Skill with Reusable Tool

condition-based-waiting/
  SKILL.md    # Overview + patterns
  example.ts  # Working helpers to adapt

When: Tool is reusable code, not just narrative

Skill with Heavy Reference

pptx/
  SKILL.md       # Overview + workflows
  pptxgenjs.md   # 600 lines API reference
  ooxml.md       # 500 lines XML structure
  scripts/       # Executable tools

When: Reference material too large for inline

The Iron Law (Same as TDD)

NO SKILL WITHOUT A FAILING TEST FIRST

This applies to NEW skills AND EDITS to existing skills.

Write the skill before testing? Delete it. Start over. Edit a skill without testing? Same violation.

No exceptions:

  • Not for "simple additions"
  • Not for "just adding a section"
  • Not for "documentation updates"
  • Don't keep untested changes as "reference"
  • Don't "adapt" while running tests
  • Delete means delete

REQUIRED BACKGROUND: The superpowers:test-driven-development skill explains why this matters. Same principles apply to documentation.

Testing All Skill Types

Different skill types need different test approaches:

Discipline-Enforcing Skills (rules/requirements)

Examples: TDD, verification-before-completion, designing-before-coding

Test with:

  • Academic questions: Do they understand the rules?
  • Pressure scenarios: Do they comply under stress?
  • Multiple pressures combined: time + sunk cost + exhaustion
  • Identify rationalizations and add explicit counters

Success criteria: Agent follows rule under maximum pressure

Technique Skills (how-to guides)

Examples: condition-based-waiting, root-cause-tracing, defensive-programming

Test with:

  • Application scenarios: Can they apply the technique correctly?
  • Variation scenarios: Do they handle edge cases?
  • Missing information tests: Do instructions have gaps?

Success criteria: Agent successfully applies technique to new scenario

模式 Skills (mental models)

示例s: reducing-complexity, information-hiding concepts

Test with:

  • Recognition scenarios: Do they recognize when the pattern applies?
  • Application scenarios: Can they use the mental model?
  • Counter-examples: Do they know when NOT to apply?

Success criteria: Agent correctly identifies when/how to apply the pattern

Reference Skills (documentation/APIs)

Examples: API documentation, command references, library guides

Test with:

  • Retrieval scenarios: Can they find the right information?
  • Application scenarios: Can they use what they found correctly?
  • Gap testing: Are common use cases covered?

Success criteria: Agent finds and correctly applies reference information

Common Rationalizations for Skipping Testing

| Excuse | Reality |
|--------|---------|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. 15 min of testing saves hours. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging a bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying an untested skill wastes more time fixing it later. |

All of these mean: Test before deploying. No exceptions.

Bulletproofing Skills Against Rationalization

Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.

Psychology note: Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.

Close Every Loophole Explicitly

Don't just state the rule - forbid specific workarounds:

❌ BAD - states the rule only:

```markdown
Write code before test? Delete it.
```

✅ GOOD - forbids the specific workarounds:

```markdown
Write code before test? Delete it. Start over.

No exceptions:

- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```

Address "Spirit vs Letter" Arguments

Add foundational principle early:

```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```

This cuts off an entire class of "I'm following the spirit" rationalizations.

Build Rationalization Table

Capture rationalizations from baseline testing (see the testing section below). Every excuse agents make goes in the table:

| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |

Create Red Flags List

Make it easy for agents to self-check when rationalizing:

## Red Flags - STOP and Start Over

- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "This is different because..."

**All of these mean: Delete code. Start over with TDD.**

Update CSO for Violation Symptoms

Add to description: symptoms of when you're ABOUT to violate the rule:

description: Use when implementing any feature or bugfix, before writing implementation code

RED-GREEN-REFACTOR for Skills

Follow the TDD cycle:

RED: Write Failing Test (Baseline)

Run pressure scenario with subagent WITHOUT the skill. Document exact behavior:

  • What choices did they make?
  • What rationalizations did they use (verbatim)?
  • Which pressures triggered violations?

This is "watch the test fail" - you must see what agents naturally do before writing the skill.

GREEN: Write Minimal Skill

Write skill that addresses those specific rationalizations. Don't add extra content for hypothetical cases.

Run same scenarios WITH skill. Agent should now comply.

REFACTOR: Close Loopholes

Agent found new rationalization? Add explicit counter. Re-test until bulletproof.

Testing methodology: See @testing-skills-with-subagents.md for the complete testing methodology:

  • How to write pressure scenarios
  • Pressure types (time, sunk cost, authority, exhaustion)
  • Plugging holes systematically
  • Meta-testing techniques

Anti-Patterns

❌ Narrative Example

"In session 2025-10-03, we found empty projectDir caused..."

Why bad: Too specific, not reusable

❌ Multi-Language Dilution

example-js.js, example-py.py, example-go.go

Why bad: Mediocre quality, maintenance burden

❌ Code in Flowcharts

step1 [label="import fs"];
step2 [label="read file"];

Why bad: Can't copy-paste, hard to read

❌ Generic Labels

helper1, helper2, step3, pattern4

Why bad: Labels should have semantic meaning

STOP: Before Moving to Next Skill

After writing ANY skill, you MUST STOP and complete the deployment process.

Do NOT:

  • Create multiple skills in batch without testing each
  • Move to next skill before current one is verified
  • Skip testing because "batching is more efficient"

The deployment checklist below is MANDATORY for EACH skill.

Deploying untested skills = deploying untested code. It's a violation of quality standards.

Skill Creation Checklist (TDD Adapted)

IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.

RED Phase - Write Failing Test:

  • [ ] Create pressure scenarios (3+ combined pressures for discipline skills)
  • [ ] Run scenarios WITHOUT skill - document baseline behavior verbatim
  • [ ] Identify patterns in rationalizations/failures

GREEN Phase - Write Minimal Skill:

  • [ ] Name uses only letters, numbers, hyphens (no parentheses/special chars)
  • [ ] YAML frontmatter with required name and description fields (max 1024 chars; see spec)
  • [ ] Description starts with "Use when..." and includes specific triggers/symptoms
  • [ ] Description written in third person
  • [ ] Keywords throughout for search (errors, symptoms, tools)
  • [ ] Clear overview with core principle
  • [ ] Address specific baseline failures identified in RED
  • [ ] Code inline OR link to separate file
  • [ ] One excellent example (not multi-language)
  • [ ] Run scenarios WITH skill - verify agents now comply

REFACTOR Phase - Close Loopholes:

  • [ ] Identify NEW rationalizations from testing
  • [ ] Add explicit counters (if discipline skill)
  • [ ] Build rationalization table from all test iterations
  • [ ] Create red flags list
  • [ ] Re-test until bulletproof

Quality Checks:

  • [ ] Small flowchart only if decision non-obvious
  • [ ] Quick reference table
  • [ ] Common mistakes section
  • [ ] No narrative storytelling
  • [ ] Supporting files only for tools or heavy reference

Deployment:

  • [ ] Commit skill to git and push to your fork (if configured)
  • [ ] Consider contributing back via PR (if broadly useful)

Discovery Workflow

How future Claude finds your skill:

  1. Encounters problem ("tests are flaky")
  2. Finds SKILL (description matches)
  3. Scans overview (is this relevant?)
  4. Reads patterns (quick reference table)
  5. Loads example (only when implementing)

Optimize for this flow - put searchable terms early and often.

The Bottom Line

Creating skills IS TDD for process documentation.

Same Iron Law: No skill without failing test first. Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes). Same benefits: Better quality, fewer surprises, bulletproof results.

If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.