A meeting request landed in my calendar. Someone I didn't know, from a company I'd heard of but didn't know well. Call the next morning.
I typed one line: "research [name] at [company]."
Eight minutes later I had a profile. Career arc. Communication patterns extracted from a conference talk and a podcast. Three questions tailored to what they'd mentioned caring about publicly. A briefing structured around the purpose of the call.
I hadn't explained what a profile was, what format I needed, or where to save it. I hadn't described the research steps or named the sources to check. I hadn't reminded Charles who I am, what I'm working on, or what a good output looks like.
You can go further. "Research [name] at [company] for my 10:00 on [topic]." He'll pull the full calendar invite, the company background, and the person together. One output. Each context point you add is one less thing you need to reconstruct in the meeting itself.
If that person had been in the vault already, from a previous meeting or a past conversation, the briefing would have opened differently. Last discussed. What was unresolved. The skill doesn't just research. It briefs.
What a skill actually is
A skill is a markdown file.
A SKILL.md sitting in a folder. Text. Structure. Instructions. Claude Code reads a CLAUDE.md at session start that lists every skill with its trigger phrases. When a trigger fires, the skill file loads. Not before. Everything else stays out of context until it's needed.
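A sketch of what that session-start index can look like. The skill names and slash commands are the ones that appear later in this article; the exact table layout and trigger wording are illustrative, not my actual file:

```markdown
## Skills

Load a SKILL.md only when one of its triggers fires.

| Skill           | Trigger phrases                    | Slash command    |
|-----------------|------------------------------------|------------------|
| people-research | "research [name] at [company]"     | /people-research |
| daily-planning  | "plan my day", "daily plan"        | /daily-plan      |
| linkedin-writer | "draft LinkedIn post"              | /linkedin-writer |
```

The index stays small because it carries only names and triggers; the instructions themselves load lazily, one skill at a time.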
The people research skill defines a persona: thorough researcher, synthesis first. It sets context: vault paths, output location, relationship types. It specifies output format: an executive-standard briefing with quick reference, career analysis, communication patterns, questions to ask. And it includes explicit permission to fail: only use public information or authenticated LinkedIn where a session is available, flag gaps, don't fill them.
Persona. Context. Output format. And permission to fail.
I didn't type them that morning. They were already there, in the skill. Every time.
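Put together, the four elements read as a skill file. This is a hedged sketch of the shape, not the real file: the section wording is mine, and the bracketed paths are placeholders, not actual vault locations:

```markdown
# SKILL: People Research

## Persona
Thorough researcher. Synthesize first, list sources second.

## Context
- People notes: [vault path]
- Output location: [briefings folder]
- Relationship types: colleague, prospect, partner, ...

## Output format
Executive-standard briefing: quick reference, career analysis,
communication patterns, questions to ask.

## Permission to fail
Use only public information, or LinkedIn when an authenticated
session is available. Flag gaps. Do not fill them.
```

That last section is the one most people skip, and the one that keeps the output honest.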
.agents/skills/
├── people-research/
│ ├── SKILL.md ← persona, context, format, permission to fail
│ └── workflows/
│ └── people-research.md ← research pipeline
├── second-brain/ ← GTD inbox processing, weekly reviews
├── linkedin-writer/ ← draft and challenge LinkedIn posts
├── playwright-fetch/ ← auth browser for JS-heavy pages
├── linkedin-analytics/ ← post metrics via Playwright, updates vault
├── competitive-intel/ ← structured research across eight data sources
├── coach/ ← triathlon training plans from Garmin data
└── ...
Two ways to invoke a skill: a trigger phrase or a slash command. Phrases are conversational and context-rich. "Research [name] at [company] for my 10:00 on [topic]" gives the skill something to work with before it starts. Slash commands are direct: /people-research, /daily-plan. My practice: phrases when I'm adding context inline, slash commands for structured workflows I run on a schedule. Daily planning, weekly review. The skill already knows what to gather. I just want it to start.
The rule I apply: if I do it twice, I turn it into a skill.
A growing library has one real friction: remembering what exists. The trigger phrases live in the skill files. Charles knows them. But if you forget you have a skill for something, you won't reach for it. The answer isn't better documentation. It's building habits around the skills you actually use. The ones you haven't touched in a month probably shouldn't be in the library.
Going deeper: daily planning
The most refined skill I have is daily planning. In the first article I described what it felt like when Charles started behaving like a Chief of Staff. Daily planning is where that behavior gets built.
What it does now: before I see anything, it silently gathers everything in parallel. It reads a compressed summary of all active projects (a Projects-Summary.md in the vault) and greps the high-priority open tasks. It checks today's Google Calendar, reads yesterday's daily note for carryover, and counts inbox items. All before the first word appears on screen.
Then it gives its assessment first. What needs attention, what's stalling, what's overdue. Opinion, then questions. That's a standing rule in the skill file.
It also runs a pre-flight check before any task recommendations: a friend not contacted in 3 weeks gets flagged by name. A family need. Anything overdue that lives outside a project. The things that fall through the cracks when you go 200% on work.
── GATHER (silent, parallel) ─────────────────────────────────
Projects-Summary.md + high-priority tasks
Google Calendar for today
Yesterday's daily note (carryover)
Inbox count
── PRE-FLIGHT CHECK ──────────────────────────────────────────
Friends not contacted in 3+ weeks: flagged by name
Family needs, overdue todo items outside projects
── ASSESSMENT ────────────────────────────────────────────────
What needs attention, what's stalling, what's overdue
Opinion first, then questions
→ [me] confirm energy level, time available, context
── TASK RECOMMENDATIONS ──────────────────────────────────────
Prioritized list for the day
17:00 protected, never filled
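The behavior in that pipeline is enforced by standing rules written as plain instructions in the skill file. A sketch of what those rules can look like, reconstructed from the behavior described above rather than copied from the actual file:

```markdown
## Rules
- Gather silently, in parallel. No output until gathering is done.
- Give your assessment before asking any questions.
- Run the pre-flight check before task recommendations:
  friends not contacted in 3+ weeks (flag by name), family needs,
  overdue items that live outside any project.
- Never schedule anything at 17:00. That block is protected.
```

Nothing here is clever. The value is that the rules run every session without being retyped.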
It wasn't always this way.
The early version of my daily planning read every project MOC in full. The token cost was enormous. Sessions were slow. I fixed it by building the Projects-Summary cache. Next iteration: it asked too many questions before showing anything. I flipped the order: assessment first, then questions. Then: the pre-flight check was missing entirely. I added it after noticing a pattern.
Each time something felt wrong, I worked out a better approach in that session. Then I asked Charles to update the skill file: a direct edit to the SKILL.md with the new rule. The next session ran better. Iteration by iteration, it became something that actually works the way I think.
Prompting best practices, baked in
One of the most effective prompting patterns is to separate writing from evaluation. Generate something, have a critic tear it apart, synthesize the result. It works because AI is a sharper editor than it is an original writer: pointing out what's weak is easier than producing something strong from scratch. The technique is called adversarial validation.
I have that. I call it the stress test. One phrase, one pass: four reviewers, each with a different agenda, applied in sequence. The Builder checks if the implementation holds up. The Skeptic looks for hype and hollow claims. The Practitioner asks if a real person would actually sustain this. The Editor cuts everything that doesn't earn its place. Each persona is defined in a Working Personas file Charles reads. The output: a consolidated verdict on what survives all four passes.
The personas change with the job. For PMM editorial content, a different set: Skeptical DXP Buyer, Competitor's PMM, PMM Peer. For this series, these four. The stress test is the pattern. The personas are the configuration.
I didn't invent the technique. I encoded it. Now I invoke it with three words instead of four paragraphs.
That's the difference between knowing a prompting technique and having it. Skills are where technique becomes repeatable. You don't type the method every session. You write it once into the skill, refine it over time, and invoke it when needed.
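Encoded, the stress test is just another block of markdown. A sketch of what the persona definitions can look like in the Working Personas file, with wording that is illustrative rather than quoted:

```markdown
## Stress test personas
Apply in sequence. Each pass produces findings; the final
output is one consolidated verdict on what survives all four.

1. **Builder** — does the implementation hold up?
2. **Skeptic** — where is the hype, which claims are hollow?
3. **Practitioner** — would a real person sustain this?
4. **Editor** — cut everything that doesn't earn its place.
```

Swapping the list is how the same skill serves different jobs: for PMM editorial work, the names change and the sequence stays.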
Chaining: how an article gets built
No skill runs alone. Here is the full chain for publishing an article in The Second Brain Stack series, including this one:
── CAPTURE ───────────────────────────────────────────────────
[me] voice note, idea, rough context, whatever's in my head
↓
[me] > "new blog post"
↓
[charles] BLOG SKILL
drafts article in vault: frontmatter,
series context, MOC backlink
── WRITE ─────────────────────────────────────────────────────
[me] write in Obsidian
(this part stays mine)
── REVIEW & IMPROVE ──────────────────────────────────────────
[me] > "review [post]"
↓
[charles] BLOG SKILL
frontmatter, SEO audit, structure,
tone alignment with series
hook and closing line test
↓
[me] > "run the stress test"
↓
[charles] four review personas in sequence,
consolidated synthesis report
↓
[me] > "draft LinkedIn post"
↓
[charles] LINKEDIN-WRITER SKILL
drafts based on article in my voice
↓
[me] write LinkedIn post in Obsidian
(by choice)
── PUBLISH ───────────────────────────────────────────────────
[me] > "publish [post]"
↓
[charles] BLOG SKILL
strips Obsidian syntax, copies to repo,
git commit, git push, Vercel deploys
↓
[me] post on LinkedIn (manual by choice)
── MEASURE ───────────────────────────────────────────────────
24 hours later
↓
[charles] LINKEDIN-ANALYTICS SKILL
playwright-fetch: authenticated browser,
pulls post analytics
appends to stats, updates dashboard
Every input I make is either a creative decision or a human-in-the-loop moment: a single command or an approval. Charles handles everything between. Five skills. No re-explaining context between steps.
The blog skill writes to the vault. The linkedin-writer reads from it without being told to. Neither coordinates with the other. They share state through the vault. That's what makes this a chain and not just a sequence: each skill reads what the previous one wrote, through a shared foundation. Playwright closes the loop with the same auth session file that playwright-fetch uses to pull pages during research. One session on disk, two skills pointing at it.
The analytics step is manually triggered. A cron job that fails silently when the auth session expires is worse than a deliberate pull. For now, I choose when to close the loop.
What compounding actually looks like
The skills started rough.
People research: basic workflow, got the format wrong a few times. Daily planning: slow, expensive, too many questions before showing anything useful. The stress test: one-size-fits-all personas that didn't know my voice, my goals, or my writing rules.
Each session, something got tightened. The persona got sharper. The output format got more specific. A new rule got encoded: no em dashes, 17:00 protected, say "I don't know" instead of guessing.
The skill gets better every time I use it. Not because the AI learned something. Because I got clearer about what I actually wanted, and I wrote it down.
Not a bigger skill library. Skills that get better shaped to you.
Prompts reset. Skills compound.
No comments section here. If you have questions or want to leave a comment, the LinkedIn post is the place. I read every reply.
The Second Brain Stack · ← Part 1: Chatbot to Chief of Staff · Next: the vault.
