<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>PieterBrinkman.com</title>
    <link>https://www.pieterbrinkman.com</link>
    <description>Tech blog about IoT, ESP32, Azure, Sitecore, and Software Development</description>
    <language>en-US</language>
    <atom:link href="https://www.pieterbrinkman.com/feed.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title><![CDATA[MCP: the live data layer your AI system is missing]]></title>
      <link>https://www.pieterbrinkman.com/2026/03/28/mcp-the-live-data-layer/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2026/03/28/mcp-the-live-data-layer/</guid>
      <pubDate>Sat, 28 Mar 2026 00:00:00 GMT</pubDate>
      <description><![CDATA[None of that information was in the vault. No file listed Saturday's schedule. Charles didn't remember it. He looked.]]></description>
      <content:encoded><![CDATA[
Saturday morning. Coffee barely poured. Four events already in the family calendar.

Wednesday evening, a scheduled task ran. Charles pulled up the soccer club website and found my son's match. Home game, 11:30 kickoff. Then he opened the korfbal schedule. Korfbal is a Dutch sport; think basketball and netball had a child. The schedule page is fully JavaScript-rendered, so he spun up a headless browser to read it. Parsed the table. Found matches for three daughters: two at home, one away in the afternoon. Checked the family calendar for conflicts. None. Created four events with emoji titles, arrival times, and driving distances. Color-coded home versus away. Flagged an overlap in the afternoon so we could plan around it.

```
  ┌─────────────────────────┐   ┌─────────────────────────┐
  │  WEDNESDAY TASK         │   │  FRIDAY WEEKLY REVIEW   │
  │  scheduled · 21:00      │   │  manual trigger         │
  └────────────┬────────────┘   └────────────┬────────────┘
               │                             │
               └──────────────┬──────────────┘
                              │
                              ▼
                     ┌────────────────┐
                     │  VAULT         │
                     │  team names    │
                     │  divisions     │
                     │  home grounds  │
                     └───────┬────────┘
                             │
              ┌──────────────┴──────────────┐
              │                             │
              ▼                             ▼
    ┌──────────────────┐         ┌──────────────────────┐
    │  DuckDuckGo      │         │  Playwright           │
    │  soccer schedule │         │  korfbal schedule     │
    │                  │         │  (JS-rendered)        │
    └────────┬─────────┘         └──────────┬────────────┘
             │                              │
             └──────────────┬───────────────┘
                            │
                            │   away games only
                            ├──────────────────────►  Google Maps
                            │                          travel time
                            │                          → arrive by X
                            ▼
         ┌────────────────────────────────────────┐
         │  Google Calendar                       │
         │                                        │
         │  conflict check · color code           │
         │                                        │
         │ ⚽ soccer  · 11:30 · home  🟢          │
         │ 🏐 korfbal · 12:00 · home  🟢          │
         │ 🏐 korfbal · 12:00 · home  🟢          │
         │ 🏐 korfbal · 14:30 · away  🔵  ⚠      │
         │                   overlap flagged       │
         └────────────────────────────────────────┘
```
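That Wednesday pipeline is less magic than it sounds. Here's a minimal sketch of the fetch-and-parse step — the row pattern and team names are made up, and a real club site would need its own selectors:

```python
import re

def fetch_rendered(url: str) -> str:
    """Fetch a fully JavaScript-rendered page with a headless browser.

    Requires `pip install playwright` and `playwright install chromium`.
    """
    from playwright.sync_api import sync_playwright  # imported lazily
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        page.wait_for_load_state("networkidle")  # let the JS finish rendering
        html = page.content()
        browser.close()
    return html

def parse_matches(html: str, team_names: list[str]) -> list[dict]:
    """Pull (time, home, away) rows out of rendered table markup.

    The <td> pattern is hypothetical; adapt it to the actual page.
    """
    row = re.compile(
        r"<td>(\d{2}:\d{2})</td>\s*<td>([^<]+)</td>\s*<td>([^<]+)</td>"
    )
    matches = []
    for time, home, away in row.findall(html):
        if home.strip() in team_names or away.strip() in team_names:
            matches.append({
                "time": time,
                "home": home.strip(),
                "away": away.strip(),
                "is_home": home.strip() in team_names,
            })
    return matches
```

The fetch half is the part that took debugging; the parse half is plain pattern matching once the rendered HTML is in hand.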

Before I built this, Saturday morning planning was a Friday ritual. Open the soccer club website, find my son's team, note the time. Open the korfbal club website or app, scroll through the schedule, find three different teams for three daughters. Check which games are home and which are away, and how long the drive is. Figure out if the afternoon away game overlaps with the midday home game. Open Google Calendar, create four events manually, guess at arrival times. Twenty minutes on a good day. Longer if a schedule changed and I didn't notice.

No more websites. No manual schedule hunting. No figuring out who plays where and whether the afternoon away game conflicts with the morning kickoff. Charles did all of that. And he didn't do it from memory. No file in the vault listed Saturday's schedule.

The weekly review on Friday checks the schedules again. That week, a match had been rescheduled. Charles caught it during the Friday check, updated the calendar, and flagged it in the briefing. By the time Saturday morning came, the right time was already there.

You could build this with IFTTT or Zapier. A web scraper that checks the schedule, a calendar integration that creates events. I know because I tried. The difference is where the intelligence lives. An automation runs a fixed recipe: check URL, parse table, create event. If the page layout changes, the recipe breaks. If two games overlap, the recipe doesn't notice, or the recipe gets impossibly complex trying to handle it. If there's already a family birthday on the calendar, the recipe doesn't care.

MCP puts the data inside the AI conversation. Charles sees the schedule, the calendar, and the rules simultaneously. When an away game overlaps with another appointment, he flags it. When a match time changes, he notices, because he reads the association's current schedule page, compares it against what was already in the calendar, and updates it. The automation does one thing. The AI finds the patterns and does the thinking.
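The overlap check itself is simple in principle. A sketch with hypothetical event dicts — the real check also weighs travel time for away games, which is why it's padded in here:

```python
from datetime import datetime, timedelta

def overlaps(events: list[dict]) -> list[tuple[str, str]]:
    """Return pairs of event titles whose time windows intersect.

    Events are hypothetical dicts: {"title", "start" ("HH:MM"), "minutes"},
    plus an optional "travel_minutes" that pads an away game's window.
    """
    def window(e):
        start = datetime.strptime(e["start"], "%H:%M")
        pad = timedelta(minutes=e.get("travel_minutes", 0))
        return start - pad, start + timedelta(minutes=e["minutes"])

    flagged = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            a0, a1 = window(a)
            b0, b1 = window(b)
            if a0 < b1 and b0 < a1:  # intervals intersect
                flagged.append((a["title"], b["title"]))
    return flagged
```

The point isn't the twenty lines; it's that an automation recipe has to anticipate this case explicitly, while the AI applies the rule to whatever the live data turns out to be.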

The [vault](https://www.pieterbrinkman.com/2026/03/12/the-vault-why-the-foundation-matters/) holds what I've written down. [CLAUDE.md](https://www.pieterbrinkman.com/2026/03/23/claude-md-is-not-a-prompt/) holds how I want to work. Neither one has eyes. They can't see something that changed on a sports club website, check whether 11:30 is free, or know that the korfbal tournament moved from Saturday to Sunday.

<Callout type="info" title="→ Article 3: The vault">
[How 245 markdown files become a persistent memory that Charles carries into every session.](https://www.pieterbrinkman.com/2026/03/12/the-vault-why-the-foundation-matters/)
</Callout>

The previous article ended with a question. The vault remembers. CLAUDE.md shapes behavior. But neither has live context. What happens when you give the system eyes and ears?

## How MCP connects everything

MCP stands for Model Context Protocol. If that means nothing to you, you're not alone. Here's what it actually does.

An API is a specific connection to a specific service. If you want to read Google Calendar, you use the Google Calendar API. If you want Garmin data, you use the Garmin API. Each one has its own authentication, its own data format, its own quirks. Build ten connections, learn ten APIs.

MCP is not an API. It's a protocol. A common language that any service can implement. The difference matters. An API says "here's how to talk to me specifically." A protocol says "here's how anything can talk to anything."

Without MCP, Claude reads files and writes files in my vault. That's it. Powerful, but limited to what's already on disk. With MCP, Claude reads my calendar, tracks my workouts, fetches a web page, creates an event, searches the internet, controls my smart home. It goes from reading documents to participating in my day.

The best analogy I've found: USB for AI. You don't build a custom cable for every device you own. You build one port. Every device that speaks USB works. MCP is that port. Every data source that implements the protocol becomes available to the AI through the same interface. No custom integration per service. One standard. Many connections.

Before MCP, Claude was a reader. It consumed what you gave it and responded. With MCP, Claude becomes a participant. It doesn't wait for information. It can go out, get it, and act on it.
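Under the hood, MCP messages are JSON-RPC 2.0, which is what "one standard" means in practice: every request to every server has the same envelope. A sketch — the tool name and arguments here are made up:

```python
import json

def tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tool invocation as a JSON-RPC 2.0 request.

    The "tools/call" method and params shape come from the MCP spec;
    the specific tool is illustrative.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The same envelope reaches a calendar server, a Garmin wrapper,
# or a smart home bridge — only name and arguments differ.
msg = tool_call(1, "list_events", {"date": "2026-03-28"})
```

That shared envelope is the USB port: the client doesn't care what's on the other end.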

Here's what's connected to Charles right now:

- **Google Calendar.** Reads events, creates events, finds free time, checks conflicts.
- **Notion.** Task inbox, project database, and knowledge capture pipeline.
- **Granola.** Meeting transcripts from local storage.
- **DuckDuckGo.** Web search and clean page fetching via Jina Reader.
- **Garmin.** Training data, sleep, body battery, resting heart rate trends.
- **Home Assistant.** Smart home control, sensor data, automations.
- **NotebookLM.** Google's research AI. You've probably heard the audio overviews by now. Drop in documents, get structured summaries, study guides, and audio digests. The summaries sync to the vault, turning long reads into grounded research that Charles can reference.

Each one gives Charles a capability he didn't have before. Together, they turn a file-based assistant into one that sees what's happening right now.

A brief note on infrastructure: most MCP servers don't come pre-packaged. They need to be self-hosted: thin wrappers around existing APIs that speak the MCP protocol. I run all of them on a single mini PC under my desk. One entry point for every AI agent, one place to maintain them, available 24/7.

Here's the full architecture in one view:

```
╔══════════════════════════════════════════════════════════════╗
║  LOADED AT SESSION START                                     ║
╠══════════════════════════════════════════════════════════════╣
║                                                              ║
║  ┌────────────────────────────────────────────────────────┐  ║
║  │  CLAUDE.md  ·  180 lines                               │  ║
║  │  Persona · 14 Workflows · Standing Rules               │  ║
║  └──────────────────────┬─────────────────────────────────┘  ║
║                         │ @imports                           ║
║                         ▼                                    ║
║  ┌────────────────────────────────────────────────────────┐  ║
║  │  .claude/rules/  ·  4 files  ·  320 lines              │  ║
║  │  vault-structure · tools · moc-linking · sync          │  ║
║  └────────────────────────────────────────────────────────┘  ║
║                                                              ║
║  ┌────────────────────────────────────────────────────────┐  ║
║  │  Auto Memory  ·  MEMORY.md  ·  first 200 lines         │  ║
║  └────────────────────────────────────────────────────────┘  ║
║                                                              ║
╚══════════════════════════════════════════════════════════════╝
                          │
                you type your first message
                          │
                          ▼
┌──────────────────────────────────────────────────────────────┐
│  LOADED ON DEMAND                                            │
│                                                              │
│  "plan my day"     ──►  second-brain/daily-plan.md           │
│  "plan zaterdag"   ──►  kids-sports/SKILL.md                 │
│  "research [name]" ──►  people-research/SKILL.md             │
│                                                              │
│  14 skills · 100-300 lines each · loads on trigger           │
│                                                              │
│  Vault files read as needed                                  │
│  40-Projects/  ·  50-Areas/  ·  80-Permanent-Notes/          │
└──────────────────────────────────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────────────┐
│  LIVE DATA  ·  MCP                                           │
│                                                              │
│  ── Out of the box ───────────────────────────────────────   │
│                                                              │
│  Google Calendar  (Anthropic native · zero infrastructure)   │
│  ├─ reads events · creates events · checks conflicts         │
│  └─ sizes the daily plan to what the calendar actually holds │
│                                                              │
│  Granola  (own MCP · free tier)                              │
│  ├─ meeting summaries via MCP                                │
│  └─ full transcripts from local storage                      │
│                                                              │
│  ── Self-hosted wrappers  ·  mini PC · always on ─────────   │
│                                                              │
│  Garmin  (no public API · unofficial Connect API)            │
│  ├─ training load · heart rate · sleep · body battery        │
│  └─ compared against vault baseline → coaching decisions     │
│                                                              │
│  DuckDuckGo + Jina Reader                                    │
│  └─ web search · clean article fetch · no context switching  │
│                                                              │
│  Home Assistant                                              │
│  └─ smart home · sensors · automations                       │
│                                                              │
│  Notion                                                      │
│  └─ smart task inbox · project database                      │
│                                                              │
│  NotebookLM                                                  │
│  └─ study summaries synced to vault                          │
│                                                              │
└──────────────────────────────────────────────────────────────┘
```

## Meeting prep: four sources, one brief

In the first article of this series, I described a moment. Morning briefing. Charles spots a meeting with someone not in the vault. "Want me to prep?" Fifteen minutes later: LinkedIn background, two YouTube interviews, a summary of priorities, meeting notes structured around the calendar invite.

That morning, four data sources worked together. None of them knew about each other. Charles decided what to connect, in what order, for what purpose.

**Google Calendar MCP** pulled the meeting details. Time, attendees, agenda from the invite. Charles now knew who, when, and what the meeting was about.

**DuckDuckGo MCP** searched for the person's name and company. Found their LinkedIn profile, a recent conference talk, two articles they'd written. Web search that happened inside the AI session, not in a browser tab I had to alt-tab to. No context switching. No copy-pasting URLs back into the conversation. Charles searched, found, and moved to the next step without me touching a browser.

**Jina Reader** (through the same DuckDuckGo MCP) fetched each page as clean markdown. No ads, no navigation bars, no cookie banners. Just the content. The LinkedIn profile became a structured career summary. The YouTube interviews became full transcripts via a separate skill. Two 30-minute interviews, reduced to key themes and communication style in minutes.

**The vault** (local files) checked for existing relationship notes. Found nothing. Created one. Stored the research. Linked it to the meeting and the company. Next time this person appears on the calendar, Charles doesn't start from zero. He starts from the profile, the meeting notes, and anything that happened since. The first meeting takes fifteen minutes to prep. The second takes three.

Output: a structured brief ready before the meeting started. Four data sources. One output. Fifteen minutes.

In [Article 1](https://www.pieterbrinkman.com/2026/03/09/how-i-promoted-my-ai-from-chatbot-to-chief-of-staff/) I described the output. Now you're seeing the wiring. And the wiring reveals something important: that sequence had a guide. The meeting prep skill, a workflow I wrote once and stored in the system, told Charles to start with the calendar invite, then search for the person, then fetch the pages, then check the vault. The skill is the recipe. MCP is what makes the ingredients available. The skill defines the steps. MCP provides the data to execute them. Take away either one and the output doesn't happen.

<Callout type="info" title="→ Article 2: The skills layer">
[How modular skills turn one-off workflows into building blocks — the recipe layer that tells Charles what to do with the data MCP provides.](https://www.pieterbrinkman.com/2026/03/09/the-skills-layer-what-compounding-actually-looks-like/)
</Callout>

MCP doesn't make your AI smart. It gives your AI something to be smart about.

## The calendar isn't a reminder. It's a constraint.

Every morning I say "plan my day." What happens next changed fundamentally when the calendar became live.

In the early days, before I added Google Calendar through MCP, Charles read the vault. He knew my projects, my priorities, what was overdue. He could tell me what to work on. But he had no idea what my day actually looked like. I'd manually type: "I have meetings at 10, 2, and 4." He'd plan around what I told him. If I forgot to mention a meeting, the plan ignored it. If a meeting got moved after we planned, the plan was wrong. I was the bottleneck for my own system's accuracy.

Now Charles pulls today's calendar before we start planning. He sees every meeting. He calculates the gaps. He knows I have a standup at 09:00, a design review at 14:00, and nothing after 15:30. He sizes the plan to fit.

```
08:45-09:00  Morning planning (this session)
09:00-09:30  Standup
09:30-12:00  [Focus] Article draft - 2.5 hours
12:00-13:00  Lunch
13:00-15:00  [Sport] Wingfoiling - 2 hours
15:30-17:00  Buffer / quick wins / flex
17:00-19:00  Family buffer (protected)
```

The calendar isn't informational. It's a constraint the system respects.

Focus blocks get sized to gaps. Priorities get scoped to available time. If the day is more than 50% booked with meetings, Charles says it directly: "Today's a meetings day. Pick ONE priority for the gaps. Trying to fit three will produce zero."

That sentence comes from the [daily planning skill](https://www.pieterbrinkman.com/2026/03/09/the-skills-layer-what-compounding-actually-looks-like/), not from me.

This is where the architecture becomes visible. The family buffer at 17:00-19:00 is a rule. It's in CLAUDE.md. It fires every session, regardless of what the calendar says. The gaps between meetings are live data. They change every day. The rule is always the same. The plan adapts to what the calendar actually holds.

On a Tuesday with three hours of meetings, I get three priorities and two focus blocks. On a Thursday with six hours of meetings, I get one priority and a buffer. Same rules. Different data. Different plan. Every morning.
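That sizing rule is mechanical enough to sketch. The thresholds and priority counts below are illustrative, not the real skill file:

```python
def size_the_day(meeting_minutes: int, workday_minutes: int = 510) -> dict:
    """Apply a 50%-booked rule to size the daily plan (sketch).

    workday_minutes defaults to an 8.5-hour day; both thresholds
    are assumptions, not the actual daily planning skill.
    """
    load = meeting_minutes / workday_minutes
    if load > 0.5:
        return {"mode": "meetings day", "priorities": 1}
    if load > 0.3:
        return {"mode": "mixed day", "priorities": 2}
    return {"mode": "focus day", "priorities": 3}
```

Same function every morning; only `meeting_minutes` changes, and it comes from the live calendar rather than from my memory of it.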

<Callout type="info" title="→ Article 4: CLAUDE.md is not a prompt">
[The behavioral layer that tells Charles when to push back, what to protect, and how to run the same rules every session, whether or not you remember to ask.](https://www.pieterbrinkman.com/2026/03/23/claude-md-is-not-a-prompt/)
</Callout>

Without MCP, the calendar was something I described to my AI. With MCP, the calendar is something the AI reads for itself. The difference sounds small. In practice, it changed how every morning works. I stopped managing context and started reviewing proposals. Charles shows me the day map. I say "looks right" or "swap the focus blocks." The plan is done in two minutes instead of ten, and it's based on the actual calendar, not my memory of it.

## Garmin: training data inside the conversation

I'm training for a quarter triathlon in May 2026, and I'm not built for running. Started from zero. No endurance background. No human coach. The only sports I do seriously are wingfoiling and windsurfing, and Charles knows wingfoiling is my highest priority. Garmin doesn't.

Garmin sees heart rate data and completed activities. It would push a standard program: swim, bike, run, more intervals. What it can't do is recognize that my Tuesday on the water counts as training. Garmin doesn't know what wingfoiling is and doesn't take it into account as training; it just measures the data.

My wingfoiling sessions average 154 to 156 beats per minute for 70 to 115 minutes. That's not a casual afternoon on the water. That's zone 4 effort sustained for over an hour. Harder than most of my training runs.

Without Charles, that data sat in the Garmin app. I'd think "that was intense" and plan the next day's run as if nothing happened. My body knew. My training plan didn't.

Now Charles is my coach. He knows my weight, my height, my race date in May, and that I'll take a good wind day over a scheduled run every time. He counts wingfoiling as 1.5x training load. If he sees a session on Tuesday and a hard run planned for Wednesday, the plan changes. Not because I remembered to tell him. Because the data told him.
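The 1.5x rule is the kind of thing that fits in a few lines once the data is in the conversation. A sketch — load here is approximated as minutes times heart-rate zone, a stand-in, not Garmin's actual training-load formula:

```python
def weekly_load(sessions: list[dict]) -> float:
    """Sum weekly training load with a 1.5x multiplier for wingfoiling.

    Session dicts and the minutes-times-zone formula are illustrative.
    """
    MULTIPLIER = {"wingfoil": 1.5}
    total = 0.0
    for s in sessions:
        base = s["minutes"] * s["zone"]
        total += base * MULTIPLIER.get(s["sport"], 1.0)
    return total
```

A 90-minute zone-4 wingfoil session outweighs a 40-minute zone-3 run by a wide margin, which is exactly why Wednesday's plan changes after a Tuesday on the water.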

I leave notes too, a quick line in my daily note after a session. "Run was fine, right knee hurt the first 15 minutes." They live in the vault. Charles gives me tips for the knee, and after the next run he follows up and asks how it's doing. When he plans my training day, he reads those notes alongside the Garmin data, checks the calendar for the week ahead, and adjusts the plan toward the race date. That's what Garmin can't do. Garmin has the data. It doesn't have the conversation, the context, or the intelligence.

The Garmin MCP puts existing data where decisions are made. But Garmin data alone doesn't mean much without something to compare it against. The vault holds the baseline: swim distance per session going back to lesson one, weekly training load over time, where each discipline started and where it needs to be by May. When Garmin shows what the body did this week, Charles has the full curve to compare it against. Not "here are your stats." Here's where you are relative to where you need to be.

I'm running the Garmin MCP server myself because Garmin lacks proper integration points. Until vendors catch up, you need somewhere to run the wrappers. Here's what that looks like.

## A Linux box, three Docker containers, and no subscription fees

Here's the honest version: what it takes to run this, and what you're actually giving up.

MCP is a protocol, not a product. A service doesn't automatically speak it. When a vendor hasn't built an MCP server, you build a thin wrapper: a small process that speaks MCP on one end and calls their existing API on the other. That's what self-hosting means here.
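The heart of such a wrapper is a routing table: an incoming MCP tool call on one side, the vendor's existing REST endpoints on the other. A sketch — the endpoints and tool names are hypothetical, and a real wrapper adds auth, error handling, and shapes the response back into MCP content blocks:

```python
def to_upstream(tool: str, arguments: dict) -> dict:
    """Map an incoming MCP tool call onto a vendor's existing REST API.

    Routes are illustrative; they are not Garmin's real endpoints.
    """
    routes = {
        "get_sleep": ("GET", "/wellness-service/sleep/{date}"),
        "get_heart_rate": ("GET", "/wellness-service/heartRate/{date}"),
    }
    method, template = routes[tool]
    return {
        "method": method,
        "path": template.format(**arguments),
    }
```

Everything vendor-specific lives in that table. The MCP side never changes, which is why one mini PC can host a whole shelf of these.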

Two categories in my setup. Some services ship MCP support out of the box:

- **Google Calendar.** Anthropic's native integration. Zero infrastructure required.
- **Granola.** Meeting summaries come through Granola's own MCP on the free tier; when I need full transcripts, those pull from local storage on my laptop.

Others need a self-hosted wrapper:

- **Garmin.** No official public API, no native MCP server. Garmin's data API is partner-only; personal health data sits locked in Garmin Connect. The wrapper uses the unofficial Connect API via session cookies to get it out.
- **DuckDuckGo + Jina Reader.** Web search and clean article fetching, wrapped in a lightweight SSE gateway.
- **Home Assistant.** A local process exposing smart home sensors and controls.
- **NotebookLM.** Google's research AI has no official MCP server. The wrapper connects to it via browser automation, syncing study summaries back to the vault.

More vendors should support MCP. Garmin is the obvious candidate: the data is personal, the developer community is large, and an unofficial API already exists because people built one themselves. A first-party MCP server would eliminate every wrapper in the ecosystem overnight. Same for Strava, Oura, and every fitness platform where the data is yours but the access isn't. The protocol is ready. The vendors are keeping your data hostage.

I run the self-hosted MCP wrappers on a mini PC under my desk, available 24/7. No API fees for the data sources I use most. Garmin data is free. Google Calendar is free. DuckDuckGo is free. The MCP servers themselves are open source.

There's something else worth saying clearly. The local server doesn't store your data; it's a relay. Your calendar lives on Google's servers. Your health data lives on Garmin's. The MCP wrappers sit in the middle, fetching that data on demand and passing it into the conversation. But Charles runs on Claude, and Claude is not local. It's a cloud service. When Charles reads your calendar, your heart rate, your kids' sports schedules, that data goes to Anthropic as conversation context. The data was never fully private to begin with. What you're deciding is which AI gets to read it.

<Callout type="warning" title="On data and privacy">
Every MCP connection routes personal data through a cloud AI. Your calendar lives on Google's servers. Your health data lives on Garmin's. When Charles reads either, that data goes to Anthropic as conversation context. A local model would keep it on-device but costs you capability. Make that choice with your eyes open. I have. You might decide differently.
</Callout>

I should be honest about the complexity too. This is a weekend project, not an evening project, at least for the self-hosted parts.

The native integrations are stable. Google Calendar, Notion, and the others work because Anthropic owns and maintains those connections. Zero infrastructure, zero maintenance on my end. The complexity is in the wrappers you run yourself.

And those can be rough. The MCP ecosystem is young. Configuration files have quirks. Auth tokens expire and need refreshing. The korfbal schedule page required a headless browser because the site is fully JavaScript-rendered, and getting Playwright to run reliably inside a workflow took more debugging than I'd like to admit.

Every new self-hosted connection is an evening of reading docs, testing configs, and fixing things that work on the second try but not the first. The Garmin MCP needed manual cookie extraction before auth tokens worked. The DuckDuckGo gateway crashed twice before I found the right restart policy. The protocol is stable. The implementations around it are still maturing.

What helps: Charles can assist with the setup itself. For the Garmin MCP, I pointed him to the GitHub repo and he guided me through the config step by step: auth scopes, missing environment variables, the cookie extraction process. The complexity is real, but you're not working through it alone.

What I want is to own the infrastructure, own the data, own the connections. When a new MCP server appears for a service I use, I plug it in. No vendor approval required. No pricing tier upgrade. Mostly one config change, one restart, one new capability. Last month I added Garmin. This month I connected Home Assistant. Next month, maybe Strava, or a local LLM for sensitive documents.
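In Claude Code, that one config change is an entry in the project's `.mcp.json`. The server name and command below are illustrative, not my actual setup:

```json
{
  "mcpServers": {
    "garmin": {
      "command": "python",
      "args": ["-m", "garmin_mcp_server"],
      "env": { "GARMIN_SESSION_FILE": "/path/to/session.json" }
    }
  }
}
```

One entry, one restart, and the new server's tools show up in the session.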

If you're reading this thinking "I don't have a home server," that's fine. You don't need one. MCP servers run on a laptop. Claude Code's built-in integrations (like Google Calendar) need zero infrastructure at all. You could start with just the calendar connection and have a fundamentally different planning experience tomorrow morning.

The home server is a convenience for running things 24/7, serving data to automation workflows, and keeping everything centralized. It's not a prerequisite.

## The system that sees

Saturday morning. Four events in the family calendar. Arrival times. Locations.

A match had been rescheduled earlier in the week. I didn't know until Charles told me on Friday. He didn't know until he looked. No file was updated. No notification was set. The website changed, the weekly review caught it, and by Saturday morning the calendar already reflected it.

The vault has the team names. Charles knows which child plays for which team and which division. CLAUDE.md defined the trigger ("plan mijn zaterdag") and the rules: always check for conflicts before creating events, always include arrival times. MCP provided the live data: the schedule that had changed during the week, the calendar that confirmed no conflicts.

Three layers. Memory, behavior, live context.

**Memory through Obsidian Vault**
Start with just the vault. Charles knows every project, every priority, every next action. But he has no idea what the day looks like. He'll suggest a three-hour focus block on a day that's back-to-back meetings. The knowledge is right. The plan is wrong.

**Behaviour through CLAUDE.md**
Give him only the rules, no memory. The family buffer fires at 17:00. The pre-flight runs. But on what? There are no projects to rank, no people to look up, no history to build on. A well-calibrated system with nothing to calibrate against.

**Live data through MCP**
Give him only the live data, no structure. A calendar full of events, heart rate data from last week, sports schedules, meeting transcripts. Google Calendar tells you what's happening. Garmin tells you how hard you've been working. Neither tells you what to do about any of it.

The system works because the layers build on each other. **The vault tells Charles what I know. The rules tell him how I work. MCP tells him what's happening right now.**

Saturday morning. Coffee barely poured. Four events already in the calendar. I didn't do any of that.

*[The Second Brain Stack](/category/second-brain-stack). Next up: The mobile capture loop. An idea at 11pm. A structured note by 9am.*

No comments section here. If you have questions or want to discuss, <a href="https://www.linkedin.com/posts/pbrink_saturday-morning-coffee-barely-poured-activity-7444719623280803840-PU9K" rel="nofollow" target="_blank">the LinkedIn post is the place</a>. I read every reply.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Artificial Intelligence]]></category>
      <category><![CDATA[AI Agents]]></category>
    </item>
    <item>
      <title><![CDATA[CLAUDE.md is not a prompt. It's a Personal Operating System.]]></title>
      <link>https://www.pieterbrinkman.com/2026/03/24/claude-md-is-not-a-prompt/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2026/03/24/claude-md-is-not-a-prompt/</guid>
      <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
      <description><![CDATA[I didn't build CLAUDE.md to be smarter. I built it to run rules I wrote on a day I was thinking clearly, whether or not I'm thinking clearly today. Here's the structure behind it.]]></description>
      <content:encoded><![CDATA[
Tuesday morning. Coffee on the desk. Terminal open. "Plan my day."

Charles loads the vault, scans my projects, checks the calendar. Before any task recommendations appear, before the priority list, before the day map — the pre-flight check.

```
⚠  Second Brain Stack, Article 4. Last action: 6 days ago.
⚠  Outline exists. No draft. No deadline set.
⚠  Overdue: Tuesday's follow-up still open.
✓  Family buffer 17:00-19:00: clear.
```

I hadn't abandoned the article. I'd just let it go quiet. I was deep in something else. Something that felt closer to done, something that competed harder for available focus. The outline had been sitting in the vault for a week. No next action. No scheduled writing block. No pull toward completion. Just a folder and a vague intention.

I opened the draft that morning.

You're reading it now.

Not because I suddenly became more disciplined about writing. Because a system caught what my brain was designed to miss when I'm deep in something else. The pre-flight check reads every active project, calculates days since last action, and flags anything with no next action mapped. It's mechanical. It's relentless. It doesn't care that the other project felt more urgent.

That surfacing lasted five seconds. But the infrastructure behind it, the reason it happened at all, is what this article is about.

The [last article](/the-vault-why-the-foundation-matters) ended with a promise: `CLAUDE.md` tells your AI when to push back.

This is what that looks like.

## The file that never disappears

`CLAUDE.md` is a markdown file that sits at the root of the project directory. Claude Code reads it automatically at the start of every session. No paste required. No prompt needed. It's there before the conversation begins.

That changes everything about how the interaction works.

A prompt is something you write fresh each time. You paste it, the model reads it, you get a response. Next session, gone. You paste it again, or you don't, and the behavior drifts. We've all spent time in that loop: every morning re-explaining context, re-stating preferences, re-establishing the ground rules. It felt like training a new team member every single day.

`CLAUDE.md` doesn't work that way. It persists. It loads silently. It shapes every session without you having to remember to ask. The AI reads it before you type your first word. By the time you say "plan my day," the behavioral layer is already active. This is what turns a generic AI into Charles.

Think of it as the personality of your agent. Not what it should do right now, but how it should behave always. Not a conversation starter. A behavioral framework.

### Three layers

The file has three distinct layers:

- **Configuration** — paths and pointers. Where the vault is, where the skills live.
- **Workflows** — a routing table. Trigger phrases that map to defined behaviors.
- **Standing Rules** — always active. Never triggered by a phrase. They fire every session whether you ask or not.

Here's my real setup, condensed:

```markdown
## Configuration
- **Vault path:** `C:\agents\2ndbrain\vault`
- **Skill reference:** `.agents/skills/second-brain/SKILL.md`

## Workflows

### Daily Planning
**Triggers:** "plan my day", "what should I work on"
- Run Pre-Flight Check BEFORE any task recommendations
- Scan `40-Projects/` and `50-Areas/` for next actions
- Ask about energy level, time available, context

### Inbox Processing
**Triggers:** "process my inbox", "GTD processing"
- Auto-detect URLs and fetch web content
- Ask clarifying questions in batches, not one-by-one
- Route actionable items to Projects/Areas

[12 more workflows with the same structure]

## Standing Rules

**Family Buffer:** 17:00-19:00 is protected by default.
Never schedule tasks in this window.

**Stale Projects:** If a project has had no activity
for 7+ days, flag it. Activate it or park it.

# These four lines import reference files into every session
@.claude/rules/vault-structure.md
@.claude/rules/tools.md
@.claude/rules/moc-linking.md
@.claude/rules/sync-status.md
```

**Configuration** is the thinnest layer. Paths, file locations, where the skill files live. Almost entirely pointers. Charles needs to know where the vault is and where to find the detailed workflow definitions. Not the contents of those files, just where to look.

**Workflows** are a routing table. Each entry has a trigger phrase and a behavior summary. Fourteen triggers, fourteen defined behaviors. "Plan my day" means: run the Pre-Flight Check, scan projects and areas, ask about energy and time, propose a focused task list. "Process my inbox" means: read the inbox files, check Notion, detect URLs, ask clarifying questions in batches, route by type. I'm not describing what I want right now. I'm building a vocabulary. A shared language between me and Charles that means every session starts with context instead of explanation.

Most of the detail lives downstream. Each workflow entry in `CLAUDE.md` is a two-line summary pointing to a full skill file with the actual logic. The routing is the interface; the detail lives in the skill files.
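A routing table is just a lookup from phrase to file. A hypothetical sketch of that shape in Python (Claude Code does this matching itself from the markdown; the inbox skill filename here is invented for illustration):

```python
# Trigger phrase -> skill file, mirroring the CLAUDE.md excerpt above.
WORKFLOWS = {
    "plan my day": "second-brain/daily-plan.md",
    "what should i work on": "second-brain/daily-plan.md",
    "process my inbox": "second-brain/inbox-processing.md",  # hypothetical filename
}

def route(message):
    """Return the first skill file whose trigger phrase appears in the message."""
    lowered = message.lower()
    for trigger, skill in WORKFLOWS.items():
        if trigger in lowered:
            return skill
    return None
```

Thin interface, fat implementation: the table stays small and stable, and all the churn happens in the skill files it points to.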

**Standing rules** are the third layer — and the most important one. Always active. Never triggered by a phrase. They don't wait for you to ask. The family buffer fires at 17:00 whether or not you mention it. The stale project check runs before you see a single task recommendation. They're in the file. They load every session. That distinction — between a rule and an instruction — is what the next section is about.

The four `@import` lines at the bottom are what make the thin-file principle work. Claude Code's official import syntax pulls four separate rule files into context at session start, each containing reference documentation that Charles needs but that doesn't belong in the behavioral layer:

- `vault-structure.md`: the folder tree, task syntax, project conventions. What Charles needs for any file operation:

```
vault/
├── 10-Dashboards/        # Live task dashboards
├── 20-Daily-Notes/       # Planning, journal, quick todos
├── 30-Inbox/             # GTD capture
│   └── Daily/            # YYYY-MM-DD.md
├── 40-Projects/          # Multi-step outcomes with deadlines
│   └── {Project}/
│       ├── {Project} MOC.md
│       └── Resources/
├── 50-Areas/             # Ongoing responsibilities
├── 60-Resources/         # Cross-project references
├── 70-Archives/
├── 80-Permanent-Notes/   # Zettelkasten synthesized insights
└── 90-Templates/
```
- `tools.md`: which tool for what job. Jina Reader vs Playwright vs direct API. Decision rules, not something to figure out each session.
- `moc-linking.md`: the linking protocol that keeps notes connected. Every new file gets a backlink to its parent index.
- `sync-status.md`: the git workflow for keeping two repos in sync after every commit.

```
╔══════════════════════════════════════════════════════════╗
║  LOADED AT SESSION START                                 ║
╠══════════════════════════════════════════════════════════╣
║                                                          ║
║  ┌────────────────────────────────────────────────────┐  ║
║  │  CLAUDE.md  ·  180 lines                           │  ║
║  │  Persona · 14 Workflows · Standing Rules           │  ║
║  └─────────────────────┬──────────────────────────────┘  ║
║                        │ @imports                        ║
║                        ▼                                 ║
║  ┌────────────────────────────────────────────────────┐  ║
║  │  .claude/rules/  ·  4 files  ·  320 lines          │  ║
║  │  vault-structure · tools · moc-linking · sync      │  ║
║  └────────────────────────────────────────────────────┘  ║
║                                                          ║
║  ┌────────────────────────────────────────────────────┐  ║
║  │  Auto Memory  ·  MEMORY.md  ·  first 200 lines     │  ║
║  └────────────────────────────────────────────────────┘  ║
║                                                          ║
╚══════════════════════════════════════════════════════════╝
                        │
              you type your first message
                        │
                        ▼
┌──────────────────────────────────────────────────────────┐
│  LOADED ON DEMAND                                        │
│                                                          │
│  "plan my day"    ──►  second-brain/daily-plan.md        │
│  "deep dive [X]"  ──►  competitive-intel/SKILL.md        │
│  "research [name]"──►  people-research/SKILL.md          │
│                                                          │
│  14 skills  ·  100-300 lines each  ·  loads on trigger   │
│                                                          │
│  Vault files read as needed                              │
│  40-Projects/  ·  50-Areas/  ·  80-Permanent-Notes/      │
└──────────────────────────────────────────────────────────┘
                        │
                        ▼
┌──────────────────────────────────────────────────────────┐
│  LIVE DATA  ·  MCP servers                               │
│                                                          │
│  Google Calendar  ·  Garmin  ·  DuckDuckGo               │
│  Granola (meetings)  ·  n8n (automations)                │
└──────────────────────────────────────────────────────────┘
```

<Callout title="The 200-line limit">

My `CLAUDE.md` had grown to 520 lines. Anthropic recommends under 200. Moving 340 lines of reference material into those four files brought the main file to 180. The reason the limit matters: past 200 lines, consistency drops. Longer files create more surface area to miss an instruction or skip something buried deep. A shorter, well-structured file produces more reliable behavior than a comprehensive one trying to contain everything.

</Callout>

One file to route the session. Four rule files to hold the reference. Many skill files to define the detail.
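Keeping the always-loaded layer under budget is something you can measure. A minimal sketch, assuming the file paths from my setup above (the function name is mine):

```python
from pathlib import Path

def always_loaded_lines(root):
    """Line count per file in the always-loaded layer: CLAUDE.md plus imported rules."""
    files = [root / "CLAUDE.md", *sorted((root / ".claude" / "rules").glob("*.md"))]
    return {f.name: len(f.read_text(encoding="utf-8").splitlines()) for f in files if f.exists()}
```

Run it occasionally and you notice drift before the file quietly creeps back past 200 lines.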

The previous articles mentioned `CLAUDE.md` briefly. [Article 1](/how-i-promoted-my-ai-from-chatbot-to-chief-of-staff) described the morning briefing it enables. [Article 3](/the-vault-why-the-foundation-matters) showed how behavioral files in the vault define how Charles thinks. This article goes inside the file itself. Because the most important thing in `CLAUDE.md` isn't the vault path or the workflow triggers.

It's the rules.

## Rules persist. Instructions reset.

I told Charles I had 60 minutes of deep focus time. He picked two tasks that needed focus time and fit the window, nothing more. Next session, full day planning. The constraint existed only where I stated it. That's an instruction: it applies once, then the default returns.

The family buffer doesn't need to be told. It's in the file. It fires every time. That's the distinction that matters most in this article: the difference between an instruction and a rule.

An instruction is something you say once. "Check my calendar." "Draft a LinkedIn post." "Summarize this transcript." It applies to the current session and disappears when the conversation ends.

A rule applies every session. You write it once, and it fires every time, whether you remember it or not. Whether you feel like hearing it or not.

Here are three rules from my `CLAUDE.md`:

**"17:00-19:00 is a protected family window. Never schedule tasks in this window."**

Every daily planning session, Charles maps my day. He reads the calendar, calculates available focus blocks, and proposes where to put deep work. 17:00-19:00 is always marked protected. Not because I asked today. Because the rule says so every day.

**"When uncertain, say so. Never make stuff up."**

For research, competitive intelligence, or any factual claim: if Charles can't verify it from context or a fetched source, he says "I don't know" or "I couldn't verify this." Honest gaps over confident fabrications. I'd rather have a blank than a lie.

**"If a friend hasn't been contacted in 3+ weeks, create a concrete task. Not a 'someday' note."**

Not "remind me to think about calling someone." A task. This week. With a name attached. The difference between a reminder and a task is whether it creates accountability or just awareness. A reminder says "you should probably do this." A task says "do this by Friday."
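That third rule is mechanical enough to sketch. A hypothetical Python version (the task syntax and the Friday deadline are illustrative; the real rule lives as prose in `CLAUDE.md`):

```python
from datetime import date, timedelta

CONTACT_WINDOW = timedelta(weeks=3)  # mirrors the 3-week standing rule

def friend_task(name, last_contact, today=None):
    """An overdue friendship becomes a dated task, not a 'someday' note."""
    today = today or date.today()
    if today - last_contact < CONTACT_WINDOW:
        return None  # still within the window: no task
    friday = today + timedelta(days=(4 - today.weekday()) % 7)  # this week's Friday
    return f"- [ ] Call {name} 📅 {friday.isoformat()}"
```

The output is the whole argument in one line: a name, a verb, and a date. Nothing vague enough to ignore.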

These rules aren't suggestions I repeat each morning. They're standing instructions that fire every session: specific, loaded, present. Whether or not I remember to invoke them.

Worth being direct about how this actually works: `CLAUDE.md` is context Claude reads at session start, not enforced configuration. There's no hard compliance guarantee, no technical lock preventing deviation. The rules work because well-written, specific instructions loaded consistently at every session outperform instructions you try to remember to give each morning. That's the whole mechanism. Persistence and specificity, not enforcement.

I said `CLAUDE.md` tells your AI when to push back. This is what that looks like. Not a confrontation. A guardrail that was already in place before the conversation started.

<Callout type="info" title="Article 2: The skills layer">

Every time I build the same workflow twice, I turn it into a skill. This article explains how modular, reusable skills become the building blocks that make the whole system compound over time.

[Read the full article →](/the-skills-layer-what-compounding-actually-looks-like)

</Callout>

## Architecture, not discipline

The rules above sound like good practice. Protect family time. Be honest about uncertainty. Keep in touch with friends. Reasonable, straightforward, the kind of advice you'd read in a productivity blog and nod along to.

The reason they're encoded in a file instead of just "things I try to do" is practical.

Go deep on something interesting, and everything else quietly falls off the list. Not because it stopped mattering. Because nothing is watching it.

The design principle I landed on: **the solution is architecture, not discipline.**

Discipline means: remember which projects have gone quiet. Remember which deadline crept up with no task mapped. Remember to be present at dinner. And when you miss one, feel guilty about it.

Architecture means: build a system that watches all of it, so you can go deep without carrying the weight of what might be quietly dropping. That's the core of Getting Things Done. The difference is Charles is running it, not me trying to remember to.

I call it a Personal Operating System. Three components:

**Pre-Flight Check.** Before every planning session. Projects gone quiet? Deadlines with no task mapped? Friend overdue? Handle it first, then go deep.

**Weekly Heartbeat.** Sunday or Monday. A scan, not a session. Output: 1-3 concrete tasks with dates.

**Family Buffer.** 17:00-19:00. Not an event. A default.

It was 17:15 on a Wednesday. I was mid-build on something that felt close to done. Charles flagged the family buffer. Not a suggestion. A reference to a rule I wrote when I was thinking clearly: "That's the rule. It's what lets you be present tonight."

He doesn't override. He names the conflict. The decision is mine. The rule doesn't say "block this." It says: surface the tension, let me decide, but never let it go unspoken.

I'm not arguing with Charles. I'm arguing with a past version of myself who knew better. When I push back against the family buffer, I'm pushing back against the version of me who wrote the rule knowing I would push back.

That's a different kind of accountability.

## The mirror you can't close

Writing behavioral rules forced a specific kind of honesty.

There's something uncomfortable about writing a rule that says, in plain text, "when work gets interesting, I tend to drop my friends." Internal resolutions are private and forgiving. A rule in a configuration file is specific, persistent, and indifferent to how you feel about it today.

Most people think about giving their AI more information. More context, more history, more data. I think about giving it more authority.

Not authority to decide. Authority to surface. Authority to hold the line on a rule I wrote. Authority to name a tension I'm trying to avoid.

Writing the rules down is uncomfortable. It's also the most honest self-reflection you'll do. Living with them running every day is freeing. The discomfort is one-time. The freedom is recurring.

<Callout type="info" title="Article 1: How I promoted my AI from chatbot to chief of staff">

Before CLAUDE.md made sense, I needed to understand what I was building. This article explains how a generic AI assistant became a chief of staff — with context, a name, and a daily briefing that starts before I ask.

[Read the full article →](/how-i-promoted-my-ai-from-chatbot-to-chief-of-staff)

</Callout>

## The difference between memory and behavior

Memory without behavior is a filing cabinet. Behavior without memory is a chatbot. The system needs both.

The vault is memory: every project, every decision, every note. `CLAUDE.md` is behavior: how to act on that memory, when to push back, what to check first. Skills are the execution: the detailed workflows that create consistency. Together they are Charles: an AI that knows what's happening, how to respond, and how exactly to do it.

<Callout type="info" title="Article 3: The vault">

CLAUDE.md points to the vault for everything it needs to know. This article explains how 245 markdown files became a persistent memory system — the foundation that makes Charles useful across every session.

[Read the full article →](/the-vault-why-the-foundation-matters)

</Callout>

With the vault and CLAUDE.md, Charles knows who I am and how I work. But he doesn't know what's on my calendar right now, whether a meeting just moved, or whether my training plan needs adjusting. I pull that live data using MCP servers. That's the next layer, and the topic of next week's article.

*[The Second Brain Stack](/category/second-brain-stack). Next up: MCP — what changes when your AI gets a live feed.*

---

No comments section here. If you have questions or want to leave a comment, [the LinkedIn post is the place](LINKEDIN_URL). I read every reply.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Second Brain Stack]]></category>
      <category><![CDATA[Artificial Intelligence]]></category>
      <category><![CDATA[AI Agents]]></category>
    </item>
    <item>
      <title><![CDATA[The vault: why the foundation matters before the AI does]]></title>
      <link>https://www.pieterbrinkman.com/2026/03/18/the-vault-why-the-foundation-matters/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2026/03/18/the-vault-why-the-foundation-matters/</guid>
      <pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate>
      <description><![CDATA[I failed at GTD three times. The problem wasn't discipline. It was maintenance. Here's what changed when I stopped being the one doing it.]]></description>
      <content:encoded><![CDATA[
I tried Getting Things Done (GTD) three times. Once with a physical notebook. Once with Notion. Once with Obsidian and a carefully designed folder structure I was genuinely proud of.

Each time I got six weeks in and fell off. The inbox piled up. The weekly review got skipped. The project list drifted into a graveyard of things I meant to do.

I kept telling myself it was a discipline problem. It wasn't. It was a maintenance problem.

GTD works. PARA works. Obsidian is excellent software. The problem is that these systems require someone to maintain them consistently. And if your brain works like mine, you already know that's not a reliable strategy.

What changed wasn't the system. It was that I stopped being the one maintaining it.

## The vault as memory

There is no session continuity at the model level. Every time I open a terminal, Charles starts fresh. He doesn't remember yesterday's conversation. He doesn't remember last week's decisions.

The vault is what bridges that gap.

Not storage. Memory. Storage holds files. Memory holds context.

The vault accumulates: completed projects with what was learned, captured decisions and the reasoning behind them, meeting notes linked to the people who were there, research that informed a choice two projects ago.

When Charles reads it in the morning, he's not catching up on what he missed. He's resuming.

The morning briefing in [the first article](https://www.pieterbrinkman.com/2026/03/09/how-i-promoted-my-ai-from-chatbot-to-chief-of-staff/) didn't come from his memory. It came from reading the vault. The flag about a friend not contacted in three weeks came from a task in the Relationships MOC created two weeks prior. The triathlon load assessment came from the Training Log, last updated two days earlier.

No retention. Full context. That's what the vault makes possible.

## What the vault actually is

The vault is 245 markdown files in a folder on my laptop, version-controlled with git, backed up daily. Nine numbered folders holding projects, areas, research, meeting history, and the behavioral rules for an AI, with 763 connections between notes.

Not a database. Not a SaaS subscription. Not a proprietary format. Plain files I can open with any text editor. Obsidian is the tool I use to read and navigate them: a free markdown editor that renders these files with links, graphs, and search. But the files themselves are just text.

```
vault/
├── 10-Dashboards/        ← task queries, live views
├── 20-Daily-Notes/       ← workspace: planning, quick todos
├── 30-Inbox/             ← GTD capture
├── 40-Projects/          ← multi-step outcomes with deadlines
├── 50-Areas/             ← ongoing responsibilities
├── 60-Resources/         ← cross-project reference
├── 70-Archives/          ← completed and inactive
├── 80-Permanent-Notes/   ← synthesized knowledge
└── 90-Templates/         ← note templates for every type
```

The structure follows GTD for task management and PARA for information architecture. Numbered folders for sorting. Gaps of ten for future expansion. Nothing invented. Nothing that requires maintenance from a vendor.

I chose Markdown deliberately. Obsidian renders it beautifully for me. But Charles reads it natively: no conversion, no API, no parsing layer. He opens a file the same way a text editor does.

Plain text turned out to be both the most future-proof format and the most AI-readable format. I didn't plan that. It's just what fell out of the constraint.

## Maps of Content: the knowledge graph

Every project has one file that serves as its anchor: the Map of Content, or MOC. From the MOC, links branch out to meeting notes, research files, task lists. From those notes, links point back. The graph is navigable in both directions.

15 MOCs across the vault. 763 links connecting them.

![The vault's knowledge graph in Obsidian: 245 notes, 763 connections. Larger nodes are MOCs — the hubs everything connects back to.](/images/vault-knowledge-graph.webp)

When Charles reads a project MOC, he doesn't get a document. He gets a knowledge graph.

Here's what that looks like for the triathlon project:

```
Quarter-Triathlon-Prep MOC
├── Status: 12 weeks to race. Swim is the constraint.
├── Tasks: [ ] book second pool session, [ ] 8-week run build
├── [[Training-Log]]          ← 8 weeks of Garmin baseline
├── [[Swim-Lessons]]          ← Fridays 07:00, 8 sessions
├── [[Triathlon-Coach-Skill]] ← coaching rules for Charles
└── linked to Garmin MCP      ← live training load
     ↑
    linked back from all four
```

From one MOC, Charles knows the project status, the open tasks, the constraints, the history, and where the live data comes in. He follows the links when he needs depth. He reads the anchor when he needs overview.

There are three link types I maintain deliberately, encoded as standing rules in `CLAUDE.md`.

**Vertical links** connect parent to child: the project MOC links down to meeting notes, meeting notes link up to the MOC. The hierarchy is always recoverable.

**Horizontal links** connect across domains: a triathlon training note links to a recovery note in Areas, which links to the Garmin data note. Knowledge from one domain informs another.

**Referential links** connect to permanent knowledge: a project note links to a synthesized insight in Permanent Notes, built from multiple projects over time. The kind of knowledge that belongs to no single project but keeps showing up.

No orphaned files. Every new file created, by me or by Charles, gets backlinked immediately, bidirectionally. This is a written rule in `CLAUDE.md`. It is not optional — that's why it's Charles's job, not mine.
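The no-orphans rule is checkable, which is what makes it enforceable. A minimal sketch, assuming notes link with `[[wikilink]]` syntax (the function name is mine, not a vault file):

```python
import re
from pathlib import Path

# Captures the target of [[Target]], [[Target|alias]] or [[Target#heading]]
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def orphans(vault):
    """Notes that nothing links to: candidates for a missing backlink."""
    notes = list(vault.rglob("*.md"))
    linked = set()
    for note in notes:
        for target in WIKILINK.findall(note.read_text(encoding="utf-8")):
            linked.add(target.strip().lower())
    return [n.name for n in notes if n.stem.lower() not in linked]
```

An empty result means every note is reachable from somewhere, which is exactly the property the rule exists to protect.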

This article exists in the vault, linked to the series definition file, linked to the LinkedIn post, linked to the analytics note that will track its performance after publishing.

## Token efficiency: the briefing pattern

A vault with hundreds of files is too large to read in full every session. And reading every MOC in full adds up fast.

The difference in practice:

```
Token load comparison for Projects:
   13 project MOCs combined:  ~20,000 tokens  (81KB of text)
   Projects-Summary.md:        ~1,000 tokens   (4KB of text)
```

`Projects-Summary.md` is 74 lines. One compressed view of everything active. Charles reads this first. If he needs depth on a specific project, he follows the link into the full MOC, and from there into individual meeting notes or research files. If not, he moves on with full situational awareness.

Same information. 5% the cost.
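The numbers above follow from the common rough heuristic of about four characters per token for English markdown; a back-of-the-envelope sketch:

```python
def estimate_tokens(num_bytes, chars_per_token=4.0):
    """Rough token estimate for plain markdown (~4 chars/token heuristic)."""
    return round(num_bytes / chars_per_token)

full = estimate_tokens(81_000)   # combined MOCs: ~20,250 tokens
brief = estimate_tokens(4_000)   # summary file:   ~1,000 tokens
```

It's an approximation, not a tokenizer, but it's accurate enough to decide what belongs in the always-read layer and what should stay behind a link.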

Every MOC also has priority sections that Charles knows to scan in order. "High Priority" first. "Next Actions" second. "Someday/Maybe" skipped in daily planning entirely. The AI doesn't need to be told where to look. The structure tells it.

**Daily notes** are deliberately thin. They are workspaces, not task lists. Tasks live in MOCs, where they persist across sessions. If a task lands only in today's daily note, it vanishes from sight tomorrow.

16 **Obsidian templates** in the vault, one for each note type, including project, area, meeting, person, web capture, permanent note, and weekly review. When Charles creates a file, it starts with the correct structure. Consistency isn't enforced after the fact. It's baked in at creation.

You wouldn't hand a new employee every document in the company on day one. You'd give them a briefing, a map of where things live, and a pointer to go deeper when needed. That's the pattern.

## Two users, one vault

I work in Obsidian. Charles works in the terminal. We read and write the same vault.

Markdown supports two comment types that turn it into a two-channel communication layer.

`%%` is the Obsidian comment. Invisible in Obsidian's reading mode. But Claude reads raw markdown. So a note can carry a message I'll never see in the rendered view:

```
%% Charles: this client prefers async over calls.
   Remember during next meeting prep. %%
```

I ignore it visually. Charles reads it as standing instruction. Same file, different messages.

I use the same pattern while drafting. When I'm working on an article, I leave questions and revision notes in `%%` comments for Charles to resolve before the next iteration, the same way I would with a human editor. He reads them. I never see them in Obsidian's reading mode.

HTML comments do the same job for structural directives:

```html
<!-- Scanned FIRST by daily planning: check this section every session -->
```

Invisible to Obsidian's reader. Explicit instruction for the AI. No separate configuration file, no extra step. The document is self-describing.
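Both channels are trivial to extract mechanically, which is the point: hidden from the rendered view, plain to anything reading raw text. A hypothetical sketch:

```python
import re

OBSIDIAN = re.compile(r"%%(.*?)%%", re.DOTALL)   # Obsidian comments: hidden in reading mode
HTML = re.compile(r"<!--(.*?)-->", re.DOTALL)    # HTML comments: hidden in rendered markdown

def agent_notes(markdown):
    """Everything addressed to the AI but invisible in Obsidian's reading view."""
    return [m.strip() for pat in (OBSIDIAN, HTML) for m in pat.findall(markdown)]
```

No parser, no plugin, no configuration: the comments are already sitting in the raw markdown, waiting for whichever reader looks at the file without rendering it.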

There is also a collaboration contract: every file Charles creates is linked bidirectionally on the same day. I can open the vault tomorrow and trace exactly where any note came from, what project it belongs to, what conversation triggered it, and what other notes point to it.

Most AI collaboration is linear: you prompt, it responds, you evaluate. This is different. We share a workspace. We both read it. We both write to it.

When I drop a note into the inbox tonight, Charles will find it in the morning and understand where it fits. When Charles creates a research note after a people-research session, I'll see it linked from the project MOC before I open it. Neither of us re-explains what the other missed. We just pick up where the vault left off.

Shared context. No handoff.

Most of the time. On the days I skip a session entirely, the inbox sits unread. The vault waits. Nothing decays — it just pauses until I return.

## Behavioral definitions as vault files

The most consequential files in the vault don't contain tasks or research. They define how Charles thinks. All three live in `80-Permanent-Notes/`.

**`Assisting-User-Context.md`:** Identity, operating principles, permission rules, goals. Charles reads this to know who I am, what I care about, and how to handle edge cases. Not a prompt. A document I edit in Obsidian when my priorities shift.

**`Working-Personas.md`:** 12 named review lenses. The Builder checks if the implementation holds up. The Skeptic looks for hype and hollow claims. The Practitioner asks if a real person would sustain this. The Editor cuts everything that doesn't earn its place. "Run the stress test" invokes all four. The result is a consolidated verdict on whether an output survives adversarial review.

**`Personal-Operating-System.md`:** Rules that compensate for known attention patterns. 17:00-19:00 is a protected family window, never scheduled during planning. A pre-flight check runs before task recommendations, not after. Friends not contacted in three weeks get flagged by name, with a task to fix it. Encoded because the pattern is predictable and the failure mode is well-documented.

The principle: if I want to change how Charles behaves without touching code, it lives in the vault. `CLAUDE.md` is the routing layer containing summaries and pointers.

Deliberately thin here. What those files contain, and why encoding personal patterns into AI rules changes everything, is the next article.

## The foundation before the AI

The vault was always possible. GTD existed for decades. PARA was documented publicly. Obsidian is free software anyone can download.

The methodology was never the bottleneck. I had that. Three times.

What wasn't possible was someone other than me maintaining it. Consistently. Every session. Every file. Every link. Every day, including the days when maintenance is the last thing I want to do.

That's what changed.

We don't share a memory. We share a workspace. And the workspace does what I could never do alone: show up every morning, link every note, read every priority, and never lose a thread.

No comments section here. If you have questions or want to leave a comment, [the LinkedIn post is the place](https://www.linkedin.com/posts/pbrink_every-time-i-open-a-terminal-and-start-claude-activity-7440136636698492928-jrwb). I read every reply.

*[The Second Brain Stack](/category/second-brain-stack) · [← Part 2: The Skills Layer](/2026/03/09/the-skills-layer-what-compounding-actually-looks-like/) · Next up: `CLAUDE.md`. Not a prompt. A personal operating system that tells your AI when to push back.*
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Second Brain Stack]]></category>
      <category><![CDATA[Artificial Intelligence]]></category>
      <category><![CDATA[AI Agents]]></category>
    </item>
    <item>
      <title><![CDATA[How I promoted my AI from chatbot to chief of staff]]></title>
      <link>https://www.pieterbrinkman.com/2026/03/09/how-i-promoted-my-ai-from-chatbot-to-chief-of-staff/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2026/03/09/how-i-promoted-my-ai-from-chatbot-to-chief-of-staff/</guid>
      <pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate>
      <description><![CDATA[I gave my AI a name, a memory, and a set of rules. Here's what I built, and why it stopped feeling like a tool.]]></description>
      <content:encoded><![CDATA[
Every morning I open a terminal and type three words: "plan my day."

Charles comes back with a briefing. Tasks in flight, project updates, what's overdue, what's coming. Then he gets to my calendar.

> "You have a 10am with someone I don't have in the vault. Want me to prep?"

I said yes. Fifteen minutes later he came back with their LinkedIn background, two YouTube interviews I hadn't seen, a summary of their priorities. Then meeting notes, structured around the purpose of the meeting itself, pulled from the calendar invite.

I hadn't asked for any of that. He just knew what a Chief of Staff would do.

That's when I stopped thinking of this as a tool.

```
> plan my day

──────────────────────────────────────────────────
  CHARLES · Morning briefing · Monday 9 March 2026
──────────────────────────────────────────────────

PRE-FLIGHT CHECK
  ✓ No overdue tasks
  ✓ Family window (17:00–19:00) is clear
  ⚑ Tom — last contact 3 weeks ago. Flag for this week?

TODAY'S CALENDAR
  09:00  Team standup (30 min)
  10:00  ⚠ HEADS UP — Alex Chen · unknown contact
  14:00  Deep work block
  17:00  [FAMILY WINDOW — protected, not scheduled]

⚠ HEADS UP: Alex Chen at 10:00
  No record in vault. No prior meetings.
  Want me to research and prep?

TODAY'S FOCUS
  ⏫ Finish draft review → "I don't use AI. I run it."
  ⏫ LinkedIn challenge pass — article 1
  🚴 Triathlon prep — 45 min bike (zone 2, recovery pace)
     -> Garmin: 3 sessions this week · load on track
  ▷ Garden: frost forecast before Saturday
  ⏳ Waiting — Meeting confirmation (expected this week)

──────────────────────────────────────────────────
  2 high-priority · 1 unknown contact · 1 waiting
  Type "prep alex chen" to start meeting research
──────────────────────────────────────────────────
```

Charles is my AI. And yes, I gave him a name.

He runs on `Claude Code` and lives in a terminal on my laptop. Not a chat window. Not a browser tab. A terminal I open every morning the same way an executive opens a briefing document.

Here's what's underneath. Four layers. One system.

## The memory: Obsidian Vault

Everything I know lives in a folder of structured markdown files on my laptop. Projects, research notes, meeting history, people, drafts in progress. Not in a chat history that resets. Not in a tool with vendor lock-in. In plain files I can read, search, and edit directly in Obsidian.

What makes it more than a filing system is the links. Every note can reference every other note. A project links to the people involved. A meeting note links to the company. A research note links to the articles that informed it. When Charles reads my vault he doesn't just get a document, he gets a knowledge graph. Context flows through the connections.

```
  [[Project A]] ──── [[Person: Alex]]
       │                    │
  [[Meeting: 2026-03-01]] ──┘
       │
  [[Company: Acme Corp]]
```

The structure follows Getting Things Done (GTD) for task and project management and PARA for information architecture. Projects, Areas, Resources, Archives. Simple enough to maintain, powerful enough to scale.

## The interface: Claude Code

Most AI interactions are stateless. You open a chat, give context, get output, close the tab. Next session, you start over.

`Claude Code` is different. It runs in a terminal and reads a file called `CLAUDE.md` at the root of my vault. Think of it as the system prompt that never disappears. It tells Charles who I am, who he is, how I work, what my priorities are, what my projects are, and how he's supposed to behave.

Every session he opens the vault, reads the relevant files, and picks up where we left off. I don't re-explain context. I don't re-prompt from scratch. I just talk to him.

> That's the difference between a chatbot and a Chief of Staff.
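To make that concrete, here's a stripped-down sketch of what a `CLAUDE.md` in this shape can look like. The headings and wording are illustrative, not my actual file; the rules themselves are the ones described in this article:

```markdown
# CLAUDE.md

## Who you are
You are Charles, my Chief of Staff. Give your assessment first, then ask questions.

## Rules
- 17:00–19:00 is a protected family window. Never schedule into it.
- When you don't know something, say "I don't know". Never make something up.
- When I'm busy on things that feel productive but aren't, flag it.

## Where things live
- Projects, Areas, Resources, Archives (PARA) in this vault
- Skills and their trigger phrases are listed below
```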

## Skills that compound

Early on I was prompting my way through the same workflows every time. Competitive research, inbox processing, meeting prep, LinkedIn drafting. From scratch, every time.

Skills are a native `Claude Code` feature. A skill is a markdown file containing instructions, workflow steps, and the context Charles needs to do a specific job well. Anthropic ships some built-in, and there's a growing community library at [skills.sh](https://skills.sh). I've built my own on top of those.

Current skills: people research, GTD inbox processing, competitive intelligence, LinkedIn thought leadership drafting, daily planning with a pre-flight check, triathlon coaching, and a few more. Each one gets refined every time I use it. I notice what worked, what didn't, and update the skill file. Next time it runs better.

> "If I do it twice, I turn it into a skill."

Skills are where the compounding really happens. Each one adds a capability that makes the system more useful than the session before.

## Live data: MCP

The vault is static knowledge. `MCP`, the Model Context Protocol, connects Charles to live data.

Google Calendar tells him what's actually happening today. Garmin gives him my training load for the week. Web search lets him research in real time. Notion connects to my capture inbox.

This is how the meeting prep in the opening actually worked. Charles saw the meeting in my calendar, checked the vault for history, found nothing, searched LinkedIn and YouTube in real time, combined it with the meeting agenda from the calendar invite, and built a brief. Four data sources, one output, fifteen minutes.

That's the difference between an AI that reads documents and one that works with your day.
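Under the hood, these connections are declared in an MCP server config that `Claude Code` reads at startup. A rough sketch, with placeholder server names and packages rather than my actual setup:

```json
{
  "mcpServers": {
    "google-calendar": {
      "command": "npx",
      "args": ["-y", "example-calendar-mcp-server"]
    },
    "garmin": {
      "command": "npx",
      "args": ["-y", "example-garmin-mcp-server"]
    }
  }
}
```

Each entry is just a command that starts a server speaking the protocol; Charles discovers its tools and calls them when a task needs live data.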

## How it fits together

```
┌─────────────────────────────┐
│         Charles (AI)        │
│     your Chief of Staff     │
└──────────────┬──────────────┘
               │ reads
┌──────────────▼──────────────┐
│          CLAUDE.md          │   ← identity · rules · persona
└──────────────┬──────────────┘
               │ opens
┌──────────────▼──────────────┐     ┌──────────────────────┐
│       Obsidian Vault        │◄────│        Skills        │
│                             │     │                      │
│  Projects · Meetings        │     │  Daily planning      │
│  People · Tasks             │     │  People research     │
│  Research · Drafts          │     │  Inbox processing    │
│  Areas · Archives           │     │  LinkedIn drafting   │
│  + more...                  │     │  + more...           │
│  -- -- -- -- -- -- -- -- -- │     └──────────────────────┘
│  ★ Memory & Context ★       │
└──────────────┬──────────────┘
               │ queries
┌──────────────▼──────────────┐
│       MCP live data         │
│                             │
│  Google Calendar · Garmin   │
│  Web search · Notion        │
│  + more...                  │
└─────────────────────────────┘
```

## The capture loop

Ideas don't wait for a desk. I message a Telegram bot when something hits me. An `n8n` workflow catches it, routes it to a Notion inbox, and Charles processes it into the vault the next morning with context and routing. By the time I sit down, yesterday's ideas are already in the system.

```
┌──────────┐     ┌──────────┐     ┌──────────────┐     ┌──────────────┐
│ Telegram │────▶│   n8n    │────▶│ Notion inbox │────▶│    Charles   │
└──────────┘     └──────────┘     └──────────────┘     └──────────────┘
idea hits          catches &         queued for            routes into
anywhere           routes            next morning          vault
```

Zero friction to capture. Structure when it matters.

## The rules

A system without guardrails is just a faster way to make mistakes.

I gave Charles rules, built into his `CLAUDE.md`. 17:00-19:00 is a protected family window. He doesn't fill it during planning. When he doesn't know something, he says so rather than making something up. When I'm staying busy on things that feel productive but aren't, he flags it.

Rules live in skill files too. The daily planning skill has a pre-flight check that runs before any task recommendations: overdue items, family needs, friend connections. The things that fall through the cracks when you go 200% on work.

These aren't constraints. They're what makes Charles trustworthy enough to actually rely on.

## It hasn't been perfect

Daily planning was burning through tokens fast. I fixed it by building Map of Content files, compressed overviews of each project that Charles loads instead of scanning every note. When he needs more depth, he follows the links into the vault.
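The idea behind that cache is simple enough to sketch. A toy version of a summary builder, where the folder layout, the ⏫ priority marker, and the function itself are hypothetical illustrations rather than the actual skill:

```python
from pathlib import Path

def build_projects_summary(vault: Path, out_name: str = "Projects-Summary.md") -> str:
    """Collect each project's title and open high-priority tasks
    into one compressed overview file that's cheap to load."""
    lines = ["# Projects Summary", ""]
    for note in sorted((vault / "Projects").glob("*.md")):
        text = note.read_text(encoding="utf-8").splitlines()
        # first markdown heading becomes the project title
        title = next((l.lstrip("# ") for l in text if l.startswith("#")), note.stem)
        lines.append(f"## [[{title}]]")
        # keep only open tasks flagged high priority (here: the ⏫ marker)
        lines += [l for l in text if l.strip().startswith("- [ ]") and "⏫" in l]
        lines.append("")
    summary = "\n".join(lines)
    (vault / out_name).write_text(summary, encoding="utf-8")
    return summary
```

The `[[title]]` links are the important part: the summary stays a doorway into the vault, not a replacement for it.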

I've tried Obsidian and GTD many times before, and failed every time. Maintaining the structure was the part I could never sustain. What changed with Charles wasn't the system; it was that he does the maintaining. He used what was already there, saw patterns, and made sense of it. When something breaks, you ask him to refactor. It's all just files. You iterate, improve, and keep going.

Charles and I are a true dream team. The kind that only works because neither of us starts over.

## What building this taught me

AI fluency isn't just about knowing the right prompts, though that definitely helps. Every layer I added forced me to be more explicit about what I actually want. To name my priorities. To define how I work. To articulate what matters.
Charles didn't do that thinking for me. He forced me to do it myself.

We don't start over. We build on.

Development is moving fast enough that I'm already questioning parts of this setup. Are `n8n` and Telegram the right capture loop? Maybe. Anthropic just released remote control for `Claude Code`, which lets me connect to my instance from the Claude app and have the full power of Charles available on my phone. That changes the capture story completely.

But that's for another article.

---

One last detail. This article has been sitting in my vault since last week, drafted by Charles. An AI describing how it works, stored in the system it built. The system, writing about itself.

If you made it here, you've covered a lot of ground. And this is still just the overview. I haven't named every `MCP` connection, every skill, or every rule in `CLAUDE.md`. Each layer has more depth than one article can hold.

[The Second Brain Stack](/category/second-brain-stack/) series will go there. Layer by layer. Use case by use case.

No comments section here. If you have questions or want to leave a comment, [the LinkedIn post is the place](https://www.linkedin.com/posts/pbrink_every-morning-i-open-a-terminal-and-type-activity-7436823215508819968-eDeW?utm_source=share&utm_medium=member_desktop&rcm=ACoAAACIoqoBPcPiQTbpY_pQvzVS4gIZQQpgWZg). I read every reply.

*[The Second Brain Stack](/category/second-brain-stack/) · [Part 2: The Skills Layer →](/2026/03/09/the-skills-layer-what-compounding-actually-looks-like/) · [Part 3: The Vault →](/2026/03/18/the-vault-why-the-foundation-matters/)*
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Artificial Intelligence]]></category>
      <category><![CDATA[AI Agents]]></category>
      <category><![CDATA[Second Brain Stack]]></category>
    </item>
    <item>
      <title><![CDATA[The skills layer: what compounding actually looks like]]></title>
      <link>https://www.pieterbrinkman.com/2026/03/09/the-skills-layer-what-compounding-actually-looks-like/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2026/03/09/the-skills-layer-what-compounding-actually-looks-like/</guid>
      <pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate>
      <description><![CDATA[I said 'if I do it twice, I turn it into a skill.' Here's what that means in practice, and what happens when skills start chaining.]]></description>
      <content:encoded><![CDATA[
A meeting request landed in my calendar. Someone I didn't know, from a company I'd heard of but didn't know well. Call the next morning.

> I typed one line: "research [name] at [company]."

Eight minutes later I had a profile. Career arc. Communication patterns extracted from a conference talk and a podcast. Three questions tailored to what they'd mentioned caring about publicly. A briefing structured around the purpose of the call.

I hadn't explained what a profile was, what format I needed, or where to save it. I hadn't described the research steps or named the sources to check. I hadn't reminded Charles who I am, what I'm working on, or what a good output looks like.

You can go further. *"Research [name] at [company] for my 10:00 on [topic]."* He'll pull the full calendar invite, the company background, and the person together. One output. Each context point you add is one less thing you need to reconstruct in the meeting itself.

If that person had been in the vault already, from a previous meeting or a past conversation, the briefing would have opened differently. Last discussed. What was unresolved. The skill doesn't just research. It briefs.

## What a skill actually is

A skill is a markdown file.

A `SKILL.md` sitting in a folder. Text. Structure. Instructions. Claude Code reads a `CLAUDE.md` at session start that lists every skill with its trigger phrases. When a trigger fires, the skill file loads. Not before. Everything else stays out of context until it's needed.

The people research skill defines a persona: thorough researcher, synthesis first. It sets context: vault paths, output location, relationship types. It specifies output format: an executive-standard briefing with quick reference, career analysis, communication patterns, questions to ask. And it includes explicit permission to fail: only use public information or authenticated LinkedIn where a session is available, flag gaps, don't fill them.

Persona. Context. Output format. And permission to fail.

I didn't type them that morning. They were already there, in the skill. Every time.

```
.agents/skills/
├── people-research/
│   ├── SKILL.md          ← persona, context, format, permission to fail
│   └── workflows/
│       └── people-research.md   ← research pipeline
├── second-brain/         ← GTD inbox processing, weekly reviews
├── linkedin-writer/      ← draft and challenge LinkedIn posts
├── playwright-fetch/     ← auth browser for JS-heavy pages
├── linkedin-analytics/   ← post metrics via Playwright, updates vault
├── competitive-intel/    ← structured research across eight data sources
├── coach/                ← triathlon training plans from Garmin data
└── ...
```
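A minimal `SKILL.md` in that shape might look like this. The frontmatter fields follow Anthropic's skill format; the body is an illustrative sketch of the structure described above, not my actual file:

```markdown
---
name: people-research
description: Research a person before a meeting. Triggers on "research [name] at [company]".
---

## Persona
Thorough researcher. Synthesis first, raw links last.

## Context
Vault paths for people and meeting notes. Output location. Relationship types.

## Output format
Executive-standard briefing: quick reference, career analysis,
communication patterns, questions to ask.

## Permission to fail
Only public information or an authenticated LinkedIn session.
Flag gaps. Don't fill them.
```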

Two ways to invoke a skill: a trigger phrase or a slash command. Phrases are conversational and context-rich. *"Research [name] at [company] for my 10:00 on [topic]"* gives the skill something to work with before it starts. Slash commands are direct: `/people-research`, `/daily-plan`. My practice: phrases when I'm adding context inline, slash commands for structured workflows I run on a schedule. Daily planning, weekly review. The skill already knows what to gather. I just want it to start.

The rule I apply: if I do it twice, I turn it into a skill.

> A growing library has one real friction: remembering what exists. The trigger phrases live in the skill files. Charles knows them. But if you forget you have a skill for something, you won't reach for it. The answer isn't better documentation. It's building habits around the skills you actually use. The ones you haven't touched in a month probably shouldn't be in the library.

## Going deeper: daily planning

The most refined skill I have is daily planning. In the first article I described what it felt like when Charles started behaving like a Chief of Staff. Daily planning is where that behavior gets built.

What it does now: before I see anything, it silently gathers everything in parallel. It reads a compressed summary of all active projects (a `Projects-Summary.md`), greps the vault for high-priority open tasks, checks today's Google Calendar, reads yesterday's daily note for carryover, and counts inbox items. All before the first word appears on screen.

Then it gives its assessment first. What needs attention, what's stalling, what's overdue. Opinion, then questions. That's a standing rule in the skill file.

It also runs a pre-flight check before any task recommendations: a friend not contacted in 3 weeks gets flagged by name. A family need. Anything overdue that lives outside a project. The things that fall through the cracks when you go 200% on work.

```
── GATHER (silent, parallel) ─────────────────────────────────
  Projects-Summary.md + high-priority tasks
  Google Calendar for today
  Yesterday's daily note (carryover)
  Inbox count

── PRE-FLIGHT CHECK ──────────────────────────────────────────
  Friends not contacted in 3+ weeks: flagged by name
  Family needs, overdue todo items outside projects

── ASSESSMENT ────────────────────────────────────────────────
  What needs attention, what's stalling, what's overdue
  Opinion first, then questions

→ [me]  confirm energy level, time available, context

── TASK RECOMMENDATIONS ──────────────────────────────────────
  Prioritized list for the day
  17:00 protected, never filled
```

It wasn't always this way.

An early version of my daily planning read every project MOC (Map of Content) in full. Token cost was enormous. Sessions were slow. I fixed it by building the Projects-Summary cache. Next iteration: it was asking too many questions before showing anything. I flipped the order: assessment first, then questions. Then: the pre-flight check was missing entirely. I added it after noticing a pattern.

Each time something felt wrong, I worked out a better approach in that session. Then I asked Charles to update the skill file: a direct edit to the SKILL.md with the new rule. The next session ran better. Iteration by iteration, it became something that actually works the way I think.

## Prompting best practices, baked in

One of the most effective prompting patterns is to separate writing from evaluation. Generate something, have a critic tear it apart, synthesize the result. It works because AI is a sharper editor than it is an original writer: pointing out what's weak is easier than producing something strong from scratch. The technique is called adversarial validation.

I have that. I call it the stress test. One phrase, one pass: four reviewers, each with a different agenda, applied in sequence. The Builder checks if the implementation holds up. The Skeptic looks for hype and hollow claims. The Practitioner asks if a real person would actually sustain this. The Editor cuts everything that doesn't earn its place. Each persona is defined in a Working Personas file Charles reads. The output: a consolidated verdict on what survives all four passes.

The personas change with the job. For PMM editorial content, a different set: Skeptical DXP Buyer, Competitor's PMM, PMM Peer. For this series, these four. The stress test is the pattern. The personas are the configuration.

I didn't invent the technique. I encoded it. Now I invoke it with three words instead of four paragraphs.

That's the difference between knowing a prompting technique and having it. Skills are where technique becomes repeatable. You don't type the method every session. You write it once into the skill, refine it over time, and invoke it when needed.

## Chaining: how an article gets built

No skill runs alone. Here is the full chain for publishing an article in The Second Brain Stack series, including this one:

```
── CAPTURE ───────────────────────────────────────────────────

[me]      voice note, idea, rough context, whatever's in your head
           ↓
[me]    > "new blog post"
           ↓
[charles]  BLOG SKILL
           drafts article in vault: frontmatter,
           series context, MOC backlink

── WRITE ─────────────────────────────────────────────────────

[me]      write in Obsidian
           (this part stays mine)

── REVIEW & IMPROVE ──────────────────────────────────────────

[me]    > "review [post]"
           ↓
[charles]  BLOG SKILL
           frontmatter, SEO audit, structure,
           tone alignment with series
           hook and closing line test
           ↓
[me]    > "run the stress test"
           ↓
[charles]  four review personas in sequence,
           consolidated synthesis report
           ↓
[me]    > "draft LinkedIn post"
           ↓
[charles]  LINKEDIN-WRITER SKILL
           drafts based on article in my voice
           ↓
[me]      write LinkedIn post in Obsidian
           (by choice)

── PUBLISH ───────────────────────────────────────────────────

[me]    > "publish [post]"
           ↓
[charles]  BLOG SKILL
           strips Obsidian syntax, copies to repo,
           git commit, git push, Vercel deploys
           ↓
[me]      post on LinkedIn (manual by choice)

── MEASURE ───────────────────────────────────────────────────

           24 hours later
           ↓
[charles]  LINKEDIN-ANALYTICS SKILL
           playwright-fetch: authenticated browser,
           pulls post analytics
           appends to stats, updates dashboard
```

Every input I make is either a creative decision or a human-in-the-loop moment: a single command or an approval. Charles handles everything between. Five skills. No re-explaining context between steps.

The blog skill writes to the vault. The linkedin-writer reads from it without being told to. Neither coordinates with the other. They share state through the vault. That's what makes this a chain and not just a sequence: each skill reads what the previous one wrote, through a shared foundation. Playwright closes the loop using the same auth session file that fetches pages during research. One session on disk, two skills pointing at it.

The analytics step is manually triggered. A cron job that fails silently when the auth session expires is worse than a deliberate pull. For now, I choose when to close the loop.

## What compounding actually looks like

The skills started rough.

**People research**: basic workflow, got the format wrong a few times. **Daily planning**: slow, expensive, too many questions before showing anything useful. **The stress test**: one-size-fits-all personas that didn't know my voice, my goals, or my writing rules.

Each session, something got tightened. The persona got sharper. The output format got more specific. A new rule got encoded: no em dashes, 17:00 protected, say "I don't know" instead of guessing.

The skill gets better every time I use it. Not because the AI learned something. Because I got clearer about what I actually wanted, and I wrote it down.

Not a bigger skill library. Skills that get better shaped to you.

**Prompts reset. Skills compound.**

No comments section here. If you have questions or want to leave a comment, [the LinkedIn post is the place](https://www.linkedin.com/posts/pbrink_last-monday-i-published-the-first-article-activity-7438213313676595200-9gub?utm_source=share&utm_medium=member_desktop&rcm=ACoAAACIoqoBPcPiQTbpY_pQvzVS4gIZQQpgWZg). I read every reply.

*[The Second Brain Stack](/category/second-brain-stack/) · [← Part 1: Chatbot to Chief of Staff](/2026/03/09/how-i-promoted-my-ai-from-chatbot-to-chief-of-staff/) · [Part 3: The Vault →](/2026/03/18/the-vault-why-the-foundation-matters/)*
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Second Brain Stack]]></category>
      <category><![CDATA[Artificial Intelligence]]></category>
      <category><![CDATA[AI Agents]]></category>
    </item>
    <item>
      <title><![CDATA[2025 Guide: Getting Started with ESPHome Device Builder in Home Assistant]]></title>
      <link>https://www.pieterbrinkman.com/2025/01/29/2025-getting-started-esphome-device-builder-home-assistant/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2025/01/29/2025-getting-started-esphome-device-builder-home-assistant/</guid>
      <pubDate>Wed, 29 Jan 2025 00:00:00 GMT</pubDate>
      <description><![CDATA[Learn how to install and use the ESPHome Device Builder add-on in Home Assistant to flash and manage your ESP32 and ESP8266 devices.]]></description>
      <content:encoded><![CDATA[
A lot has changed since I wrote my original articles about ESPHome. The ecosystem has matured significantly, and ESPHome is now maintained by the Open Home Foundation. The tooling has been simplified with the introduction of the **ESPHome Device Builder**, making it easier than ever to create smart home devices.

This article replaces my previous guides on [flashing ESP chips with ESPHome](/2020/12/14/flash-esp-chip-with-esphome-node-firmware/) and the [2022 update](/2022/01/01/2022-update-flash-esphome-on-esp32-esp2866-nodemcu-board/).

## What is ESPHome?

ESPHome allows you to create configurations that turn common microcontrollers into smart home devices. A device configuration consists of one or more YAML files, and based on the content of these files, ESPHome creates custom firmware which you can install directly onto your device.

Hardware defined in the configuration—such as sensors, switches, lights, and so on—will automatically appear in Home Assistant's user interface. No coding required!

There are two parts to ESPHome:

- **The firmware** that runs on your device/microcontroller
- **The ESPHome Device Builder** that runs on your computer or Home Assistant server, providing a web interface to create, edit, and install configurations

## What are ESP32 and ESP8266 boards?

ESP boards are low-cost Wi-Fi chips with built-in flash memory, allowing you to build single-chip devices capable of connecting to Wi-Fi. Newer chips like the ESP32 also provide Bluetooth Low Energy (BLE) connectivity.

You can buy them for about 4-9 euros on [AliExpress](https://s.click.aliexpress.com/e/_DkQbLrb) or with faster delivery on [Amazon](https://amzn.to/3CzWZ6B).

Read more about ESP boards in my [introduction to ESP boards article](/2020/12/14/introducing-esp-boards/).

## Installing ESPHome Device Builder

The easiest way to get started with ESPHome is to install the Device Builder as a Home Assistant add-on.

1. In Home Assistant, go to **Settings → Add-ons → Add-on Store**
2. Search for **ESPHome Device Builder** and click on it
3. Click the **Install** button
4. Once installed, click **Start**
5. Click **Open Web UI** to launch the Device Builder

Alternatively, you can use this button to go directly to the add-on page:

[![Open your Home Assistant instance and show the ESPHome add-on.](https://my.home-assistant.io/badges/supervisor_addon.svg)](https://my.home-assistant.io/redirect/supervisor_addon/?addon=5c53de3b_esphome&repository_url=https%3A%2F%2Fgithub.com%2Fesphome%2Fhome-assistant-addon)

> **Note:** If you're running Home Assistant Core or in a way that does not provide access to add-ons, you can run the ESPHome Device Builder independently in Docker.
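For the Docker route, the invocation looks roughly like this. It's adapted from the ESPHome documentation; check the current docs for the exact image tag and flags for your platform:

```shell
# Run the Device Builder (dashboard) in Docker; the current directory
# holds your device YAML files and is mounted as /config
docker run --rm --net=host -v "${PWD}":/config -it \
  ghcr.io/esphome/esphome dashboard /config
```

The dashboard then serves the same web UI on port 6052 that the add-on exposes inside Home Assistant.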

## Creating Your First Device Configuration

When you open the Device Builder for the first time, a wizard will guide you through creating your first configuration. You have three options:

### New Device Setup
The wizard guides you through platform selection, board configuration, and Wi-Fi setup to create a basic working configuration. This is the recommended option for beginners.

### Import from File
Upload an existing ESPHome configuration file (.yaml or .yml). This is useful for restoring backups and migrating configurations from other systems.

### Empty Configuration
Creates a minimal configuration file for advanced users who prefer to write their own configuration from scratch or paste from [devices.esphome.io](https://devices.esphome.io/).
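Whichever option you pick, the result is a small YAML file. A minimal ESP32 configuration looks something like this on recent ESPHome versions (the device name and board are examples; adjust them to your hardware):

```yaml
esphome:
  name: living-room-sensor

esp32:
  board: esp32dev

# Enable logging, the Home Assistant API, and over-the-air updates
logger:
api:
ota:
  - platform: esphome

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
```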

## Initial Installation on Your Device

After creating your configuration, you'll need to install it on your ESP device. The initial installation is often the most challenging part—but only until you've done it a few times!

### Method 1: Direct USB Installation (Recommended)

This is the easiest method if your Home Assistant runs with HTTPS:

1. Connect your ESP board to your computer via USB
2. In the Device Builder, click the three-dot menu on your device and select **Install**
3. Choose **Plug into this computer**
4. A browser popup will appear—select your device's COM port and click **Connect**
5. ESPHome will compile and flash the firmware directly from your browser

> **Note:** This method requires HTTPS. If you use Nabu Casa, you can use the secure public URL found in Settings → Home Assistant Cloud → Remote Control.

### Method 2: Plug into Home Assistant Server

If your ESP is connected directly to your Home Assistant server via USB:

1. Connect the ESP board to a USB port on your Home Assistant server
2. In the Device Builder, click **Install** on your device
3. Choose **Plug into the computer running ESPHome Dashboard**
4. Select the correct serial port and proceed with installation

### Method 3: Manual Download

If the above methods don't work for your setup:

1. Click **Install** and choose **Manual Download**
2. Wait for compilation to complete and download the .bin file
3. Go to [ESPHome Web](https://web.esphome.io/) in your browser
4. Connect your ESP via USB and click **Connect**
5. Select **Install** and choose your downloaded .bin file

## Device Builder Interface

Once you have devices configured, the main page displays all your configuration files. For each device, you can:

- **UPDATE**: Appears when the device runs an older ESPHome version than available in the add-on
- **EDIT**: Opens the configuration editor
- **LOGS**: View device logs via USB/serial or Wi-Fi connection
- **Overflow menu** (three dots):
  - **Validate**: Check configuration for errors
  - **Install**: Open the install dialog
  - **Clean Build Files**: Delete generated build files to fix compile issues
  - **Delete**: Remove the configuration

Configuration files are stored in `<HOME_ASSISTANT_CONFIG>/esphome/`.

## Updating Your Device (Over-the-Air)

Once ESPHome is installed on your device, you never need to plug in a USB cable again! ESPHome supports Over-the-Air (OTA) updates.

Whenever you modify your device's configuration:

1. Click **Save** to store your changes
2. Click **Install** and choose **Wirelessly**
3. ESPHome will compile and push the new firmware over Wi-Fi

## Connecting Your Device to Home Assistant

Once your configuration is installed and the device connects to Wi-Fi, Home Assistant will automatically discover it. You'll see a notification offering to configure the new device.

Alternatively, manually add the device:

1. Go to **Settings → Devices & Services → Integrations**
2. Click **Add Integration** and search for **ESPHome**
3. Enter the device hostname (e.g., `living-room-sensor.local`) or IP address

## Adding Features to Your Device

Edit your device's YAML configuration to add components. Here's an example adding a GPIO switch:

```yaml
switch:
  - platform: gpio
    name: "Living Room Dehumidifier"
    pin: GPIO5
```

And a binary sensor for monitoring a GPIO pin:

```yaml
binary_sensor:
  - platform: gpio
    name: "Living Room Window"
    pin:
      number: GPIO0
      inverted: true
      mode:
        input: true
        pullup: true
```

After adding components, save and install to update your device. The new entities will automatically appear in Home Assistant!

## My ESPHome Projects

Read more about how I use ESPHome in my smart home:

- [Make your fireplace smart with ESPHome](/2020/12/14/bellfire-home-automation-project/)
- [Build a cheap Air Quality sensor using ESPHome](/2021/02/03/build-a-cheap-air-quality-meter-using-esphome-home-assistant-and-a-particulate-matter-sensor/)
- [Measure your water usage using ESPHome](/2022/02/02/build-a-cheap-water-usage-sensor-using-esphome-home-assistant-and-a-proximity-sensor/)

## Resources

- [ESPHome Documentation](https://esphome.io/)
- [ESPHome Device Database](https://devices.esphome.io/)
- [ESPHome Discord](https://discord.gg/KhAMKrd)
- [Home Assistant Community Forum](https://community.home-assistant.io/c/esphome/)

Happy automating! 🏠
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[ESPHome]]></category>
      <category><![CDATA[Home Assistant]]></category>
      <category><![CDATA[Home Automation]]></category>
    </item>
    <item>
      <title><![CDATA[Next.js Conf presentation online]]></title>
      <link>https://www.pieterbrinkman.com/2022/10/31/next-js-conf-presentation-online/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2022/10/31/next-js-conf-presentation-online/</guid>
      <pubDate>Mon, 31 Oct 2022 00:00:00 GMT</pubDate>
      <description><![CDATA[Last week I presented at one of the largest developer conferences in the world: Next.js Conf. The video is now available on YouTube.…]]></description>
      <content:encoded><![CDATA[
Last week I presented at one of the largest developer conferences in the world: Next.js Conf. The video is now available on YouTube.

https://www.youtube.com/watch?v=ati9lB4n_2o&list=PLBnKlKpPeagll1CCK08EvjqgCq0C_dXZq&index=21

### Stand up a commerce storefront in 5 minutes

In this session, we're going to build a storefront using Next.js Commerce powered by Sitecore OrderCloud, an API-first headless commerce platform.

Join this end-to-end demo highlighting the amazing developer experience and ease of use of the Next.js Commerce starter kit, in combination with Sitecore OrderCloud and Vercel.

From the setup of your dev environment, all the way to deploying and updating the environment hosted on Vercel.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Public Speaking]]></category>
      <category><![CDATA[Sitecore]]></category>
      <category><![CDATA[Video]]></category>
    </item>
    <item>
      <title><![CDATA[SUGCON keynote recording available]]></title>
      <link>https://www.pieterbrinkman.com/2022/06/02/sugcon-keynote-recording-available/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2022/06/02/sugcon-keynote-recording-available/</guid>
      <pubDate>Thu, 02 Jun 2022 00:00:00 GMT</pubDate>
      <description><![CDATA[The recordings of the SUGCON event are now available including the keynote that Jason and I delivered on Friday morning.…]]></description>
      <content:encoded><![CDATA[
The recordings of the SUGCON event are now available, including the keynote that Jason and I delivered on Friday morning.

https://www.youtube.com/watch?v=4HdWsdjcOJY&list=PLvwdDTmlDsRydZB6Bj7QzHnJejzQql3j-&index=18&t=125s

## Session outline

Composable DXP is all the rage, but what if you already have a platform installation with years of investment into getting it just how you like? How do you gradually move your architecture over to something that is MACH and headless? Do you need a full rebuild? Do you need Next.js or not? Where does content go now? What are the benefits and downsides?

Pieter Brinkman (@pieterbrink123) and Jason St-Cyr (@StCyrThoughts) take an overview of a few XM and XP scenarios and how you can gradually migrate to a composable architecture, along with some of the advantages and disadvantages of different options.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Composable DXP]]></category>
      <category><![CDATA[Events]]></category>
      <category><![CDATA[Public Speaking]]></category>
      <category><![CDATA[Sitecore]]></category>
    </item>
    <item>
      <title><![CDATA[Architecture guide to migrate from On-prem/PaaS to SaaS]]></title>
      <link>https://www.pieterbrinkman.com/2022/05/16/architecture-guide-to-migrate-from-on-prem-paas-to-saas/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2022/05/16/architecture-guide-to-migrate-from-on-prem-paas-to-saas/</guid>
      <pubDate>Mon, 16 May 2022 00:00:00 GMT</pubDate>
      <description><![CDATA[Composable DXP is all the rage. Why is the industry moving towards SaaS and Composable architecture? What are the benefits of composable for customers? And…]]></description>
      <content:encoded><![CDATA[
Composable DXP is all the rage. Why is the industry moving towards SaaS and composable architecture? What are the benefits of composable for customers? And is composable for everyone? These questions are all addressed in the Architecture guide to SaaS video.

Next to that, we also address the actual migration. What if you already have a platform installation with years of investment into getting it just how you like it? How do you gradually move your architecture over to something that is MACH and headless? Do you need a full rebuild?

https://www.youtube.com/watch?v=ZTjk5t9dfRQ&t=5s

If you want to learn more and need an introduction into Composable DXP you can read the [Introduction to Composable DXP article.](/2021/07/13/introduction-to-the-composable-dxp/)
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Composable DXP]]></category>
      <category><![CDATA[Public Speaking]]></category>
      <category><![CDATA[Sitecore]]></category>
      <category><![CDATA[Video]]></category>
    </item>
    <item>
      <title><![CDATA[Sharing my updates on Polywork]]></title>
      <link>https://www.pieterbrinkman.com/2022/03/11/sharing-my-updates-on-polywork/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2022/03/11/sharing-my-updates-on-polywork/</guid>
      <pubDate>Fri, 11 Mar 2022 00:00:00 GMT</pubDate>
      <description><![CDATA[A lot of activities around my job and hobbies are not easy to share. I don't like sharing too many activities on LinkedIn and I don't want to write an…]]></description>
      <content:encoded><![CDATA[
A lot of activities around my job and hobbies are not easy to share. I don't like sharing too many activities on LinkedIn, and I don't want to write an article about everything. It just takes too much time.

A teammate pointed me to Polywork.com (thanks Jason). I'm really enjoying Polywork; it's an easy way to share short updates. I've chosen to share updates around work and personal passions, including hobbies like surfing and BBQ.

You can find my timeline in the top menu or using the link below:

[Pieter Brinkman - ⚙️Technologist, 🌊 Waterman, 🔪 Foodie, 🦸‍♂️ Father](https://timeline.pieterbrinkman.com/)

Let me know if you want to join Polywork, I still have some invite codes available.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Portfolio]]></category>
      <category><![CDATA[Publications]]></category>
    </item>
    <item>
      <title><![CDATA[Build a cheap water usage sensor using ESPhome and a proximity sensor]]></title>
      <link>https://www.pieterbrinkman.com/2022/02/02/build-a-cheap-water-usage-sensor-using-esphome-home-assistant-and-a-proximity-sensor/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2022/02/02/build-a-cheap-water-usage-sensor-using-esphome-home-assistant-and-a-proximity-sensor/</guid>
      <pubDate>Wed, 02 Feb 2022 00:00:00 GMT</pubDate>
      <description><![CDATA[This article will provide you with a walkthrough on how you can build a water usage meter sensor that integrates with your Home Assistant for under 10 $/EURO…]]></description>
      <content:encoded><![CDATA[
This article walks you through building a water usage meter sensor that integrates with Home Assistant for under 10 $/EUR, without the need for any soldering or coding skills.

It also covers the configuration needed in Home Assistant to translate each 'pulse' into liters (or any other unit of measurement). In the end you will have clear insights into how much water you are using per hour, day, and week.

## Why do you want to measure water usage of your home?

These days it's all about insights. I measure pretty much all my utilities, including power and [city heating](/2022/02/01/make-your-city-heating-stadsverwarming-smart-and-connect-it-home-assistant-energy-dashboard/). The last missing piece is water usage. Although the water in the Netherlands is not really expensive, I wanted to get more insights into how much water we are using and if there is any way to save some water. Unfortunately, water delivery doesn't come with a smart meter. There's just an analog counter. So how do you measure the water usage and make this analog meter smart?

## See water usage in the Home Assistant Energy Dashboard

The 2022.11 release of Home Assistant added the option to track water usage in the Home Assistant Energy dashboard. The ESPHome configuration below has been updated to support this feature. Thanks to [MJV](https://community.home-assistant.io/u/mjv/summary) for sharing his configuration on the Home Assistant forums.

![How do you get data out of your water meter?](/images/2022/11/homeassistant-energy-dashboard.webp)

## How do you get data out of your water meter?

A water meter is an analog measuring instrument. Luckily the meter has a spinning wheel that contains a magnet, and that magnet can be picked up by a proximity sensor. This gives you a pulse; what each pulse represents is defined by the meter. In my case: 1 pulse == 0.0001 m³ x 10 == 1 liter.

![How do you get data out of your water meter?](/images/2022/01/image-1.webp)

Be aware that there are many different water meters out there. Each meter can have a different magnet for the proximity sensor to recognize, and a different value per pulse. This information can be found on the meter.
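To make the pulse arithmetic concrete, here is a small Python sketch. The helper names are mine, and the "1 pulse = 1 liter" factor is the value from my meter; check your own meter's faceplate before reusing it.

```python
# Sketch of the pulse-to-volume arithmetic for MY meter, where
# 1 pulse == 0.0001 m3 x 10 == 1 liter. Both the revolution volume
# and the multiplier differ per meter model -- check the faceplate.
LITERS_PER_PULSE = 1.0  # = 0.0001 m3/rev * 1000 l/m3 * factor 10

def pulses_to_liters(pulses: int) -> float:
    """Total water volume represented by a number of counted pulses."""
    return pulses * LITERS_PER_PULSE

def flow_l_per_min(pulses: int, window_seconds: float) -> float:
    """Average flow rate over a measurement window."""
    return pulses_to_liters(pulses) * 60.0 / window_seconds

print(pulses_to_liters(250))     # 250.0 liters
print(flow_l_per_min(12, 60.0))  # 12.0 l/min
```

This is exactly the translation the ESPHome configuration later performs on the device itself.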

## What do we need to build the smart water meter?

Now let's get started. What do you need to build the solution?

### A proximity sensor (LJ18A3)

A proximity sensor detects the presence of metal. This sensor can recognize the magnet in the meter's spinning wheel and will provide a pulse whenever a full cycle of the meter has passed.

![proximity sensor read water values](/images/2022/01/image-2.webp)

![internal wiring of LJ18A3 proximity sensor](/images/2022/02/LJ18A3-8-z-Proximity-sensor.webp)

LJ18A3-8-Z/BX internal wiring

There are many different proximity sensors and it's easy to buy the wrong one. Many write-ups describe solutions that need extra resistors; if you buy an LJ18A3 you don't need those, because the resistor is already built in. It's also important to buy a 5V version so you can power the sensor directly from the ESP. I did a lot of research and ended up with this LJ18A3-8Z/BX sensor sold on AliExpress.com. Make sure you buy this exact sensor and not another variant: the resistor is built in and it works on 5V.

### An ESP controller board

ESP boards are low-cost Wi-Fi chips with built-in flash, allowing you to build a single-chip device capable of connecting to Wi-Fi. Newer versions like the ESP32 boards also provide BLE (Bluetooth Low Energy), and there's a wide variety of boards you can use.

For this project, I'm using the ESP8266 NodeMCU board. I prefer a development board because it comes with USB and all the pins pre-soldered, making it easy to use. You can buy this board at your favorite Chinese shop or Amazon for somewhere between 3-10 euros.  
Here are deep links to a number of stores: AliExpress.com \[slowest delivery and cheapest\], Banggood, or Amazon \[fastest delivery and more expensive\].  
Banggood has a great deal if you buy 10 pieces; an ESP will cost a bit more than $2 each.

![NODEMCU ESP8266](/images/2020/12/image-13.webp)

NODEMCU ESP8266

### Shopping list

**Required**

-   An **ESP8266** board, or an **ESP32** if you also want to use Bluetooth features (not required)  
    For this project, I'm using the ESP8266 NodeMCU board. You can buy them at any of your favorite stores; here are some deep links:  
    [AliExpress.com](/recommends/esp8266-nodemcu-aliexpress/) *\[slowest delivery and cheapest\]*, [Banggood](/recommends/esp-banggood/) or [Amazon](/recommends/esp-amazon-com/) *\[fastest delivery and more expensive\]*.  
    [Banggood](https://www.banggood.com/Geekcreit-Wireless-NodeMcu-Lua-CH340G-V3-Based-ESP8266-WIFI-Internet-of-Things-IOT-Development-Module-p-1420166.html?warehouse=CN&ID=511073&p=B51912455124201402C@&custlinkid=1752218) has a great deal if you buy 10 pieces; an ESP will cost a bit more than $2 each.
-   **LJ18A3-8Z/BX** **Proximity sensor**  
    Make sure you buy a 5V LJ18A3-8Z/BX sensor; see the paragraph about the sensor above.  
    I could only find this sensor at [AliExpress.](https://s.click.aliexpress.com/e/_9H2JCU)
-   Tyraps or Velcro strips to connect the proximity sensor to your water meter  
    [AliExpress](https://s.click.aliexpress.com/e/_AewiI8)
-   Housing to securely fit all electronics on [Banggood](/recommends/esp-case-banggood/) (or [buy 5](/recommends/esp-case-5pc-banggood/) and it’s way cheaper)

**Optional**

-   [Dupont cables (](/recommends/dupont-cable-m_f-aliexpress/)[Amazon](https://amzn.to/3sgzDJl), [Banggood](https://www.banggood.com/40pcs-10cm-Female-To-Female-Jumper-Cable-Dupont-Wire-p-994059.html?warehouse=CN&ID=0&p=B51912455124201402C@&custlinkid=1752237) or [AliExpress](https://s.click.aliexpress.com/e/_9Qj2sT))  and a [Hot Glue gun](/recommends/hot-glue-gun-amazon_com/) or [Soldering iron](/recommends/soldering-iron-kit-amazon-com/) or shrink solder seal connectors ([Banggood](https://www.banggood.com/50PCS-Solder-Seal-Wire-Connectors-Waterproof-Heat-Shrink-Butt-Connectors-Electrical-Wire-Terminals-Insulated-Butt-Splices-p-1743516.html?cur_warehouse=CN&ID=233&rmmds=search&p=B51912455124201402C@&custlinkid=1752239) or [AliExpress](https://s.click.aliexpress.com/e/_ASq9zZ))

## Build the water usage meter sensor

Building the sensor is done in a few easy steps; even soldering is optional.

Some steps link out to generic introduction articles. Make sure you find your way back to this article to follow along.

To build the water usage meter, we're going to execute the following steps:

1.  Install ESPHome in Home Assistant and create the water sensor node
2.  Flash your ESP chip with ESPHome firmware
3.  Wire the proximity sensor to the ESP board
4.  Place the proximity sensor on your water meter
5.  Configure ESPHome to make proximity sensor measurements available in Home Assistant
6.  Add the water sensor to the Energy Dashboard
7.  Configure Home Assistant; add sensors and UI components (Lovelace)

### **1\. Install ESPHome in Home Assistant and create the water sensor node**

ESPHome is a system to control your ESP8266/ESP32 by simple yet powerful configuration files and control them remotely through Home Automation systems like Home Assistant.

ESPHome is amazing: it's extremely powerful, easy and, more importantly, very stable. It has never failed me. The integration with Home Assistant is seamless, including autodiscovery within Home Assistant and one-click configuration. Learn more about ESPHome in my [Introducing ESPHome](/2020/12/14/introducing-esphome/) article.

### **2\. Flash the ESP with ESPHome firmware**

With ESPHome running and your first node created, it's time to compile the firmware and flash the ESP board with vanilla ESPHome firmware. You can find step-by-step guidance in the [Flash ESP chip with ESPHome node firmware](/2022/01/01/2022-update-flash-esphome-on-esp32-esp2866-nodemcu-board/) article.

### **3\. Wire the proximity sensor to the ESP board**

As explained in the proximity sensor paragraph above, the LJ18A3-8Z/BX sensor already includes the resistor and runs on 5 volts, so wiring it is a really easy job. If you want to make it even easier, order some female Dupont cables ([Banggood](https://www.banggood.com/40pcs-10cm-Female-To-Female-Jumper-Cable-Dupont-Wire-p-994059.html?warehouse=CN&ID=0&p=B51912455124201402C@&custlinkid=1752237) or [AliExpress](https://s.click.aliexpress.com/e/_9Qj2sT)) and shrink solder seal connectors ([Banggood](https://www.banggood.com/50PCS-Solder-Seal-Wire-Connectors-Waterproof-Heat-Shrink-Butt-Connectors-Electrical-Wire-Terminals-Insulated-Butt-Splices-p-1743516.html?cur_warehouse=CN&ID=233&rmmds=search&p=B51912455124201402C@&custlinkid=1752239) or [AliExpress](https://s.click.aliexpress.com/e/_ASq9zZ)).

Wire colors can differ slightly between sensors; I've seen both orange and pink used for the 5V wire.

| Proximity Sensor **LJ18A3-8Z/BX** | ESP8266 (ESP32) |
| --------------------------------- | --------------- |
| Black                             | D6              |
| Orange / Pink                     | 5V (VIN)        |
| Blue                              | Ground (G/GND)  |

Wiring table

Below is an image of the wiring that needs to happen.

![4\. Place the proximity sensor on your water meter](/images/2022/02/esp8266-LJ18A3-8Z-proximity-sensor-wiring.webp)

### **4\. Place the proximity sensor on your water meter**

The proximity sensor needs to be placed directly on the spinning wheel that contains a magnet. It should look something like this, the location might vary per meter.

![4\. Place the proximity sensor on your water meter](/images/2022/02/proximity-sensor-location-water-meter.webp)

To find the right location, open a water tap and let the water flow. The small wheel will start spinning. Place the proximity sensor directly on top of it. The sensor has a red LED on top that turns on when the magnet passes. Every time the magnet passes and the LED lights up, a full cycle has occurred. The meter also shows the measurement of a full cycle; in my case this was liters.

When you have found the right spot, you need to attach the sensor to the water meter. There are multiple ways to do this; the most common solutions are 3D-printed holders, Velcro straps, and tyraps (zip ties). I've chosen large tyraps (zip ties) for my solution. See the picture below.

![5\. Configure ESPHome to make proximity sensor measurements available in Home Assistant](/images/2022/02/proximity-sensor-on-watermeter.webp)

### **5\. Configure ESPHome to make proximity sensor measurements available in Home Assistant**

All set, the sensor is placed, and it's time to expose the measurement values and send them over to Home Assistant.

Open ESPHome and click EDIT on your node. The ESPHome configuration editor will now show. Add the following configuration at the bottom of the configuration.

```
sensor:
  - platform: pulse_counter
    pin: GPIO12
    update_interval: 6s
    name: "water pulse"
    id: water_pulse

  - platform: pulse_meter
    pin: GPIO12
    name: "Water Pulse Meter"
    unit_of_measurement: "liter/min"
    icon: "mdi:water"
    total:
      name: "Water Total"
      unit_of_measurement: "liter"

  # Second pulse meter that reports the total in m³ for the Energy
  # dashboard. Note the distinct name: two sensors with the same
  # name would clash.
  - platform: pulse_meter
    pin: GPIO12
    name: "Water Pulse Meter m3"
    unit_of_measurement: "liter/min"
    icon: "mdi:water"
    total:
      name: "Water Meter Total"
      unit_of_measurement: "m³"
      id: water_meter_total
      accuracy_decimals: 3
      device_class: water
      state_class: total_increasing
      filters:
        - multiply: 0.001

  - platform: template
    name: "Water Usage Liter"
    id: water_flow_rate
    accuracy_decimals: 1
    unit_of_measurement: "l/min"
    icon: "mdi:water"
    lambda: return (id(water_pulse).state * 10);
    update_interval: 6s
```

### **6\. Add the water sensor to the Energy Dashboard**

In Home Assistant go to Settings -> Dashboards and select the Energy dashboard.

Find the Water Consumption section and add the Water Meter Total sensor as a water data source.

![Give Home Assistant a few hours and your water consumption will show in the energy dashboard!](/images/2022/11/add-water-sensor-to-dashboard.webp)

Give Home Assistant a few hours and your water consumption will show in the energy dashboard!

![6\. Configure Home Assistant; add sensors and UI components (Lovelace)](/images/2022/11/Homeassistant-energy-water-usage-graph.webp)

### **7\. Configure Home Assistant; add sensors and UI components (Lovelace)**

Before we can show the measurements in Home Assistant we need to add a utility meter sensor to Home Assistant.

The utility meter sensor provides functionality to track consumption of various utilities, including water. The sensor operates in cycles that you can define, from hourly all the way to yearly. You supply a total usage value and the sensor tracks the delta (difference) of the total within each period. It also comes with additional reports and insights about usage. More info can be found on the [utility meter page](https://www.home-assistant.io/integrations/utility_meter/) on the Home Assistant website.

#### Add the utility meter to the Home Assistant configuration.yaml

For the water meter we're going to track hourly, daily, monthly, and yearly usage. Create the utility meter and sensors by adding the following configuration to your configuration.yaml file.

```
utility_meter:
  util_water_usage_hourly:
    source: sensor.water_total
    cycle: hourly
  util_water_usage_daily:
    source: sensor.water_total
    cycle: daily
  util_water_usage_monthly:
    source: sensor.water_total
    cycle: monthly
  util_water_usage_yearly:
    source: sensor.water_total
    cycle: yearly
```

Restart Home Assistant; the sensors are now available and you can add them to the dashboard.
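Conceptually, the utility meter just remembers the total at the start of each cycle and reports the difference since then. A minimal Python sketch of that idea (a toy model, not Home Assistant's actual implementation):

```python
class CycleMeter:
    """Toy version of Home Assistant's utility_meter: track how much of
    an ever-increasing total was consumed within the current cycle."""

    def __init__(self):
        self._cycle_start = None  # total at the start of this cycle

    def update(self, total):
        """Feed the latest meter total; return usage so far this cycle."""
        if self._cycle_start is None:
            self._cycle_start = total
        return total - self._cycle_start

    def roll_cycle(self, total):
        """Called at each cycle boundary (hourly, daily, ...)."""
        self._cycle_start = total

meter = CycleMeter()
meter.update(1200.0)         # first reading of the hour
print(meter.update(1234.5))  # 34.5 liters used so far this cycle
meter.roll_cycle(1234.5)     # a new hour starts
print(meter.update(1240.0))  # 5.5 liters in the new cycle
```

This is why the integration only needs a `source` sensor with a monotonically increasing total, like the `sensor.water_total` defined above.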

#### Add the sensors to Home Assistant Lovelace UI

For viewing the measurements in my Lovelace dashboard I use a custom card called [mini-graph-card](https://github.com/kalkih/mini-graph-card), which can easily be installed using HACS. I use the following Lovelace configuration to show the water usage.

```
# Hourly water usage
type: custom:mini-graph-card
entities:
  - entity: sensor.util_water_usage_hourly
name: Water usage last 24 hours
group_by: hour
hours_to_show: 24
show:
  graph: bar

```

![Last 24 hours water usage mini-graph-card result](/images/2022/02/image-4.webp)

Last 24 hours water usage mini-graph-card result

```
# Daily water usage
type: custom:mini-graph-card
entities:
  - entity: sensor.util_water_usage_daily
name: Water usage (day)
hours_to_show: 168
aggregate_func: max
group_by: date
show:
  graph: bar
```

![Daily water usage mini-graph-card result](/images/2022/02/image-5.webp)

Daily water usage mini-graph-card result

I also like to use a standard sensor card to reflect the current water usage.

```
type: sensor
entity: sensor.water_pulse_meter
graph: line
name: Current water usage
```

![Current water usage with sensor card](/images/2022/02/image-6.webp)

Current water usage with sensor card

These are just three examples of UI components. You can easily create your own variants using different Lovelace cards or mini-graph configurations. You can also decide whether to report per hour, day, month, or year; all these sensors are available because we defined them in the utility meter configuration above.

![Historical chart utility meter home assistant](/images/2022/01/image-4.webp)

The cool thing about using the utility meter sensor is that you can drill into reports. If you click the Lovelace card you will see a historical chart.

In this card you can also click *show more*. This brings you to the historical reports, where you can filter down to selected time ranges and get more insights.

That's it! Now you are tracking your water usage using a simple sensor that costs under $10. Let me know how you use the water meter and which Lovelace configuration you use to visualize the water usage.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[ESPHome]]></category>
      <category><![CDATA[Home Assistant]]></category>
      <category><![CDATA[Home Automation]]></category>
    </item>
    <item>
      <title><![CDATA[Make your city heating (stadsverwarming) smart and connect it Home Assistant energy dashboard]]></title>
      <link>https://www.pieterbrinkman.com/2022/02/01/make-your-city-heating-stadsverwarming-smart-and-connect-it-home-assistant-energy-dashboard/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2022/02/01/make-your-city-heating-stadsverwarming-smart-and-connect-it-home-assistant-energy-dashboard/</guid>
      <pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
      <description><![CDATA[Some places in the Netherlands have city heating. The intention is to reuse warmth from industry to heat your houses and get warm water. In theory this is a…]]></description>
      <content:encoded><![CDATA[
Some places in the Netherlands have city heating (stadsverwarming). The intention is to reuse residual warmth from industry to heat houses and provide warm water. In theory this is a very nice system, but in the real world it creates pure vendor lock-in, as you can't switch vendors. This makes city heating very expensive for the consumer. Also, most of the warmth is not generated from (green) industrial residual warmth but by burning gas, biomass, and other fuels, so it's not really a green solution. But enough about my complaints about city heating. More important: how can you read the values and get insights into your usage?

![Some places in the Netherlands have city heating. The intention is to reuse warmth from industry to heat your houses and…](/images/2022/01/Stadsverwarming-in-Home-Assistant.webp)

The idea of a smart home is not only to control your home, but also to get insights out of it. A warm home and a hot shower are crucial for a comfortable day-to-day life. They're also one of the biggest costs of living. So we need insights.

Home Assistant has an amazing Energy dashboard that can also track heating by gas. Unfortunately, city heating is not measured in m³ of gas but in a unit called the GigaJoule (GJ). So how do we get the energy insights of city heating into Home Assistant?

That's what I'm going to describe in this article :).

## How does city heating work (stadsverwarming)? What is GJ and M3?

A heat meter consists of a volume meter and two temperature sensors (one for the incoming water temperature and one for the return water temperature).

The energy consumption is calculated in the calculation module of the heat meter. Every 2 seconds, this module converts the measured volume and temperatures into energy in GigaJoule (GJ).

Energy is calculated using the following formula: Q = M x C x Delta T  
Q = energy  
M = mass (measured water volume)  
C = specific heat of the medium (this changes with the temperature, and the differences are also included in the calculation)  
Delta T = flow temperature – return temperature

Luckily the city heating converter takes care of these calculations; we don't need to worry about them and will only use GJ and m³.
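Just to illustrate the shape of the formula, here is a Python sketch. It uses a constant specific heat of roughly 4.186 MJ per m³ per °C for water, whereas the real calculation module corrects C for temperature, so treat it as an approximation only.

```python
# Q = M * C * delta-T, with a constant specific heat for water.
# The real heat meter corrects C for temperature; this sketch only
# shows the shape of the calculation, not the certified arithmetic.
C_MJ_PER_M3_K = 4.186  # approx. volumetric specific heat of water

def heat_energy_gj(volume_m3, temp_in_c, temp_out_c):
    """Energy extracted from the city heating water, in GJ."""
    delta_t = temp_in_c - temp_out_c
    return volume_m3 * C_MJ_PER_M3_K * delta_t / 1000.0  # MJ -> GJ

# 1 m3 of water cooled from 56.3 C (flow) to 31.9 C (return):
print(round(heat_energy_gj(1.0, 56.3, 31.9), 3))  # ~0.102 GJ
```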

**Convert heat to m3.**

To get a clearer picture of your actual consumption, it can be useful to convert consumption in GJ to m³ of gas. You can do this with the following formula: heat in GJ x 32 = … m³ gas

**Domestic hot water**

Domestic hot water is measured with only a volume meter. Convert the measured volume in m³ (from the annual consumption / meter readings for hot tap water) to GJ with the following rule: hot tap water m³ x 0.21 = … GJ

We will use these GJ-to-m³ and m³-to-GJ formulas to translate the city heating readings into gas (m³) measurements, which we can then use in the Energy dashboard of Home Assistant.
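Both rules of thumb in Python form. The factors 32 and 0.21 are the approximations used above, not exact physical constants:

```python
# Rule-of-thumb conversions from the text above. The factors are
# approximations, not exact physical constants.
GAS_M3_PER_GJ = 32          # heat GJ x 32 = ... m3 gas equivalent
GJ_PER_HOT_WATER_M3 = 0.21  # hot tap water m3 x 0.21 = ... GJ

def heat_gj_to_gas_m3(energy_gj):
    """Express city heating energy as an equivalent volume of gas."""
    return energy_gj * GAS_M3_PER_GJ

def hot_water_m3_to_gj(volume_m3):
    """Estimate the energy content of metered domestic hot water."""
    return volume_m3 * GJ_PER_HOT_WATER_M3

print(heat_gj_to_gas_m3(2.5))  # 80.0 m3 gas equivalent
print(hot_water_m3_to_gj(10))  # 2.1 GJ
```

The first conversion is exactly what the `CH_to_Gas` template sensor applies later in this article.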

## What do we need to read city heating values?

To read out the measurements you need the following hardware.

### IR Schreib/Lesekopf USB (Optokopf)

An IR Schreib/Lesekopf is an infrared read/write USB device from Germany. I ordered it directly from the manufacturer and it arrived in the Netherlands in 4 days. The price at the time was about 45 euros: [IR Schreib/Lesekopf USB (Optokopf)-ART0027 (weidmann-elektronik.de)](https://shop.weidmann-elektronik.de/index.php?page=product&info=24)

**UPDATE:** In the comments of this article I got a tip from Erik about a cheaper variant from AliExpress. Click here to order from [AliExpress](https://s.click.aliexpress.com/e/_DCnUMMf); it costs around 20-25 euros, though delivery will take a bit longer than ordering from Germany.

![Raspberry Pi (RPI)](/images/2022/01/image-5.webp)

### Raspberry Pi (RPI)

A Raspberry Pi is a small, cheap, low-energy micro computer. The IR sensor is connected to the RPI through USB; the RPI runs Linux and executes the scripts that read the measurements from the Kamstrup and send them over to Home Assistant.

![Raspberry Pi (RPI)](/images/2022/01/image-6.webp)

You can buy a Raspberry Pi at almost every electronics store. Here's a direct link for [Amazon](https://www.amazon.nl/Raspberry-Model-Mainboard-Polig-Microsd-Geheugenkaartsleuf/dp/B00LPESRUK/ref=sr_1_3?__mk_nl_NL=%C3%85M%C3%85%C5%BD%C3%95%C3%91&crid=3GZQLCTDEVRY0&keywords=rapsberry+pi&qid=1643659330&rnid=16579970031&s=electronics&sprefix=rapsberry+pi%2Caps%2C56&sr=1-3). Perhaps you have an old one lying around; this can run on basically any RPI. You don't need a fancy shiny fast new one :)

Make sure that you have a [microSD card](https://amzn.to/35Iq7at) (min 8GB) and a powerful [USB charger with a micro USB cable](https://amzn.to/3GnrBUc).

## Let's start the build

### 1\. Install Raspbian on the Raspberry Pi

Follow the instructions on the Raspberry.org page on [Installing Raspbian with NOOBS](https://projects.raspberrypi.org/en/projects/noobs-install)

### 2\. Install and configure the Mosquitto MQTT broker add-on in Home Assistant

Follow the steps on this GitHub page to configure the MQTT broker in Home Assistant.

[addons/DOCS.md at master · home-assistant/addons · GitHub](https://github.com/home-assistant/addons/blob/master/mosquitto/DOCS.md)

### 3\. Install the scripts to read out the measurements from the Kamstrup

My city heater is a Kamstrup Multical 402. I had tried to find a solution to read out the heater before, but never with success. During my recent search I stumbled on a great GitHub repository from [Matthijs Visser](https://github.com/matthijsvisser/kamstrup-402-mqtt).

He created a project that provides a Python library for communicating with the Kamstrup Multical 402 heat meter. The configured parameters are read from the meter at a set interval and published as MQTT messages. On his [GitHub](https://github.com/matthijsvisser/kamstrup-402-mqtt) page you can find all the steps to install his script, including the prerequisites. I do want to provide a few additional details for the less technical people who want to build this.

I'm not going into too much detail, but I want to provide you enough to get started.

1.  Get SSH access to your RPI (here's an [article](https://phoenixnap.com/kb/enable-ssh-raspberry-pi) that explains how to do this)
2.  Install the prerequisites for the script to work. The GitHub page of the script lists them: you need to install pyserial, paho-mqtt, and PyYAML. You can do this using the following commands.

```
sudo su
apt-get update
apt-get install python3-pip
pip3 install pyserial
pip3 install paho-mqtt
pip3 install pyyaml
```

3.  Download (clone) the scripts

You now need to download the scripts from the GitHub repository. You do this by installing Git and cloning the repository using the following commands.

```
apt-get install git
git clone https://github.com/matthijsvisser/kamstrup-402-mqtt.git
```

4.  Configure the script

Read the [GitHub](https://github.com/matthijsvisser/kamstrup-402-mqtt) instructions from Matthijs. In short, this is what you need to do: open the config.yaml and configure your MQTT broker and the parameters you want to send.

```
nano config.yaml
```

In the config.yaml, set the IP of the MQTT broker you configured in step 2.

```
mqtt:
    host: 192.168.0.3
    port: 1883
    client: kamstrup
    topic: kamstrup
    qos: 0
    retain: False
    authentication: False

serial_device:
    com_port: /dev/ttyUSB0
kamstrup:
    parameters:
    - energy
    - volume
    - temp1
    - temp2
    - tempdiff

```

5.  Place the IR reader on the Kamstrup and verify the data.

Find the right place to put the IR reader. It needs to be located right across from the two IR LEDs exposed on the Kamstrup. This is very precise work: you need to place the IR reader and execute the script while watching the log. Again, this is all documented very well on Matthijs's GitHub page. If everything goes correctly you should see something like this in your log.

![6. Run the script as service](/images/2022/01/image-7.webp)

6.  Run the script as service

Follow the steps on Matthijs's GitHub to run the script as a service -> [instructions](https://github.com/matthijsvisser/kamstrup-402-mqtt#running-as-a-systemd-service).

## Create sensors in Home Assistant

Perfect, we're all set. The RPI is sending the readings to the MQTT broker. We can now go into the configuration.yaml, read the values from MQTT, and translate them into sensors.

In the configuration of the script we defined the topic of the MQTT messages as kamstrup. The following JSON values are posted to the MQTT broker on the topic kamstrup/values.

```
{
  "energy": 170.127,
  "volume": 1212.508,
  "temp1": 56.300000000000004,
  "temp2": 31.900000000000002,
  "tempdiff": 24.400000000000002,
  "flow": 7
}
```
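The value_template entries in the next step simply pick fields out of this JSON payload. In Python terms, this is roughly what Home Assistant's templating does (a sketch, not its actual code):

```python
import json

# Example payload as published on the kamstrup/values topic
payload = '{"energy": 170.127, "volume": 1212.508, "temp1": 56.3, "temp2": 31.9, "tempdiff": 24.4, "flow": 7}'
value_json = json.loads(payload)

energy_gj = value_json["energy"]    # {{ value_json.energy }}
tempdiff = value_json["tempdiff"]   # {{ value_json.tempdiff }}
# The CH_to_Gas sensor later applies: {{ value_json.energy | float * 32 }}
gas_m3 = float(value_json["energy"]) * 32

print(energy_gj)           # 170.127
print(round(gas_m3, 3))    # 5444.064
```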

Open your configuration.yaml and add the following MQTT sensors.

```
  - platform: mqtt
    name: "CH_Consumed_Energy"
    state_topic: "kamstrup/values"
    value_template: "{{ value_json.energy }}"
    unit_of_measurement: "GJ"
  - platform: mqtt
    name: "CH_Consumed_Water"
    state_topic: "kamstrup/values"
    value_template: "{{ value_json.volume }}"
    unit_of_measurement: "m³"
  - platform: mqtt
    name: "CH_Temperature_in"
    state_topic: "kamstrup/values"
    value_template: "{{ value_json.temp1 }}"
    unit_of_measurement: "°C"
  - platform: mqtt
    name: "CH_Temperature_out"
    state_topic: "kamstrup/values"
    value_template: "{{ value_json.temp2 }}"
    unit_of_measurement: "°C"
  - platform: mqtt
    name: "CH_Temperature_diff"
    state_topic: "kamstrup/values"
    value_template: "{{ value_json.tempdiff }}"
    unit_of_measurement: "°C"
  - platform: mqtt
    name: "CH_Current_flow"
    state_topic: "kamstrup/values"
    value_template: "{{ value_json.flow }}"
    unit_of_measurement: "l/uur"

```

This will provide you with a number of sensors you can use in your dashboards. Restart Home Assistant and execute the script on the RPi. Now check in the Home Assistant Developer Tools whether the states are updated.

![Translate GJ into gas that we can use in the Energy Dashboard](/images/2022/02/image.webp)

### Translate GJ into gas that we can use in the Energy Dashboard

Add the following MQTT sensor to the configuration.yaml, directly under the other sensors you just created.

```
  - platform: mqtt
    name: "CH_to_Gas"
    state_topic: "kamstrup/values"
    # apply formula to value to translate to gas
    value_template: "{{ value_json.energy | float * 32 }}"
    unit_of_measurement: "m³"
    state_class: 'total_increasing'
    # Set device class to gas so we can use the sensor in the energy dashboard
    device_class: 'gas'

```

In this sensor we're applying the GJ-to-m³-gas formula to the value and setting the device_class to gas. Restart Home Assistant and add the gas sensor as the Gas Consumption source in the energy settings (Configuration -> Energy).
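For reference, the template above simply multiplies the energy reading by 32, the GJ-to-m³ conversion factor used in this article (check the calorific value your own supplier publishes). A minimal sketch in plain Python:

```python
GJ_TO_M3_GAS = 32  # conversion factor used in the template sensor above

def gj_to_gas_m3(energy_gj: float) -> float:
    """Convert city-heating energy (GJ) to an equivalent volume of gas (m³)."""
    return energy_gj * GJ_TO_M3_GAS

# The example payload's energy reading of 170.127 GJ corresponds to:
print(gj_to_gas_m3(170.127))  # 5444.064
```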

![The results](/images/2022/02/image-1.webp)

## The results

Now wait a few hours and watch your city heating consumption appear in the Energy dashboard of Home Assistant. You'll get insight into how much heating you use.

![And also the Gas will be added to the very cool Energy Distribution widget.](/images/2022/02/image-3.webp)

And also the Gas will be added to the very cool Energy Distribution widget.

![Home Assistant Energy Distribution widget](/images/2022/02/image-2.webp)

Home Assistant Energy Distribution widget

Hope this article helped. Let me know if you have any questions. Big thanks to [Matthijs Visser](https://github.com/matthijsvisser) for his great work on the scripts to read out the values.

Happy automating!

## Short summary in Dutch

Dit is de oplossing om Eneco stadsverwarming en ook andere stadsverwarmingsmeters uit te lezen. Zelf heb ik een Kamstrup 402 en hiermee werkt deze oplossing perfect. Het artikel zou te volgen moeten zijn voor iedereen met technische interesse en voor mensen die het leuk vinden om zelf te knutselen.

Het grote voordeel is dat je hiermee inzicht krijgt in je verbruik van stadsverwarming. De sensor leest tot 3 cijfers achter de komma uit. Hierdoor kun je per uur goed zien wat je verbruikt; het Home Assistant Energy dashboard is hier het perfecte middel voor.

Ik hoop dat het helpt!
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Home Assistant]]></category>
      <category><![CDATA[Home Automation]]></category>
    </item>
    <item>
      <title><![CDATA[How to access the Align command with keyboard shortcuts in PowerPoint]]></title>
      <link>https://www.pieterbrinkman.com/2022/01/13/how-to-access-arrange-comments-with-keyboard-shortcuts-in-powerpoint/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2022/01/13/how-to-access-arrange-comments-with-keyboard-shortcuts-in-powerpoint/</guid>
      <pubDate>Thu, 13 Jan 2022 00:00:00 GMT</pubDate>
      <description><![CDATA[I make a lot of presentations using PowerPoint and I'm also on the receiving end of a lot of presentation. I truly believe that a good presentations is not…]]></description>
      <content:encoded><![CDATA[
I make a lot of presentations using PowerPoint, and I'm also on the receiving end of a lot of presentations.

I truly believe that a good presentation is not made with slides but with a good story and talk track. However, the slides need to be clean and not distracting. One thing that always distracts me is misalignment of visuals: icons not being centered or columns not evenly distributed.

## PowerPoint Align objects functions

![PowerPoint has a great feature called Align to help you address this. You can find the Align options under the Arrange…](/images/2022/01/arrange-objects-in-powerpoint.webp)

PowerPoint has a great feature called Align to help you address this. You can find the Align options under the Arrange button located in the Drawing section underneath the Home Tab.

Using these Align options will save you a lot of time and increase the quality of your presentations. I use them every day, but unfortunately I couldn't find a way to access them using keyboard shortcuts. In this post I'll share the workaround I use to access the Align options with keyboard shortcuts.

After a short search on the internet I found that you can extend the Quick Access Toolbar at the top of PowerPoint, and that Quick Access Toolbar menu items can be accessed using keyboard shortcuts.

## Adding *Align Selected Objects* to the Quick Access Toolbar

The Quick Access Toolbar sits on the left of the top bar in PowerPoint.  
By default it contains the AutoSave, Save and Undo options. Adding items to the Quick Access Toolbar is actually really easy, you just need to know that you can :).

Click on the downwards arrow icon at the end of the Toolbar, the Customize Quick Access Toolbar menu will now show. In this menu click *More Commands*.

![Adding Align Selected Objects to the the Quick Access Bar](/images/2022/01/cutomize-quick-access-toolbar-powerpoint2.webp)

In the following dialog you can change the buttons in the Quick Access Toolbar. In the *Choose commands from* drop-down select the *Home Tab*, then select *Align \[Align Objects\]* and click Add to add it to the toolbar.

Click OK to close the dialog; the Align button should now be added to your Quick Access Toolbar.

![You can access the Align Menu by clicking on it. You can also access it using keyboard shortcuts. To do this you need to…](/images/2022/01/align-in-the-quick-access-toolbar.webp)

You can access the Align Menu by clicking on it. You can also access it using keyboard shortcuts. To do this you need to find out what number the menu item has been allocated by PowerPoint.

Press the **ALT** key on your keyboard. The ribbon will now show all kinds of numbers and letters. Note the number that the Align menu has been given.

![Adding Align Selected Objects to the the Quick Access Bar](/images/2022/01/powerpoint-ribbon-keyboard-shortcuts.webp)

![Adding Align Selected Objects to the the Quick Access Bar](/images/2022/01/image.webp)

In my case I can access the Align menu with the number 6. Let's start using it. Select a few shapes that you want to align and press ALT+6. The Align menu will expand, and all the align options become available. The menu shows the keyboard shortcut for each option; use the corresponding letter for the alignment that you need. For example, if you want to Distribute your selected shapes Horizontally you use the following keyboard commands.

-   ALT+6 (to access the align menu in the Quick Access Toolbar)
-   H (to execute the Distribute Horizontally command)

That's it! Start using alignment features in PowerPoint, it will make your slides look a lot better.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Daily Tip]]></category>
      <category><![CDATA[Productivity]]></category>
    </item>
    <item>
      <title><![CDATA[2022 update: Flash ESPhome on ESP32 ESP2866 NodeMCU board]]></title>
      <link>https://www.pieterbrinkman.com/2022/01/01/2022-update-flash-esphome-on-esp32-esp2866-nodemcu-board/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2022/01/01/2022-update-flash-esphome-on-esp32-esp2866-nodemcu-board/</guid>
      <pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
      <description><![CDATA[UPDATE January 2026: This article is now outdated. ESPHome has introduced the new ESPHome Device Builder with a simplified workflow. Read the 2026 Guide:…]]></description>
      <content:encoded><![CDATA[
***UPDATE January 2026*: This article is now outdated. ESPHome has introduced the new ESPHome Device Builder with a simplified workflow. Read the [2026 Guide: Getting Started with ESPHome Device Builder](/2026/01/29/2026-getting-started-esphome-device-builder-home-assistant/) for the latest instructions.**

A lot has changed since I wrote my articles around ESPHome. One of the major changes is that ESPHome is now part of Home Assistant core and comes with a nicely integrated user interface.

## What are ESP32 and ESP8266 NodeMCU boards?

ESP boards are low-cost Wi-Fi chips with built-in flash, allowing you to build a single-chip device capable of connecting to Wi-Fi. Newer versions like the ESP32 also provide BLE (Bluetooth Low Energy), and there’s a wide variety of boards you can use.  

With ESP you can easily make smart solutions for home automation. You can buy them for about 4-9 dollar/euro on [AliExpress](/recommends/esp8266-nodemcu-aliexpress/) or for a bit more with faster delivery on [Amazon](/recommends/esp-amazon-com/).

Read more about ESP boards in my [introduction to ESP boards article](/2020/12/14/introducing-esp-boards/)

This article is an updated version of the [Compile and flash your ESP with ESPhome](/2020/12/14/flash-esp-chip-with-esphome-node-firmware/) article.

## Create a new ESPHome node from HomeAssistant

First we need to define a new ESP home node. With ESPhome being part of Home Assistant you can now do this directly from the Home Assistant UI.

-   Login to Home Assistant
-   In the main menu click ESPHome  
    

![Install the HomeAssistant Addon if you don't see the ESPhome menu item](/images/2021/12/image.webp)

Install the HomeAssistant Addon if you don't see the ESPhome menu item

-   In ESPhome click the "+ NEW DEVICE" button in the bottom left corner
-   Provide your node name and WiFi credentials for the WiFi network that the chip needs to connect with and click next.  
    

![Select the type of ESP that you're using. The version of the board is printed on the WiFi chip on the ESP. Click next.](/images/2021/12/image-2.webp)

-   Select the type of ESP that you're using. The version of the board is printed on the WiFi chip on the ESP. Click next.
    -   ESP32 is the board that contains Bluetooth
    -   ESP8266 is the generic board.
-   Congratulations, you have now created your first ESPHome node. Click Install.

## Initial flash of ESPhome

We need to start by compiling the firmware that we’ll use to flash the ESP chip. You only need to do this once: as soon as ESPHome is installed on your chip, you can update the firmware over the WiFi connection with a so-called "over the air" (OTA) installation.

ESPHome provides you with a number of options to install the firmware:

-   **Wirelessly**  
    This will work after you have finalized the initial flash of ESPhome on the chip.
-   **Plugged-in directly from this computer**  
    This method is explained below.
-   **Plugged in from your Home Assistant Server**  
    Never used this method.
-   or **Manual Download**  
    This method is explained below.

Below I describe both the "Plugged-in directly from this computer" method and the Manual Download method.

### Install ESPhome using the Plugged-in directly from this computer Method

The easiest way to do the initial flash is by using the "Plugged-in directly from this computer" method. With this method you attach the ESP via USB to your computer and flash it directly from the browser.

**Note:** This method only works if your Home Assistant runs securely (HTTP**S**) in the browser. If you use NabuCasa you can use the secure public URL, which you can find under Configuration -> Home Assistant Cloud, under Remote Control.

Assuming you're accessing Home Assistant over HTTPS, click "Plugged-in directly from this computer". A browser pop-up will show; you need to allow the browser to connect to the USB COM port. Select the COM port and click Connect. ESPHome will now flash the chip from the browser. How amazing and how easy!

![Install ESPhome using the Manual Download method](/images/2021/12/image-3.webp)

### Install ESPhome using the Manual Download method

If you don't have your Home Assistant running under HTTPS you might want to use the Manual Download method. This method will allow you to compile and download the firmware to your computer. After that you can use the ESPFlasher tool to flash the ESP.

In the Install menu click Manual Download. The compilation will now start and the .bin file will download when ready.

#### Flash ESP with compiled firmware (.bin)

Now we need to flash the ESP chip with your compiled firmware.

-   Go to the esphome-flasher GitHub page and download the flasher for the OS you’re using. There is an ESPHome flasher tool for macOS, Ubuntu and Windows:  
    [https://github.com/esphome/esphome-flasher/releases](https://github.com/esphome/esphome-flasher/releases)
-   Connect your ESP board with USB to your laptop.
-   Open the flasher tool
    
    -   **Serial port**: select COM port where the board is connected (there is probably only one option 😊).
    
    -   **Firmware**: Browse to the location where you downloaded your compiled firmware and select your firmware.
    
    -   Click **Flash ESP** and wait

![The ESP will be flashed now, you can follow the progress in the console window. When finished writing the firmware the ESP…](/images/2020/12/image-21.webp)

-   The ESP will be flashed now, you can follow the progress in the console window. When finished writing the firmware the ESP will restart and connect to your WiFi.  
    

![The ESP will be ready after it states that it’s ready for Over-The-Air Updates and that the API server is ready.](/images/2020/12/image-20.webp)

The ESP will be ready after it states that it’s *ready for Over-The-Air Updates* and *that the API server is ready*.

## Configure device in Home Assistant

Home Assistant will automatically recognize the ESP on the network and notify you about the newly found device. Click on the notification, or go to Configuration, Integrations. Find the newly discovered device and click Configure.

![Home Assistant will now add your ESP as a new device, there is not much you can do with the device as there are no…](/images/2020/12/image-19.webp)

Home Assistant will now add your ESP as a new device. There is not much you can do with it yet, as there are no entities to control.

> That's it! No matter which method you used. You are now ready to start tinkering and build amazing WiFi powered solutions using ESPHome. Don't forget to share your projects!

## My ESPHome Projects

Read more about how I use ESPHome in my smart home:

-   [Make your fireplace smart with ESPHome](/2020/12/14/bellfire-home-automation-project/)
-   [A cheap Air Quality sensor using ESPHome](/2021/02/03/build-a-cheap-air-quality-meter-using-esphome-home-assistant-and-a-particulate-matter-sensor/)
-   [Measure your water usage using ESPHome](/2022/02/02/build-a-cheap-water-usage-sensor-using-esphome-home-assistant-and-a-particulate-matter-sensor/)
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[ESPHome]]></category>
      <category><![CDATA[Home Assistant]]></category>
      <category><![CDATA[Home Automation]]></category>
    </item>
    <item>
      <title><![CDATA[Guest speaker Sitecore Strategy Lunch]]></title>
      <link>https://www.pieterbrinkman.com/2021/11/22/sitecore-strategy-lunch/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2021/11/22/sitecore-strategy-lunch/</guid>
      <pubDate>Mon, 22 Nov 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[I've been invited to participate in Sitecore Strategy lunch of December. The strategy lunch is a informal table discussion. I will be talking about…]]></description>
      <content:encoded><![CDATA[
I've been invited to participate in the Sitecore Strategy Lunch of December. The strategy lunch is an informal table discussion. I will be talking about Sitecore's strategy and roadmap; see more details below. You can [register here](https://www.meetup.com/sitecore-strategy-lunch-north-america/events/282173538/).

"Sitecore has made a major pivot towards becoming a Composable DXP company over the past few months. Do you have questions about their roadmap and what it means for your investment in the Sitecore platform? Here's your chance to get some answers!

December 2nd at 12:00 EST, Sitecore Strategy MVP Jaina Baumgartner hosts the Sitecore Strategy Lunch with special guest Pieter Brinkman, Sitecore's Senior Director of Technical Marketing, in a Sitecore Product Roadmap AMA (ask me anything) session and discussion.

Bring your colleagues, bring your decision-makers and get the straight goods right from the source! See you soon!"
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Composable DXP]]></category>
      <category><![CDATA[Public Speaking]]></category>
      <category><![CDATA[Sitecore]]></category>
    </item>
    <item>
      <title><![CDATA[Konaverse podcast]]></title>
      <link>https://www.pieterbrinkman.com/2021/11/15/konaverse-podcast/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2021/11/15/konaverse-podcast/</guid>
      <pubDate>Mon, 15 Nov 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[I had the pleasure and honor to participate in the Konaverse Podcast series. The Konaverse Podcast series is not all about work, technology and industry it a…]]></description>
      <content:encoded><![CDATA[
I had the pleasure and honor to participate in the [Konaverse Podcast](https://konaverse.konabos.com/) series.

[![pieter-brinkman-konaverse-podcast](/images/2021/11/pieter-brinkman-konaverse-podcast.webp)](https://konaverse.konabos.com/episode/pieter-brinkman-on-the-netherlands-sitecore-windsurfing-and-home-automation)

The Konaverse Podcast series is not all about work, technology and industry; it's a show about technology, work, careers and life. You get to know more about the people and what drives them.

We talked about everything from growing up in The Netherlands, family life, my career at Sitecore, windsurfing and home automation.

Listen to the full podcast directly on [Spotify](https://open.spotify.com/show/5IwRcDd1ZxSohMFbwUoDjd), [Google Podcast](https://podcasts.google.com/feed/aHR0cHM6Ly9rb25hdmVyc2UubGlic3luLmNvbS9yc3M) or [Apple Podcast](https://podcasts.apple.com/us/podcast/konaverses-podcast/id1568217150?ls=1), or on the [Konaverse site](https://konaverse.konabos.com/episode/pieter-brinkman-on-the-netherlands-sitecore-windsurfing-and-home-automation).

I really enjoyed participating; it was a very pleasant experience. The questions around growing up make you think back on how good a life you've had so far and appreciate what you've got. It's good to reflect and appreciate, something you sometimes forget with the busy lives we all have.

Thanks to Akshay, Matthew and the [Konabos](https://www.konabos.com/) team.

[Pieter Brinkman on The Netherlands, Sitecore, Windsurfing, and Home Automation |Konaverse Podcast | Konaverse Podcast | Technology, Work, Career, And Life Related Podcasts (konabos.com)](https://konaverse.konabos.com/episode/pieter-brinkman-on-the-netherlands-sitecore-windsurfing-and-home-automation)
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Podcast]]></category>
      <category><![CDATA[Public Speaking]]></category>
      <category><![CDATA[Publications]]></category>
    </item>
    <item>
      <title><![CDATA[Sitecore Composable DXP - Digital Bytes]]></title>
      <link>https://www.pieterbrinkman.com/2021/08/29/sitecore-composable-dxp-digital-bytes/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2021/08/29/sitecore-composable-dxp-digital-bytes/</guid>
      <pubDate>Sun, 29 Aug 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[This week I had the pleasure to talk with Himadri from Nishtech about Sitecore's SaaS journey and how we are moving to the Composable DXP.…]]></description>
      <content:encoded><![CDATA[
This week I had the pleasure to talk with Himadri from Nishtech about Sitecore's SaaS journey and how we are moving to the Composable DXP.

https://vimeo.com/596431018

Nishtech Digital Bytes - Sitecore Composable DXP

## Nish Tech digital bytes outline

*When you think of a traditional digital experience platform, you probably think of a monolithic, tightly coupled full-stack suite from a single vendor that allows you to manage the entire digital experience, from content management to digital marketing and analytics. But maybe you don’t need all that. The concept of a composable DXP allows you to create a custom solution using technology that fits your needs and works with your existing processes and infrastructure.  
  
In this episode we’re excited to welcome Sitecore Senior Director of Technical Marketing Pieter Brinkman to discuss his thoughts on the composable DXP and how it fits into Sitecore’s roadmap.*

Thanks again to [Nishtech](https://www.nishtech.com/Blog/2021/September/digital-bytes-sitecore-composable-dxp) and [Himadri](https://mobile.twitter.com/himadric) for the lovely conversation.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Composable DXP]]></category>
      <category><![CDATA[Portfolio]]></category>
      <category><![CDATA[Public Speaking]]></category>
      <category><![CDATA[Publications]]></category>
      <category><![CDATA[Sitecore]]></category>
    </item>
    <item>
      <title><![CDATA[Introduction to the Composable DXP]]></title>
      <link>https://www.pieterbrinkman.com/2021/07/13/introduction-to-the-composable-dxp/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2021/07/13/introduction-to-the-composable-dxp/</guid>
      <pubDate>Tue, 13 Jul 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[If you follow the Digital Experience Platform DXP and web industry you've seen that the industry is slowly moving away from Platform DXP’s to something…]]></description>
      <content:encoded><![CDATA[
If you follow the Digital Experience Platform (DXP) and web industry you've seen that the industry is slowly moving away from Platform DXP’s to something called a Composable DXP.

In this article I want to address a few questions that I get asked a lot, including:

1.  What is a Composable DXP?
2.  What is the difference between a Composable DXP and a platform DXP?
3.  What are the key benefits of a Composable DXP?

Here we go!

## What is a Digital Experience Platform (DXP)?

Industry research and advisory firm [Gartner](https://www.gartner.com/en/marketing/glossary/digital-experience-platform-dxp-) defines DXP as:

“A digital experience platform (DXP) is an integrated set of core technologies that support the composition, management, delivery and optimization of contextualized digital experiences.”

A DXP consists of three main pillars:

1.  **Content** is the main foundation of the DXP.  
    It’s the starting point and the fuel of everything. You need content to drive digital experiences. The content pillar is all about content strategy, creation, collaboration and making the content available for consumption.
2.  The **Experience** pillar is where we build digital experiences.  
    This includes tools for analytics, marketing automation, personalization and optimization. The content powers the experiences.
3.  The final pillar is **Commerce**.  
    With the Commerce pillar we add the possibility of conversion to experiences.

A good DXP solution combines Content, Experience and Commerce to maximize impact.

That brings us to the next questions; What is a Composable DXP and what is a Platform DXP and what are the differences?

## What is a Composable DXP? 

It is all in the name: **Composable** DXP. With a Composable DXP you COMPOSE a DXP that addresses current business requirements, challenges and opportunities by combining content sources with a best-of-breed marketing stack, while not being restricted by technology or a single vendor.

With a Composable DXP every feature or requirement can be an individual standalone product, allowing the customer to tailor the DXP solution around existing processes, infrastructure, and marketing stack.  
The APIs of the cloud solutions, the content sources and the integrations expose the building blocks of the unique tailored DXP solution.

### Breaking down the Composable DXP

A composable DXP consists of multiple parts: content sources, cloud solutions, the solution itself, and hosting.

![Composable DXP architecture](/images/2022/04/Composable-DXP-Architecture.webp)

Composable DXP architecture

#### Content Sources

A composable DXP has multiple **content sources**. This could be your headless CMS, your headless commerce solution, or any internal system that contains content and has an API to expose and/or interact with that content.

> An application programming interface (API) is an interface that provides programmatic access to service functionality and data within an application or a database. It can be used as a building block for the development
> 
> Gartner

These content sources are the backbone of the content pillar of your digital experience.

#### Cloud solutions

The cloud solutions are best of breed API first SaaS solutions. These solutions can be from multiple vendors. Cloud solutions used in a Composable DXP can contain existing Marketing Stack and new solutions. You will combine multiple cloud solutions to cover the requirements the customer needs to drive success. For example, solutions for personalization, analytics, automation testing, forms, authentication, etc.

Some of these different DXP components might be provided by the same vendor, but typically in this approach the customer is choosing products from a variety of different vendors.

#### Solution

Then we have the solution. This is where you combine the content sources with best of breed cloud solutions. Composing the multiple sources and cloud solutions to achieve a Composable DXP with the feature set needed to drive success.

In this API first architecture you can build the Composable DXP in the content mesh with the programming language or framework you prefer. It is all about technology freedom.

#### Hosting

You can deploy your solution to your favorite hosting vendor and integrate with current deployment processes. Again, it is all about freedom of choice and removing technology restrictions.

## What is a Platform DXP?

With a Platform DXP all of the different functions and features are bolted on top of each other in a single architecture, tightly coupled together.

![Platform DXP architecture](/images/2022/04/platform-DXP-architecture.webp)

Platform DXP architecture

Of course, there may be microservices under the hood powering the different elements, but they are not separate stand-alone products that can work on their own.  
With this approach, you are buying the full stack of digital marketing solutions and all capabilities from a single vendor.

This is how Sitecore, with Sitecore XP, and other platform vendors work today.  
It is the simplicity and complete feature set that attract a lot of customers to a stack like this, but the complexity of hosting and the lack of flexibility are what hold others back.

With Sitecore XP we chose to build the features natively in the platform; other industry vendors chose to acquire products and stitch them together.

## Benefits of a composable DXP

A composable DXP provides the perfect balance between ease of use for the business and technology flexibility.

![Benefits of the Composable DXP](/images/2022/04/benefits-of-the-composable-DXP.webp)

Benefits of the Composable DXP

It provides faster time to value. The API-first implementation leads to faster implementations and deployments. The solution is based on industry standards, reducing the need for vendor-specific expertise and making it easier to find talent for your development teams.

You only implement best-of-breed products covering the features that are needed to solve the business problems of today.

It’s fully customer centric. Instead of fitting your requirements to platform features, you’re selecting the solutions that you need.

Not having a full platform DXP provides agility to quickly adjust to changing requirements, trends and world changing events.

While building out your Composable DXP you do not need to start from scratch. You don’t need to replace a marketing stack that drives business value; the only requirement is that it has an API you can integrate with. This way you can gradually migrate to a Composable DXP and benefit from your efforts along the way.

In a composable DXP every solution is unique. A fully tailored solution using the unique combinations of content source, cloud applications, technologies, programming languages and hosting options.

You build out your composable DXP with the technologies and programming languages of your choice, without restrictions from vendors.

It is all about **complete technology freedom** and **vendor freedom**.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Composable DXP]]></category>
      <category><![CDATA[Publications]]></category>
      <category><![CDATA[Sitecore]]></category>
    </item>
    <item>
      <title><![CDATA[Keynote Virtual SUGCON on Sitecore & SaaS]]></title>
      <link>https://www.pieterbrinkman.com/2021/04/30/keynote-virtual-sugcon-on-sitecore-saas/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2021/04/30/keynote-virtual-sugcon-on-sitecore-saas/</guid>
      <pubDate>Fri, 30 Apr 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[I had the honor of opening SUGCON together with Rob. We presented on the future Strategy of Sitecore and our SaaS offering. Also did an introduction to the…]]></description>
      <content:encoded><![CDATA[
I had the honor of opening SUGCON together with Rob. We presented on the future Strategy of Sitecore and our SaaS offering.

Also did an introduction to the Composable DXP and talked about the benefits. We ended up with a demo of the Sitecore products.

https://www.youtube.com/watch?v=by5HcZKcfNo

Please note that this presentation took place at 02:00 at night :)

Want to learn more about the Composable DXP, the benefits and the difference between platform and composable DXP? I also wrote an article that does a full [introduction of the Composable DXP](/2021/07/13/introduction-to-the-composable-dxp/).
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[Composable DXP]]></category>
      <category><![CDATA[Public Speaking]]></category>
      <category><![CDATA[Sitecore]]></category>
    </item>
    <item>
      <title><![CDATA[Adding centerfire, increase and decrease to your smart Bellfire fireplace solution]]></title>
      <link>https://www.pieterbrinkman.com/2021/03/22/adding-centerfire-increase-and-decrease-to-your-smart-bellfire-fireplace-solution/</link>
      <guid isPermaLink="true">https://www.pieterbrinkman.com/2021/03/22/adding-centerfire-increase-and-decrease-to-your-smart-bellfire-fireplace-solution/</guid>
      <pubDate>Mon, 22 Mar 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[In a previous article series I provided an overview and step by step instruction how to make your Bellfire fireplace with Mertik Maxitrol controller smart.…]]></description>
      <content:encoded><![CDATA[
In a previous article series I provided an overview and step-by-step instructions on [how to make your Bellfire fireplace](/2020/12/14/bellfire-home-automation-project/) with Mertik Maxitrol controller smart. This project was presented during the Home Assistant Conference 2020.

In the article series and presentation I focused on the essential controls: basically turning the fireplace on and off. I received a number of requests to extend the controls to support increasing and decreasing the fire, and to activate the so-called center burner.

![Bellfire fireplace in action](/images/2020/12/fireplace.gif)

After doing some research and trial and error I've found the relay combinations that add these features to your smart fireplace.

Make sure you have finished all the steps from the [fireplace home automation series](/2020/12/14/bellfire-home-automation-project/) and that you can turn your fireplace on and off using Home Assistant.

## Centerfire

Some types of Bellfire fireplaces support Centerfire. In short, Centerfire disables the outside flames, making the actual fire smaller. The benefit is that you can have a high flame without burning too much gas or producing too much heat.

To control the centerfire we need to match the following sequences with the relays:

-   Centerfire\_on: close contacts 2 and 3 simultaneously for 1 second
-   Centerfire\_off: close contacts 1 and 2 simultaneously for 1 second

We need to drive the relays in these sequences from the ESP board. We can do this by extending the ESP configuration: we'll add a centerfire switch that executes the sequences above when it is turned on and off.

Open ESPHome and click edit on the node to go to the configuration editor.

![ESPHome configuration editor](/images/2021/03/image-15.webp)

Scroll to the bottom of the editor and add the following configuration.

```
  - platform: template
    name: "Fireplace_centerfire"
    id: Fireplace_centerfire
    turn_on_action:
      # Centerfire on: close contacts 2 and 3 simultaneously for 1 second
      - switch.turn_on: IN2
      - switch.turn_on: IN3
      - delay: 1s
      - switch.turn_off: IN2
      - switch.turn_off: IN3
      # Report the new state back to Home Assistant
      - switch.template.publish:
          id: Fireplace_centerfire
          state: ON
    turn_off_action:
      # Centerfire off: close contacts 1 and 2 simultaneously for 1 second
      - switch.turn_on: IN1
      - switch.turn_on: IN2
      - delay: 1s
      - switch.turn_off: IN1
      - switch.turn_off: IN2
      - switch.template.publish:
          id: Fireplace_centerfire
          state: OFF
```
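Once uploaded, the centerfire switch shows up as a regular switch entity in Home Assistant, so you can trigger it from any script or automation. A quick sketch of a service call (the entity id below is an assumption based on the `Fireplace_centerfire` switch name; check yours in Developer Tools > States):

```
# Home Assistant service call sketch.
# The entity id is an assumption derived from the switch name above.
service: switch.turn_on
target:
  entity_id: switch.fireplace_centerfire
```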

## Increase and decrease fire

Increasing and decreasing the fire works in a similar way: to increase we need to close relay 1, and to decrease we need to close relay 3.

We don't want these switches to keep their state; each needs to act as a momentary toggle that switches back to the off state. This way you can increase or decrease the fire with every press of the switch.

We're going to add two switches to the config.

```
  - platform: template
    name: "Fireplace_increase"
    id: Fireplace_increase
    turn_on_action:
      # Increase fire: close relay 1 for 1 second, then release
      - switch.turn_on: IN1
      - delay: 1s
      - switch.turn_off: IN1

  - platform: template
    name: "Fireplace_decrease"
    id: Fireplace_decrease
    turn_on_action:
      # Decrease fire: close relay 3 for 1 second, then release
      - switch.turn_on: IN3
      - delay: 1s
      - switch.turn_off: IN3
```

You can tweak the amount of increase/decrease per press by changing the delay timing, or build an automation that presses the switch multiple times.
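As an illustration of that last option, a Home Assistant script could press the increase switch several times in a row to step the fire up multiple levels. This is a sketch, assuming the entity id `switch.fireplace_increase` (derived from the switch name above) and a short pause between presses:

```
# Home Assistant script sketch: step the fire up three levels.
# The entity id is an assumption based on the "Fireplace_increase"
# switch name; adjust it to match your own setup.
script:
  fireplace_step_up:
    sequence:
      - repeat:
          count: 3
          sequence:
            - service: switch.turn_on
              target:
                entity_id: switch.fireplace_increase
            # Wait for the 1s relay pulse to finish before the next press
            - delay: "00:00:02"
```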

That's it. Press upload and the new entities for controlling fire height and centerfire will automagically show up in Home Assistant. Now you can control the centerfire and fire height of your Bellfire fireplace using Home Assistant and ESPHome.

Let me know if you have any questions or extension requests. In the next article I'll share the Lovelace setup that I use within Home Assistant to control the fireplace.
]]></content:encoded>
      <author>pieter</author>
      <category><![CDATA[ESPHome]]></category>
      <category><![CDATA[Home Assistant]]></category>
      <category><![CDATA[Home Automation]]></category>
      <category><![CDATA[Make your Bellfire fireplace smart]]></category>
    </item>
  </channel>
</rss>