I had video links in my sources markdown files. Just plain URLs pointing to YouTube.
I wanted them to display properly on my sources page - with titles, durations, maybe channel names. Not manually typed. Automatically fetched.
That meant integrating the YouTube API. Which meant API keys. Which meant security. Which meant GitHub Secrets.
I'd set up GitHub Secrets once before - for my Azure deployment - but API keys felt different. More exposed. More ways to get it wrong.
I knew the basic pattern from work. Never commit credentials. Use environment variables locally. GitHub Secrets for CI/CD. But knowing the pattern and implementing it yourself are different things.
So I started in exploratory mode.
I opened a chat with Claude and started asking questions.
Not "build this feature for me." Questions to understand what I was actually trying to do and what the constraints were.
"I want to add video links to my sources. They're in frontmatter. I want titles to display automatically."
Claude asked me questions back. How's your site built? Static export or server-side? Where do the video links live? What format are they in?
This is the pattern from my last post about two modes of working. I'm not asking Claude to execute. I'm using it as a thinking partner to clarify what needs to happen.
Through the back-and-forth, the approach emerged:
Build-time fetching. My site is statically generated. No runtime API calls. Fetch everything during the build process.
Caching. Don't hit YouTube's API every single build. Cache the results. Only fetch new videos or refresh stale data.
YouTube Data API v3. Free tier gives 10,000 units per day. Video title fetches cost 1 unit each. More than enough headroom.
Security pattern. .env.local for local development. GitHub Secrets for CI/CD. Never commit the API key.
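Concretely, the build-time fetch looks roughly like this. This is a sketch only - the file name, the VideoMeta shape, and the error handling are my assumptions rather than the script that was actually written; the endpoint and field names come from the YouTube Data API v3 docs.

```typescript
// scripts/fetch-youtube.ts - illustrative sketch, not the real script.
// The key comes from the environment (.env.local locally, GitHub Secrets in CI),
// never from source control.
const API_KEY = process.env.YOUTUBE_API_KEY ?? '';
if (!API_KEY) {
  throw new Error('YOUTUBE_API_KEY is not set');
}

interface VideoMeta {
  id: string;
  title: string;
  channel: string;
  duration: string; // ISO 8601, e.g. "PT15M52S"
}

// One videos.list call per video - each costs 1 quota unit against the 10,000/day free tier.
async function fetchVideoMeta(videoId: string): Promise<VideoMeta | null> {
  const url = new URL('https://www.googleapis.com/youtube/v3/videos');
  url.searchParams.set('part', 'snippet,contentDetails');
  url.searchParams.set('id', videoId);
  url.searchParams.set('key', API_KEY);

  const res = await fetch(url);
  if (!res.ok) return null;

  const data = await res.json();
  const item = data.items?.[0];
  if (!item) return null;

  return {
    id: videoId,
    title: item.snippet.title,
    channel: item.snippet.channelTitle,
    duration: item.contentDetails.duration,
  };
}
```

At 1 unit per call, even a few hundred videos per build sits nowhere near the 10,000-unit daily cap.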
But I wasn't sure about the security. I'd done GitHub Secrets for Azure deployment, but that was following a template. This felt more exposed - a third-party API key that I was getting myself, not something handed to me by IT.
I asked Claude for links to the official documentation - the YouTube Data API docs, GitHub's guide to secrets, and the Next.js environment variable docs.
Then I actually read them.
Not because Claude's suggestions seemed wrong. Because I was in new territory and needed to verify I understood what I was doing. The documentation confirmed the approach and filled in details Claude hadn't covered - like quota limits, rate limiting best practices, and how GitHub redacts secrets in logs.
Fifteen minutes of reading gave me confidence to proceed.
By the end of the exploratory phase, I had a clear approach, a hand-off prompt for Claude Code, and enough grounding from the documentation to trust the security setup.
Time to switch modes.
I handed off to Claude Code with the prompt we'd developed through exploration. Then I worked in parallel:
Claude Code: Implementing the fetch script, types, cache system, UI components, comprehensive tests.
Me: Getting my YouTube API key, creating .env.local, verifying my .gitignore, setting up GitHub Secrets.
The parallel work was deliberate. Claude Code was executing on the technical implementation. I was handling the parts that required decisions and verification - getting credentials, checking security setup, making sure nothing sensitive would leak.
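While I handled credentials, Claude Code was building the fetch-and-cache side. Conceptually the cache is simple - a sketch of the idea, assuming a fetched-at timestamp and the 30-day refresh window the site ended up using; the names are illustrative, not the actual implementation:

```typescript
// Sketch of the cache idea - field names are assumptions, not the real code.
interface CachedVideo {
  title: string;
  channel: string;
  duration: string;   // ISO 8601 as returned by the API, e.g. "PT15M52S"
  fetchedAt: string;  // ISO timestamp of the last successful fetch
}

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// Only hit the API for videos we have never seen or whose cached entry has gone stale.
function needsRefresh(entry: CachedVideo | undefined): boolean {
  if (!entry) return true;
  return Date.now() - new Date(entry.fetchedAt).getTime() > THIRTY_DAYS_MS;
}
```

Each build reads the cache first, fetches only what needs refreshing, and writes the merged result back - so most builds barely touch the API.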
Minutes 15-30:
- Claude Code starting on the fetch script, types, and tests
- Me getting the YouTube API key and creating .env.local
Minutes 30-60:
- npm run fetch-youtube - verified it pulled titles from YouTube
- npm run build - confirmed the site built without errors
- npm start - opened the sources page, clicked through, checked video links displayed correctly
- npm test independently - 586 tests passing, >98% coverage maintained

This is the verification step I've learned to always do. The automated tests are essential, but I also check the actual functionality myself. Click the buttons. Load the pages. Make sure it works the way a user would experience it.
Minutes 60-75:
- Added YOUTUBE_API_KEY to GitHub repository secrets (building on my Azure deployment experience)
- git status - confirmed .env.local was not staged for commit

Minutes 75-90:
- Deployed and verified the live sources page
Done. Exploration to execution: 90 minutes total.
1. The Two Modes Pattern
Exploratory thinking first. Don't jump straight into execution when you're uncertain.
I spent 15-20 minutes working through questions with Claude before writing any code. What are the constraints? What's the architecture? How does security work? What do the official docs say?
That exploratory phase gave me a clear execution plan. When Claude Code started implementing, it wasn't guessing what I wanted. We'd already worked it out.
2. Leaning Into Documentation
I read documentation more for this feature than I typically would. Not because I don't normally read docs - I do - but because I was in unfamiliar territory.
API keys. Third-party APIs. Security patterns I'd seen at work but never implemented myself.
The official documentation gave me confidence. YouTube's API docs explained quota limits and rate limiting. GitHub's security guide explained how secrets work and what appears in logs. Next.js docs confirmed the .env.local approach.
This wasn't about distrusting Claude's suggestions. It was about building my own understanding so I could trust the implementation.
3. Building on Previous Experience
I'd set up GitHub Secrets once before for Azure deployment. That experience gave me a reference point.
The Azure deployment was more guided - following specific documentation for Azure Static Web Apps. This time I was adapting the pattern for a different use case.
Same principles. Different application. Slightly more confident because I'd done something similar before.
4. Independent Verification
After Claude Code implemented everything, I didn't just trust the tests passed. I verified independently:
- npm test myself - saw 586 tests passing
- npm run build myself - saw the build succeed
- git status to confirm .env.local wasn't tracked

Trust the automated checks. But also verify manually. Both matter.
5. Following Existing Patterns
The blog already had build-time processing from the citation system. YouTube fetching followed the same pattern: fetch during the build, cache the results under public/data/, and let the statically generated pages read from that cache.

I didn't invent new architecture. I copied what already worked and adapted it for a different API.
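As a sketch of that shared pattern - the output path and function name are my own illustration, not the repo's actual code - the last step of the build script just persists the merged data where the generated pages can read it:

```typescript
// Sketch only: persist the fetched/cached metadata as JSON under public/data/
// so pages can render it without calling YouTube's API at runtime.
import fs from 'node:fs';
import path from 'node:path';

const OUTPUT_FILE = path.join('public', 'data', 'youtube.json'); // file name assumed

function writeVideoData(videos: Record<string, unknown>): void {
  fs.mkdirSync(path.dirname(OUTPUT_FILE), { recursive: true });
  fs.writeFileSync(OUTPUT_FILE, JSON.stringify(videos, null, 2));
}
```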
The feature is live: lewisrogal.co.uk/sources
Sources that have video links now display them with metadata:
📺 Watch: The 4 Disciplines of Execution (15:52) →
by FranklinCovey
All fetched automatically. All cached for 30 days. All built at compile time so the live site doesn't depend on YouTube's API at runtime.
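The API reports durations in ISO 8601 (the 15:52 above comes back as PT15M52S), so somewhere a small helper turns that into the display format. A sketch of one way to do it - not the site's actual component code:

```typescript
// Convert an ISO 8601 duration like "PT15M52S" into a display string like "15:52".
function formatDuration(iso: string): string {
  const match = iso.match(/^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$/);
  if (!match) return '';
  const hours = Number(match[1] ?? 0);
  const minutes = Number(match[2] ?? 0);
  const seconds = Number(match[3] ?? 0);
  const ss = String(seconds).padStart(2, '0');
  if (hours > 0) {
    return `${hours}:${String(minutes).padStart(2, '0')}:${ss}`;
  }
  return `${minutes}:${ss}`;
}

formatDuration('PT15M52S'); // "15:52"
```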
Exploratory mode reduces execution risk. Spending 15-20 minutes thinking through the problem before writing code meant Claude Code could execute confidently. No mid-implementation pivots. No architectural rework.
Documentation builds confidence in unfamiliar territory. I know how to read code. I know how to review implementations. But when I'm working with new tools or security-sensitive features, official documentation gives me the foundation to understand what I'm actually doing.
Parallel work multiplies speed. Claude Code implementing while I handle credentials and verification. Both working toward the same goal, each doing what they're best at.
Independent verification catches what automated tests miss. The tests verify correctness. Manual clicking verifies the user experience. Both are necessary.
Building on previous experience makes the next step easier. Azure deployment taught me GitHub Secrets basics. This project taught me API integration. The next API integration will be faster because I have a pattern now.
This feature demonstrated both modes clearly:
Exploratory (15-20 minutes): Working through questions, understanding constraints, reading documentation, developing a clear plan.
Execution (70 minutes): Implementing the plan, testing thoroughly, deploying confidently.
The exploratory phase wasn't wasted time. It was investment. By the time we switched to execution, the path was clear.
That's how the two modes work together. Think first. Build second. Verify throughout.
Timeline:
- Exploratory: 15-20 minutes
- Execution: ~70 minutes
- Total: 90 minutes
Technical Details:
- YouTube Data API v3, fetched at build time
- Results cached for 30 days
- API key in .env.local locally and GitHub Secrets in CI, verified against official docs

Sources: