[Image: a person displays a smartphone and a business card, connecting digital and physical networking tools.]

Building with AI: A Practical Guide Beyond the Idea


Part 2 of the "Building My AI Networking Tool" series

There's a seductive narrative around AI development: "Just describe your idea and watch it build itself."

I've built multiple apps over the past decade. I know better.

But I also know this: The work required to build with AI is different from traditional development—and in many ways, more accessible—if you approach it strategically.

This post is about that strategy. The planning that happens before code. The decisions that shape what AI builds. The editing mindset that turns AI's output from generic to yours.

Because here's the truth: AI won't ask questions. It will fill gaps with assumptions. Your job is to minimize those gaps through deliberate planning—or be prepared to edit ruthlessly.

The Temptation to Skip Planning

It's incredibly tempting to feed AI a broad idea and let it rip:

"Build me a networking app that scans business cards and generates follow-up emails."

AI will build you something. Probably something that technically works.

But what you won't get: an interface designed for your specific workflow, error handling for real-world edge cases, a visual style that matches your brand, or components organized for future iteration.

The gap between "AI generated something" and "I built something useful" is planning.

And planning with AI is different from planning alone. You're not just deciding what to build—you're teaching AI your vision through structured context.

Planning Is Where AI Becomes Your Expert Team

Before writing a single line of code for this networking tool, I spent time with AI having strategic conversations:

Talking through the problem:

  • What makes follow-up hard? (Not just "it takes time" but why)
  • What are the failure modes? (Blurry photos, missing data, no internet)
  • What's the minimum viable flow? (What can I cut and still solve the problem?)

Exploring technical approaches:

  • Which OCR service? (Google Vision vs Anthropic vs Amazon Rekognition)
  • How to handle voice notes? (Real-time transcription vs batch processing)
  • Where does data live? (Client-side only vs server storage)

Mapping the experience:

  • What does this feel like to use at an actual networking event?
  • Where does friction appear in the happy path?
  • What happens when things go wrong?

This isn't me figuring things out alone. This is me working with AI as a thought partner—one that has expertise across OCR, mobile development, API design, UX patterns, cost optimization.

You're not just getting one expert. You're getting dozens of specialties in conversation with each other.

What Planning Actually Looks Like

For this networking tool, planning happened in layers:

Layer 1: Problem Validation

Confirmed this isn't just my pain point. Talked to other business owners who network. Validated the workflow interruption.

Layer 2: Feature Definition

Listed everything this could do. Ruthlessly cut to three essentials:

  1. Photo → Contact data
  2. Voice note → Context capture
  3. Context → Personalized email

Everything else? Future iteration.

Layer 3: Visual Design

Created mockups in Figma:

  • How does the scan button feel? (Big, centered, unmissable)
  • What does the form look like? (Clean, editable, obvious)
  • Where does branding appear? (Header, consistent across screens)

This isn't just aesthetics. This is defining what "done" looks like so I can tell when AI drifts.

Layer 4: Design System

Before building any screens, I defined the foundation:

Typography:

  • Prospectus for headers (a blocky serif with character—breaks the "apps must be sans-serif" rule)
  • Manrope for body text (sits between humanist and geometric without being harsh)

Colors: A deliberately minimal palette:

  • Light blue (#D4F7FF) for primary actions—chosen specifically to pop against the warm background and draw attention where it matters
  • Warm beige base (inviting, not stark white)
  • 5 shades of warm gray for hierarchy
  • Status colors (green for success, red for errors)

The intent: Make it feel approachable—techie enough to signal capability, but not so technical that it intimidates. This is an extension of my Hite Labs branding, which already walks that line between professional and human.

Spacing & Components:

  • Consistent 8px base unit
  • Essential components only: buttons, inputs, forms
  • No unnecessary complexity

What is a design system?

Think of it as the visual grammar of your app. Just like you wouldn't write a book where every chapter uses different fonts, margins, and punctuation styles, you don't build an app where every screen invents new button styles and colors.

A design system is your constraint set—the rules that make everything feel cohesive even as you add features.

Why it matters with AI:

Without a design system, AI will invent styles on every screen. Blues that don't quite match. Spacing that varies randomly. Components rebuilt from scratch each time.

With a design system, you can say: "Use the light blue. Use h2 styling. Use the button component." And AI has a reference point.
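To make that reference point concrete, a minimal token set like mine could live in one file the AI is pointed at. The values below mirror the spec described above, but the structure (and the beige/gray/status hex codes) are my illustration, not the project's actual code:

```typescript
// Minimal design tokens: one reference point for every screen.
// #D4F7FF is the real primary; the other hex values are placeholders.
export const tokens = {
  font: {
    heading: "Prospectus, serif",   // blocky serif with character
    body: "Manrope, sans-serif",    // between humanist and geometric
  },
  color: {
    primary: "#D4F7FF",    // light blue for primary actions
    background: "#F5EFE6", // warm beige base (approximate)
    success: "#2E7D32",    // status green
    error: "#C62828",      // status red
    // five warm grays for hierarchy
    gray: ["#FAF8F5", "#E8E2D9", "#C9C1B4", "#8F8678", "#4A4438"],
  },
  // consistent 8px base unit: space(2) -> "16px"
  space: (n: number) => `${n * 8}px`,
} as const;
```

With something like this in the project, "use the light blue" becomes `tokens.color.primary` instead of a guess from training data.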

Layer 5: User Stories

Breaking the vision into buildable chunks:

  • "As a user, I want to take a photo of a business card"
  • "As a user, I want to see extracted contact information in an editable form"
  • "As a user, I want to record context about the conversation"

Each story becomes a clear instruction set for AI.

This planning isn't wasted work. This is the work. Code is just the artifact.

Day One: When AI Meets Your Design System

With planning complete, I started building. First up: the design system.

I gave Claude Code the user story with my deliberately minimal specifications:

"Create a design system:

  • Prospectus for headers, Manrope for body
  • Light blue #D4F7FF for primary actions
  • Warm beige background
  • 5 warm gray shades
  • Status colors (green, red)
  • 8px spacing scale
  • Components: buttons, inputs, forms"

What I provided: ~6-7 colors, 2 fonts, essential components only.

What AI generated:

  • ✅ Basic structure (CSS custom properties, organized sections)
  • ❌ Wrong fonts (generic sans-serif defaults)
  • ❌ Different blues (nowhere near #D4F7FF)
  • ❌ Cool grays instead of warm grays
  • ❌ Extra components (cards, modals, badges, alerts—things I never mentioned)
  • ❌ Spacing values that didn't match my 8px scale
  • ❌ Additional color variations I didn't ask for

To be generous, some of what AI added could be seen as "iterating" on what I provided—expanding the foundation with common patterns.

But most of it? Red herrings. Components I didn't invite to this party.

Where did those come from?

AI's training data. When you say "design system," it pulls from thousands of design systems it's seen—most of which are comprehensive component libraries for large applications.

My design system was intentionally minimal. AI assumed I wanted comprehensive.

The Editor's Mindset: From Creator to Curator

Here's the shift that makes AI development productive:

You're not writing code line by line anymore. You're editing.

When AI generated that design system with wrong colors and uninvited components, I didn't start from scratch. I switched mindsets:

From: "Build exactly what I specify"
To: "Generate material, then I'll cut and refine"

I went through the CSS file systematically:

  • Deleted entire sections: Cards, modals, badges, alerts (never asked for these)
  • Corrected colors: Replaced AI's blues with #D4F7FF, swapped cool grays for warm grays
  • Fixed typography: Removed generic fonts, added correct Prospectus and Manrope declarations
  • Adjusted spacing: Aligned everything to 8px base unit
  • Cut about 40-50% of what AI generated

This felt like editing a draft, not debugging broken code.

The structure was sound. The organization made sense. I just needed to remove the assumptions and align it to my actual vision.

And it was still 3-5x faster than writing from scratch.

The Screenshots Tell the Story

Design System - First iteration: Claude Code created an overly complex design system with a blocking modal as the default state. The fonts were generic, colors appeared random, and the modal prevented interaction with the rest of the interface. A hodgepodge of components that looked more like a UI kit demo than a cohesive design system.

Design System - Updated Fonts
After prompting to remove the blocking modal and improve typography, we got a cleaner layout. The fonts were updated to something more professional, and the modal was relegated to a button-triggered example. Still generic-looking with a standard design system feel, but more usable and organized.

Design System - Refined: tailored to my needs

The design system finally reflects Hite Labs Connect's brand identity. Correct background colors, proper header styling that connects to both the Connect product and Hite Labs branding, and a complete gray scale palette alongside primary and status colors. Clean, professional, and ready for development.

The visual difference is clear. But more importantly: the edited version is maintainable. Every color has a purpose. Every component serves the actual app. Nothing extra to confuse future me.

When Specificity Matters (And When It Doesn't)

Through building this, I've learned when to be fastidious and when to let AI improvise:

Be specific about:

  • Brand elements (exact colors, exact fonts)
  • User flow (which screens, what order)
  • Data handling (what gets stored, what doesn't)
  • Core functionality (how features actually work)

Let AI handle:

  • Boilerplate structure (folder organization, config files)
  • Standard patterns (form validation, error states)
  • Responsive breakpoints (mobile/tablet/desktop)
  • Accessibility attributes (ARIA labels, semantic HTML)

AI is excellent at "known good patterns." It's terrible at "your specific vision" unless you teach it that vision through context.

The Queue Feature: A Game-Changer for Iteration

Once I had the design system edited down to my vision, I started building the actual screens. This is where Claude Code's queue feature became invaluable.

Example: The form inputs

I knew I wanted form inputs with:

  • Clean white background (flat and subtle, no visible border)
  • Proper states (focus, hover, error, disabled)

Here's what's worth noting: I didn't specify every state in my mockups. I just designed the basic input structure, knowing that states like focus, hover, and error should exist.

And I was right. AI knows these states exist and created all of them for me automatically. That's the power of AI having expertise baked in—it knows form accessibility patterns.

But the nuances were off:

  • Inputs had a visible border (I wanted flat white)
  • Focus state was too subtle
  • Error state used the wrong red
  • Spacing inside inputs didn't match my 8px scale
  • Placeholder text color was too dark
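The corrected behavior can be sketched as a single state-to-style mapping that keeps every value on the system. This is my illustration of the pattern, not the app's real code; only #D4F7FF is the actual brand value:

```typescript
// One style map for every input state, built on the design system:
// flat white fill, invisible border by default, 8px-based padding,
// the brand blue (#D4F7FF) for focus. Other hex values are placeholders.
type InputState = "default" | "hover" | "focus" | "error" | "disabled";

function inputStyle(state: InputState): Record<string, string | number> {
  const base: Record<string, string | number> = {
    background: "#FFFFFF",
    border: "1px solid transparent", // border reserved so layout never shifts
    padding: "8px 16px",             // multiples of the 8px base unit
  };
  switch (state) {
    case "hover":
      return { ...base, border: "1px solid #E8E2D9" }; // faint warm gray
    case "focus":
      // obvious, not subtle: a visible ring in the brand blue
      return { ...base, border: "1px solid #D4F7FF", boxShadow: "0 0 0 3px #D4F7FF" };
    case "error":
      return { ...base, border: "1px solid #C62828" }; // the system's status red
    case "disabled":
      return { ...base, background: "#FAF8F5", opacity: 0.6 };
    default:
      return base;
  }
}
```

Because every state pulls from the same base, fixing one value (like the padding) fixes it everywhere.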

Old workflow: Fix border. Wait. Fix focus state. Wait. Fix error color. Wait. Each change = stop, execute, review, repeat.

With queue: I spotted all 5 issues, typed all 5 corrections, and Claude Code processed them sequentially. No waiting between each one. No context switching. Just: identify problems → queue fixes → watch them resolve.

This transformed iteration from frustrating stop-start to fluid refinement.

When you're making 15-20 small adjustments to get something pixel-perfect, this isn't just convenient—it's the difference between spending 2 hours vs 30 minutes on polish.

What "Hard" Actually Means

Building apps isn't hard in the sense of requiring rare genius.

It's hard in the sense of requiring:

  • Clear thinking about what problem you're solving
  • Patience to iterate when the first version isn't quite right
  • Editing discipline to cut what doesn't serve the vision
  • Willingness to learn concepts like "design system" and "API" and "flexbox"

These are learnable skills. Not gatekept knowledge.

The barrier isn't your ability to code. It's your willingness to do strategic work before asking AI to execute.

Why This Matters for Non-Technical Builders

If you've never built an app before, here's what's changed:

Old barrier: You need to learn programming languages, frameworks, deployment, databases, APIs—before you can build anything useful.

New reality: You need to learn to think clearly about problems and articulate your vision—then AI handles the technical execution.

The work shifted from syntax to strategy.

You don't need to know how to write a React component. You need to know what that component should do and look like.

You don't need to understand database normalization. You need to decide what data matters and how it flows.

AI fills the technical gaps. You fill the vision gaps.

That's why planning matters. That's why mockups matter. That's why design systems matter.

Not because they're bureaucratic overhead—but because they're how you communicate your vision to AI in a language it can execute on.

The Development Loop That Actually Works

Here's the rhythm I've settled into:

1. Define the slice. What's the smallest complete feature I can build and test? (Example: "Camera opens and captures photo")

2. Create the context

  • User story: What should this do?
  • Mockup: What should it look like?
  • Edge cases: What can go wrong?

3. Let AI generate. Give it the context and let it build the scaffolding.

4. Edit ruthlessly. Remove what doesn't fit. Refine what stays. Add what's missing.

5. Test in reality. Not "does it work on my laptop" but "does it work on my phone at a networking event?"

6. Iterate based on what breaks. Real-world usage reveals assumptions you missed.

This loop works whether you're experienced or not. The difference is experienced builders know what questions to ask in step 2.

But AI can teach you those questions if you're willing to learn.

What's Actually in Those Seven User Stories

Since user stories are the bridge between planning and building, here's what mine look like:

User Story 0: Development Standards

  • What principles guide decisions? (KISS, YAGNI, single responsibility)
  • What testing approach? (Unit tests, coverage thresholds)
  • What documentation? (README, inline comments, architectural decisions)

User Story 1: Project Setup

  • Which tools? (Vercel for deployment, Claude Code for development)
  • What structure? (Folders, files, configuration)
  • What dependencies? (Minimal—only what's needed)

User Story 2: Design System

  • What's the visual foundation? (Already covered above)

User Story 3: Header Component

  • What appears on every screen? (Logo, usage indicator, settings)
  • How does it behave? (Sticky on scroll, responsive)

User Story 4: Camera Capture

  • How does photo capture work? (Native camera, fallback to file picker)
  • What image quality? (Compressed but readable)
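The fallback behavior here can be sketched as a tiny helper that builds the right attributes for a file input. The helper and its name are my illustration, not the app's code; the `capture` attribute itself is standard HTML:

```typescript
// Build attributes for the photo <input>: on phones that honor the
// `capture` attribute, this opens the rear camera; on desktops it
// degrades to a normal file picker automatically.
function captureInputAttrs(preferCamera: boolean): Record<string, string> {
  const attrs: Record<string, string> = {
    type: "file",
    accept: "image/*", // photos only
  };
  if (preferCamera) attrs.capture = "environment"; // rear-facing camera hint
  return attrs;
}
```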

User Story 5: OCR Integration

  • Which API? (Anthropic Claude Vision)
  • What data structure? (Name, email, company, phone, LinkedIn)
  • How to handle errors? (Partial data, complete failure, retry logic)

User Story 6: Loading & Error States

  • What feedback during processing? (Loading indicator, progress)
  • What when things fail? (Clear messages, retry options, manual fallback)

Each story is comprehensive—not just "what" but "why" and "how to handle failure."

This level of detail is why AI can generate useful code instead of generic scaffolding.
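As one example, the retry logic User Story 5 calls for might look like this generic wrapper. It's a sketch of mine, not code from the project:

```typescript
// Retry an async operation a few times before surfacing a failure,
// so one flaky OCR call doesn't dead-end the user mid-event.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // brief pause before the next attempt
      await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}
```

Spelling this out in the story means AI builds it in from the start instead of shipping happy-path-only code.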

The Real Cost of Skipping Planning

I've seen what happens when people skip straight to "Claude, build me an app":

What AI generates:

  • Generic design (looks like every other app)
  • Happy-path-only logic (breaks on edge cases)
  • No error handling (confusing failures)
  • Inconsistent patterns (every screen reinvents the wheel)
  • Hard to modify (unclear structure)

What you end up doing:

  • Rebuilding major pieces
  • Debugging mysterious failures
  • Wrestling with inconsistent styling
  • Explaining to AI what you actually meant (but vaguely, so it keeps missing)

Time "saved" by skipping planning gets spent 3x over in confused iteration.

Better approach:

  • Spend 20% of time planning deliberately
  • AI generates aligned-to-vision code
  • Spend 20% editing to perfection
  • Net result: Faster AND better

Where We Are Now

After the first development phase, here's what's working:

✅ Foundation Complete:

  • Design system (colors, typography, components)
  • Header (branding, consistent across screens)
  • Responsive layout (works on mobile, tablet, desktop)

✅ Core Flow Built:

  • Camera capture (opens native camera or file picker)
  • Image processing (sends to Anthropic Claude Vision API)
  • Form population (extracts name, email, company, phone, LinkedIn)
  • Editable fields (user can correct any mistakes)
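The form-population step implies a tolerant parser: whatever subset of fields the vision model returns should still populate the form, with the rest left blank for the user to fill in. The field names come from the post; the function itself is my sketch:

```typescript
// Tolerant parse: accept whatever subset of contact fields the OCR
// response contains, defaulting the rest to empty editable strings.
interface Contact {
  name: string;
  email: string;
  company: string;
  phone: string;
  linkedin: string;
}

function parseContact(raw: unknown): Contact {
  const r = (typeof raw === "object" && raw !== null ? raw : {}) as Record<string, unknown>;
  const field = (k: string) => (typeof r[k] === "string" ? (r[k] as string).trim() : "");
  return {
    name: field("name"),
    email: field("email"),
    company: field("company"),
    phone: field("phone"),
    linkedin: field("linkedin"),
  };
}
```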

✅ Polish Applied:

  • All form states (focus, hover, error, disabled)
  • Loading indicators
  • Error messaging (clear, actionable)

The Real Value Proposition

This isn't about replacing relationship-building with automation. It's about removing the friction between meaningful conversation and timely follow-up.

You still do the networking. You still have the conversations. You still build the relationships.

This just makes it effortless to:

  • Capture the details while they're fresh
  • Generate the first draft of a personal email
  • Actually send it before the moment passes

That's the promise: Turn "I should follow up with them" into "Done, email sent" in under 2 minutes.

The Takeaway

AI development isn't about typing less code. It's about thinking more strategically.

The planning—problem definition, mockups, design systems, user stories—isn't overhead. It's the work.

Code is just the artifact of clear thinking translated through AI.

Do the planning deliberately. Let AI execute. Edit ruthlessly.

That's the formula. And it works whether you've built ten apps or zero.

Next in series: Part 3 – Voice Notes & Context (Capturing What Actually Matters)

Missed Part 1? Read about why I'm building this and what makes it different from "just prompting AI".

I Want Your Input: What Features Matter to You?

I'm building this networking tool to solve my own follow-up friction. But I want to know what would make it valuable for you. Below are some questions that will help me decide the future roadmap.
