AI-Driven Development: Modernizing a Decade-Old Website in 3 Days

It’s a familiar story for many technologists: the personal website, once a point of pride, gradually becomes a relic. Mine was no different – a trusty Jekyll site, faithfully serving content for over a decade, but increasingly feeling like a digital time capsule. The build process felt clunky, the development environment lagged behind modern practices, and the desire for a refresh was strong. But who has the time for a full rewrite?
That’s where things got interesting. I decided to embrace the wave of AI advancements and see if I could partner with AI, specifically Google’s Gemini models, to not only migrate my site to the modern Astro framework but to do it fast. My goal: a complete migration, a new local dev setup, and a fully automated CI/CD pipeline, all within about three days.
Spoiler alert: We did it. And it was a blast.
Phase 1: AI as the Architect – Crafting the Blueprint
Any successful project needs a plan. Instead of spending days researching Astro best practices, migration gotchas, and optimal configurations for my specific setup (Mac Studio M2 Ultra, GitHub Pages), I turned to AI.
Step 1: The Comprehensive Guide
My first move was to leverage the deep research capabilities of Gemini 2.5 Pro. I went to gemini.google.com and used the following prompt, asking it to take on the role of generating a detailed migration guide:
# Project: Migrate Personal Site/Blog from Jekyll to Astro
## Goal
Migrate my personal site/blog from Jekyll to Astro, set up a local development environment, and establish a continuous delivery system.
## Project Details
- **Source Code Management:** `git`
- **Repository Host:** GitHub
- **GitHub Username:** `cwest`
- **Repository Name:** `cwest.github.io`
- **Publishing Platform:** GitHub Pages
- **Custom Domains:** `caseywest.com`, `geeknest.com` (configured via `CNAME`)
- **URL Structure:** Blog posts hosted at root level (`/[slug]`).
- **Content Type:** Technical content, including code snippets and examples.
- **Code Highlighting Requirement:** Use a high-quality syntax highlighter with user-friendly features for code examples.
## Deployment Workflow
1. Work on features/posts in a topic branch locally.
2. Push the topic branch to GitHub and create a Pull Request (PR).
3. The PR should trigger automated checks (tests, linters, etc.).
4. Upon merging the PR into the `main` branch, the site should be automatically built and deployed to GitHub Pages.
## Development Environment Setup (Local Machine: Mac Studio M2 Ultra)
- **IDE:** VS Code
- **Shell:** `zsh`
- **Terminal:** WezTerm
- **Web Browser:** Chrome
- **Node.js Version Management:** Need a solution (e.g., `nvm`, `fnm`).
- **VS Code Extensions:** Required for efficient development with:
  - Astro (`.astro` files)
  - Strict TypeScript (`.ts`, `.tsx`)
  - CSS (including SASS/SCSS - `.css`, `.sass`, `.scss`)
  - SVG (`.svg`)
  - HTML (`.html`)
  - Markdown (`.md`)
  - Vibe Coding with Roo Code and Google’s Gemini 2.5 Pro model
- **Additional Tools:** Open to suggestions for other VS Code extensions, command-line tools, automations, etc., to enhance productivity.
## Continuous Delivery (CD)
- **Platform:** GitHub Actions
- **Target:** Deploy the built Astro site to GitHub Pages.
## Daily Workflow Example Request
Provide a step-by-step guide for a typical daily workflow:
1. Creating a new blog post.
2. Writing content locally with live preview.
3. Publishing the content (following the defined deployment workflow).
4. Resetting the local environment for the next task.
## Request for AI
Research and provide a detailed, step-by-step guide for setting up this modern development environment and workflow on my specified Mac Studio M2 Ultra. The guide should aim to be actionable within a few hours.
The result? A remarkably thorough, step-by-step guide (the ‘Astro Migration and Deployment Guide’ attached to this project!) covering everything from Node.js version management (comparing FNM and Volta) to VS Code extension recommendations, Astro configuration nuances, content migration strategies (hello, Content Collections!), and even detailed GitHub Actions workflow YAML for CI/CD. It felt like having a senior architect draft a personalized implementation manual just for me.
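If you haven’t met Content Collections yet: they give your Markdown frontmatter a typed, validated schema. As a rough sketch only (this is not my exact config; the collection name `post` and the fields simply mirror the frontmatter used by the scripts later in this post), a `src/content/config.ts` looks something like this:

```typescript
// src/content/config.ts - a minimal sketch of a Content Collections schema.
// The `post` collection name and these fields are assumptions mirroring the
// frontmatter used elsewhere in this article; the real config has more detail.
import { defineCollection, z } from 'astro:content'

const post = defineCollection({
  type: 'content', // Markdown/MDX entries
  schema: z.object({
    title: z.string(),
    description: z.string(),
    pubDate: z.coerce.date(), // accepts an ISO date string like the one the new-post script writes
    heroImage: z.string().optional(),
  }),
})

export const collections = { post }
```

With a schema like that in place, Astro validates every post’s frontmatter at build time, which is exactly the kind of check the GitHub Actions workflow can then enforce on every pull request.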
Step 2: The Actionable Project Plan
With the ‘what’ and ‘how’ documented in the guide, I needed a ‘to-do’ list. I took the generated guide and headed over to Google AI Studio. There, I used Gemini 2.5 Pro with Google Search enabled (to ensure up-to-date accuracy) and provided the entire guide as context along with this prompt:
Based on this research and guide, create a detailed step-by-step project plan broken down into Goals comprised of one or more Tasks. In your output structure prefer headings to delineate Goals and Tasks instead of multi-level bullet point lists or simple bold text.
I will use this project plan to execute the steps necessary to create my new blogging development environment, deployment process, and writing process from scratch.
If necessary to write a detailed, up to date, and accurate task use Google Search to do further research and/or validate your plan.
Gemini returned a beautifully structured project plan (the ‘Astro Migration and Deployment Project Plan’!), complete with task breakdowns, specific commands, and even `[ ]` placeholders for checkmarks. It transformed the comprehensive guide into an executable sequence, ready for me to tackle. This AI-generated plan became my roadmap for the next couple of days.
Phase 2: AI as the Coding Companion – “Vibe Coding” the Migration and Beyond
With a solid plan in hand, it was time for execution. This is where the real fun began, pairing my development work with an AI coding assistant directly within VS Code. I configured the fantastic Roo Code extension (shoutout to the RooVetGit team!) to use the `gemini-2.5-pro-preview-03-25` model directly via its API. I also set Roo Code to auto-approve reading files and retrying actions, streamlining the interaction.
My workflow settled into a rhythm I call “Vibe Coding”:
- Consult the Plan: Pick the next task from the AI-generated project plan.
- Execute Simple Tasks: If it was a straightforward command (`brew install fnm`, `git checkout -b ...`), I’d run it myself in the terminal.
- Delegate Complex Tasks to Roo: For anything requiring code generation, file manipulation, refactoring, or content conversion, I’d prompt Roo Code.
The key was treating the AI like a pair programmer. I provided context, clear instructions, and source material (like old Jekyll posts or existing code), and let it handle the heavy lifting or the first draft.
Example: Content Conversion
Migrating old Jekyll posts required converting frontmatter and ensuring path compatibility with Astro’s content collections. I’d give Roo Code a prompt like this:
I want to integrate this old Jekyll post into my new site as a content piece. The end-point URL must be `/your-software-is-made-of-people`. Below is the original Jekyll source in Markdown. Convert it into a Markdown source for Astro and my site in this project. Use the attached image as the hero image. Follow conventions to name the hero image with the same filename base as the post.
Here's the post:

---
# ... (Jekyll frontmatter and content) ...
---
Roo, powered by Gemini, would read the original file, understand the context of my Astro project (thanks to its file-reading capability), parse the request, and generate the new Markdown file with updated frontmatter (including the `slug` field based on my URL requirement) and correctly formatted content.
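To make the mechanics concrete, here is a hypothetical sketch of the frontmatter mapping involved, using `gray-matter`. This is not the script Roo produced (it wrote the converted files directly), and the Jekyll field names are assumptions based on a typical post:

```typescript
// convert-post.ts - hypothetical sketch of the Jekyll-to-Astro frontmatter mapping.
import fs from 'fs/promises'
import matter from 'gray-matter'

async function convertJekyllPost(sourcePath: string, targetPath: string, heroImage: string): Promise<void> {
  // Parse the original Jekyll post into frontmatter data and body content
  const source = matter(await fs.readFile(sourcePath, 'utf-8'))

  // Map the Jekyll fields onto the frontmatter the Astro content collection expects
  const frontmatter = {
    title: source.data.title,
    description: source.data.description ?? '',
    pubDate: new Date(source.data.date ?? Date.now()).toISOString(),
    heroImage,
  }

  // Re-serialize with the new frontmatter; the target filename (and, where needed,
  // a `slug` frontmatter field) controls the final URL, e.g. /your-software-is-made-of-people
  await fs.writeFile(targetPath, matter.stringify(source.content, frontmatter), 'utf-8')
}
```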
Example: Code Refactoring
AI isn’t just for generating new code; it’s fantastic for improving existing code. After getting some initial functionality working, like a script to generate hero images (more on that later!), I used Roo Code to elevate the quality:
Refactor @/scripts/generate-hero-image.ts like an expert TypeScript programmer who cares about writing beautiful code. Consider the following:
- The highest standards of idiomatic programming
- Don't Repeat Yourself (DRY)
- No useless comments
- Self documenting code
- Best practices in TypeScript for documenting functions, interfaces, etc
- Easy to maintain code that's not complex or clever unless it absolutely must be
- Extremely readable code that new TypeScript programmers would be able to understand
The results were consistently impressive, transforming functional code into cleaner, more maintainable, and idiomatic TypeScript.
Beyond the Plan: AI-Assisted Tooling
What truly highlighted the power of this “Vibe Coding” approach was how quickly I could build new tooling that wasn’t even in the original migration plan. The AI wasn’t just helping me execute predefined steps; it was enabling rapid development to improve my ongoing workflow.
Tool 1: Scaffolding New Posts (`new-post.ts`)
I realized I needed a quick way to create new blog post files with the correct frontmatter structure. Instead of manually copying and pasting, I prompted Roo Code to help build a simple Node.js script using TypeScript and libraries like `fs/promises` and `gray-matter`.
The resulting `scripts/new-post.ts` script takes a title string, automatically generates a URL-friendly slug, creates the Markdown file (e.g., `src/content/post/a-great-new-post.md`), and pre-fills the frontmatter with the title, a placeholder description, the current date, and a placeholder for the hero image path.
import fs from 'fs/promises' // Use promises API for async operations
import path from 'path'
import { exit } from 'process' // Explicit import for exit

import matter from 'gray-matter' // Re-import gray-matter

/**
 * Defines the structure for the front matter of a blog post.
 */
interface FrontMatter {
  title: string
  description: string
  pubDate: string // ISO 8601 date string
  heroImage: string
}

/**
 * Generates a URL-friendly slug from a given text string.
 * Converts to lowercase, replaces spaces with hyphens, removes non-alphanumeric characters (except hyphens),
 * and trims leading/trailing hyphens.
 *
 * @param text - The input string to slugify.
 * @returns The generated slug string.
 */
function slugify(text: string): string {
  return text
    .toString() // Ensure input is a string
    .toLowerCase()
    .trim()
    .replace(/\s+/g, '-') // Replace spaces and consecutive whitespace with a single hyphen
    .replace(/[^\w-]+/g, '') // Remove characters that are not word characters, digits, or hyphens
    .replace(/--+/g, '-') // Collapse multiple consecutive hyphens into one
    .replace(/^-+/, '') // Remove hyphens from the start
    .replace(/-+$/, '') // Remove hyphens from the end
}

/**
 * Creates a new markdown file for a blog post with predefined front matter.
 *
 * @param title - The title of the new blog post, provided as a command-line argument.
 */
async function createNewPost(title: string): Promise<void> {
  const slug = slugify(title)
  const filename = `${slug}.md`
  const targetDir = path.join('src', 'content', 'post')
  const filePath = path.join(targetDir, filename)

  // Prepare front matter data
  const frontMatterData: FrontMatter = {
    title: title, // Let gray-matter handle quoting
    description: '# Add a brief description here',
    pubDate: new Date().toISOString(),
    heroImage: `# Add path to hero image, e.g., /images/posts/${slug}/hero.jpg`,
  }

  // Define the initial content body
  const mainContent = 'Write your post content here...'

  // Combine front matter and content using gray-matter
  const fileContent = matter.stringify(mainContent, frontMatterData) // Use default stringify

  try {
    // Ensure the target directory exists before writing the file
    await fs.mkdir(targetDir, { recursive: true })

    // Write the file, using 'wx' flag to prevent overwriting existing files
    await fs.writeFile(filePath, fileContent, { flag: 'wx' })
    console.log(`✅ Created new post: ${filePath}`)
  } catch (error) {
    // Type guard to check if the error is a Node.js file system error
    if (error instanceof Error && 'code' in error) {
      if (error.code === 'EEXIST') {
        console.error(`❌ Error: File already exists at ${filePath}`)
      } else {
        console.error(`❌ Error creating file: ${error.message} (Code: ${error.code})`)
      }
    } else {
      // Handle unexpected error types
      console.error('❌ An unexpected error occurred:', error)
    }
    exit(1) // Exit with error code
  }
}

// --- Script Execution ---

// Get the post title from command line arguments
const postTitle = process.argv[2]

// Validate input
if (!postTitle) {
  console.error('❌ Error: Please provide a post title as the first argument.')
  console.log('Usage: pnpm new-post "Your Post Title"')
  exit(1) // Exit with error code
}

// Run the main function
createNewPost(postTitle).catch((err) => {
  // Catch any unhandled promise rejections from createNewPost
  console.error('❌ An unexpected error occurred during script execution:', err)
  exit(1)
})
I added it to my `package.json`:
"scripts": {
  // ... other scripts
  "new-post": "tsx scripts/new-post.ts"
}
Now, starting a new post is as simple as:
pnpm run new-post "A Great New Post Title"
Tool 2: Generating Hero Images (`generate-hero-image.ts`)
Then came the really cool part. I wanted unique, AI-generated hero images for each post. This was a more complex task involving external API calls, file handling, and updating existing files. Again, I collaborated with Roo Code, iterating on prompts to build `scripts/generate-hero-image.ts`.
This script does several things:
- Takes a post slug and a text prompt as arguments.
- Uses the `@google-cloud/aiplatform` Node.js client library to call the Google Cloud Vertex AI Prediction API.
- Specifically targets the `imagegeneration@006` model endpoint, which provides access to Google’s incredibly powerful Imagen text-to-image model. (Seriously, the quality is phenomenal!)
- Receives the generated image data (as base64).
- Saves the image to the correct directory (e.g., `public/images/posts/a-great-new-post/hero.jpg`).
- Parses the corresponding Markdown post file using `gray-matter` and updates the `heroImage` frontmatter field with the correct path to the newly saved image.
/* eslint-disable no-console */ // Allow console logs for CLI script feedback

import * as fs from 'fs/promises'
import * as path from 'path'

import { PredictionServiceClient, helpers } from '@google-cloud/aiplatform'
import { status as GrpcStatus } from '@grpc/grpc-js' // Rename for clarity
import dotenv from 'dotenv'
import matter from 'gray-matter' // Used for frontmatter parsing

import type { protos } from '@google-cloud/aiplatform'

// --- Type Aliases & Interfaces ---

// Specific protos types for request/response clarity
type IPredictRequest = protos.google.cloud.aiplatform.v1.IPredictRequest
type IPredictResponse = protos.google.cloud.aiplatform.v1.IPredictResponse

/** Defines the structure for script configuration settings. */
interface ScriptConfig {
  readonly postsDir: string
  readonly imageOutputDirRoot: string
  readonly imagePublicPathRoot: string
  readonly googleProjectId: string
  readonly googleLocation: string
  readonly googleAiModel: string
  readonly heroImageFilename: string
  readonly defaultAspectRatio: string
}

/** Defines the structure for parsed command-line arguments. */
interface CliArguments {
  readonly postSlug: string
  readonly userPrompt: string
}

/** Represents the result of parsing a Markdown file with frontmatter. */
type ParsedPost = matter.GrayMatterFile<string>

// --- Constants ---
const DEFAULT_GOOGLE_LOCATION = 'us-central1'
const DEFAULT_HERO_FILENAME = 'hero.jpg'
const DEFAULT_ASPECT_RATIO = '16:9'
const GOOGLE_AI_MODEL = 'imagegeneration@006' // Imagen model identifier

const ADC_ERROR_MESSAGE = 'Could not load the default credentials'
const ADC_HELP_MESSAGE = `❌ Authentication Error: Ensure Application Default Credentials (ADC) are valid.
   Run: gcloud auth application-default login`

// --- Configuration Loading & Validation ---

/**
 * Loads configuration from environment variables and defaults.
 * @returns The script configuration object.
 * @throws {Error} If essential configuration (GOOGLE_PROJECT_ID) is missing.
 */
function loadConfig(): ScriptConfig {
  dotenv.config() // Load .env file

  const googleProjectId = process.env.GOOGLE_PROJECT_ID
  if (!googleProjectId) {
    console.error(
      'Error: GOOGLE_PROJECT_ID environment variable is not set.',
      'Please ensure it is defined in your .env file or environment.',
    )
    process.exit(1) // Exit early if critical config is missing
  }

  return Object.freeze({ // Make config immutable
    postsDir: path.resolve(process.cwd(), 'src/content/post'),
    imageOutputDirRoot: path.resolve(process.cwd(), 'public/images/posts'),
    imagePublicPathRoot: '/images/posts',
    googleProjectId,
    googleLocation: process.env.GOOGLE_LOCATION || DEFAULT_GOOGLE_LOCATION,
    googleAiModel: GOOGLE_AI_MODEL,
    heroImageFilename: DEFAULT_HERO_FILENAME,
    defaultAspectRatio: DEFAULT_ASPECT_RATIO,
  })
}

// --- Utility Functions ---

/**
 * Checks if a file exists at the given path.
 * @param filePath - The path to the file.
 * @returns True if the file exists, false otherwise.
 */
async function fileExists(filePath: string): Promise<boolean> {
  try {
    await fs.access(filePath)
    return true
  } catch {
    return false
  }
}

/**
 * Parses command-line arguments for post slug and prompt.
 * Exits the process with an error message if arguments are invalid.
 * @returns An object containing the postSlug and userPrompt.
 */
function parseArguments(): CliArguments {
  const args = process.argv.slice(2) // Skip node executable and script path

  if (args.length !== 2 || !args[0]?.trim() || !args[1]?.trim()) {
    console.error('Usage: pnpm run generate-hero-image <post-slug> "<prompt>"')
    console.error('Example: pnpm run generate-hero-image my-new-post "A futuristic cityscape"')
    process.exit(1)
  }

  return Object.freeze({ // Make args immutable
    postSlug: args[0],
    userPrompt: args[1],
  })
}

/**
 * Handles common script errors, logs informative messages, and exits.
 * @param error - The error object or message.
 * @param context - Optional context message (e.g., "while generating image").
 */
function handleError(error: unknown, context?: string): never { // 'never' indicates function exits
  console.error(`\n❌ An error occurred${context ? ` ${context}` : ''}:`)

  if (error instanceof Error) {
    console.error(` Message: ${error.message}`)
    // Check for specific known error types or messages
    if (error.message.includes(ADC_ERROR_MESSAGE)) {
      console.error(ADC_HELP_MESSAGE)
    } else {
      // Check for gRPC status codes if available (often indicates API issues)
      const grpcError = error as any // Use 'any' cautiously for type casting
      if (grpcError && typeof grpcError.code === 'number') {
        console.error(` gRPC Code: ${grpcError.code} (${GrpcStatus[grpcError.code] || 'Unknown'})`)
        if (grpcError.details) console.error(` Details: ${grpcError.details}`)
        if (grpcError.code === GrpcStatus.UNAUTHENTICATED || grpcError.code === GrpcStatus.PERMISSION_DENIED) {
          console.error(ADC_HELP_MESSAGE)
        }
      } else if (error.stack) {
        // Provide stack trace for other errors if available
        console.error(` Stack: ${error.stack}`)
      }
    }
  } else {
    // Handle non-Error types
    console.error(' Error details:', error)
  }

  process.exit(1)
}

// --- Core Logic Functions ---

/**
 * Reads a post file and parses its frontmatter.
 * @param postFilePath - Path to the markdown post file.
 * @returns The parsed GrayMatterFile object.
 * @throws {Error} If the file doesn't exist or cannot be parsed.
 */
async function readAndParsePost(postFilePath: string): Promise<ParsedPost> {
  if (!(await fileExists(postFilePath))) {
    throw new Error(`Post file not found at ${postFilePath}`)
  }

  try {
    const postFileContent = await fs.readFile(postFilePath, 'utf-8')
    return matter(postFileContent)
  } catch (error) {
    throw new Error(
      `Failed to read or parse post file ${postFilePath}: ${error instanceof Error ? error.message : String(error)}`,
    )
  }
}

/**
 * Builds the prediction request object for the Google AI Platform.
 * @param prompt - The text prompt for image generation.
 * @param config - The script configuration.
 * @returns The constructed IPredictRequest object.
 * @throws {Error} If prompt or parameters cannot be converted.
 */
function buildPredictRequest(prompt: string, config: ScriptConfig): IPredictRequest {
  const endpoint = `projects/${config.googleProjectId}/locations/${config.googleLocation}/publishers/google/models/${config.googleAiModel}`

  const instanceValue = helpers.toValue({ prompt })
  if (!instanceValue) {
    throw new Error('Failed to convert prompt instance to IValue')
  }

  const parametersObj = {
    sampleCount: 1,
    aspectRatio: config.defaultAspectRatio,
    // Add other parameters like negativePrompt, seed, etc., here if needed
  }
  const parametersValue = helpers.toValue(parametersObj)
  if (!parametersValue) {
    throw new Error('Failed to convert parameters object to IValue')
  }

  return {
    endpoint,
    instances: [instanceValue],
    parameters: parametersValue,
  }
}

/**
 * Calls the Google AI Platform Prediction Service.
 * @param request - The prediction request object.
 * @param modelName - The name of the AI model being used (for logging).
 * @returns The prediction response object.
 * @throws {Error} If the API call fails.
 */
async function callPredictionService(request: IPredictRequest, modelName: string): Promise<IPredictResponse> {
  console.log(`\nSending request to AI Platform Prediction Service (Model: ${modelName})...`)
  // Instantiate the client just before the call
  const predictionServiceClient = new PredictionServiceClient()

  // The predict method returns a tuple: [response, request, options]
  const [response] = await predictionServiceClient.predict(request)

  if (!response) {
    // This case should ideally be handled by the SDK throwing an error,
    // but adding a check for robustness.
    throw new Error('Received undefined response from AI Platform predict call.')
  }
  console.log('Received response from AI Platform.')
  return response
}

/**
 * Extracts the base64 encoded image data from the prediction response.
 * @param response - The prediction response object.
 * @returns Base64 encoded string of the generated image.
 * @throws {Error} If the response structure is unexpected or lacks image data.
 */
function extractImageData(response: IPredictResponse): string {
  // Safely access the prediction data using optional chaining
  const imageBase64 = response.predictions?.[0]?.structValue?.fields?.bytesBase64Encoded?.stringValue

  if (!imageBase64) {
    console.error('Unexpected response structure:', JSON.stringify(response, null, 2))
    throw new Error('No valid base64 image data found in the AI Platform response.')
  }

  console.log('Image data extracted successfully.')
  return imageBase64
}

/**
 * Generates an image using the Google AI Platform Prediction Service.
 * Orchestrates request building, API call, and response parsing.
 * @param prompt - The text prompt for image generation.
 * @param config - The script configuration.
 * @returns Base64 encoded string of the generated image.
 */
async function generateImage(prompt: string, config: ScriptConfig): Promise<string> {
  try {
    const request = buildPredictRequest(prompt, config)
    const response = await callPredictionService(request, config.googleAiModel)
    const imageBase64 = extractImageData(response)
    return imageBase64
  } catch (error) {
    // Use the centralized error handler
    handleError(error, 'while generating image')
  }
}

/**
 * Saves the generated image to the specified path.
 * Creates the output directory if it doesn't exist.
 * @param imageBase64 - Base64 encoded image data.
 * @param outputPath - The full path where the image should be saved.
 * @param outputDir - The directory where the image will be saved.
 */
async function saveImage(imageBase64: string, outputPath: string, outputDir: string): Promise<void> {
  try {
    await fs.mkdir(outputDir, { recursive: true })
    await fs.writeFile(outputPath, imageBase64, 'base64')
    console.log(`Image saved to: ${outputPath}`)
  } catch (error) {
    // Use the centralized error handler for saving errors
    handleError(error, `while saving image to ${outputPath}`)
  }
}

/**
 * Updates the post's frontmatter with the new hero image path.
 * @param postFilePath - Path to the markdown post file.
 * @param parsedPost - The parsed frontmatter and content.
 * @param imagePublicPath - The public URL path for the hero image.
 */
async function updateFrontmatter(postFilePath: string, parsedPost: ParsedPost, imagePublicPath: string): Promise<void> {
  try {
    // Create a mutable copy of data for modification
    const updatedData = { ...parsedPost.data, heroImage: imagePublicPath }
    const updatedPostFileContent = matter.stringify(parsedPost.content, updatedData)
    await fs.writeFile(postFilePath, updatedPostFileContent, 'utf-8')
    console.log(`Updated frontmatter in: ${postFilePath}`)
  } catch (error) {
    // Log a warning instead of exiting, as the image was generated.
    console.warn(`\n⚠️ Warning: Failed to update frontmatter for ${postFilePath}.`)
    console.warn(` Error: ${error instanceof Error ? error.message : String(error)}`)
    console.warn(' The image was generated and saved, but the post file needs manual updating.')
  }
}

// --- Main Execution ---

/**
 * Main function to orchestrate the hero image generation process.
 */
async function main() {
  const config = loadConfig() // Load and validate config first
  const { postSlug, userPrompt } = parseArguments()

  console.log(`Generating hero image for post: ${postSlug}`)
  console.log(`User prompt: "${userPrompt}"`)

  // Construct paths using the validated config
  const postFilePath = path.join(config.postsDir, `${postSlug}.md`)
  const imageOutputDir = path.join(config.imageOutputDirRoot, postSlug)
  const imageOutputPath = path.join(imageOutputDir, config.heroImageFilename)
  // Ensure consistent path separators for URLs
  const imagePublicPath = [config.imagePublicPathRoot, postSlug, config.heroImageFilename]
    .join('/')
    .replace(/\/+/g, '/') // Normalize slashes

  // 1. Read and parse the post file
  const parsedPost = await readAndParsePost(postFilePath).catch((error) =>
    handleError(error, 'while reading post file'),
  )

  // Note: Add logic here if you want to enhance the prompt with post data
  const fullPrompt = userPrompt // Keep it simple for now

  // 2. Generate the image via AI Platform
  const imageBase64 = await generateImage(fullPrompt, config) // Error handled within generateImage

  // 3. Save the generated image
  await saveImage(imageBase64, imageOutputPath, imageOutputDir) // Error handled within saveImage

  // 4. Update the post's frontmatter
  await updateFrontmatter(postFilePath, parsedPost, imagePublicPath) // Logs warning on failure

  console.log(`\n✅ Successfully generated and linked hero image: ${imagePublicPath}`)
}

// --- Script Entry Point ---
main().catch((error) => {
  // Catch any unexpected errors not handled by specific try/catch blocks or handleError
  handleError(error, 'during script execution')
})
I added this script to `package.json` too:
"scripts": {
  // ... other scripts
  "generate-hero-image": "tsx scripts/generate-hero-image.ts"
}
Now, generating a hero image is a single command:
pnpm run generate-hero-image a-great-new-post "Hyperrealistic apple on a park bench in the sunset"
Building these helper scripts manually might have taken significant time, potentially derailing the core migration. With AI assistance, they became quick, iterative additions that substantially improved the final workflow.
This iterative “vibe coding” process – human direction, AI execution/drafting, human review/refinement – allowed me to move through the project plan and beyond at an incredible pace.
The Result: A Modern Foundation in Record Time
In roughly three days of focused effort (interspersed with regular life, of course!), the migration was complete:
- Modern Astro Site: The old Jekyll blog was reborn as a performant Astro site.
- Streamlined Dev Environment: A clean setup using FNM for Node version management, VS Code dialed in with essential extensions like Astro Language Support, Prettier, ESLint, and crucially, Roo Code connected to Gemini.
- Automated CI/CD: A robust GitHub Actions workflow that automatically lints, type-checks, builds, and deploys the site to GitHub Pages on every merge to `main`.
- Custom Domain: Correctly configured DNS for `caseywest.com` pointing to the GitHub Pages instance.
- New Workflow: A defined, efficient process for writing and publishing new content, enhanced by custom AI-generated tooling.
Doing this manually likely would have been a multi-week project, minimum. The AI partnership compressed that timeline dramatically.
Reflections and The Road Ahead
This experience solidified my belief that AI is fundamentally changing software development. It’s not about replacing developers but augmenting them, acting as an incredibly powerful coding companion, architect, and assistant.
- Prompting is Key: The quality of the AI’s output (both the initial guide/plan and the code/content generated during vibe coding) was directly proportional to the clarity and detail of my prompts.
- AI Accelerates, You Steer: AI handled the tedious, the repetitive, and the initial drafts, freeing me up to focus on the strategic decisions, the tricky integrations, and the final polish.
- Iterative Collaboration: The back-and-forth felt natural, like working with a very fast, knowledgeable, tireless collaborator.
The journey doesn’t end here. The next steps involve further exploring Astro’s features, refining the site’s design, and, of course, writing more content using this new, supercharged workflow.
Your Turn!
Feeling inspired? Have an old project gathering digital dust? I highly encourage you to try replicating my process:
- Define Your Goal: What do you want to modernize or build? Be specific about your current setup and desired outcome.
- Generate Your Guide: Use a powerful model with deep research capabilities (like Gemini 2.5 Pro via the Gemini website) and provide it with a detailed prompt outlining your project, environment, and requirements, asking it to create a comprehensive guide.
- Create Your Action Plan: Take the guide generated in the previous step and use it as context in a tool like Google AI Studio. Use a model with search capabilities (like Gemini 2.5 Pro with Google Search) and prompt it to create a step-by-step project plan based on that specific guide.
- Try “Vibe Coding”: Set up an AI assistant like Roo Code in your editor, connect it to your preferred model (like Gemini), and start executing your project plan. Delegate coding, conversion, and refactoring tasks to your AI partner. Don’t be afraid to go beyond the plan and build new helper tools along the way!
The future of development is collaborative, and AI is ready to be your coding companion. Give it a try – you might be surprised how quickly you can bring your ideas to life!