The Future of Coding: AI Agents, Vibe Coding, and the Rise of the Developer-Manager

By Chris White, SoCal Tech Forum Member
May 2025

I started my AI journey believing that AI coding tools were at best a gimmick and at worst out to steal my job. I tried Copilot when it first launched but felt it added very little value. The output was mediocre, the experience was clunky, and I walked away feeling like AI was just hype. That was early 2022.

A lot can change in a couple of years.

In 2025, most of my code is generated by AI, through agents that feel more like collaborators than tools. Not just code snippets or boilerplate, but full-scale APIs, data-driven dashboards, context-aware code generation, and design-level planning with intelligent agents that remember, reason, and respond to intent.

This practice is controversially dubbed Vibe Coding. Pedantic techies refer to vibe coding as AI-Assisted Development, AI-Driven Development, or Prompt-Driven Development. Some people love it (✋ put me in jail) and some people hate it. Name aside, it reflects something real: a shift from developers grinding out lines of code to becoming something akin to a composer. The vibe coder plans architecture, orchestrates output, tests results, and deploys functional software projects with the assistance of AI agents. It’s not about surrendering to machines. It’s about working with them to create powerful efficiencies that supercharge individual builders.

Let me walk you through how I got here, what’s changed in my workflow, and what I’ve learned along the way.


Coding in the AI Era

Your first vibe-coding results will be wholly underwhelming. The outputs will be off, the codebase will turn to spaghetti, and you’ll have a nightmare project on your hands. This might lead you to believe that coding agents are immature, but over time you’ll realize that your perception of maturity is deeply tied to your inputs. The difference between failure and success is your mastery of the toolkit.

My philosophy on expertise is that behind every great artist is a body of failed work. Going from amateur to expert in any field requires an enormous investment of time and energy. Learning to vibe code requires a serious amount of seat time, measured in prompts sent. I started by sending a few hundred exploratory prompts to ChatGPT, asking ridiculous questions and diving into rabbit holes. I then decided I wanted to push the limits and see if I could break the model in some way. I would ask for large collections of data formatted as JSON, Python scripts that generate SQL migrations from JSON, DBML diagrams, movie reviews, product comparisons, doomsday evaluations; the list goes on. I ran thousands of these exercises to understand the boundaries of a frontier model like GPT-4. I then moved to agentic coding with VS Code + Copilot, Cursor, and eventually Windsurf.
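
For a taste of those exercises, here’s a minimal sketch of the kind of JSON-to-SQL script I’d ask for. The JSON shape and the type mapping are illustrative assumptions, not anything a model actually produced for me:

```python
import json

# Illustrative only: map a toy JSON table spec to SQL column types.
TYPE_MAP = {"int": "INTEGER", "str": "TEXT", "float": "REAL", "bool": "BOOLEAN"}

def migration_from_json(spec: str) -> str:
    """Generate a CREATE TABLE migration from a JSON table definition."""
    table = json.loads(spec)
    columns = ",\n  ".join(
        f"{name} {TYPE_MAP[kind]}" for name, kind in table["columns"].items()
    )
    return f"CREATE TABLE {table['name']} (\n  {columns}\n);"

print(migration_from_json('{"name": "users", "columns": {"id": "int", "email": "str"}}'))
```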

I’ve lived in VS Code for most of my career, but the Copilot experience just didn’t feel native. Credit to Microsoft because Copilot has improved a lot in the past few months, but initially they missed the mark. Those pain points gave way to Cursor, where I felt empowered by AI for the first time. That’s when I started to really shift from being skeptical of AI to integrating it into every aspect of my workflow. Cursor is an amazing tool, but its agent refused to follow my rules and the models felt stifled. I found my home in Windsurf when it was suggested as the cure-all to my agentic coding ails: the UX, context management, rule following, tool calling, reasoning capabilities, and responsiveness all clicked. I was no longer using an IDE, but a powerful pair programmer generating code at breakneck speed.

In these fleeting moments, I found nirvana. I’d reached the point of no return.

A note about AI credits: tools like ChatGPT and Gemini are nearly limitless and should be used accordingly, while tools like Windsurf and Cursor are metered by credits. My philosophy on using these credits has shifted over time. I threw credits at absolutely everything when I first started; my goal was to learn how to code with an AI agent. After a while I got stingy, running alternate models to conserve credits and generally trying to preserve value. At a certain point I realized that my desire to control costs led to losses in efficiency. I have since reasoned that credits are not a valuable resource to be hoarded; credit expenditure is what creates the valuable resource. As they say, it takes money to make money.

My Day-to-Day Stack and Workflow

Windsurf is my primary editor, handling deep agentic tasks like generating APIs, managing architecture, and tricky debugging. Gemini is my thinking partner. I use voice mode every day while walking my dogs to ask questions about architecture, system design, product thinking, or deeper learning. Gemini’s chat interface provides an infinite one-on-one session with a senior-level expert, and I regularly request assistance for almost every task in my life. I vibe coded a speech-to-text app using Faster Whisper for lightning-fast prompts and dictation. I use Ollama with open-source models to run local inference on a custom-built AI server (high-end consumer grade). I’m even using Windsurf to build a group of agents that consume my AI server’s API and act on my behalf 😎.
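
To give a flavor of the dictation app, here’s a minimal transcription sketch using the faster-whisper Python package. The model size, device settings, and file name are placeholder assumptions, not my app’s actual configuration:

```python
from faster_whisper import WhisperModel

# Placeholder settings: a small model on CPU with int8 quantization.
model = WhisperModel("small", device="cpu", compute_type="int8")

# Transcribe a recorded clip and print the recognized text.
segments, info = model.transcribe("dictation.wav", beam_size=5)
print(f"Detected language: {info.language}")
for segment in segments:
    print(segment.text.strip())
```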

Windsurf works well for me because I’ve created my own system to manage context. Within every project I create an architecture.md that defines the shape and structure of the project. It includes patterns, frameworks, naming conventions, requirements, feature set, environment, and other project-level context. I use rule files to define hard constraints like “strive for a sensible level of code coverage as close to 100% as possible using unit tests”. Next are prompts, which play an enormous role in every AI use case. Doing well with prompting requires a thoughtful approach, as outlined in Prompt Engineering by Lee Boonstra. My prompting technique plays the role of layering in variable context that’s difficult to define in architecture.md.
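
As an illustration only (not my actual file), the skeleton of an architecture.md might look like this:

```markdown
# architecture.md (illustrative skeleton)

## Stack & Environment
- Framework, runtime, database, deployment target

## Patterns & Conventions
- Directory layout, naming conventions, error-handling strategy

## Requirements & Feature Set
- Hard requirements and the current feature list
```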

Note: each new prompt bloats the agent’s context window, which eventually causes latency and instability. Open a new chat when you see reductions in code quality, high latency, or doom loops. Before you do, ask your agent to update architecture.md and write a summary of where you’re at, then feed that summary to the next chat.
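
A handoff prompt for that summary might look something like this (the wording is illustrative, not a prescribed formula):

```
Update architecture.md with any decisions we made in this chat, then write
a short handoff summary covering current state, open tasks, known issues,
and next steps. I'll paste that summary into a fresh chat.
```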

An area where LLMs struggle is historical context. Every model has a training cutoff date and is totally unaware of recent changes. Additionally, they’re trained on enormous volumes of content where historical context isn’t always available. This leads to code generation that is often outdated. I started manually feeding docs to the agent, which vastly improved accuracy but came at a cost in time, effort, and context-window budget.

Eventually, I built an MCP server called SushiMCP that helps manage this shortcoming. It makes feeding up-to-date documentation trivial. Prompt your agent, “Ask SushiMCP for Hono docs”. The agent calls the fetch tool and consumes the documentation in an LLM-friendly format. This enables the agent to keep in context what is necessary and throw the rest away. Fresh docs are a single tool call away. This sort of modular context-sharing is the future. MCP is the current shape of modular toolkits, but one thing is certain: agents will eventually learn to self-organize around reusable knowledge. Right now, we have to build those bridges ourselves.
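
To show the shape of that pattern, here’s a hypothetical sketch of a doc-serving tool built with the official MCP Python SDK. SushiMCP’s real implementation differs, and the doc URL below is an assumption for illustration:

```python
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs")

# Assumed mapping; a real server would cover many more libraries.
DOC_URLS = {"hono": "https://hono.dev/llms.txt"}

@mcp.tool()
def fetch_docs(library: str) -> str:
    """Return up-to-date documentation for a library as LLM-friendly text."""
    with urllib.request.urlopen(DOC_URLS[library.lower()]) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for the agent to call
```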

From Developer to Agent Orchestrator

Tooling wasn’t the biggest shift for me; mindset was. One might ramble for hours in an attempt to explain, but page 2 of The Way of Code by Rick Rubin provides the most wonderful explanation:


“When we recognize code as elegant,
other code becomes sloppy.
When we praise efficiency,
the notion of waste is born.

Being and non-being create each other.
Simple and complex define each other.
Long and short determine each other.
High and low distinguish each other.
Front-end and back-end follow each other.

Therefore The Vibe Coder builds without laboring
and instructs by quiet example.
Things arise and he accepts them.
Things vanish and he lets them go.

He holds without claiming possession.
Creates without seeking praise.
Accomplishes without expectation.

The work is done and then forgotten.
That is why it lasts forever.”


Code quality is ultra-important, but we should carefully consider the guardrails and boundaries we place on our agents. They perform better when instructed with declarative intent than with imperative instruction: tell the agent what the feature should do, not which loops and variables to write. I’ve found it necessary to let go of much of my control over the codebase. As 38 Special sang, “Just hold on loosely, but don’t let go. If you cling too tightly, you’re gonna lose control.”

I used to be a builder, someone who translated requirements into working code. My work has shifted. I still read every line of code, refactor for performance and quality, perform advanced debugging, and write a lot of front-end code (where LLMs tend to perform poorly). But I’m now more of an architect who guides agents. I coordinate effort in a way similar to middle management: my job is to keep tasks organized and make sure the work aligns with goals, but without standups, hour-long meetings, or weekly check-ins. I just write a prompt and the agent gets to work.

My philosophy on the future of work: we’re all management.


Strengths and Weaknesses

A breakdown of where I’ve seen AI really shine and where it falls short:

Where AI shines:

  • Backend (APIs, workers, utilities)
  • Scripts and automations
  • System design and ideation
  • Rapid prototyping, R&D, and greenfield projects
  • Legal language parsing and structured writing

Where AI struggles:

  • Front-end, UX design, and layout
  • Projects with heavy technical debt
  • Tasks requiring visual nuance or subjective judgment
  • Navigating large legacy codebases without context

For greenfield projects, AI can write 80–85% of the code with little need for modification, especially with tools like Claude 3.7 Sonnet in thinking mode. Existing systems with lots of baggage need extensive context and manual cleanup. For now, frontend work almost always needs manual intervention, often just finish work, but sometimes a full rewrite.

Advice for Junior Engineers

Here’s what you need to know: don’t skip the fundamentals. You need to know how to code. AI can easily write a thousand lines of code. In fact, agents will confidently generate enormous amounts of soup and finish by exclaiming for the fifth time that the feature certainly works now!!! Then you find out those changes broke other parts of your project because they’re tightly coupled with this part of your project… and also your error messages are sloppily rewritten, your API is now mounted at /api/api, and the same helper function is duplicated in all of your handlers. You need to be able to read every line of code, reason about your project, and understand what the agent is generating. Otherwise, you’ll never catch the bugs and inconsistencies.

An underrated function of LLMs is their ability to scaffold your learning. Ask your chatbot to explain code you don’t fully understand. Ask questions about architecture, project structure, coding practices, common patterns, and so on. You can learn an enormous amount in a very short time by using AI as a tutor. Just last night, I was digging into Microsoft’s BitNet. I didn’t understand the concepts, but I dropped them into Gemini and asked for explanations. Gemini helped me scaffold my knowledge in a crawl-walk-run format. This helped me understand BitNet really fast.

This is a new way of learning. Follow this pattern: prompt, read, reason, repeat. You’ll never know everything, but insatiable curiosity and hungry learning are the skills that will separate the wheat from the chaff in the agentic coding era. The more things change, the more they stay the same.


Final Thoughts: We're Not Going Back

Without hesitation, I’d turn down a job offer that bars the use of AI. Agentic coding has changed how I think, how I learn, and how I build. I used to say that coding is a superpower. Agentic coding is more like omnipotence. We’re entering an era where developers are no longer just code authors. We’re orchestrators, architects, and instructors to our coding agents. It’s an exciting shift and it’s precisely what gets me out of bed every morning.

The future is prompted.


About the SoCal Tech Forum

The SoCal Tech Forum: Building a Community of Innovators

The SoCal Tech Forum was founded with a vision to create a dynamic and inclusive space for technology enthusiasts, innovators, entrepreneurs, and students. Based in the Inland Empire, a region brimming with untapped potential, the forum serves as a centralized platform where individuals can come together, gain valuable experience presenting their ideas, and learn from seasoned speakers. Our mission is to amplify technology growth and foster entrepreneurship within the Inland Empire and beyond.

With over 400 members and growing, the SoCal Tech Forum is the premier hub for tech-minded individuals from across Southern California. Our community thrives on the hunger for knowledge, collaboration, and innovation. Each month, we host engaging meetups that feature diverse topics ranging from blockchain and artificial intelligence to business strategies for startups and beyond. By adapting our format regularly, we ensure our events remain fresh, relevant, and tailored to the interests of our members.

At the SoCal Tech Forum, we value feedback from our community to guide the topics we explore. Whether you're delving into highly technical subjects or engaging with non-technical content, our events promise something valuable for everyone. Together, we aim to elevate education, spark meaningful discussions, and tackle challenges that drive the tech ecosystem forward.
