Ryan Fitzgerald


Senior Full-Stack Engineer from Canada specializing in scalable web applications, AI integration / workflows, and modern development practices

Becoming an AI-native Software Engineer

Posted on June 10, 2025


I’m confident I’m not alone in saying I’ve spent a lot of time over the past couple of years thinking about AI as an engineer / developer and what it means for our field and for our careers. For a long time, software engineering felt like a safe bet. After all, how could we be automated away when we’re the ones writing the automation? Or so we thought.

Like many, I’ve run the full gamut of emotions on the topic: excitement, uncertainty, fear, frustration, more excitement, more uncertainty, and so on. If there's an emotional response to AI to be had, I’ve probably felt it. But lately, one emotion has consistently stuck, and that’s excitement.

It took a while to get here (and arguably even longer to learn to stay here) but I can now say with confidence: the rise of AI excites me more than anything else right now as an engineer. The opportunities seem endless. Things have started to click.

I’ve always considered myself a fairly effective engineer by traditional metrics: shipping consistently with quality, solving challenging technical problems, mentoring others. But with AI, I’m producing more than ever before, with a level of speed, quality, and understanding I’ve never experienced. In many ways, it’s 10x’d me as an engineer.

The next sections of this post are my take on what it takes to become an AI-native software engineer and build fluency with AI at every level. I’m not claiming to have mastered it all (far from it) but I wanted to put my thoughts down in the hope that they might help another engineer or two get off the emotional rollercoaster and land on excitement.

This post is less about specific tools and frameworks (though I’ll sprinkle in a few I've found) and more about the mental models needed to approach AI effectively. As I keep learning and refining my thinking (and as the space continues to evolve), I’ll aim to share more. For now, think of this as a brain dump of random thoughts I have on how I currently view AI’s role in modern software engineering.

It starts with a mindset shift

The best way I can explain my mindset is that I see AI as a multiplier of myself, not just a tool that does work for me on command (though I'll admit “vibe coding” can be fun occasionally).

Put another way: a junior engineer might only have the experience to ask AI questions that yield intermediate or senior-level responses. But a senior engineer, with deeper context and stronger instincts and understanding, can frame questions in ways that push the AI to think more like a Staff or Principal engineer, bringing a much sharper lens to the same problem.

As your experience grows, so does your ability to prompt effectively and with that, the multiplier effect of AI scales alongside you. That’s why I often say I feel like I've 10x’d myself as an engineer. It’s like having a far more seasoned engineer in my corner with far more knowledge and context than I do, ready to answer any question or help debug any issue I run into. And the best part? I can't ever catch up to them; they stay steps ahead.

The power of a good prompt

Building on the previous point about how experience shapes AI output, it’s worth emphasizing that effective prompting is critical at every level. Even if you’re more junior and don’t always know what to ask, that doesn’t mean you can’t learn to ask it well with the experience you have.

We’ve all seen it: you ask an LLM for code, and it spits out something completely unusable...maybe even the worst code you’ve ever seen. Is that the model’s fault? Maybe. But more often than not, it’s a sign that the prompt wasn’t clear or specific enough. With better framing, you can guide the model toward a much more useful and accurate response.

Let’s walk through a quick example to illustrate this. Say you’ve built a simple React LoginForm component and want to save time by having AI write the tests for it. So you type:

Can you write tests for my React component?

Sure, the LLM will likely produce something, but chances are, it won’t be very good. Why? Because it has almost no context. It doesn’t know what the component does (it has to figure it out based on the code), what you’re trying to test, or what your expectations are. It’s forced to guess, and the result will reflect that. It might technically “work” but it definitely won’t be optimal. You can do better and save time spent reworking it later.

Now compare that to a more thoughtful prompt:

I built this LoginForm React component that includes an email field, a password field, and a submit button. It also includes success and error states based on the result of calling onSubmit. Can you write a test that: 1. renders the form, 2. fills in valid and invalid data, 3. submits the form, 4. asserts that onSubmit was called with the correct payload, and 5. checks that both the success and error states render correctly as a result?

This takes only a bit more time to write, but the quality of the response will be exponentially better. The takeaway? Don’t cut corners on your prompt. A little extra effort upfront goes a long way.

Another tip: if you’re using an IDE like Cursor, take advantage of features like .cursorrules to define project-specific instructions. These give the LLM important context about your codebase and the conventions you want to follow. Also, make use of the Ask vs. Agent modes. Starting in Ask mode helps ensure the LLM fully understands your intentions before it starts generating or editing code. It may take a bit more time upfront, but it can save you from a lot of rework, confusion, and low-quality code suggestions that you ultimately end up rejecting.
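For illustration, a hypothetical .cursorrules fragment might look like the following; every rule here is an invented example to show the kind of context such a file carries, not something from a real project:

```
# .cursorrules (hypothetical example)
- This is a TypeScript + React project using functional components and hooks.
- Use React Testing Library for component tests; avoid snapshot tests.
- Prefer named exports; keep components small and focused.
- Match the existing ESLint and Prettier configuration in the repo.
```

Even a short file like this spares you from restating the same conventions in every prompt.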

Trust, but verify

This one might seem like common sense and in some sense contradictory to previous statements, but it’s worth repeating: AI-generated output, especially code, should never be blindly trusted. You’ve probably heard stories (more often than you’d like) of someone who "vibe coded" their authentication layer, only to have it exploited later by a third party.

More often, the risk is subtler: the AI generates a block of code, you give it a quick skim, and it seems fine...so you ship it. But now you’ve got a production bug in code you don’t fully understand because you didn’t actually write it.

AI-generated code is not bulletproof. Far from it. As previously discussed, the quality of the output is heavily dependent on the quality of your prompt. That’s why it’s critical to review everything carefully. Make sure you fully understand what the code is doing. Don’t hesitate to ask the LLM follow-up questions or have it review its own output for potential bugs or optimizations. Treat fully generated code (no matter how complex) like it was produced by a junior engineer: helpful, but in need of oversight.

Also, don’t be afraid to push back and provide feedback based on your own experience. Just because AI can act as a multiplier and draws from a broader base of knowledge, doesn’t mean your expertise isn’t valuable. Trust your instincts and experience.

It’s not uncommon for AI-generated code to be unnecessarily over-engineered. For example, I recently asked it to help with DNS lookups in Node.js, and I wanted the operation to time out after a set number of seconds. The AI’s response was mostly correct, but for the timeout, it built a fully manual workaround. Fortunately, I had used the library before and knew it supported a built-in timeout option. I shared that with the model, and it simplified the solution accordingly.

Code generation vs. deeper understanding

The power of AI goes far beyond just generating code for engineers. While that’s certainly one of its most impressive capabilities, it shouldn’t be the only way you benefit from it. One of the most underrated use cases is using AI to help you understand complex topics: breaking down ideas, explaining unfamiliar concepts, or walking through problems step-by-step.

For example, want to support custom domains for your customers but aren’t sure where to start with DNS configuration or SSL certificate generation? Or maybe you're trying to set up scalable email infrastructure? Or manage large volumes of data efficiently? These are all tasks that, in the past, would have required extensive research, lots of trial and error, or a more experienced engineer to walk you through. With AI, those learning curves can shrink from days or weeks to potentially hours.

AI is an incredible tool not just for generating code, but for helping engineers understand architectural patterns, validate existing systems, and solidify their understanding of complex topics. Use it for those purposes just as much as you use it to write code.

That said, the same caution applies: don’t take architectural advice at face value, especially in areas where your own experience is limited. Just like you’d review AI-generated code before shipping it, review architectural suggestions carefully and make sure you truly understand the tradeoffs before adopting them.

Never stop learning

As engineers, continuous learning tends to come naturally, but in the context of AI, it’s absolutely essential. The landscape is evolving at an incredible pace, and the tools, frameworks, and capabilities available today might look completely different a year from now. Staying current with the latest developments, trends, and techniques is key to remaining effective and fluent.

To keep up, I rely on a mix of resources: newsletters, blogs, X (Twitter) accounts, and even Reddit threads. These help surface what's new, what’s gaining traction, and what’s worth exploring further.

There are also plenty of excellent resources available for deeper learning, whether you want to explore specific topics, understand the inner workings of LLMs, or sharpen your skills with real-world examples. Whether you’re casually exploring or diving deep, there’s no shortage of high-quality content to learn from.

📬 Quick plug: If you're looking for a curated source to stay up-to-date with the latest AI news, trends, and tools for busy developers, I publish a weekly AI Dev Roundup newsletter. No fluff, just the good stuff. It's effectively a collection of what I find throughout the week.


Finding the right tools

Finding the right tools for your AI workflows is a personal, highly subjective process; it depends on how you like to work. My best advice: try as many tools as you can and see what fits your style and needs best.

At the IDE level, there are some excellent options like Cursor, Windsurf, and GitHub Copilot in VS Code. Each has its own strengths, and it’s worth exploring what makes each one unique. Each also carries a cost if you want to leverage its full feature set.

Beyond IDEs, there are also powerful standalone AI agents built for software engineering tasks, such as Claude Code, Cline, and OpenAI Codex. These can be great companions for deeper problem-solving, architecture planning, or even long-form code generation.

Once you find a tool that clicks, become a power user. Learn its strengths, shortcuts, and workflows. Personally, I’ve landed on Cursor. It’s an incredibly capable AI IDE that covers everything I need and then some. I also appreciate how active their team is in the community and how committed they are to continually improving the product for engineers.

Another important factor to consider is the models themselves. It’s well worth experimenting with different models and staying up to date on which ones perform best for software engineering tasks because this changes often.

Some models are stronger at reasoning, others at code generation, and some handle multi-step tasks more effectively. Knowing which model excels in which area can give you a serious edge, especially in tools like Cursor, where you can choose which model powers your agent.

Build. Build. Build some more.

This one almost goes without saying: build. Then build some more. The best way to understand what’s possible with AI as an engineer is to experiment. Try building new things, use AI in different ways, create powerful workflows, and explore not only its strengths but also its limitations.

The more you apply it hands-on, the better you'll understand where it excels, where it falls short, and how to get the most out of it. That’s how you truly become AI-native as an engineer.

Building with AI isn’t just about generating code; it’s also about building products that integrate AI at their core. Some of my biggest leaps in understanding came from working directly with AI inside real products, especially when implementing agentic workflows using frameworks like LangGraph. The more you’re in the weeds, the more you’re forced to learn, experiment, and debug.

Working with AI in this way forces you to think about system design, user interaction, reliability, and how AI fits into real-world use cases, which is where the deeper learning really happens. It also very quickly reinforces the power of a strong prompt.

It's not always all about the code

Being an AI-native engineer isn’t just about generating code. Sure, you might already use it to draft emails, write pull request descriptions, or even respond to Slack messages, but there are many powerful use cases that often get overlooked.

For example, imagine you’re in a meeting discussing a major architectural decision, and you notice some serious flaws in the proposed approach. Maybe you’re having difficulty articulating your concerns in a way that resonates with the team. That’s a perfect moment to leverage AI: give it context, explain your concerns, and let it help you dig deeper, explore the implications, and even frame your argument more clearly and persuasively.

The point is, AI doesn’t just have to help you write or understand code; it can support you in every facet of your role as an engineer.

Wrap up

I know this was more of a high-level piece, but I hope it offered some useful mindsets or mental models for applying AI as a software engineer. If you found it helpful or have any feedback, I’d love to hear from you. I plan to continue writing on this topic, talking about AI workflows as an engineer, and may dive deeper into specific areas in future posts as well.

