[Illustration: a person stands at the edge of a cliff, facing a wide gap with the other side out of reach]

AI is reshaping entry into software development

Our Director of Software Development, Steve Ly, explores a paradox facing the software industry: we're automating away the entry-level work that builds expertise, but we haven't figured out what replaces it.

I recently had coffee with a former colleague and we chatted about AI’s impact on our industry. The software development industry is facing a structural problem that isn’t being discussed enough. We’re automating away the entry-level work that has historically served as the training ground for developing expertise, and we haven’t figured out what replaces it. Organizations want developers who can work effectively with AI, but the traditional pathway to becoming that kind of developer is disappearing. It’s a paradox with real consequences for how this profession renews itself.

The discussions I’m seeing in developer communities like Reddit and Hacker News reveal the uncertainty about how this transition will play out. Some see it as just another automation wave. Others worry we’re fundamentally changing how expertise develops in this field.

The central problem is that the work that AI can automate overlaps significantly with the work that junior developers have traditionally done to learn. We’re optimizing for productivity in ways that may be undermining the pipeline of expertise development.

What’s changing

The productivity gains from AI coding assistants are real. Code completion is faster. Boilerplate generation is nearly instantaneous. Test writing is less tedious. Documentation gets produced with less friction. These are meaningful improvements in developer velocity for certain kinds of work.

Research from Anthropic shows that human developers remain central to building software. They’re not being replaced; their time is being reallocated. Less time on boilerplate, more time on architectural decisions. Less time writing tests, more time thinking about what needs testing. Less time on syntax, more time on systems thinking.

This reallocation represents a shift in where work happens along the software development value chain. I’ve mapped this shift in a previous post, examining how execution work (coding, testing, deployment) is moving towards commoditization while upstream judgment work (problem framing, architecture, technical direction) remains firmly in the human domain.

But the execution work being automated is precisely the work that has served as the learning ground for developing that judgment. The junior developer role has traditionally functioned as an apprenticeship: you did the grunt work while learning pattern recognition, building mental models, and developing judgment about what makes software good or bad in context. That work was tedious, but it wasn’t just about completing tasks. It was about building the cognitive frameworks that enable higher-level work later.

[Illustration: a developer at a dual-monitor setup, abstract shapes flowing between her mind and the screens, representing knowledge transfer and absorption]

The learning problem

Research from MIT looked into the concept of cognitive debt. The researchers studied what happens when people complete tasks with AI assistance versus without it. They found that AI help can improve immediate performance while degrading learning. You can use AI to complete work you don’t actually understand, and that gap in understanding compounds over time. Like technical debt, but in human cognition rather than codebases.

This cognitive debt manifests in what’s become known as “vibe coding”: using AI to generate code based on a general sense of what’s needed, without fully understanding the problem or the solution. When you can get working code fast, without developing the underlying mental models, you’re building on a foundation you don’t actually comprehend. And without that foundational experience, you can’t assess whether the code you’re generating is actually good. Stack Overflow’s article on Gen Z developers shows we’re now seeing a generation learning software engineering with AI from the start, never experiencing development without it.

Some argue that as AI improves, line-by-line understanding will matter less. But architectural decisions require knowing what you’re building with. You don’t necessarily need to write every line yourself, but you need to understand what that code actually does and what trade-offs it represents. Faster code generation doesn’t eliminate the need for that judgment.

So how do you develop that judgment if you never struggled through the fundamentals?

The economic reality

CIO magazine reports that demand for junior developers is softening as AI takes over entry-level tasks. From a short-term productivity perspective, it makes sense: why hire a junior developer to write boilerplate and tests when AI can do it faster?

The pattern is visible in Canadian labor market data. According to The Logic, junior-level tech positions were down 25% from five years prior while senior-level roles were actually tracking 5% ahead of pre-pandemic levels. At the same time, specialized AI roles more than doubled while general software engineer postings dropped 51%. The hiring slowdown isn’t hitting all levels equally. It’s specifically hitting entry-level positions.

This creates a long-term problem. Where will senior developers come from in five or ten years if we’re not developing junior developers now? You can’t hire exclusively senior developers indefinitely. Eventually those people retire, change careers, or burn out. Every senior developer was once a junior developer who someone invested in despite the short-term cost.

This isn’t just about individual organizations making rational decisions. It’s a profession-wide coordination problem. When everyone optimizes for short-term productivity by reducing junior hiring, we collectively undermine the pipeline that produces the expertise we all depend on.

What the industry is saying

The optimistic perspective, represented by GitHub and Formation.dev, argues that junior developers aren’t becoming obsolete—they’re becoming different. The role is evolving to focus more on systems thinking, problem-solving, and AI collaboration rather than syntax and boilerplate.

The pessimistic view, captured in WeAreDevelopers’ “endangered species” framing, suggests we’re closing the door behind us. Those who learned to code before generative AI arrived developed foundational skills that the next generation won’t have the opportunity to build.

Both perspectives contain truth. We still need people entering the profession, but we’re not being sufficiently intentional about how those people will develop the expertise we need them to have.

The path forward

“Will AI replace junior developers?” isn’t the right question. What matters more is how people develop foundational understanding when the foundational work is being automated. What does that learning pathway look like now? How do organizations weigh the trade-off between immediate productivity and long-term capability?

I don’t have obvious answers yet, though some approaches are emerging.

GitHub’s guidance offers a practical approach: use AI for comparison rather than generation. Write your solution first, then see how AI would approach it, then think critically about the differences. The learning happens in the gap between approaches. This treats AI as a learning tool rather than a replacement for learning. 
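
As a concrete illustration of that practice, here’s a minimal, hypothetical sketch. The functions and the “assistant-suggested” variant are my own invented examples, not taken from GitHub’s guidance:

```python
# Hypothetical "write first, then compare" exercise (illustrative only).

# Step 1: write your own solution before asking an assistant.
def dedupe_keep_order(items):
    """Remove duplicates from a list while preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Step 2: a variant an assistant might plausibly suggest (invented here,
# not real AI output). Dicts preserve insertion order in Python 3.7+,
# so the keys act as an ordered set.
def dedupe_keep_order_alt(items):
    return list(dict.fromkeys(items))

# Step 3: study the gap. Same result, different trade-offs: which is
# clearer, and what breaks if the items aren't hashable?
assert dedupe_keep_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_keep_order_alt([3, 1, 3, 2, 1]) == [3, 1, 2]
```

The point isn’t the snippet itself; it’s the habit of interrogating the differences before adopting the generated version.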

For aspiring developers, that also means documenting decision-making processes, being able to explain the “why” behind code, and showing they can work both with and without AI assistance.

Organizations face a different challenge. Are they creating environments where junior developers can actually learn? Are they measuring growth alongside output? Are they maintaining mentorship structures? The expertise has to come from somewhere. Individual organizations optimizing for productivity makes sense, but collectively we need to ensure we’re still developing the expertise the profession depends on.

Ontario Tech’s approach suggests one possibility: acknowledge AI as permanent while refusing to let it shortcut foundational learning. Teach fundamentals without AI first, then gradually introduce AI tools as understanding deepens. Focus on building judgment and critical thinking alongside technical skills.

Navigating the transition

We’ve been through tool transitions before. Compilers replaced assembly language. High-level languages replaced manual memory management. Frameworks replaced writing everything from scratch. Stack Overflow changed how we look up solutions.

Each time, people worried that developers would become less skilled. Each time, what actually happened was more nuanced. The tools changed what we needed to know but didn’t eliminate the need for deep understanding.

But AI is different. Previous tools required you to understand what you were building. AI doesn’t. It can generate code you don’t comprehend, and that gap compounds.

The transition is happening whether we’re ready or not. But we can choose how thoughtfully we navigate it. We can choose to ensure the next generation develops necessary expertise. We can choose to treat AI as a tool that augments human capability rather than replaces human understanding.

That requires action from everyone. Aspiring developers building understanding alongside their use of AI tools. Organizations investing in learning environments, not just productivity metrics. Educators evolving their curricula while maintaining rigor. The industry sharing what actually works.

I don’t think failure is inevitable. But it requires more deliberate effort than we’re currently making. The question isn’t whether AI will change how people become developers. It already has. The question is whether we’ll be thoughtful enough to ensure we’re still developing the expertise we need.