Imagine it's 1965. You're standing in front of an IBM System/360 mainframe. It fills an entire room. It costs millions. You submit your punch cards and wait hours for results. The potential is obvious. The accessibility is not.

That's where we are with AI today.

When Andrej Karpathy called this "the 1960s of AI" in his June 2025 AI Startup School keynote, he wasn't being pessimistic. He was being realistic. And as someone who's spent the past few years building AI systems—and the 20 years before that building software that actually shipped—I can tell you: this historical perspective changes everything.

Because if we're in the 1960s, we're not building for 2025. We're building for 2035.

But here's the thing: you need to start now. Not next year. Not when the tools get better. Now.

Why Engineers Need to Go All-In on AI (Especially You)

Let me be blunt: if you're an engineer and you're not using AI tools daily, you're already behind.

I don't mean "playing around with ChatGPT occasionally." I mean writing 80-90% of your code with LLMs—and having it ship successfully without tons of issues. If you're not there yet, it's time to course-correct.

I write almost everything with LLMs now. Code, documentation, research notes, architecture decisions. Not because it's trendy. Because it works. Things that used to take me days now take hours. Problems I'd procrastinate on because they were tedious? Done in minutes.

Engineering is where the biggest AI changes are hitting first. Not marketing. Not sales. Engineering. We're the ones with the most to gain—and the most to lose if we don't adapt.

This isn't "AI will replace you" fear-mongering. It's "AI will replace the engineer who doesn't use AI" reality. And that's actually good news, because the barrier to entry is just... starting.

The Iron Man suit is already here. You just need to put it on.

[Image: Engineers empowered by AI, the Iron Man transformation]

The 1960s Weren't About Waiting—They Were About Building

Here's what people get wrong about the 1960s computing analogy: they think it means "sit back and wait for things to get better."

No. The 1960s were when the foundations were built.

Those clunky mainframes? They established the fundamental patterns. Time-sharing. Operating systems. High-level programming languages. Databases. The stuff we still use today, 60 years later, was figured out by people working with those room-sized behemoths.

The same is happening now with AI.

We're figuring out the fundamentals. The patterns. The architecture. What works. What doesn't. What looks impressive in a demo but falls apart in production. What seems boring but actually ships.

And here's the fun part: we're all figuring it out together.

Nobody has the answers yet. Not OpenAI. Not Anthropic. Not Google. Not the Stanford PhD who just raised $50M. We're all stumbling around in the dark, trying stuff, learning what works.

This requires a fundamental mindset shift.

You need to start thinking differently. The question is no longer just "how do I write this function?" It's "how do I get an AI to write it well, and how do I verify the result?"

You're combining classic engineering best practices with something entirely new: teaching a baby AI to grow into a working engineer. Not by programming it in the traditional sense, but by communicating with it. Guiding it. Building guardrails. Evaluating its output. Iterating.

It's really magical and mysterious, but super exciting. I feel genuinely honored that I get to take part in this world change. We're not just using new tools—we're developing an entirely new craft.

The Hard Lessons from the Frontier (Or: How I Lost a Month and Built a Rocketship to Nowhere)

Okay, storytime. Because I want to save you from my mistakes—or at least entertain you with them.

I started researching AGENTS.md files and how to control context for LLMs. The goal was to understand how AGENTS.md files can yield better code. It's a fascinating topic, and I intend to return to it: I'll vanish into research again, and when I do, I'll open source the tools I write so you can see exactly what I've done.

But first, I got... distracted.

I started building an evaluation system for AGENTS.md and RULES.md files. It measured how these context files affect the generation of production-grade code. It's hard stuff: the benchmarks are tricky, and no one is sure yet how to do this right.

Not just any eval system—an amazing eval system. It was so much fun. It was magical. It ran experiments on remote machines. It had a cool CLI UI. It generated amazing-looking interactive pages of results. It measured wonderful KPIs. It was a rocketship.

One problem: it didn't actually run evaluations that were meaningful.

But I was having too much fun to notice. I kept adding features. I made it prettier. I made it faster. I made it run distributed workloads. I built this incredible piece of engineering with AI—using LLMs to write most of the code, architect the system, debug issues.
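Stripped of all the distributed machinery and pretty dashboards, the core idea was simple. Here's a minimal sketch of what such a context-file eval loop might look like. Everything here is hypothetical: the LLM call is stubbed out, and `eval_context_file`, `generate_code`, and `passes_tests` are illustrative names, not real APIs.

```python
def generate_code(task: str, context: str = "") -> str:
    """Stand-in for an LLM call. A real harness would send the task,
    optionally prefixed with the AGENTS.md content, to a model API."""
    # Stubbed: returns a trivially valid function so the sketch runs.
    return f"def solution():\n    return len({task!r}) + len({context!r})"

def passes_tests(code: str) -> bool:
    """Stand-in for running the project's test suite on generated code."""
    namespace = {}
    try:
        exec(code, namespace)  # never do this with untrusted code in production
        return callable(namespace.get("solution"))
    except Exception:
        return False

def eval_context_file(tasks, agents_md: str) -> dict:
    """Compare pass rates with and without the context file in the prompt."""
    results = {"baseline": 0, "with_context": 0}
    for task in tasks:
        if passes_tests(generate_code(task)):
            results["baseline"] += 1
        if passes_tests(generate_code(task, context=agents_md)):
            results["with_context"] += 1
    return results
```

That's the whole experiment in 25 lines. Everything I built on top of it was, in hindsight, decoration.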

And then I went deeper. I started reading about how LLMs actually work. I tried building my own chat interface with Andrej Karpathy's nanoGPT. I dove into research papers. Wow, that was intense. Attention mechanisms. Transformer architectures. Token embeddings.

I got lost. I got sidetracked from creating content, from shipping actual value.

I built an amazing system that didn't solve the problem I actually had.

Here's what that month taught me:

1. Building with AI is intoxicating

When you can go from idea to working code in hours instead of weeks, it's addictive. You can explore rabbit holes that used to be too expensive to investigate. You can build "just to see if it works."

This is powerful. It's also dangerous. Just because you can build something fast doesn't mean you should.

2. The real challenge isn't building—it's knowing what to build

I spent a month building the wrong thing beautifully. The eval system worked. It was impressive. It solved problems I didn't have.

AI makes you faster. It doesn't make you wiser about what's actually important. That's still on you.

3. Context control is everything

The AGENTS.md research that started this whole adventure? That was actually the important part. How do you structure information so an LLM understands what you're trying to accomplish? How do you communicate goals, constraints, priorities?

For those who don't know, AGENTS.md is a file format where you document your project structure, goals, constraints, and context—specifically formatted for LLMs to read and understand. It's like a README, but for AI agents working on your codebase.

Writing one forces you to think clearly. And when you think clearly, the AI can actually help you build the right thing, not just build things right.
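For illustration, a minimal AGENTS.md might look something like this. The project and conventions below are invented, and there's no settled standard for the format yet; the point is the shape, not the specifics:

```markdown
# AGENTS.md

## Project
A CLI tool that syncs local notes to a remote store.

## Goals
- Correctness over speed; never lose user data.
- Keep dependencies minimal.

## Constraints
- Python 3.11+, standard library only.
- All public functions need type hints and docstrings.

## Conventions
- Tests live in `tests/`, run with `pytest`.
- Error messages go to stderr, never stdout.
```

Every line in it is a decision the model would otherwise have to guess at.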

4. It's okay to get lost sometimes

Look, I'm not going to pretend that month was wasted. I learned a ton about frontier AI engineering. I understand LLMs at a much deeper level now. I built something complex with AI assistance and saw what works and what doesn't.

But I also could have shipped 10 pieces of useful content in that time.

So here's where I am now: Bear with me, I'm trying my best to give you value, the Dory way. I hope this works. I'd love to help the world learn about AI. Even if it means admitting when I spend a month building a rocketship that goes nowhere.

This is what "Iron Man suit" actually means in practice. AI gives you superpowers. But you still need to decide where to point them.

[Image: Workflow comparison, traditional vs. AI-amplified]

The Two-Expert Problem Still Matters

Here's something my 20+ years of shipping software taught me: you need two experts to build great software.

The domain expert who understands what to build. The engineering expert who understands how to build it.

Every successful project I've worked on—from enterprise systems at IBM and Microsoft to scrappy startups raising tens of millions—had both. Every disaster? Missing one or both.

AI doesn't replace either expert. Not yet.

What it does is amplify both.

The domain expert can now prototype their ideas without waiting weeks for an engineering team. The engineering expert can now explore 10 architectural approaches in the time it used to take to build one.

But you still need the domain expert to know if the solution actually solves the problem. You still need the engineering expert to know if the code will actually work in production, scale, not cause a security nightmare, integrate with existing systems...

AI gives you superhuman speed. It doesn't give you superhuman judgment. Not yet.

What "1960s of AI" Really Means for Your Career

So we're in the 1960s of AI. What does that actually mean for you?

Short term (2025-2026): Iron Man Suits

This is where we are now. Tools that amplify human capability. You're inside the suit making every decision. The suit makes you faster, stronger, more capable—but it's not autonomous.

Practical action: Start using AI tools daily. Claude, ChatGPT, Copilot, Cursor, whatever works. Get comfortable in the suit. Build fluency.

Medium term (2027-2030): Better Suits

The tools will get dramatically better. More context. Better reasoning. More reliable. Easier to use. Think mainframe → minicomputer → personal computer.

Practical action: Build your mental models now. Learn the patterns. Understand the limitations. Figure out your workflow. When the tools get better, you'll be ready to leverage them fully.

Long term (2030s): Actually Autonomous Agents

Maybe. Possibly. We'll see. This is where autonomous agents might actually deliver on the hype. Systems that can truly work independently. Take ambiguous goals and turn them into working software.

Practical action: Stay in the game. Keep learning. Keep building. Keep adapting. The engineers who thrive in the 2030s are the ones who started in the 2020s.

The Gap Between Hype and Reality Is Your Opportunity

Everyone's talking about AI. VCs are throwing money at anything with "AI" in the pitch deck. LinkedIn is full of people claiming AI will replace all engineers by Tuesday.

The gap between hype and reality has never been bigger.

This is your opportunity.

While everyone else is either panicking or blindly believing the hype, you can be the engineer who actually understands the technology. Its capabilities. Its limitations. How to build with it. How to ship with it. How to evaluate it. How to improve it.

You can be the domain expert who knows how to work with AI. The engineering expert who knows how to build AI systems. The leader who makes smart decisions about AI adoption because you understand what's real and what's not.

Start Building. Start Learning. Start Now.

I started this essay with a 1960s mainframe. Let me end with what happened next.

The people who learned to work with those mainframes? They built minicomputers. Then microcomputers. Then personal computers. Then the internet. Then mobile. Then cloud.

The fundamentals they learned with punch cards and batch processing? Those patterns still matter. They shaped everything that came after.

We're at that moment again.

The LLMs we're using today—clunky, expensive, sometimes frustrating—are teaching us the patterns that will matter for decades. How to communicate with AI. How to structure context. How to evaluate outputs. How to build guardrails. How to ship systems that actually work.

You don't need to be perfect. You just need to start.

Write your next function with an LLM. Document your architecture with AI assistance. Build a small tool. Break something. Fix it. Learn.

Get in the suit. The humans who learn to fly these things are the ones who'll build the next decade.

And here's the secret nobody's telling you: it's actually kind of fun.

About This Post

This essay came from a video I made about AI replacing engineers, a LinkedIn post about Iron Man suits, and countless hours building AI systems (and occasionally over-engineering eval systems). I'm building in public and sharing what I learn. If you want more, subscribe to my newsletter or follow along on LinkedIn.

The 1960s of AI are going to be a wild ride. Let's figure it out together.
