AI didn't take my job

When is the other shoe going to drop?

I thought that AI was going to take all our jobs by now. Especially us developers. Yet here I sit, over a year past GPT-3.5's debut, still tapping away at my keyboard, still writing code, still very much employed. Sure, I use Copilot, which is a GPT-based autocomplete, but to call that AI is a bit of a stretch. By now AI was supposed to do at least half of my work. Instead, not only is that not the case, but I find myself using AI tools less and less each day. Perhaps I'm unknowingly morphing into some kind of hipster, the kind who chooses tools for their aesthetic and novelty rather than their utility. And let's face it, the novelty of AI has worn off. Or maybe, just maybe, the hype is winding down, and now that the smoke is beginning to clear, we see these AI developer tools for what they truly are - a mere evolution of autocomplete.

The problem is that an evolution of autocomplete is nothing but a shadow of the grand promises we were sold. Remember, not too long ago, the future for developers looked grim. The only ones who seemed unconcerned were those using these tools on a daily basis. To everyone else, we looked like terminal patients, unknowingly walking our last steps.

I've had more than a few people look me in the eye and tell me, with the utmost sincerity, how tragic it was that AI had taken my job. The higher you went up the corporate ladder, especially in software-related companies, the more pessimistic the outlook became. C-level executives were practically convinced that soon only the managers would remain, running the entire operation, orchestrating a team of AI developers. Not a human developer in sight. We, the actual developers, would be phased out like elevator operators. Sure, a few specialized developers might stick around, probably to build the next AI. But the vision of offices packed with developers at their MacBooks, standing desks and all? That would be a quaint memory like the rows of accountants with their green visors and adding machines.

Yet here we are, still writing code. Still employed. What happened? Why hasn't the AI revolution begun? Are the tools not good enough? GPT, in its current form, is heralded as a genius, acing bar exams, understanding code better than the most senior developers in every language known to man, and churning out prose that would make Hemingway envious. Imagine for a second that a person truly had all these capabilities and somehow you managed to hire them for your startup. This would be a once-in-a-lifetime teammate, a rising tide lifting the whole company. Think of the impact on your product - no more bugs, features rolling out daily, pristine code fully tested. But that's not the reality, is it? You still have bugs in production. Compared to last year, your features aren't rolling out any faster. And now that you take a closer look, your once-impressive marketing material is starting to seem a bit suspect. There sure is a lot of "tapestry" and "certainly" crammed in there. It reads like an overly eager HR manager trying to get you excited about a policy change.

Were we deceived? Do GPT tools fall short of the capabilities they were hyped to have? Are they like a developer who only glanced at the Kubernetes docs and parroted them back in an interview? "Hmm yes, pods, self-healing, replication..."

Actually, no, not at all. GPT is incredibly capable. The problem is more complex than that.

First, there are technical issues with AI assistants. And believe it or not, this is the most minor issue. But for the sake of clarity, let's go over it first in case you're unaware: GPT-based coding assistants have a terrible memory. Even when they're advertised as having "expanded context" or enhanced with a database meant to help GPT remember, it doesn't work in practice. Sure, with "memory" or a larger context window, the AI assistant can recall what you wrote an hour or even days ago, but it has no grasp of how that fits into the grand scheme of your work. It's constantly chasing the very last thing you said. To the AI, the last instruction you gave is its entire universe. Everything else you mentioned, like the overall purpose of the code, is just background noise. It interprets that last instruction in the most literal way and executes it with absolute fervor, with complete disregard for anything else. Sure, in that one task it just completed, there might be a hint of correctness. Some parts are right, but the overall idea is off. So, you ask it to fix it. Now, that correction becomes its new obsession, undoing any accuracy from the previous task. You end up in an endless loop - fixing one thing, breaking another. With each iteration, the goal slips further away. Eventually, you give up and write it yourself.
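
To make that memory complaint concrete, here's a toy sketch - not how Copilot or any specific assistant is actually implemented, just an illustration of a rolling token budget, which is one common way prompts get assembled. The `build_context` and `estimate_tokens` helpers are purely hypothetical; the point is that the newest messages win, and the stated purpose of the work is the first thing to fall out of the window:

```python
# Toy illustration, not any particular tool's implementation: a rolling
# token budget is one common way an assistant's prompt gets assembled,
# and it shows how earlier context quietly falls out of scope.

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def build_context(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that still fit inside the token budget."""
    kept, used = [], 0
    for message in reversed(messages):      # walk newest to oldest
        cost = estimate_tokens(message)
        if used + cost > budget:
            break                           # everything older gets dropped
        kept.append(message)
        used += cost
    return list(reversed(kept))             # restore chronological order

conversation = [
    "Overall goal: a billing service; money is stored as integer cents.",
    "Add an invoice model with line items.",
    "Now add tax calculation per line item.",
    "Fix the rounding bug in the tax step.",
]

print(build_context(conversation, budget=30))
# The three most recent instructions survive, but the 'integer cents'
# requirement - the whole point of the design - is the first thing to go.
```

Real tools are smarter than this about what they keep, but the trade-off is the same: something has to be dropped, and it's rarely the thing you just typed.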

Then there's the plain reality that the code just isn't that great. Sure, it can figure out LeetCode answers like a developer three months into their job hunt, but when it comes to novel solutions to novel problems, it completely falls flat. Confronted with an unfamiliar situation, GPT tries to shoehorn an existing solution into a new problem. So what do you do? You break up the problem into smaller pieces and get GPT to cycle through solutions for individual lines. Sound familiar? That's autocomplete with sophisticated pattern matching.

Finally, let's talk about the real issue - it goes against how developers like to work.

To grasp this, you need to understand a fundamental truth about developers - it takes more time to read code than to write it. This is why they are so eager to tear things down and start from scratch. To the non-technical eye, this looks like an epic undertaking, as if the developer were choosing to shoulder a colossal burden rather than simply applying a fix. "We want to get it right," they'll tell you. But the truth is quite different. More often than not, developers do this because making a fix requires understanding the system at hand: studying it, seeing how it all fits together, and then, and only then, making a precise incision and inserting the fix. That's a much harder task, and, frankly, it isn't as fun. So, very often, developers choose the path of least resistance: bulldoze it and rewrite from scratch.

With this in mind, think back to the promise of what a fully capable AI assistant is supposed to do - write huge swaths of code as you, the developer, go over it with a fine-tooth comb looking for faults. That's precisely the opposite of what any developer wants to do. Good luck with that.

That's exactly why I don't see AI coding tools advancing much beyond their current state - sophisticated autocompletes. For them to evolve further would demand a fundamental shift in the psyche of every developer. A complete overhaul of the developer's mentality. The thing is, even after many, many years on the job, a seasoned developer will sometimes choose to rewrite something instead of patching it. Of course, the more seasoned they are, the less they succumb to this temptation. But occasionally, like a cigarette handed to them at 2 AM, they think, "What the hell," and rewrite it anyway. They'll tell themselves, "This time, I'll get it right."

Of course, I'm not naive enough to think some won't try. Undoubtedly, a few product-minded shops will attempt to wring every bit of promise out of AI. But I guarantee you, just as the sun is going to rise tomorrow, the developers in charge of monitoring and analyzing that code will do no such thing. For a developer, reading and verifying code day in, day out, instead of writing it, would be the final boss of Dante's ninth circle of hell. So here's what will really happen: The AI assistant will churn out code that no one's going to bother reading. The systems will rapidly devolve into an unfathomable mess. Critical bugs and security holes will proliferate. When the AI is asked to patch them, it will try, but thanks to the technical issues mentioned earlier, every fix will come at the cost of even more bugs and security holes. And so forth.

So, if that's the problem, what would an ideal AI assistant look like? From a manager's perspective, it would be an AI that could handle all those tasks independently, solving new problems with original solutions. We don't have that. That would be AGI - a true artificial general intelligence. And if we had that, we wouldn't need the manager, or maybe even the company. That's such a far-out sci-fi idea that this whole conversation wouldn't even be worth having. So let's forget that and focus on reality for now.

So what would make a great AI assistant for a developer? For starters, it would follow instructions while keeping an eye on the big picture. We’re not there yet, and even when we get close, it’ll still be a tool for writing small chunks of code, not extensive swaths. Developers aren’t going to sift through all of that output. So, essentially, we’re looking at a slightly better version of what we already have.

To put some perspective on this: when we first built Dava, the idea was that Dava could write large parts of a dashboard - connecting to a database, inserting a graph here, adding an interaction there. But the more we worked with it, the more we realized that AI, in its current state, only really shines when you give it tiny tasks. Currently, we use it to fix up code or make small alterations, such as changing a color or going over a component to see why it's not compiling. Anything more than that just doesn't work. And honestly, it's all for the best, because while getting an AI to write everything for you seems appealing at first, it only remains so until you have to go and fix it and realize you barely understand the mess it made. It's like working with a black box. Instead, it's better to use it as a tool alongside many others. It's always there if you need it, but as a tool, not as a substitute.

So, in essence, software development is going to stay a human endeavor for the foreseeable future, at least until AGI arrives. And despite what some clickbait YouTube videos might have you believe, we’re about as close to AGI as we are to practical quantum computers - absolutely nowhere near. So what’s with all the fear?

Two reasons. First, GPT was a revelation. A leap in how computers understand human language. We assumed that it would continue growing at the same rate. We conveniently ignored the years of slow, incremental progress that came before. The graph in our collective mind started just before GPT-3 emerged and was extrapolated straight forward into the years ahead.

Second, there's a perverse thrill in talking about doom and gloom, even when we know better. Casual chatter at the watercooler spiraled out of control, like a snowball rolling downhill, picking up size and momentum, growing from a seemingly minuscule start into something wildly out of proportion. A few sensational articles capitalized on the fear because it drives clicks. Then podcasts picked it up, grifters jumped on the bandwagon, and before we knew it, the narrative had everyone believing we'd all be out of a job by next year.

But things are settling now and we're all starting to realize that AI won’t take my job or yours any time soon. At least not in this form. It will work alongside us, like many other tools.

Maybe the panic was never about AI taking jobs. Maybe it was about the unknown, the fear that something out there could render us obsolete with a flick of the switch. But here we are, still coding, still creating, still very much employed. The world hasn’t ended, and the horizon looks familiar.