Programming Deflation (tidyfirst.substack.com)
95 points by dvcoolarun 7 hours ago | 67 comments
rkozik1989 6 hours ago [-]
This article is really only useful if LLMs are actually able to close the gap from where they are now to where they need to be in a reasonable amount of time. There are plenty of historical examples of technologies where the last few milestones are nearly impossible to achieve: hypersonic/supersonic travel, nuclear waste disposal, curing cancer, error-free language translation, etc. All of them have had periods of great immediate success, but development/research always gets stuck in the mud (sometimes for decades) because the level of complexity required to finish the race is exponentially higher than it was at the start.

I'm not saying you should disregard today's AI advancements; some level of preparedness is a necessity. But going all in on the idea that deep learning will power us to true AGI is a gamble. We've dumped billions of dollars and countless hours of research into developing a cancer cure for decades, but we still don't have one.

dakiol 5 hours ago [-]
In software we are always 90% there. It's that last 10% that gives us jobs. I don't see LLMs as that different from, say, when compilers or high-level languages appeared.
bpt3 4 hours ago [-]
Until LLMs become as reliable as compilers, this isn't a meaningful comparison IMO.

To put it in your words, I don't think LLMs get us 90% of the way there because they get us almost 100% of the way there sometimes, and other times less than 0%.

Jensson 4 hours ago [-]
Compilers reliably solve 90% of your problems; LLMs unreliably solve 100% of your problems.

So yeah, very different.

bluefirebrand 2 hours ago [-]
If it's not reliable then the problem is not solved

You've just moved the problem from "I can't solve this" to "I can't trust that the LLM solved this properly"

nomel 2 hours ago [-]
There is a level of reliability that is sufficient, as proven by us humans, the existence of issue trackers, and the entire industry that is software QA.

And, further, the existence of offshore, low-quality contractors that are in such frequent use.

xp84 2 hours ago [-]
Precisely. The code I would get from that type of contractor had similar reliability to what I generate today with nothing but the $20-a-month level of AI stuff. Of course, we now have the option of making the AI rewrite it in 2 minutes or so to fix its mistakes, without waiting for it to be daytime over there again.

AI replacing outsourcing and (sadly) junior SWEs is more likely than it just eliminating coding jobs across the board. Lord help them when our generation of senior SWEs retires, though!

bluefirebrand 2 hours ago [-]
> AI replacing outsourcing and (sadly) junior SWEs is more likely than it just eliminating coding jobs across the board. Lord help them when our generation of senior SWEs retires, though

Not them

It's on current software devs to make sure this doesn't happen! People in senior positions need to be loud and aggressive about telling the money people that we cannot rely on AI to do this work!

Every time you shrug and say "yeah the LLM does ok junior level work" you are part of the goddamn problem

xp84 1 hour ago [-]
What problem? It's not my problem if my employer is screwed after my generation retires. That's the shareholders' or owners' problem. The people making 2-10x my salary upstream of me in the org chart are being paid that presumably because they have such greater wisdom and foresight than I do. If I'm the CTO or have very significant equity, maybe I'll talk about restarting hiring of juniors. Otherwise I'll just sit and wait for the desperate consulting offers. It'll be like the COBOL boom in the late 90s.

Note: That isn't my retirement plan, but it'll just be a fun source of extra money if I'm right.

CamperBob2 2 hours ago [-]
If it makes money, the problem is solved. At least from the perspective of the people with the money.

Less cynically, it doesn't matter whether some code was written by a human or an LLM -- it still has to be tested and accepted. That responsibility ultimately must end up on a human's desk.

derefr 5 hours ago [-]
I would argue that "augmented programming" (as the article terms it) both is and isn't analogous to the other things you mentioned.

"Augmented programming" can be used to refer to a fully-general-purpose tool that one always programs with/through, akin in its ubiquity to the choice to use an IDE or a high-level language. And in that sense, I think your analogies make sense.

But "augmented programming" can also be used to refer to use of LLMs under constrained problem domains, where the problem already can be 100% solved with current technology. Your analogies fall apart here.

A better analogy that covers both of these cases, might be something like grid-scale power storage. We don't have any fully-general grid-scale power storage technologies that we could e.g. stick in front of every individual windmill or solar farm, regardless of context. But we do have domain-constrained grid-scale power storage technologies that work today to buffer power in specific contexts. Pumped hydroelectric storage is slow and huge and only really reasonable in terms of CapEx in places you're free to convert an existing hilltop into a reservoir, but provides tons of capacity where it can be deployed; battery-storage power stations are far too high-OpEx to scale to meet full grid needs, but work great for demand smoothing to loosen the design ramp-rate tolerances for upstream power stations built after the battery-storage station is in place; etc. Each thing has trade-offs that make it inapplicable to general use, but perfect for certain uses.

I would argue that "augmented programming" is in exactly that position: not something you expect to be using 100% of the time you're programming; but something where there are already very specific problems that are constrained-enough that we can design agentive systems that have been empirically observed to solve those problems 100% of the time.

derektank 3 hours ago [-]
There are no technical hurdles remaining with respect to nuclear waste disposal. The only obstacle is social opposition
golemotron 2 hours ago [-]
..and regulation. The same with supersonic.
BinaryIgor 5 hours ago [-]
100%. Exactly as you've pointed out, some technologies - or their "last" milestones - might never arrive, or could be way further into the future than people initially anticipated.
bgroat 5 hours ago [-]
We're 90%... we're almost half way there!
HumblyTossed 5 hours ago [-]
It costs 10% of the effort to get 90% of the way there. Nobody ever wants to spend the remaining 90% to get us all the way there.
strbean 4 hours ago [-]
I think the parent was making an (apt) reference to Old School RuneScape, where level 92 represents half of the total XP needed to reach the max level of 99.
germandiago 5 hours ago [-]
Exactly this.
jrecyclebin 5 hours ago [-]
I'm already not going back to the way things were before LLMs. This is fortunately not a technology where you have to go all-in. Having it generate tests and classes, solve painful typing errors and help me brainstorm interfaces is already life-changing.
xp84 2 hours ago [-]
I think I'm in a similar place to you. Caveat: I don't spend much of my day-to-day on coding anyway, so that helps, but I've never even tried Cursor or Windsurf. I just poke Copilot to write whole functions, or ask ChatGPT for things that seem like they'd be tedious or error-prone for me. Then I spend 3-5 minutes tying those things together and testing. It saves about 80% of the time, but I still end up knowing exactly how the code works because I wrote most of the function signatures and reviewed all the code.

I know that in just the right circumstances the "all-in" way could be faster, but it's already significant that I can do 5x as many coding tasks, or do one in a fifth of the time. Even if it never ever improves at all.

CuriouslyC 5 hours ago [-]
LLMs are noisy channels. There's some P(correct|context). You can increase the reliability of noisy channels to an arbitrary epsilon using codes. The simplest example of this in action is the majority decoding logic, which maps 1:1 to parallel LLM implementation and solution debate among parallel implementers. You can implement more advanced codes but it requires being able to decompose structured LLM output and have some sort of correctness oracle in most cases.
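
A minimal sketch of the majority-decoding idea (my own illustration, not the parent's code; sample() stands in for any single LLM call and the names are made up):

    from collections import Counter

    def majority_vote(sample, prompt, n=5):
        # sample(prompt) -> str is one independent LLM query.
        # n independent samples act like a repetition code: if each answer is
        # correct with probability p > 0.5, the majority answer is correct
        # with probability approaching 1 as n grows.
        answers = [sample(prompt).strip() for _ in range(n)]
        # The modal answer wins; ties fall back to the first candidate seen.
        return Counter(answers).most_common(1)[0][0]

The hard part in practice is the decoding side the parent mentions: answers have to be normalized into a comparable form, or checked by some correctness oracle, before a vote means anything.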
djoldman 6 hours ago [-]
> Will this lead to fewer programmers or more programmers?

> Economics gives us two contradictory answers simultaneously.

> Substitution. The substitution effect says we'll need fewer programmers—machines are replacing human labor.

> Jevons’. Jevons’ paradox predicts that when something becomes cheaper, demand increases as the cheaper good is economically viable in a wider variety of cases.

The answer is a little more nuanced. Assuming the above, the economy will demand fewer programmers for the previous set of demanded programs.

However. The set of demanded programs will likely evolve. So to over-simplify it absurdly: if before we needed 10 programmers to write different fibonacci generators, now we'll need 1 to write those and 9 to write more complicated stuff.

Additionally, the total number of people doing "programming" may go up or down.

My intuition is that the total number will increase but that the programs we write will be substantially different.

amonith 4 hours ago [-]
> now we'll need 1 to write those and 9 to write more complicated stuff.

Or simpler :) I'd argue that in the past we needed more programmers for more complicated stuff (more hand-rolled databases, auth solutions, etc.; a lot of stuff was reinvented in each company), whereas now we need many more people to glue libraries and external solutions together.

The future could look similar, a lot of LLM vibe coders and a handful of specialized fixers.

Who knows though. Real life has a lot of inertia. One will probably do just fine writing just enterprise Java or React (or both!) for the next 30 years. I plan to be dead or retired in the next 30 years.

xp84 2 hours ago [-]
I plan to retire in the next 10-15 years and then 'grudgingly' accept very large checks to teach noobs to be competent enough to replace the recently retired seniors, since companies are, and will continue, foolishly not hiring and developing junior engineers anymore. It seems to me like most firms have almost involuntarily placed an enormous unhedged bet on LLMs being able to do the job of senior and staff software engineers by 2030-2035, which I'd say is certainly a possibility, but boy howdy is it going to be an expensive bet to lose if they're wrong.
pmontra 3 hours ago [-]
That's what happened once with PCs in the 80s/90s and then again with the web in the 90s/2000s. The number of developers went up. Maybe the number of developers on the previous technology went down, but I'm not sure about it. Example: there are still developers for native Windows apps. Are there more or fewer of them than in 1995? I would bet on fewer, but I won't bet anything of value.
recroad 2 hours ago [-]
Yeah it sounded like Kent just discovered Jevons' paradox and decided to shoehorn it into the article. Nothing here became cheaper, and if by cheaper he means that paying a programmer was more expensive than paying for an AI, even that's not necessarily true once you account for re-work and a host of other things.

If we're going to go with economic/strategy models, I think the Laffer Curve is more relevant. Seriously extrapolating here: AI is optimal for many tasks, and used in those contexts it can maximize productivity. Over-using it on unsuitable tasks destroys productivity.

michaelfeathers 2 hours ago [-]
There's something with the same shape as Jevons' paradox - the Peltzman effect. The safer you make something, the more risks people will take.

Applied to AI I think it would be something like - ease of development increases the complexity attempted.

Yoric 55 minutes ago [-]
This actually sounds like Rust development (or both FP and OOP development before that, or compilers before that).

By making things simpler and/or more robust, you make some very complex algorithms much more feasible. And you end up with things like HTTPS or even Raft being part of everyday life, despite their complexity.

recroad 2 hours ago [-]
...and complexity created.

I think "How can this code be made simpler and any complexity either isolated or eliminated (preferably eliminated)?" should be the ensuing prompt after we generate things.

gbacon 3 hours ago [-]
This is insightful. Which programs will the new tech make profitable (be it cash, psychic/emotional, or some other form) to write?

The Keynesian bogeyman of the deflationary spiral ignores intertemporal effects. Cell phones and laptops are getting cheaper all the time, but no one drops into an infinite wait because of time-preference. In the context of producing software becoming cheaper, people at a definite point value having a usable system today over a marginally cheaper version tomorrow.

beefnugs 1 hours ago [-]
Too much optimism from everyone; the truth is managers and business owners are controlling everything and they always make the dumbest decisions: "Quick, fire everyone! Don't hire any seniors, we can get 'em cheaper. Don't hire any juniors, they can't afford $10,000 graphics cards to already be good at it. Let's create a new system to exploit illegals and churn through them as they get deported, now that's business, boyz"
virgilp 6 hours ago [-]
> Don’t bother predicting which future we'll get. Build capabilities that thrive in either scenario.

I feel this is a bit like the "don't be poor" advice (I'm being a little mean here maybe, but not too much). Sure, focus on improving understanding & judgement - I don't think anybody really disagrees that having good judgement is a valuable skill, but how do you improve that? That's a lot trickier to answer, and that's the part where most people struggle. We all intuitively understand that good judgement is valuable, but that doesn't make it any easier to make good judgements.

thenanyu 4 hours ago [-]
Make lots of predictions and write down your thought process (seriously, write it down!). Once the result is in, analyze whether you were right. Were you right for the right reasons? Were you wrong but with a mostly right thought process?

Do it every day for years.

gbacon 3 hours ago [-]
The role of the entrepreneur is predicting future states of the market and deploying present capital accordingly. Beck is advocating a game-theory optimal strategy.

Judgment is a skill improved through reps. Sturgeon’s law (ninety percent of everything is crap) combined with vibe code spewage will create lots of volume quickly. What this does not accelerate is the process of learning from how bad choices ripple through the system lifecycle.

ramesh31 5 hours ago [-]
It's just experience, i.e. a collection of personal reference points from seeing how said judgements have played out over time in reality. This is what can't be replaced.

I think the current state of AI is absolutely abysmal, borderline harmful for junior inexperienced devs who will get led down a rabbit hole they cannot recognize. But for someone who really knows what they are doing it has been transformative.

MagicMoonlight 12 minutes ago [-]
Oh kent, you almost got away with using chatgpt to write the whole thing. You mostly camouflaged it, but then I got to:

>We're not just experiencing technological change. We're watching the basic economics of software development transform in real time.

and I knew.

alphazard 5 hours ago [-]
This mindset that the value of code is always positive is responsible for a lot of problems in industry.

Additional code is additional complexity, "cheap" code is cheap complexity. The decreasing cost of code is comparable to the decreasing costs of chainsaws, table saws, or high powered lasers. If you are a power user of these things then having them cheaply available is great. If you don't know what you're doing, then you may be exposing yourself to more risk than reward by having easier access to them. You could accidentally create an important piece of infrastructure for your business that gives the wrong answers, or requires expensive software engineers to come in and fix. You accidentally cost yourself more in time dealing with the complexity you created than the automation ever brought in benefit.

germandiago 5 hours ago [-]
Well, this has happened to me with pieces of code written directly with an AI. You go 800% faster or more, and then you have to go and finish it. All the increase in speed is lost in debugging, fixing, fitting, and other mundane tasks.

I believe the reason for this is that we still need judgement to do those tasks; AIs are not perfect at them, and they spit out a lot of extra code and complexity at times. Then you need to reduce that complexity. But to reduce it, you need to understand the code in the first place. So you cut here and there, you find a bug, but you are diving into code you do not yet fully understand.

So human cognition has to keep pace with what the AI is doing.

What ended up happening to me (not all the time; for one-off or small scripts it's irrelevant, as it is for authoring a well-known algorithm that is short enough to write without bugs) is that I get a sense of speed that turns out not to be real once you have to complete the task as a whole.

On top of that, as a human you tend to lose context if you generate a lot of code with AI, and the judgement must be yours anyway. At least until AIs get really brilliant at it.

They are good at other things. For example, I think they do decently well at reviewing code and finding potential improvements. Because if they say bullsh*t, as any of us could in a review, you just move on to the next comment, and you can always find something valuable in there.

Same for "combinatoric thinking". But for tasks they need more "surgery" and precision, I do not think they are particularly good, but just that they make you feel like they are particularly good, but when you have to deal with the whole task, you notice this is not the case.

holri 3 hours ago [-]
Based on my experiences with LLMs and the hype around it, we will need more experienced programmers. Because they will have to clean up the huge mess that will come.
happyPersonR 3 hours ago [-]
In my experience if you look at what effect democratizing code actually has, this is exactly the case. People are generating code, but that’s always been the easy part. The mess this is going to make, gonna need a lot of mops.
bluefirebrand 2 hours ago [-]
If programmers had any self respect we would refuse to be slop janitors and instead just build competing tools and eat the lunches of AI "coders"

But time and time again programmers continue to demonstrate we have no self respect and we're happy to dance to the tune of capital for money. That means we're definitely going to be downgraded to "slop janitors" in the near future

lukaslalinsky 4 hours ago [-]
The IT industry has been trying to find ways to cheaply produce low-quality code for decades. AI might be the final chapter of that (I'm not even sure about that), but low-quality code is not what programming is about. Even if the context windows and models are scaled 10x, they will be forgetful; they will try to cheat their way to some kind of success. If you are building software because you care about the craft and the result, AI will not replace you in the coming decades. You will just be more in an architectural position, not hands-on coding. I personally see that as the core of what programming is.
bob1029 58 minutes ago [-]
> Understanding. Judgment. The ability to see how pieces fit together. The wisdom to know what not to build.

It's really hard to get an LLM to assist you when you don't know the right questions to ask. If your vision and convictions about the world are not strong enough, one plausible hallucination can take you into Narnia for an entire week.

visarga 2 hours ago [-]
> We're not just experiencing technological change. We're watching the basic economics of software development transform in real time.

> We're not just <A> ... We're <B>

Is this proof of LLM writing or are people subconsciously picking up LLM patterns?

MagicMoonlight 10 minutes ago [-]
It's 100% a chatgpt article. The kind of person who writes about how chatgpt can do everything is the kind of person that replaces their own writing with chatgpt.

There are some other tells too, but we don't want to give the game away and let these people keep the ruse going.

harimau777 2 hours ago [-]
What advice would you all have for programmers who aren't compatible with AI-augmented programming?

I'm somewhat neurodivergent and got into tech precisely because it was a career where hyperfocus, compulsive systems building, passion for finding difficult solutions, etc. were valued. However, now it feels like those skills are no longer valued, or are even liabilities. As the article points out, companies now want me to "embrace commodity" and focus on plumbing code. However, those are precisely the areas that I'm not good at.

gchamonlive 3 hours ago [-]
The article reduces programming to its economic and utilitarian components to make this analysis. It's coherent and valuable for analysing decision-making in the context of programming as a means to an end, where the end is to make money.

However, there are other aspects to programming that can't be quantified, subjective components that are stripped away when delegating coding to machines.

The first, most immediate effect, I think, is the loss of a sense of ownership of the code. The second, which takes a bit of time to sink in and is at first buried by the excitement of making something work that is beyond your technical capability, is the loss of enjoyment.

Take both of these out and you create what I can only describe as soulless code.

The impact of soulless code is not obvious, not measurable, but I'd argue quite real. We will need time to see these effects in practice. Will companies that go all-in on machine-generated code have the upper hand, or those that value traditional approaches more?

hvb2 3 hours ago [-]
> Will companies that go all-in on machine-generated code have the upper hand, or those that value traditional approaches more?

One thing I'm curious about is whether those companies that go all in get to a state where they have the source, but now have vendor lock-in with the AI vendor, since no dev understands the code anymore.

logicchains 3 hours ago [-]
Feelings of ownership also cause problems in software engineering, i.e. people being unwilling to make changes to their code that a reasonable person would see as improvements, just because implying that the existing code isn't perfect threatens their ego.
charliea0 2 hours ago [-]
I'd only really be worried about barriers to entry and composition.

One limit on wages is $ of value / hour. If AI makes existing programmers more efficient, then you would expect total wages to go up.

If AI makes it easier for folks to become programmers, then the value produced could be split over more people. Alternatively, if you need fewer programmers then more value could be captured by a few superstar winners.

bigstrat2003 3 hours ago [-]
It kind of weakens the author's argument significantly when he leads with "let's take as given (very controversial claim that absolutely should not be taken as given)". If you disagree with that claim (and I do), then what's the point of the rest of the article which follows from it?
BinaryIgor 5 hours ago [-]
Interesting, but way too optimistic and biased towards the scenario where infinite progress of LLMs and similar tools is just a given, when it's not.

"Every small business becomes a software company. Every individual becomes a developer. The cost of "what if we tried..." approaches zero.

Publishing was expensive in 1995, exclusive. Then it became free. Did we get less publishing? Quite the opposite. We got an explosion of content, most of it terrible, some of it revolutionary."

If only it were the same, and that simple.

recroad 5 hours ago [-]
> AI isn't just redistributing the same pie; it's making the pie-making process fundamentally cheaper

Not if you believe most other articles related to AI posted here including the one from today (from Singularity is Nearer).

mlhpdx 5 hours ago [-]
A related idea is sub-linear cost growth where the unit cost of operating software gets cheaper the more it’s used. This should be common, right? But it’s oddly rare in practice.

I suspect the reality around programming will be the same - a chasm between perception and reality around the cost.
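
To make "sub-linear" concrete with a toy model (my framing, not the parent's): say operating the software costs a fixed F per month plus a marginal m per use. Over n uses,

    total_cost(n) = F + m*n
    unit_cost(n)  = F/n + m    (falls toward m as n grows)

Genuinely sub-linear total cost, something like F + c*n^a with a < 1, would push the unit cost toward zero. The oddity the parent points at is that in practice per-use costs for software rarely fall even that gracefully.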

basfo 5 hours ago [-]
I’ve been thinking about the impact of LLMs on software engineering through a Marxist lens. Marx described one of capitalism’s recurring problems as the crisis of overproduction: the economy becomes capable of producing far more goods than the market can absorb profitably. This contradiction (between productive capacity and limited demand) leads to bankruptcies, layoffs, and recessions until value and capital are destroyed, paving the way for the next cycle.

Something similar might be happening in software. LLMs allow us to produce more software, faster and cheaper, than companies can realistically absorb. In the short term this looks amazing: there’s always some backlog of features and technical debt to address, so everyone’s happy.

But a year or two from now, we may reach saturation. Businesses won’t be able to use or even need all the software we’re capable of producing. At that point, wages may fall, unemployment among engineers may grow, and some companies could collapse.

In other words, the bottleneck in software production is shifting from labor capacity to market absorption. And that could trigger something very much like an overproduction crisis. Only this time, not for physical goods, but for code.

philipallstar 4 hours ago [-]
> Marx described one of capitalism’s recurring problems as the crisis of overproduction: the economy becomes capable of producing far more goods than the market can absorb profitably

Is that capability a problem? We don't tend to do this unless the state subsidises things or labour unions protect things that don't have customers.

hollowonepl 5 hours ago [-]
Some valid questions are asked in the article, but I don't like the terminology used, from the title to the content, to assess the situation and options. I'd rather call it the Commoditization of Software Engineering.
AfterHIA 2 hours ago [-]
The public was sold early on the idea that if you bought a personal computer you could write your own programs and games for the thing, and up until now that's more or less been a kind of convenient half-truth. What's "the market" when people really can command their computer interface to do "what they'd like it to do"?

I see it as a correction rather than deflation. We proved the shitty salesmen of the 70s and 80s right. The computer really can be a bicycle for the mind; whether or not that bicycle actually fucking goes anywhere is still left to human wills.

afpx 5 hours ago [-]
"Understanding. Judgment. The ability to see how pieces fit together. The wisdom to know what not to build."

How would one even market oneself in a world where this is what is most valued?

mjr00 5 hours ago [-]
> How would one even market oneself in a world where this is what is most valued?

That's basically the job description of any senior software development role, at least at any place I've worked. As a senior, pumping out straightforward features takes a backseat to problem analysis and architectural decisions, including being able to describe tradeoffs and how they impact the business.

jjk166 4 hours ago [-]
This is (at least in theory) the current job of executives. You market yourself with a narrative that explains why projects have succeeded and failed such that you can project high confidence that you will make future projects successful.
mangamadaiyan 5 hours ago [-]
Question 1: is this indeed what is most valued at the moment?

Question 2: Do you think this will ever become valuable?

sublinear 6 hours ago [-]
I think this is a bit like attempting your own plumbing. Knowledge was never the barrier to entry nor was getting your code to compile. It just means more laypeople can add "programming" to their DIY project skills.

Maybe a few of them will pursue it further, but most won't. People don't like hard labor or higher-level planning.

Long term, software engineering will have to be more tightly regulated like the rest of engineering.

BinaryIgor 5 hours ago [-]
I agree with the first part of your comment, but don't follow the rest - why should SE be more tightly regulated? It doesn't need to be; if anything, that would just stifle its progress and evolution.
sublinear 5 hours ago [-]
I think AI will make more visible where code diverges from the average. Maybe auditing will be the killer app for near-future AI.

I'm also thinking about a world where more programmers are trying to enter the workforce self-taught using AI. The current world is one of continually lowered education standards and a political climate set against universities.

The answer to all of the above, from the perspective of those who don't know or really care about the details, may be to cut the knot and impose regulation.

Delegate the details to auditors with AI. We're kinda already doing this on the cybersecurity front. Think about all the ads you see nowadays for earning your "cybersecurity certification" from an online-only university. Those jobs are real and people are hiring, but the expertise is still lacking because there aren't clearer guidelines yet.

With the current technology and generations of people we have, how else but with AI can you translate NIST requirements, vulnerability reports, and other docs that don't even exist yet but soon will, into pointing someone who doesn't really know how to code towards a line of code they can investigate? The tools we have right now, like SAST and DAST, are full of false positives, and non-devs are stumped on how to assess them.

Even before all this latest round of AI stuff it's been a concern that we overwork and overtrust devs. Principle of least privilege isn't really enough and is often violated in any scenario that isn't the usual day-to-day work.

thiago_fm 6 hours ago [-]
Literally all new products nowadays come with a great deal of software and hardware, whether they are a SaaS or a kitchen product.

Programming will still exist, it will be just different. Programming has changed a lot of times before as well. I don't think this time is different.

If programming suddenly became that easy to iterate on, people would be building new competitors to SAP, Salesforce, Shopify, and other solutions overnight, but you rarely see any good competitor coming around.

The necessary work behind understanding your customers' needs and iterating on them between product and tech is not to be underestimated. AI doesn't help with that at all; at most it's a marginal improvement to iteration.

Knowing what to build has been for a long time the real challenge.

JustExAWS 4 hours ago [-]
> Value Migration: Writing code becomes like typing—a basic skill, not a career. Value moves to understanding what to build, how systems fit together, and navigating the complexity of infinite cheap software pieces.

I saw this happening back in 2015, way before LLMs came on the scene. Back then, I was an ordinary journeyman enterprise developer who had spent the previous 7 years both surviving the shit show of the 2008 recession and getting to the other side of being an "expert beginner", after staying at my second job out of college for 9 years, until 2008.

I saw that, as an enterprise dev in a second-tier tech city, no matter what I learned well - mobile, web, "full stack development", or even "cloud" - they were all commodities that anyone could learn "well enough", so I wouldn't command a premium; I was going to plateau at around $150-$160K and it was going to be hard to stand out.

I did start focusing on just what the author said, and the next year I took a chance on leaving a full-time salaried job for a contract-to-perm opportunity that would give me the chance to lead a major initiative under a then-new director of a company [1].

I didn't learn until 5 years later, at BigTech, that promotions were about "scope", "impact", and "dealing with ambiguity".

https://www.levels.fyi/blog/swe-level-framework.html

I had never had a job before with real documented leveling guidelines.

Long story short, I left there and led the architecture of a B2B startup, and then a job working (full time) in the cloud consulting department of AWS fell into my lap.

After leaving AWS in 2023, I found out how prescient I was: the regular old enterprise dev jobs I was being offered, even as a "senior" or "architect", were still topping out at around $160K-$175K and hadn't kept up with inflation. I have friends with 20 years of experience in Atlanta who are making around that much.

Luckily, I was able to quickly get a job as a staff consultant at a third-party consulting company. But I did have to spend 5 years honing my soft skills to get here. I still do some coding, but that's not my value proposition.

[1] Thanks to having a wife working part time in the school system with good benefits, I could go from full time to contract-to-permanent in 2016.