I pay $20 for OpenAI, and Codex makes me incredibly productive. With very careful prompts aimed at tiny tasks, I can review, fix and get a lot of things done.
I'd happily pay up to $2k/month for it if I were left with no choice, but I don't think it will ever get that expensive, since you can run models locally and get much the same result.
That being said, my outputs are similarish in the big picture. When I get something done, I typically don’t have the energy to keep going to get it to 2x or 3x because the cognitive load is about the same.
However, I get a lot of time freed up, which is amazing: I'm able to play golf 3-4 times a week, which would have been impossible without AI.
Productive? Yes. Time saved? Yes. Overall outputs? Similar.
The same? Not quite as good as that. But Google's Gemma 3 27B is highly similar to their last Flash model. The latest Qwen3 variants are very good; for my needs at least, they are the best open coders. But really, here's the thing:
There are so many varieties, specialized for different tasks or simply differing in performance.
Maybe we'll get to a one-size-fits-all at some point, but for now trying out a few can pay off. It also starts to build a better sense of the ecosystem as a whole.
For running them: if you have an Nvidia GPU with 8GB of VRAM, you can probably run a bunch of them, quantized. It gets a bit esoteric when you start getting into quantization varieties, but generally speaking you should find out which integer and float formats your GPU has optimized support for, then choose the largest quantized model that matches that support and still fits in VRAM. Most often that's what will perform best in both speed and quality, unless you need to run more than one model at a time.
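As a rough back-of-envelope for the "fits in VRAM" part, here's a minimal sketch; the bits-per-weight figures are approximations for common GGUF quantizations, and the model sizes are just illustrative:

    # Very rough sizing: weights only, ignoring KV cache and runtime overhead.
    # Bits-per-weight values are approximate for common GGUF quantizations.
    def approx_weight_gb(params_billion, bits_per_weight):
        return params_billion * bits_per_weight / 8

    for label, params_b, bpw in [
        ("7B  @ Q4_K_M (~4.8 bpw)", 7, 4.8),
        ("14B @ Q4_K_M", 14, 4.8),
        ("27B @ Q4_K_M", 27, 4.8),
        ("27B @ Q8_0  (~8.5 bpw)", 27, 8.5),
    ]:
        gb = approx_weight_gb(params_b, bpw)
        print(f"{label}: ~{gb:.1f} GB of weights; leave a few GB spare for context/KV cache")

So an 8GB card is comfortable around the 7B mark at 4-bit-ish quants, while ~27-30B models want heavier quantization or partial CPU offload even on a 16GB card.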
To give you a reference point on model choice, performance, GPU, etc.: one of my systems runs an Nvidia 4080 with 16GB VRAM. Using Qwen 3 Coder 30B, heavily quantized, I get about 60 tokens per second.
I get tolerable performance out of a quantized gpt-oss 20b on an old RTX 3050 I have kicking around (I want to say 20-30 tokens/s, or faster when the cache is effective). It's appreciably faster on the 4060. On the 3050 it's not quite ideal for more interactive agentic coding, but it's approaching it, and it fits nicely into "coding in the background while I fiddle with something else" territory.
Yeah, tokens per second can very much influence the work style, and therefore the mindset, a person should bring to usage. You can also build on the results of a faster but less-than-SOTA-class model in different ways. I can let a coding-tuned 7-12B model "sketch" some things at higher speed, or even a variety of things, review them in real time, and then pass them off to a slower, more capable model with "this is structurally sound, or at least the right framing; tighten it all up in the following ways…" and let that run in the background.
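A minimal sketch of that draft-then-tighten flow, assuming a local OpenAI-compatible server (llama.cpp's llama-server, Ollama, etc.) listening on localhost; the model names and prompts are placeholders:

    # Fast local model drafts, slower/stronger model tightens, via an
    # OpenAI-compatible endpoint. Model names here are illustrative only.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    def ask(model, prompt):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # 1. Cheap, fast model produces a rough sketch I can skim in real time.
    sketch = ask("small-coder-7b", "Sketch a CSV-to-JSON converter CLI. Rough is fine.")

    # 2. Slower, more capable model tightens it up in the background.
    final = ask("big-coder-30b",
                "This sketch has roughly the right framing:\n\n" + sketch
                + "\n\nTighten it up: handle errors, add argument parsing, keep the structure.")
    print(final)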
The run-at-home suggestion was in the context of $2k/mo. At that price you can make your money back on self-hosted hardware at a much more reasonable pace than at $20/mo (or even $200).
Well, there's an open-source GPT model you can run locally. I don't think running models locally is all that cheap, considering top-of-the-line GPUs used to be $300 and now you're lucky to get the best GPU for under $2,000. The better models require a lot more VRAM. Macs can run them pretty decently, but now you're spending $5,000+, when you could have just bought a rig with a 5090 and mediocre desktop RAM, since Sam Altman has ruined the RAM pricing market.
Fully aware, but who the heck wants to spend nearly 10 grand, and that's with just a 1TB drive (which needs to be able to fit your massive models, mind you). Fair warning: not ALL of the RAM is usable as unified GPU memory. On my 24GB MacBook Pro I can only use 16GB as VRAM, but it's still better than using my 3080 with only 10GB of VRAM, and I also didn't spend more than 2 grand on it.
I got some decent mileage out of aider and Gemma 27B. The one-shot output was a little worse, but I don't have to worry about paying per token or hitting plan limits, so I felt more free to let it devise a plan, run it in a loop, etc.
Not having to worry about token limits is surprisingly cognitively freeing. I don’t have to worry about having a perfect prompt.
Marx in his wildest nightmares couldn't have anticipated how badly the working class would sell itself short with the advent of AI. Friend, you should be doing more than golf…
Some stats are trickling out in my company. Code-heavy consulting projects show about 18% efficiency gains, but I have problems with that number because no one has been able to tell me how it was calculated. Story points, actual vs. estimated, is probably how it was done, but that's nonsensical because we all know how subjective estimates and even actuals are. It's probably impossible to get a real number that isn't significantly "well, I feel about x% more efficient…"
More interesting, IMO, would be a measure of maintainability. I've heard that code that's largely written by AI is rarely remembered by the engineer who submitted it, even a week after merging.
You're almost "locked in" to using more AI on top of it then. It may also make it harder to give estimates to non-technical staff on how long it'd take to make a change or implement a new feature.
I don't know how to measure maintainability, but the AI-generated code I've seen in my projects is pretty plain-vanilla standard patterns with comments. So less of a headache than a LOT of human code I've seen. Also, one thing the agents are good at, at least in my experience so far, is documenting existing code. This goes a long way in maintenance. It's not always perfect, but as the saying goes, documentation is like sex: when it's good it's great, and when it's bad it's better than nothing.
Something I occasionally do is ask it to extensively comment a section of code for me, and to tell me what it thinks the intent of the code was, which takes a lot of cognitive load off of me. It means I'm in the loop without shutting off my brain, as I do have to read the code and understand it, so I find it a sweet spot of LLM use.
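Something like this works for me (the file path and wording are just illustrative):

    "Add explanatory comments to src/billing/prorate.py: what each block does, and
    separately, what you think the original intent was and anywhere the code might
    not match that intent. Don't change any behaviour."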
By "maintainability" and "rarely remembered by the engineer", I'm assuming the bigger concern (beyond commenting and sane code) is that once everyone starts producing tons of code without looking (and reading/reviewing code is, to me at least, much harder than writing it), all of this goes unchecked:
* subtle footguns
* hallucinations
* things that were poorly or incompletely expressed in the prompt and ended up implemented incorrectly
* poor performance or security bugs
other things (probably correctable by fine-tuning the prompt and the context):
* lots of redundancy
* comments that are insulting to the intelligence (e.g., "here we instantiate a class")
* ...
Not to mention reduced human understanding of the system, where it might break, and how this implementation is likely to behave. All of this will come back to bite during maintenance.
It's never been the consensus. As far back as I can remember, the wisdom was always to comment why the code does what it does if needed, and to avoid saying what the code does.
Saying that a function called "getUserByName" fetches a user by name is redundant. Saying that a certain method is called because of a quirk in a legacy system is important.
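A tiny illustration of the distinction (the code and the legacy quirk are hypothetical):

    # Redundant: restates what the name already says.
    def get_user_by_name(name):
        """Fetch a user by name."""  # adds nothing the signature doesn't
        ...

    # Useful: explains a "why" the code itself can't express.
    def sync_user(user):
        # Hypothetical quirk: the legacy CRM lowercases usernames on import,
        # so normalise here or lookups against it will silently miss.
        user.name = user.name.lower()
        ...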
I regularly implement financial calculations. Not only do I leave comments everywhere, I tend to create a markdown file next to the function to summarise and explain the context around the calculation. Just plain English: what it's supposed to do, the high-level steps, etc.
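For instance, a sketch of what such a companion file might contain (the calculation here is made up):

    prorated_refund.md
    What it does: works out how much to refund when a customer downgrades mid-cycle.
    High-level steps:
      1. Compute the unused fraction of the current billing period.
      2. Apply it to the amount actually paid (not the list price).
      3. Round down to the cent; never refund more than was paid.
    Known edge cases: annual plans, coupons, currency rounding.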
I'd describe that as a trend, rather than a consensus.
It wasn't an entirely bad idea, because comments carry a high maintenance cost. They usually need to be rewritten when nearby code is edited, and they sometimes need to be rewritten when remote code is edited - a form of coupling which can't be checked by the compiler. It's easy to squander this high cost by writing comments which are more noise than signal.
However, there's plenty of useful information which can only be communicated using prose. "Avoid unnecessary comments" is a very good suggestion, but I think a lot of people over-corrected, distorting the message into "never write comments" or "comments are a code smell".
Yeah, that was weird. It was like a cult, and some coworkers of mine were religiously hunting down every comment in other people's MRs, just kind of assuming that "no comments" is a hard rule. Very strange; I had to fight many battles for my sanity. There are many cases where you may want to explain why this is coded the way it is, not just how.
chasd00 did mention that this was for consulting projects, where presumably there's a handover to another team after a period of time. Maintainability was never a high priority for consultants.
This is a poor metric as soon as you reach a scale where you've hired an additional engineer, where 10% annual employee turnover reflects > 1 employee, much less the scale where a layoff is possible.
It's also only a hope as soon as you have dependencies that you don't directly manage like community libraries.
Hint: make sure the people giving you the efficiency-improvement numbers don't have a vested interest in giving you good numbers. If they do, you cannot trust the numbers.
Reminds me of my last job where the team that pushed React Native into the codebase were the ones providing the metrics for "how well" React Native was going. Ain't no chance they'd ever provide bad numbers.
The title is a bit misleading. Reading the article, the argument seems to be that entry-level applicants (are expected to) have the highest AI literacy, so they want them to drive AI adoption.
At least today, I expect this will fail horribly. In my experience, the challenge today isn't AI literacy; it's the domain knowledge required to keep LLMs on the rails.
It certainly feels that way. I was there. Fortunately I had just waltzed into the tech side of things and scurried back off to my professional career for a couple of years.
I watched a lot of stuff burn. It was horrifying. We are nearly there again.
Yeah similar story here. I had to spend a couple of years painting houses before the local market recovered enough that tech jobs were a thing again. Shit was surreal. There was one guy I knew that went from building multi-million dollar server and networking projects for IBM to literally working as unskilled labor on a fencing crew just to make rent.
I just run sub-agents in parallel. Yesterday I used Codex for the first time. I spun up 350,640 agents and got 10 years of experience in 15 minutes.
Is this for their in-house development or for their consulting services?
Because the latter would still be indicative of AI hurting entry-level hiring, since it may signal that other firms are not really willing to hire a full-time entry-level employee whose job may be obsoleted by AI, and paying for a consultant from IBM may be a lower-risk alternative in case AI doesn't pan out.
And if it is for consulting, I doubt very seriously they will be based in the US. You can't be price-competitive hiring an entry-level consultant in the US, and no company is willing to pay the bill rate for US-based entry-level consultants unless their email address is @amazon.com or @google.com.
Source: current (full time) staff consultant at a third party cloud consulting firm and former consultant (full time) at Amazon.
I worked internally at AWS Professional Services, their internal consulting department. Every AWS ProServe employee is a "blue badge" employee with the same initial four-year offer structure of base + prorated signing bonus + RSUs (5/15/40/40). Google also has a large internal consulting department for GCP.
I can’t fault you for not knowing AWS ProServe exists. I didn’t know either until a recruiter reached out to me.
No that’s not what I meant at all. Amazon Professional Services are made up of full time “blue badge” employees who get the same type of base + bonus + RSUs that all other blue badge employees get.
One might ask what value seniors hold if the expertise they built as juniors is obsolete. Maybe the new junior will just be reining in the LLM that does the work, and senior-level knowledge and compensation rot away as those people retire without replacement.
People seem to think LLMs killing the CS career means companies will still pay senior salaries to shepherd agentic LLM-style development. I think it is the senior that is the dinosaur here. As we speak, CS curricula are changing to teach people to work along with AI. The next batch of juniors will be taking these jobs. There won't be seniors anymore, at least not at the salaries we've come to associate with the title. The skill is getting removed from the profession and replaced with a framework with a far lower barrier to entry.
Another one? What is it with IBM? They must really save lots of money, in a way no one else has figured out, by firing people at 50. This is like the 3rd or 4th one I've heard of from them.
They don't have to keep giving people raises, so why wait until the guy is 50? Why not when he is 30 and making $100k? It's not like they have people doing manual labor; these are office jobs. People's faculties don't decline until their late 60s at the earliest. Why don't other multinationals do this and get sued too? What makes IBM special?
No, it's that they fired their vets in high-cost areas and kept them in low-cost areas.
A large number of vets can now choose to reapply for their old job (or a similar one) at a fraction of the price, with their pension/benefits reduced, and the vets in low-cost centers now become the SMEs. In many places in the company they were not taken seriously, due both to internal politics and to quite a bit of performative "output" that either didn't do anything or had to be redone.
Nothing to do with AI - everything to do with Arvind Krishna. One of the reasons the market loves him, but the tech community doesn't necessarily take IBM seriously.
You know when someone is singing the praises of AI and they get asked, "if you're so much more productive with AI, what have you built with it?" Well, I think a bunch of companies are asking their employees this same question and realising that the productivity gains they were betting on were overhyped.
LLMs can be a very useful tool and will probably lead to measurable productivity increases in the future, but in their current state they are not capable of replacing most knowledge workers. Remember, even computers as a whole didn't measurably impact the economy for years after their adoption. The real world is a messy place and hard to predict!
Which measure? When folks say something is more "efficient": flying is more time-efficient, but you trade away other efficiencies. Efficiency, like productivity, needs a second word attached to it to properly communicate.
What's more productive? Lines of code (a weak measure)? Features shipped? Bugs fixed? Time saved by the company? Time saved for the client? Shareholder value (lame)?
I don't know the answer, but this year (2026) I'm gonna see if an LLM is better at tax prep than my CPA of 10 years. So that test is my time vs. $6k USD.
Time could be very expensive, as mistakes on taxes can be treated as fraud, resulting in prison time. Mostly they understand that people make mistakes, but those need to look like honest mistakes, and an LLM's may not. Remember, you sign your taxes as correct to the best of your knowledge. Using a CPA is admitting you outsourced understanding to an expert, something they accept; but if you sign alone, you are saying you understand it all, even if you don't.
These days productivity at a macroeconomic scale is usually cited in something like GDP per hour worked.
The most recent BLS figure, for the last quarter of '25, was an annualized rate of 5.4%.
The historic annual average is around 2%.
It's a bit early to draw a conclusion from this. Also, it's not an absolute measure; it's GDP per hour worked, so to cut through any proxy factors or intermediating signals you'd really need to know how many hours were worked, which I don't have to hand.
That said, in a general macro sense, assuming hours worked do not decrease, productivity growth and GDP growth are two of the fundamental factors required for real-world wage gains.
If you're looking for signals in either direction on AI's influence on the economy, these are numbers to watch, among others. The Federal Reserve Chair's report after each meeting is (IMO) one of the most convenient places to get very fresh hard numbers combined with cogent analysis, and usually some Q&A from the business press asking at least some of the questions I'd want to ask.
If you follow these fairly accessible speeches after meetings, you'll occasionally see how many of the things in them end up being thematic in the stories that pop up here weeks or months later.
Economy-wide productivity can be measured reasonably well, although there are a few different measures [1]. The big question I guess is whether AI will make a measurable impact there. Historically tech has had less impact than people thought it would, as noted in Robert Solow's classic quip that "You can see the computer age everywhere but in the productivity statistics". [2]
Number of features shipped. Traction metrics. Revenue per product. Ultimately business metrics. For example, tax prep effectiveness would be a proper experiment tied to specific metrics.
I hear this every day, and I'm sure it's true sometimes, but where is the tsunami of amazing software LLM users are producing?
Where are the games that make the old games look like things from a bygone era?
Where are the updates to the software that I currently use that greatly increase its capabilities? I have seen none of this.
I get that it takes a long time to make software, but people were making big promises a year ago and I think it's time to start expecting some results.
Reddit and GitHub are littered with people launching new projects that appear to be way more feature-rich than new tool/app launches from previous years. I think it is a lot harder to get noticed with a new tool/app now because of this increase in the volume of launches.
Also weekend hackathon events have completely/drastically changed as an experience in the last 2-3 years (expectations and also feature-set/polish of working code by the end of the weekend).
I'd be interested in where you're getting your data. SteamDB shows an accelerating trend of game releases over time, though comparing January 2026 to January 2025 directly shows a marginal gain [0].
This chart from a16z (scroll down to “App Store, Engage”) plots monthly iOS App Store releases each month and shows significant growth [1].
> After basically zero growth for the past three years, new app releases surged 60% yoy in December (and 24% on a trailing twelve month basis).
It's completely anecdotal evidence, but my own personal experience shows various subreddits just flooded with AI-assisted projects now, so much so that various pages have started to implement bans or limits on AI-related posts (r/selfhosted just did this).
As far as _amazing software_ goes, that’s all a bit subjective. But there is definitely an increase happening.
I got the numbers swapped. Turns out there was an increase of about 40 games between last January and this. Which is exactly what you wouldn’t expect if the 5-10x claims are true.
Also the accelerating trend dates back to 2018 if you remove the early COVID dip. Which is exactly my point. You can look at the graph and there is no noticeable impact correlated to any major AI advancements.
The iOS data is interesting. But it's an outlier, because the Play Store and Steam show nothing similar. And the iOS App Store is weird because it has had numerous periods of negative growth followed by huge positive growth over the years. My guess is that it probably has more to do with all of the VC money flowing into AI startups and all the small teams following the hype, building wrappers and post-training existing models. If you look at a random sample of the new iOS apps, that looks likely.
Seriously, go to the App Store, search "AI", and scroll until you get bored. There are literally thousands of AI API wrappers.
Speaking specifically about custom CUDA kernels: I've implemented kernels with AI that significantly sped up the code in a project I worked on. I didn't know how to write these kernels at all, but I implemented and tested a couple of variations and got it running fast in just two days. Basically impossible for me before AI coding (well, not impossible, but it would have taken me many weeks, so I wouldn't have tried it).
The one thing AI is consistently better at than humans is shipping quickly. It will give you as much slop as you want right away, and if you push on it for a short period of time it will compile, and if you run it, a program will appear that has a button for each of the requested features.
Then you start asking questions like, does the button for each of the features actually do the thing? Are there any race conditions? Are there inputs that cause it to segfault or deadlock? Are the libraries it uses being maintained by anyone or are they full of security vulnerabilities? Is the code itself full of security vulnerabilities? What happens if you have more than 100 users at once? If the user sets some preferences, does it actually save them somewhere, and then load them back properly on the next run? If the preferences are sensitive, where is it saving them and who has access to it?
It's way easier to get code that runs than code that works.
Or to put it another way, AI is pretty good at writing the first 90% of the code:
"The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time." — Tom Cargill, Bell Labs
Look somewhere outside of the AI hype space. You're seeing more AI competitors because it's easy to build on top of someone's existing model or API and everyone is trying to cash in. You saw the same thing with new cryptocurrencies.
That's an incredibly niche area. From their website it looks like there are 4k modules available. Is there a way to see historical data? Also, is the number of users available, so that you can rule out popularity growth?
I bet you the predictions are largely correct but technology doesn't care about funding timelines and egos. It will come in its own time.
It's like trying to make fusion happen only by spending more money. It helps, but it doesn't fundamentally change the pace of true innovation.
I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.
Well, the predictions are tied to the timelines. If someone predicts that AI will take over writing code sometime in the future I think a lot of people would agree. The pushback comes from suggesting it's current LLMs and that the timeline is months and not decades.
> I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.
It comes from the company best equipped with capital and infra.
If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.
This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.
I think for a lot of folks it basically comes down to just using AI to make the tasks they have to do easier and to free up time for themselves.
I'd argue the majority use AI this way. The minority of "10x" workers who are using it to churn through more tasks are the motivated ones driving real added business value, but let's be honest: in a soulless enterprise 9-5, these folks are few and far between.
Because very few know how to use AI. I teach AI courses on the side. I've done auditing of supervised fine-tuning and RLHF projects for a major provider. From seeing real prompts, many specifically from people who work with agents every day, people do not yet have the faintest clue how to productively prompt AI. A lot of people prompt them in ways that are barely coherent.
Even if models stopped improving today, it'd take years before we see the full effects of people slowly gaining the skills needed to leverage them.
You'd be surprised how low the bar is. What I'm seeing is down to the level of people not writing complete sentences.
There doesn't need to be any "magic" there. Just clearly state your requirements. And start by asking the model to plan out the changes and write a markdown file with a plan first (I prefer this over e.g. Claude Code's plan mode, because I like to keep that artefact), including planning out tests.
If a colleague of yours who isn't intimately familiar with the project could understand the plan without needing to ask follow-up questions (while being able to spend time digging through the code), you've done pretty well.
You can go overboard with agents to assist in reviewing the code, running tests, etc. as well, but that's the second 90%. The first 90% is just to write a coherent request for a plan, read the plan, ask for revisions until it makes sense, and tell it to implement it.
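A minimal example of that kind of request (the project details are invented for illustration):

    "Read src/export/ and write PLAN.md describing how you'd add CSV export
    alongside the existing JSON export: files to touch, new functions and their
    rough signatures, tests you'd add, and any open questions. Don't write code
    yet; I'll review the plan first."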
Not surprising. Many folks struggle with writing (hence why ChatGPT is so popular for writing stuff), so people struggling to coherently express what they want, and how, makes sense.
But the big models have come a long way in this regard. Claude + Opus especially. You can build something with a super small prompt and keep hammering it with fix prompts until you get what you want. It's not efficient, but it's doable, and it's much better than half a year ago, when you had to write a full spec.
This is exactly it. A lot of people use it that way. And it's still a vast improvement, but they could also generally do a lot better with some training. I think this is one of the areas where you'll unfortunately see a big gap developing between developers who do this well, and have the models work undisturbed for longer and longer while they do other stuff, and those who end up needing a lot more rework than necessary.
> Claude + Opus especially. You can build something with a super small prompt and keep hammering it with fix prompts until you get what you want.
LOL: especially with Claude this was only in 1 out of 10 cases?
Claude output is usually (near) production-ready on the first prompt if you precisely describe where you are, what you want, how to get it, and what the result should be.
You're right, they should know better, but I think a lot of them have gotten away with it because most of them are not expected to produce written material setting out missing assumptions etc. and breaking down the task into more detail before proceeding to work, so a lot have never gotten the practice.
Once people have had the experience of being a lead and having to pass tasks to other developers a few times, most seem to develop this skill at least to a basic level, but even then it's often informal and they don't get enough practice documenting the details in one go, say by improving a ticket.
Because AI doesn't work like this: "make me money" or "make Stardew Valley in space". The hard part is the painful exploration and the taste necessary to produce something useful. The number of people who can do that did not increase with AI.
E.g., AI is a big multiplier, but that doesn't mean it will translate to "more" in the way people think.
It doesn't need to be useful or a good game to launch on Steam. Surely if it were a "big multiplier" of 5-10x, it would be noticeably impacting Steam launches.
Now if it’s something closer to 20%, we’re seeing exactly what you’d expect.
It comes back down to that whole discussion around intelligence becoming cheaper and more accessible while motivation and agency remain stable.
I've worked with a few folks who have been given AI tools (like a designer who has never coded in his life, or a video/content creator) who have absolutely taken off with creating web apps and various little tools and process improvements for themselves, just by vibecoding what they wanted. The key with both of these individuals is high agency, curiosity, and motivation. That was innate; the AI tooling just gave them the external means to realise what they wanted to do with more ease.
These kinds of folks are not the majority, and we’re still early into this technological revolution imo (models are improving on a regular basis).
In summary, we've given the masses access to "intelligence", but creativity and motivation stay the same.
My guess is that the true impact of this will be difficult to measure for a while. Most "single-person start-ups" will probably not be high-visibility VC-backed, YC affairs, and rather solopreneurs with a handful of niche moonlighted apps each making 3-4 digit monthly revenue.
95% of all new startups have the word AI in the description, so of course there are lots of new API wrappers and people trying to build off of existing models.
There aren’t noticeably more total startups or projects though.
Sorry, I swapped the numbers. It's actually 1,447 this year vs. 1,413 last year, so 34 more games this year. So essentially no growth, despite there being a clearly accelerating growth trend since 2018.
No. They're firing high paid seniors and replacing them with low pay juniors. This is IBM we're talking about.
The "limits of AI" bit is just smokescreen.
Firing seniors:
> Just a week after his comments, however, IBM announced it would cut thousands of workers by the end of the year as it shifts focus to high-growth software and AI areas. A company spokesperson told Fortune at the time that the round of layoffs would impact a relatively low single-digit percentage of the company’s global workforce, and when combined with new hiring, would leave IBM’s U.S. headcount roughly flat.
New workers will use AI:
> While she admitted that many of the responsibilities that previously defined entry-level jobs can now be automated, IBM has since rewritten its roles across sectors to account for AI fluency. For example, software engineers will spend less time on routine coding—and more on interacting with customers, and HR staffers will work more on intervening with chatbots, rather than having to answer every question.
Where does it say those cuts were senior software developers?
Obviously they want new workers to use AI but I don't really see anything to suggest they're so successful with AI that they're firing all their seniors and hiring juniors to be meatbags for LLMs.
This just doesn't make any sense. Juniors + AI just does not equal seniors, except for prototyping greenfield projects. Who knows about 2 months from now, it moves fast and stuff, but not right now.
I suspect the gap is that you don't know enough about IBM's business model.
When something doesn't make sense, a very common cause is a lack of context: many things can be extremely sensible for a business to do; things which appear insane from an outsider's point of view.
No one has built business AI that is flat correct to the standards of a high redundancy human organization.
Individuals make mistakes in air traffic control towers, but as a cumulative outcome it's a scandal if airplanes collide midair. Even in contested airspace.
The current infrastructure never gets there. There is no improvement path from MCP to air traffic control.
"AI will steal your job" never made sense. If your company is doing badly, sure, maybe you fire people after automating their jobs. But we're in a growth-oriented economic system. If the company is doing well, and AI increases productivity, you actually will hire more people, because every person is that much more of a return on investment.
As a senior engineer, sometimes the system shows I did nothing because I was helping others. Sometimes I get the really hard problem, though typo-level bugs ("the" spelled as "teh") are more common than thread race conditions, and a lot faster to solve.
Historically in a lot of niches such as search marketing etc, people would not name their successful projects because the barrier to entry is low.
If someone can use AI to make a $50,000/year project in three months, then someone else can also do so.
Obviously some people hype and lie. But also, obviously, some people DID succeed at SEO/affiliate marketing/dropshipping etc. AI resembles those areas in that the entry barrier is low.
To get actual reports you often need to look to open source. Simon Willison details how he used it extensively and he has real projects. And here Mitchell Hashimoto, creator of Ghostty, details how he uses it: https://mitchellh.com/writing/my-ai-adoption-journey
Update: OP posted their own project however. Looks nice!
This is definitely the case. I have a project that while not wildly profitable yet, is producing real revenue, but that I will not give details of because the moat is so small. The main moat is that I know the potential is real, and hopefully not enough other people do, yet. I know it will disappear quickly, so I'm trying to make what I can of it while it's there. I may talk about it once the opportunity is gone.
It involves a whole raft of complex agents + code they've written, but that code and the agents were written by AI over a very short span of time. And as much as I'd like to stroke my own ego and assume it's one of a kind, realistically if I can do it, someone else can too.
I only started charging customers in September. Super-linear growth. I launched annual subscriptions and within less than a week > 15% of customers switched.
I'm with you. I own a business and have created multiple tools for myself that collectively save me hours every month. What were boring, tedious tasks now just get done. I understand that the large-scale economic data are much less clear about productivity benefits; in my individual case they could not be more apparent.
I run an eComm business and have built multiple software tools that each save the business $1000+ per month, in measurable wage savings/reductions in misfires.
What used to take a month or so can now be spat out in less than a week, and the tools are absolutely fit for purpose.
It's arguably more than that, since I used to have to spread that month of work over 3-6 months (working part time while also doing daily tasks at the warehouse), but now can just take a week WFH and come back with a notable productivity gain.
I will say, to give credit to the anti-AI-hype crowd, that I make sure to roll the critical parts of the software by hand (things like the actual calculations that tell us what price to list an item at, for example). I did try to vibecode too much once and it backfired.
But things like UIs, task managers for web apps, simple API calls to print a courier label, all done with vibes.
The only thing the comments told me is that people lack the judgement and taste to do it themselves. It's not hard: identify a problem that's niche enough and that you can solve.
Every hype AI post is like this. “I’m making $$$ with these tools and you’re ngmi”
I completely understand the joys of a few good months, but this is the same as the people working two FAANG jobs at the start of Covid. Illusory and not sustainable.
I built and debugged an embedded stub loader for the RP2350 to program MRAM and validate hardware status for a satellite. About 2.5 hours of my time, a lot of it while supervising students/doing other things.
This would have been an unpleasant task of a couple of days or more before. I had been putting it off because scouring datasheets, register maps, and startup behavior is not fun.
It didn’t know how to troubleshoot the startup successfully itself, though. I had to advise it on a debugging strategy with sentinel values to bisect. But then once explained it fixed the defects and succeeded.
LLMs struggle in large codebases and the benefit is much smaller now. But that capability is growing fast, and not everything software developers do is large.
I'm not doubting you or anything, but you just proved the point above by saying you have a successful project without even mentioning which project it is.
Thanks! I used to own a Tesla and there were similar platforms out there. Bought a Rivian and wanted something like that. I started building this before AI-assisted coding was very popular. But it greatly increased my productivity.
There is that quote "there are cathedrals everywhere for those with the eyes to see". I feel like there is a solid variation with solid business opportunities instead of cathedrals haha.
I've found AI to be a big productivity boost for myself, but I don't really use it to generate much actual code. Maybe it could do more for me, idk, but I also don't feel like I'm being left behind. I actually enjoy writing code, but hate most other programming tasks so it's been nice to just focus on what I like. Feels good to have it generate a UI skeleton for me so I can just fill out the styles and stuff. Or figure out stupid build config and errors. Etc etc.
Anyways, congrats on the product. I know a lot of people are negative about productivity claims, and I'm certainly skeptical of a lot of them too, but if you asked most programmers 5 years ago whether a super-autocomplete which could generate working code snippets and debug issues in a project would boost productivity, everyone would say yes lol. People are annoyed that it's overhyped, but there should still be room for reasonable hype imo.
First of all, thank you. I've always been told I have a knack for seeing opportunities others don't.
For me, I always had the ideas and even as a competent engineer, the speed of development annoyed me.
I think folks get annoyed when their reality doesn't match other people's claims. But I have friends who aren't engineers who have launched successful SaaS products. I don't know if it's jealousy or what but people are quite passionate about how it doesn't have productivity gains.
Hell, I remember Intellisense in Visual Studio being a big boon for me. Now I can run tasks asynchronously; even if it's not faster, it frees up my time.
Fair. I've had super-linear growth since launching in September. Zero marketing outside of a referral program. People genuinely love what I'm building. I get multiple emails per week about how people appreciate the software, and I send out weekly emails about everything I've launched.
Perhaps I'm being cynical, but could they be leaving out some detail? Perhaps they're replacing even more older workers with entry-level workers than before? Maybe AI makes the entry-level workers just as good, and much cheaper.
> In the HR department, entry-level staffers now spend time intervening when HR chatbots fall short, correcting output and talking to managers as needed, rather than fielding every question themselves.
The job is essentially changing from "You have to know what to say, and say it" to "make sure the AI says what you know to be right"
I always thought the usual "they only hire seniors now" was a questionable take. If anything, all you need is a semi-warm-blooded human to hit retry until the agents get something functional. It's more likely tech will transform into an industry of lowly paid juniors, IMHO, if it hasn't already started. Senior-level skill is more replaceable, not just because it's cheaper to hire juniors augmented with mostly AI, but because juniors are more adaptable to the new dystopia, since they never experienced anything else. They are less likely to get hung up on some code not being "best practice" or "efficient" or even "correct". They will just want to get the app working regardless of what goes into the sausage, etc.
Exactly, that's why counting job postings is a terrible proxy for gauging market conditions. Companies may hire anywhere from 0 to 100s of people through the same JD.
The article said they called for tripling junior hires but cut 1,000 jobs a month later, "so the number of jobs stays roughly the same".
Certainly they didn't mean 1,000 junior positions were cut. So what they really want to say is that they cut senior positions as a way of saving cost/making profit in the age of AI? Totally contrary to what other companies believe? Sounds quite insane to me!
IBM is one of those companies that measures success by complexity. Meaning if it's complicated, they make money with consultants. If it's simple, they bundle it with other complex solutions that require consulting.
I had the chance to try an IBM internal AI. It was a normal chat interface where one could select models up to Sonnet 4.5. I have not seen anything agentic. So there is that.
Not because it's wrong, but because it risks initiating the collapse of the AI bubble and the whole "AI is gonna replace all skilled work, any day now, just give us another billion".
To a non-technical individual IBM is still seen as a reputable brand (their consulting business would've been bankrupt long ago otherwise) and they will absolutely pay attention.
Agreed. They could have owned the home computer market but were outmaneuvered by a couple of young programmers. They are hardly the company you want to look to for guidance on the future.
Doubt it. Unless we go through another decade of ZIRP tied to a newly invented hyped technology that lacks specialists, and discovering new untapped markets, there's not gonna be any massive demand spike of junior labor in tech that can't be met causing wages to shoot up.
The "learn to code" saga has run its course. Coder is the new factory worker job where I live, a commodity.
The title could be dead wrong; the tripling of junior jobs might not be due to the limits of AI, but because AI is increasing the productivity of juniors to that of a mid or senior (or at least 2-3x-ing their output), thus making hiring juniors an appealing way to increase the company's output relative to competitors who aren't hiring in response to AI improvements. Hope this is the case, and hope it happens broadly across the economy. While the gutter press fearmongers about job losses, if AI makes the average employee much more useful (even if it's via newly created roles), it's conceivable there's a jobs/salaries boom, including among those who "lose their job" and move into a new one!
And those people probably aren’t developers by trade, just power users who superficially understand the moving parts but who cannot write code themselves.
Technology's entire job is to make it less work to accomplish something, and therefore easier and cheaper. In some cases that will make it possible to do things you couldn't do before, but in many cases it'll just cause the value of said labor to fall. The problem isn't change, but the rate of change, and the fact that it's affecting our own field rather than someone else's.
They hire juniors, give them Claude Code and some specs, and save a mid/senior dev's salary. I believe coding is over for SWEs by the end of 2027, but it will take time to diffuse through the economy, hence the need for some cheap labour for a few years. Given the H-1B ban, this is one way to get it without offshoring.
IBM has practiced ageism for decades with the same playbook. AI is just the latest excuse. Fire a wide enough swath so it isn’t all old employees and then only hire entry level positions. Often within the same year. Repeat.
An AI model has no drive or desire, or embodiment for that matter. Simply put, they don't exist in the real world and don't have the requirements or urgency to do anything unless prompted by a human, because, you know, survival under capitalism. Until they have to survive and compete like the rest of us and face the same pressures, they are going to be forever relegated to mere tools.
> The "AI will replace all junior devs" narrative never accounted for the fact that you still need humans who understand the business domain, can ask the right questions, and can catch when the AI is confidently wrong.
You work with junior devs that have those abilities? Because I certainly don't.
Tbh, getting good results from AI requires senior-level intuition. You can be rusty as hell and not even middling in the language being used, but you have to understand data structures and architecture more than ever to get non-shit results. If you just vibe it, you'll eventually end up with a mountain of crap that sort of works, and since you're not doing the coding, you can't really figure it out as you go along. Sometimes it can work to naively make a thing and then have it rewritten from scratch properly, though, so that might be the path.
100% accurate. The architect matters so much more than people think. The most common counter argument to this I've seen on reddit are the vibe coders (particularly inside v0 and lovable subreddits) claiming they built an app that makes $x0,000 over a weekend, so who needs (senior) software engineers and the like?
A few weeks later, there's almost always a listing for a technical co-founder or a CTO with experience on their careers page or LinkedIn :)))
But the argument is not about market validation, the argument is about software quality. Vibe coders love shitting on experienced software folks until their code starts falling apart the moment there is any real world usage.
And about the pulling in devs - you can actually go to indeed.com and filter out listings for co-founders and CTOs. Usually equity only, or barely any pay. Since they're used to getting code for free. No real CTO/Senior dev will touch anything like that.
For every vibe coded product, there's a 100 clones more. It's just a red ocean.
Like, I'm sure it's just laundering gcc's source at some level, but if Claude can handle making a compiler, either we have to reframe a compiler as "not serious", or, well, come up with a different definition for what entails "serious" code.
Vibe coding doesn't work for the embedded system code that I am working on, which includes layered state machines, hardware drivers, and wire-level protocol stacks. But supervised AI code generation definitely does work.
You need a highly refined sense of “smell” and intuition about architecture and data design, but if you give good specifications and clear design goals and architectural guidance, it’s like managing a small team but 12x faster iteration.
I sometimes am surprised with feature scope or minor execution details but usually whenever I drill down I’m seeing what I expected to see, even more so than with humans.
If I didn’t have the 4 decades of engineering and management experience I wouldn’t be able to get anything near the quality or productivity.
It’s an ideal tool for seasoned devs with experience shipping with a team. I can do the work of a team of 5 in this type of highly technical greenfield engineering, and I’m shipping better code with stellar documentation… and it’s also a lot less stressful because of the lack of interpersonal dynamics.
But… there’s no way I would give this to a person without technical management experience and expect the same results, because the specification and architectural work is critical, and the ability to see the code you know someone else is writing and understand the mistakes they will probably make if you don’t warn them away from it is the most important skillset here.
In a lot of ways I do fear that we could be pulling up the ladder, but if we completely rethink what it means to be a developer we could teach with an emphasis on architecture, data structures, and code/architecture intuition we might be able to prepare people to step into the role.
Otherwise we will end up with a lot of garbage code that mostly works most of the time and breaks in diabolically sinister ways.
The ones I've thought of, the ones you've thought of, and the ones Ancalagon has in their mind are three partially disjoint sets, but there's probably some intersection, which we can then use as a point of discussion. Given that "serious code" isn't a rigorously defined industry term, maybe you could be less rude?
Just to be clear: from my standpoint it's the worst period ever to be a junior in tech. You are not "fucked" if you are a junior, but hard times are ahead of you.
This case has always been made for juniors, but it's almost always the opposite that's true. There's always some fad that the industry is over-indexing on. Senior developers tend to be less susceptible to falling for it, but non-technical staff and junior developers are not.
Whether it's a hot new language, LLMs, or some new framework, juniors like to dive right in, because the promise of getting a competitive edge against people much more experienced than you is too tantalizing. You really want it to be true.
Some things take very little time and effort to manifest into the world today that used to take a great deal. So one of the big changes is around whether some things are worth doing at all.
Note: I'm not taking any particular side of the "Juniors are F**d" vs "no they're not" argument.
IMO, with the latest generation (GPT Codex 5.3 and Claude 4.6) most devs could probably be replaced by AI. They can do stuff that I've seen senior devs fail at. When I have a question about a co-worker's project, I no longer ask them; instead I immediately let Copilot have a look at the repo, and it will be faster and more accurate at identifying the root cause of issues than the humans who actually worked on the project. I've yet to find a scenario where they fail. I'm sure there are still edge cases, but I'm starting to doubt humans will matter in them for long. At this point we really just need better harnesses for these models, but in terms of capabilities they may as well take over now.
> most devs could probably be replaced by AI. They can do stuff that I've seen senior devs fail at.
When I read these takes I wonder what kind of companies some of you have been working for. I say this as someone who has been using Opus 4.6 and GPT-Codex-5.3 daily.
I think the "senior developer" title inflation created a bubble of developers who coasted on playing the ticket-productivity game, where even small tasks could be turned into points and sprints and charts and graphs, such that busywork looked like a lot of work being done.
Why is that bad? You write better code when you actually understand the business domain and the requirement. It's much easier to understand it when you get it direct from the source than filtered down through dozens of product managers and JIRA tickets.
Having had to support many of these systems (for sales, automation, or video production pipelines): as soon as you dig under the covers you realize they are a hot mess of amateur code that _barely_ functions as long as you don't breathe on it too hard.
Software engineering is in an entirely nascent stage. That the industry could even put forward ideas like "move fast and break things" is extreme evidence of this. We know how to handle this challenge of deep technical knowledge interfacing with domain specific knowledge in almost every other industry. Coders were once cowboys, now we're in the Upton Sinclair version of the industry, and soon we'll enter into regular honest professional engineering like every other new technology ultimately has.
Not sure why this is being downvoted. It’s spot on imo. Engineers who don’t want to understand the domain and the customers won’t be as effective in an engineering organization as those who do.
It always baffles me when someone wants to only think about the code as if it exists in a vacuum. (Although for junior engineers it’s a bit more acceptable than for senior engineers).
We're assuming we all somehow have perfect customers with technical knowledge who know exactly what they want and can express it as such, while gracefully accepting pushback over constraints brought up.
Anyone who's worked in a "bikeshed-sensitive" stack of programming knows how quickly things go off the rails when such customers get direct access to an engineer. Think being a full-stack dev but constantly getting requests about button colors while you're trying to get the database set up.
Okay. I'm glad you're privileged enough to where you can choose your customers. Customers that aren't abusive or otherwise out of their league thinking they know everything just because they have money.
Calling me "privileged" or "lucky" feels like a cheap attack on my competence.
I am certain that I went through the same problems you did in the past; maybe I just have a different way of dealing with them, or maybe I had even worse problems than you did but a different frame of comparison. We never stopped to compare notes.
All I'm saying is: for me dealing with business owners, end-users, CEOs and CTOs was always way easier than dealing with proxies. That's all.
>I am certain that I went through the same problems you did in the past,
And I'm certain you haven't, if you really never wanted a layer of separation from certain clients over behavioral issues that got in the way of the actual work. And I'm still male, so I'm sure I still have it better than certain other experiences I've only heard of third-hand in my industry.
I don't see it as a cheap attack. Any teacher would love to be in a classroom exclusively made up of motivated honors students so they can focus on teaching and nurturing. Instead, most teachers tend to become parental proxies without the authority to actually discipline children. So they see a chair fly and at best they need to hope a principal handles it. But sometimes the kid is back in class the next day.
>you are making assumptions about my experience. Can you please stop?
You're two days into responding to a comment that amounted to "X depends on your experience". Is there something else you wish to get out of this thread?
Your complaint is an opinion. I disagree with that opinion. Unless you wish to ask about my experiences or go into yours, what's there to discuss here? Without that, I feel I said all I could on the topic.
>please stop dismissing my experience as lucky or privileged
I'm not gonna harp on it. I'll go to bed and wake up completely forgetting about this thread unless I get another notification.
But you're basically telling me to shut off my feelings. Hard to do. I don't know your experiences, so my feelings can be wrong.
I'm unsure why you are putting so much stock into an uninformed feeling on the internet. It doesn't seem like we want to expand on our stories so there's not much more to go on. And that's fine.
I don't try to assert everything about you, but I'm just explaining the vibes I got. But that's all my words are: vibes.
>Just that there might be worse problems elsewhere
I love my industry, and in my personal experience I can count on one hand how many truly problematic colleagues I've worked with or under. I am lucky in that regard for my industry.
Meanwhile, clients and consumers constantly make me question if I want to continue this career long term. My plan was always to focus more on a B2B angle to insulate from that, but the current winds blowing suggest that angle might not even exist in a decade. So I want to at least have a side hustle ready.
And despite those notions, I'm still on the lucky end in terms of what third and even secondhand accounts I've heard of. Diving more into that pool is unsettling for me, but it might still be more stable than what's going on right now.
+1, customers want their problem solved but at times they struggle to articulate that.
When a customer starts saying “we need to build X”, first ask what the actual problem is etc. It takes actual effort, and you need to speak their language (understand the domain).
But if you have a PM in the middle, now you just start playing telephone and I don’t believe that’s great for anyone involved.
Exactly. The game of telephone is prone to misinterpretation and, when this happens too much, it often answers with rigidity and lack of flexibility, out of fear.
Isn't it a bit of both? When it comes to noticing whether or not code will be a security nightmare, a performance nightmare, an architectural nightmare, etc, haven't experienced developers already learned to watch out for these issues?
Too right. Drilling into the domain from first principles and with critical faculties enabled unlocks so much more value, because the engineer can then see much better ways to solve problems.
Customer interaction has imo always been one of the most important parts in good engineering organizations. Delegating that to Product Managers adds unnecessary friction.
Having spent more hours than I care to count struggling to control my facial expressions in client-facing meetings your assertion that that friction is unnecessary is highly questionable. Having a "face man" who's sufficiently tech literate to ask decent questions manage the soft side of client relations frees up a ton of engineering resources that would otherwise be squandered replying to routine emails.
I pay $20 for OpenAI and codex makes me incredibly productive. With very careful prompts aimed at tiny tasks, I can review, fix and get a lot of things done.
I’ll happily pay up to $2k/month for it if I was left with no choice, but I don’t think it will ever get that expensive since you can run models locally and it could have the same result.
That being said, my outputs are similarish in the big picture. When I get something done, I typically don’t have the energy to keep going to get it to 2x or 3x because the cognitive load is about the same.
However I get a lot of time freed up which is amazing because I’m able to play golf 3-4 times a week which would have been impossible without AI.
Productive? Yes. Time saved? Yes. Overall outputs? Similar.
I would like to know what models people are running locally that get the same results as a $20/month ChatGPT plan
Same? Not quite as good as that. But google’s Gemma 3 27B is highly similar to their last Flash model. The latest Qwen3 variants are very good, to my need at least they are the best open coders, but really— here’s the thing:
There’s so many varieties, specialized to different tasks or simply different in performance.
Maybe we’ll get to a one-size fits all at some point, but for now trying out a few can pay off. It also starts to build a better sense of the ecosystem as a whole.
For running them: if you have an Nvidia GPU w/ 8GB of vram you’re probably able to run a bunch— quantized. It gets a bit esoteric when you start getting into quantization varieties but generally speaking you should find out the sort of integer & float math your gpu has optimized support for and then choose the largest quantized model that corresponds to support and still fits in vram. Most often that’s what will perform the best in both speed and quality, unless you need to run more than 1 model at a time.
To give you a reference point on model choice, performance, gpu, etc: one of my systems runs with an nvidia 4080 w/ 16GB VRAM. Using Qwen 3 Coder 30B, heavily quantized, I can get about 60 tokens per second.
I get tolerable performance out of a quantized gpt-oss 20b on an old RTX3050 I have kicking around (I want to say 20-30 tokens/s, or faster when cache is effective). It's appreciably faster on the 4060. It's not quite ideal for more interactive agentic coding on the 3050, but approaching it, and fitting nicely as a "coding in the background while I fiddle on something else" territory.
Yeah, tokens per second can very much influence the work style and therefore mindset a person should bring to usage. You can also build on the results of a faster but less than SOTA class model in different ways. I can let a coding tuned 7-12b model “sketch” some things at higher speed, or even a variety of things, and I can review real time, and pass off to a slower more capable model to say “this is structural sound, or at least the right framing, tighten it all up in the following ways…” and run in the background.
Just in case anyone hasn't seen this yet:
https://github.com/ggml-org/llama.cpp/discussions/15396 a guide for running gpt-oss on llama-server, with settings for various amounts of GPU memory, from 8GB on up
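For a concrete starting point, the invocation ends up looking roughly like this (the repo name and flag values here are assumptions on my part; use the guide's per-GPU settings):

    # minimal sketch: pull the model from Hugging Face and serve it with layers offloaded to the GPU
    llama-server -hf ggml-org/gpt-oss-20b-GGUF -c 8192 -ngl 99 --port 8080
    # any OpenAI-compatible client can then talk to http://localhost:8080/v1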
The run at home was in the context of $2k/mo. At that price you can get your money back on self-hosted hardware at a much more reasonable pace compared to 20/mo (or even 200).
Well theres an open source GPT model you can run locally. I dont think running models locally is all that cheap considering top of the line GPUs used to be $300 now you are lucky if you get the best GPU for under $2000. The better models require a lot more VRAM. Macs can run them pretty decently but now you are spending $5000 plus you could have just bought a rig with a 5090 with mediocre desktop ram because Sam Altman has ruined the RAM pricing market.
Mac can run larger models due to the unified memory architecture. Try building a 512GB nvidia VRAM machine. You basically can’t.
Fully aware, but who the heck wants to spend nearly 10 grand, and that's with just a 1TB hard drive (which needs to be able to fit your massive models mind you). Fair warning not ALL the RAM is fully unified. On my 24GB RAM Macbook Pro I can only use 16GB of VRAM, but its still better than me using my 3080 with only 10 GB of RAM, but I also didn't spend more than 2 grand on it.
I got some decent mileage out of aider and Gemma 27B. The one shot output was a little less good, but I don’t have to worry about paying per token or hitting plan limits so I felt more free to let it devise a plan, run it in a loop, etc.
Not having to worry about token limits is surprisingly cognitively freeing. I don’t have to worry about having a perfect prompt.
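If anyone wants to try the same kind of setup, it looks roughly like this (the model tag and litellm-style prefix are my assumptions; check aider's docs for whatever is serving your local model):

    # minimal sketch, assuming Gemma 27B is already pulled into a local Ollama instance
    export OLLAMA_API_BASE=http://127.0.0.1:11434
    aider --model ollama_chat/gemma3:27b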
And what hardware they needed to run the model, because that's the real pinch in local inference.
There are no models that you can run locally that'll match a frontier LLM
Marx in his wildest nightmare couldn't have anticipated how much the working class would sell itself short with the advent of AI. Friend, you should be doing more than golf…
Bro, nobody wants to hear about the hustle anymore. We're in the second half of this decade now.
> nobody wants to hear about the hustle anymore
Plenty of people are still ambitious and being successful.
Some stats are trickling out in my company. Code heavy consulting projects show about 18% efficiency gains but I have problems with that number because no one has been able to tell me how it was calculated. Story points actual vs estimated is probably how it was done but that’s nonsensical because we all know how subjective estimates and even actuals are. It’s probably impossible to get a real number that doesn’t have significant “well I feel about x% more efficient…”
More interesting imo would be a measure of maintainability. I've heard that code that's largely written by AI is rarely remembered by the engineer that submitted it, even a week after merging
You're almost "locked in" to using more AI on top of it then. It may also make it harder to give estimates to non-technical staff on how long it'd take to make a change or implement a new feature
I don't know how to measure maintainability but the AI generated code I've seen in my projects is pretty plain vanilla standard patterns with comments. So less of a headache than a LOT of human code I've seen. Also, one thing the agents are good at, at least in my experience so far, is documenting existing code. This goes a long way in maintenance. It's not always perfect, but as the saying goes, documentation is like sex: when it's good it's great, and when it's bad it's better than nothing.
Something I occasionally do is ask it to extensively comment a section of code for me, and to tell me what it thinks the intent of the code was, which takes a lot of cognitive load off of me. It means I'm in the loop without shutting off my brain, as I do have to read the code and understand it, so I find it a sweet spot of LLM use.
By "maintainability" and "rarely remembered by the engineer" I'm assuming the bigger concern (beyond commenting and sane code) is once everyone starts producing tons of code without looking - and reading (reviewing) code is, to me at least, much harder than writing - then all of this goes unchecked:
* subtle footguns
* hallucinations
* things that were poorly or incompletely expressed in the prompt and ended up implemented incorrectly
* poor performance or security bugs
other things (probably correctable by fine-tuning the prompt and the context):
* lots of redundancy
* comments that are insulting to the intelligence (e.g., "here we instantiate a class")
* ...
not to mention reduced human understanding of the system and where it might break or how this implementation is likely to behave. All of this will come back to bite during maintenance.
I find it funny that we, collectively, are now okay with comments in the code.
I remember the general consensus on this _not even two years ago_ being that the code should speak for itself and that comments harm more than help.
This matters less when agentic tools are doing the maintenance, I suppose, but the backslide in this practice is interesting.
It's never been the consensus. As far back as I can remember, the wisdom was always to comment why the code does what it does if needed, and to avoid saying what the code does.
Saying that function "getUserByName" fetches a user by name is redundant. Saying that a certain method is called because of a quirk in a legacy system is important.
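A throwaway Python illustration of the distinction (the function names and the legacy-system detail are invented):

    # Redundant: restates what the name already says
    def get_user_by_name(name):
        """Fetch a user by name."""  # adds nothing
        ...

    # Valuable: records a "why" the code itself can't express
    def normalize_username(name):
        # The legacy billing system truncates usernames at 20 characters,
        # so we truncate here too or lookups against it silently fail.
        return name[:20].lower()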
I regularly implement financial calculations. Not only do I leave comments everywhere, I tend to create a markdown file next to the function, to summarise and explain the context around the calculation. Just plain english, what it's supposed to do, the high level steps, etc.
> I remember the general consensus on this _not even two years ago_ being that the code should speak for itself and that comments harm more than help.
If that was the consensus, it was wrong. There are valuable kinds of comments (whys, warnings, etc) that code can never say.
I'd describe that as a trend, rather than a consensus.
It wasn't an entirely bad idea, because comments carry a high maintenance cost. They usually need to be rewritten when nearby code is edited, and they sometimes need to be rewritten when remote code is edited - a form of coupling which can't be checked by the compiler. It's easy to squander this high cost by writing comments which are more noise than signal.
However, there's plenty of useful information which can only be communicated using prose. "Avoid unnecessary comments" is a very good suggestion, but I think a lot of people over-corrected, distorting the message into "never write comments" or "comments are a code smell".
In the context of the thread, that's because AI fixes the key problem with comments: it maintains them when the code is updated.
yeah that was weird, it was like a cult and some coworkers of mine were religiously hunting down every comment in other people's MR's, just kinda assumed that "no comments" is a hard rule. Very strange, i had to fight many battles for my sanity. There are many cases where you may want to explain why this is coded the way this is coded, not just how.
chasd00 did mention that this was for consulting projects, where presumably there's a handover to another team after a period of time. Maintainability was never a high priority for consultants.
But in general I agree with your point.
> engineer that submitted it
This is a poor metric as soon as you reach a scale where you've hired an additional engineer, where 10% annual employee turnover reflects > 1 employee, much less the scale where a layoff is possible.
It's also only a hope as soon as you have dependencies that you don't directly manage like community libraries.
Hint: Make sure the people giving you the efficiency improvement numbers don't have a vested interest in giving you good numbers. If so, you can not trust the numbers.
Reminds me of my last job where the team that pushed React Native into the codebase were the ones providing the metrics for "how well" React Native was going. Ain't no chance they'd ever provide bad numbers.
better than lines of code at least!
The title is a bit misleading. Reading the article, the argument seems to be that entry-level applicants (are expected to) have the highest AI literacy, so they want them to drive AI adoption.
At least today, I expect this will fail horribly. The challenge today isn't AI literacy in my experience, it's the domain knowledge required to keep LLMs on the rails.
People literate in AI, but inexperienced in all other facts. What could go wrong!
> People literate in AI, but inexperienced in all other facts. What could go wrong!
It sounds like it appeals to MBAs, who are people literate in management, but inexperienced in all other areas.
ClawdBot Boardroom Edition
Sounds like the first step of a galactic scale fuck up
"Galactic scale" and "Fuck Up" are on brand for IBM.
It is IBM after all
Totally fair point.
dotcom implosion redux
It certainly feels that way. I was there. Fortunately I had just waltzed into the tech side of things and scurried off back to my professional career for a couple of years.
I watched a lot of stuff burn. It was horrifying. We are nearly there again.
Yeah similar story here. I had to spend a couple of years painting houses before the local market recovered enough that tech jobs were a thing again. Shit was surreal. There was one guy I knew that went from building multi-million dollar server and networking projects for IBM to literally working as unskilled labor on a fencing crew just to make rent.
Problem is there aren't jobs where you can go and hide until the economy recovers this time.
For a time, there were a lot of good deals on nice used office furniture.
Yeah got a nice desk and a trinitron out of it. Covid got me an Aeron :)
I hope they have a good 10 years experience in that "literacy".
I just run sub agents in parallel. Yesterday I used Codex for the first time. I spun up 350,640 agents and got 10 years of experience in 15 minutes.
New metric: agent-hours spent on a task. Or do we measure in tokens? Clearly more tokens burned == more experience, right?
There are actually books which recommend that organizations track employee tokens burned as a proxy for AI adoption. Surprised me a bit.
it's the only KPI available.
Unpatchable xp glitch
You should also mention how many millions of lines of code you* created.
25 years of LLM experience for a mid-level
"AI is going to wipe out junior developers!"
They actually hire more junior developers
"Uhh .. to adopt AI better they're hiring more junior developers!"
This cope is especially low quality with the context that this is just another purge of older workers at IBM.
Is this for their in-house development or for their consulting services?
Because the latter would still be indicative of AI hurting entry level hiring since it may signal that other firms are not really willing to hire a full time entry level employee whose job may be obsoleted by AI, and paying for a consultant from IBM may be a lower risk alternative in case AI doesn't pan out.
And if it is for consulting, I doubt very seriously they will be based in the US. You can't be price competitive hiring an entry level consultant in the US, and no company is willing to pay the bill rate for US based entry level consultants unless their email address is @amazon.com or @google.com.
Source: current (full time) staff consultant at a third party cloud consulting firm and former consultant (full time) at Amazon.
Why would Amazon bring on a full-time consultant instead of just hiring you?
I worked internally at AWS Professional Services - their internal consulting department - every AWS ProServe employee is a “blue badge” employee with the same initial four year offer structure of base + prorated signing bonus + RSUs (5/15/40/40). Google also has a large internal consulting department for GCP.
I can’t fault you for not knowing AWS ProServe exists. I didn’t know either until a recruiter reached out to me.
My partner is also a consultant and one client was Google. I’m also confused about the exact reason why they didn’t just hire someone.
"You see we leased this back from the company we sold it to and that way it comes under the monthly current budget and not the capital account."
~ Monty Python, The Meaning of Life (1983), on The Machine that Goes Ping.
No that’s not what I meant at all. Amazon Professional Services are made up of full time “blue badge” employees who get the same type of base + bonus + RSUs that all other blue badge employees get.
One might ask what value seniors hold if their expertise of the junior stage is obsolete. Maybe the new junior will just be reining in the LLM that does the work, and senior level knowledge and compensation rot away as those people retire without replacement.
Huh?
People seem to think LLMs killing the CS career means companies will still pay senior salaries to shepherd agentic LLM style development. I think it is the senior that is the dinosaur here. As we speak, CS curriculums are changing to teach people to crunch along with AI. The next batch of juniors will be taking these jobs. There won't be seniors anymore, at least at the salaries we've come to assume with that. The skill is getting removed from the profession and replaced with a framework with a far lower barrier of entry.
Bwahahahahahaha
There is no framework, just confusing junk. At some point, it needs to actually work and….
Gluing the junk together is what the Bengaluru office is for. You don't need Bay Area comp for that.
‘at some point it needs to actually work’….
Which can be handled offshoring.
Interesting given the current age discrimination lawsuit:
https://www.cohenmilstein.com/case-study/ibm-age-discriminat...
Another one? What is it with IBM? They must really save lots of money in a way no one else has figured out by firing people at 50yo. This is like the 3rd or 4th one I've heard from them.
It’s not very hard. Take a guy making $200k and 30% benefit overhead and replace with two offshore people at $50k total comp.
They don't have to keep giving people raises, why wait until the guy is 50, why not when he is 30 and making $100k? It's not like they have people doing manual labor, it's office jobs. People's faculties don't decline until their late 60's at the earliest. Why don't other multinationals do this and get sued also, what makes IBM special?
25k TC for an engineer? From where?
No - it's that they fired their vets in high cost areas and kept them in low cost areas.
A large number of vets can now choose to reapply for their old job (or similar job) at a fraction of the price with their pension/benefits reduced and the vets in low cost centers now become the SMEs. In many places in the company they were not taken seriously due to both internal politics, but also quite a bit of performative "output" that either didn't do anything or had to be redone.
Nothing to do with AI - everything to do with Arvind Krishna. One of the reasons the market loves him, but the tech community doesn't necessarily take IBM seriously.
IBM has cut ~8,000 jobs in the past year or so.
Sounds like business as usual to me, with a little sensationalization.
I realized the "AI replacing developers" narrative was all hype after watching this.
Why Replacing Developers with AI is Going Horribly Wrong https://m.youtube.com/watch?v=WfjGZCuxl-U&pp=ygUvV2h5IHJlcGx...
A bunch of big companies took big bets on this hype and got burned badly.
You know when someone is singing the praises about AI and they get asked "if you're so much more productive with AI, what have you built with it"? Well I think a bunch of companies are asking this same question to their employees and realising that the productivity gains they are betting on were overhyped.
LLMs can be a very useful tool and will probably lead to measurable productivity increases in the future, but in their current state they are not capable of replacing most knowledge workers. Remember, even computers as a whole didn't measurably impact the economy for years after their adoption. The real world is a messy place and hard to predict!
> measurable productivity
Which measure? When folks say something is more "efficient": flying is more time-efficient, but you trade away other kinds of efficiency. Efficiency, like productivity, needs a second word with it to properly communicate.
What's more productive? Lines of code (a weak measure)? Features shipped? Bugs fixed? Time saved for the company? Time for the client? Shareholder value (lame)?
I don't know the answer but this year (2026) I'm gonna see if LLM is better at tax prep than my 10yr CPA. So that test is my time vs $6k USD.
Time could be very expensive, as mistakes on taxes can be fraud resulting in prison time. Mostly they understand people make mistakes - but they need to look like honest mistakes, and an LLM's may not. Remember, you sign your taxes as correct to the best of your knowledge - your CPA is admitting you outsourced understanding to an expert, something they accept. However, if you sign alone you are saying you understand it all even if you don't.
These days productivity at a macroeconomic scale is usually cited in something like GDP per hour worked.
The most recent BLS figure, for the last quarter of '25, was an annualized rate of 5.4%.
The historic annual average is around 2%.
It’s a bit early to draw a conclusion from this. Also it’s not an absolute measure. GDP per hour worked. So, to cut through any proxy factors or intermediating signals you’d really need to know how many hours were worked, which I don’t have to hand.
That said, in general macro sense, assuming hours worked does not decrease, productivity +% and gdp +% are two of the fundamental factors required for real world wage gains.
If you're looking for signals in either direction on AI's influence on the economy, these are #s to watch, among others. The Federal Reserve, via the Chair's remarks after each meeting, is (IMO) one of the most convenient places to get very fresh hard #s combined with cogent analysis and usually some Q&A from the business press asking questions that are at least some of the ones I'd want to ask.
If you follow these fairly accessible speeches after meetings, you’ll occasionally see how lots of the things in them end up being thematic in lots of the stories that pop up here weeks or months later.
Economy-wide productivity can be measured reasonably well, although there are a few different measures [1]. The big question I guess is whether AI will make a measurable impact there. Historically tech has had less impact than people thought it would, as noted in Robert Solow's classic quip that "You can see the computer age everywhere but in the productivity statistics". [2]
[1] https://www.oecd.org/en/topics/sub-issues/measuring-producti...
[2] https://en.wikipedia.org/wiki/Productivity_paradox
Try Agent Zero; you can upload your bank (or credit card) statements as CSV etc., and it can then analyse them.
Number of features shipped. Traction metrics. Revenue per product. Ultimately business metrics. For example, tax prep effectiveness would be a proper experiment tied to specific metrics.
I used to write bugs in 8 hours. Now I write the same bugs in 4. My productivity doubled. \s
I hear this every day, and I'm sure it's true sometimes, but where is the tsunami of amazing software LLM users are producing? Where are the games that make the old games look like things from a bygone era? Where are the updates to the software that I currently use that greatly increase its capabilities? I have seen none of this.
I get that it takes a long time to make software, but people were making big promises a year ago and I think its time to start expecting some results.
Reddit and GitHub are littered with people launching new projects that appear to be way more feature-rich than new tool/app launches from previous years. I think it is a lot harder to get noticed with a new tool/app now because of this increase in volume of launches.
Also weekend hackathon events have completely/drastically changed as an experience in the last 2-3 years (expectations and also feature-set/polish of working code by the end of the weekend).
And as another example, you see people producing CUDA kernels and MLX ports as an individual (with AI) way more these days (compared to 1-2 years ago), like this: https://huggingface.co/blog/custom-cuda-kernels-agent-skills
I have no way of verifying any of those. Something I can easily verify, new games launched on steam.
January numbers are out and there were fewer games launched this January than last.
I’d be interested where you’re getting your data. SteamDB shows an accelerating trend of game releases over time, though comparing January 2026 to January 2025 directly shows a marginal gain [0].
This chart from a16z (scroll down to “App Store, Engage”) plots iOS App Store releases by month and shows significant growth [1].
> After basically zero growth for the past three years, new app releases surged 60% yoy in December (and 24% on a trailing twelve month basis).
It’s completely anecdotal evidence, but my own personal experience shows various subreddits just flooded with AI assisted projects now, so much so that several have started to implement bans or limits on AI related posts (r/selfhosted just did this).
As far as _amazing software_ goes, that’s all a bit subjective. But there is definitely an increase happening.
[0] https://steamdb.info/stats/releases/
[1] https://www.a16z.news/p/charts-of-the-week-the-almighty-cons...
I got the numbers swapped. Turns out there was an increase of about 40 games between last January and this. Which is exactly what you wouldn’t expect if the 5-10x claims are true.
Also the accelerating trend dates back to 2018 if you remove the early COVID dip. Which is exactly my point. You can look at the graph and there is no noticeable impact correlated to any major AI advancements.
The iOS data is interesting. But it's an outlier because the Play Store and Steam show nothing similar. And the iOS App Store is weird because they've had numerous periods of negative growth followed by huge positive growth over the years. My guess is that it probably has more to do with all of the VC money flowing into AI startups and all the small teams following the hype building wrappers and post-training existing models. If you look at a random sample of the new iOS apps, that looks likely.
Seriously go to the App Store, search AI and scroll until you get bored. There are literally thousands of AI API wrappers.
Specifically about custom CUDA kernels, I’ve implemented them with AI that significantly sped up the code in this project I worked on. Didn’t know how to code these kernels at all, but I implemented and tested a couple of variations and got it running fast in just two days. Basically impossible for me before AI coding (well not impossible but it would have taken me many weeks, so I wouldn’t have tried it).
Or they just don't publish them, because they don't want to deal with users.
I wrote a Python DHCP server which connects to a Proxmox server to hand out stable IPs as long as the VM / container exists in Proxmox.
Not via MAC, but basically via VM ID (or name).
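Not the actual project, just a minimal Python sketch of the lease idea (the subnet and numbers are made up): key the stable address off the Proxmox VM ID instead of the client MAC.

    import ipaddress

    POOL = ipaddress.ip_network("10.0.10.0/24")  # hypothetical lease pool

    def ip_for_vmid(vmid: int) -> ipaddress.IPv4Address:
        # The same VM ID always maps to the same host address, so the lease
        # stays stable across MAC changes as long as the VM exists in Proxmox.
        hosts = list(POOL.hosts())
        return hosts[vmid % len(hosts)]

    print(ip_for_vmid(105))  # -> 10.0.10.106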
The one thing AI is consistently better at than humans is shipping quickly. It will give you as much slop as you want right away, and if you push on it for a short period of time it will compile and if you run it a program will appear that has a button for each of the requested features.
Then you start asking questions like, does the button for each of the features actually do the thing? Are there any race conditions? Are there inputs that cause it to segfault or deadlock? Are the libraries it uses being maintained by anyone or are they full of security vulnerabilities? Is the code itself full of security vulnerabilities? What happens if you have more than 100 users at once? If the user sets some preferences, does it actually save them somewhere, and then load them back properly on the next run? If the preferences are sensitive, where is it saving them and who has access to it?
It's way easier to get code that runs than code that works.
Or to put it another way, AI is pretty good at writing the first 90% of the code:
Nowadays there are DOZENS of apps being launched solving the same problem.
Have you ever looked for, say, WisprFlow alternatives? I had to compare like 10 extremely similar solutions. Apps have no moat nowadays.
That's happening all over the place.
Look somewhere outside of the AI hype space. You’re seeing more AI competitors because it’s easy to build on top of someone’s existing model or API and everyone is trying to cash in. You saw the same thing with new crypto currency.
Just check Foundry VTT and its modules. The number of modules released has exploded since AI.
That's an incredibly niche area. From their website it looks like there are 4k modules available. Is there a way to see historical data? Also, is the number of users available, so that you can rule out popularity growth?
Hmm no I don't think they publish data about buyers or players.
But the number of LFGs is basically the same, maybe a few percent more. But not dozens of modules more per day...
Even better, I write more bugs in 4 hours than I used to in 8.
And the bugs take me WAY longer to find and fix now!
A 10x employee creates enough bugs to keep 10 other employees busy.
10 other agents.
"I'm ten times the agent you are, agent 8.6!"
"If debugging is the process of removing software bugs, then programming must be the process of putting them in."
- Edsger Dijkstra
I bet you the predictions are largely correct but technology doesn't care about funding timelines and egos. It will come in its own time.
It's like trying to make fusion happen only by spending more money. It helps, but it doesn't fundamentally change the pace of true innovation.
I've been saying for years now that the next AI breakthrough could come from big tech but it also has just as likely a chance of coming from a smart kid with a whiteboard.
Well, the predictions are tied to the timelines. If someone predicts that AI will take over writing code sometime in the future I think a lot of people would agree. The pushback comes from suggesting it's current LLMs and that the timeline is months and not decades.
> I've been saying for years now that the next AI breakthrough could come from big tech but it also has just as likely a chance of coming from a smart kid with a whiteboard.
It comes from the company best equipped with capital and infra.
If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.
This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.
Yes, scaling is always capital hungry but the innovation itself is not
I think for a lot of folks it basically comes down to just using AI to make the tasks they have to do easier and to free up time for themselves.
I’d argue the majority use AI this way. The minority “10x” workers who are using it to churn through more tasks are the motivated ones driving real business value - but let’s be honest, in a soulless enterprise 9-5 these folks are few and far between.
Sure, but why haven't you seen a drastic increase in single person startups?
Why are there fewer games launched on Steam this January than last?
Because very few know how to use AI. I teach AI courses on the side. I've done auditing on supervised fine tuning and RLHF projects for a major provider. From seeing real prompts, many specifically from people who work with agents every day, people do not yet have the faintest clue how to productively prompt AI. A lot of people prompt them in ways that are barely coherent.
Even if models stopped improving today, it'd take years before we see the full effects of people slowly gaining the skills needed to leverage them.
Sure there are people holding it wrong.
But there are thousands of people on social media claiming huge productivity gains. Surely at least 5% of devs are holding it right.
If a 10x boost is possible, we’d notice that. There are only 20k games a year released on steam.
If my hypothesis is true and the real final output boost is somewhere near 20%, we’re seeing exactly what you’d expect.
I'd love to look at what you consider to be good prompts if you could provide a link.
You'd be surprised how low the bar is. What I'm seeing is down to the level of people not writing complete sentences.
There doesn't need to be any "magic" there. Just clearly state your requirements. And start by asking the model to plan out the changes and write a markdown file with a plan first (I prefer this over e.g. Claude Code's plan mode, because I like to keep that artefact), including planning out tests.
If a colleague of yours not intimately familiar with the project could get the plan without needing to ask followup questions (but able to spend time digging through the code), you've done pretty well.
You can go overboard with agents to assist in reviewing the code, running tests etc. as well, but that's the second 90%. The first 90% is just to write a coherent request for a plan, read the plan, ask for revisions until it makes sense, and tell it to implement it.
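For concreteness, the kind of request I mean looks something like this (the project and file names are made up):

    Read src/billing/ and docs/refunds.md. Write PLAN.md describing how you would
    add partial-refund support: affected modules, data model changes, the tests
    you'd add, and any open questions. Do not write any implementation code yet.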
Not surprising. Many folks struggle with writing (hence why ChatGPT is so popular for writing stuff), so people struggling to coherently express what they want and how makes sense.
But the big models have come a long way in this regard. Claude + Opus especially. You can build something with a super small prompt and keep hammering it with fix prompts until you get what you want. It's not efficient, but it's doable, and it's much better than not half a year ago, when you had to write a full spec.
This is exactly it. A lot of people use it that way. And it's still a vast improvement, but they could also generally do a lot better with some training. I think this is one of the areas where you'll unfortunately see a big gap developing between developers who do this well, and have the models work undisturbed for longer and longer while doing other stuff, and those who end up needing a lot more rework than necessary.
> Claude + Opus especially. You can build something with a super small prompt and keep hammering it with fix prompts until you get what you want.
LOL: especially with Claude this was only in 1 out of 10 cases?
Claude output is usually (near) production ready on the first prompt if you precisely describe where you are, what you want and how you get it and what the result should be.
One thing that I’ve often seen is models, when very much told to just write a plan, still including sizeable amounts of code in the plan.
Maybe it's needing to step back and even ask for a design doc before a plan, but even then…
> Just clearly state your requirements.
Nothing new here. Getting users to clearly state their requirements has always been like pulling teeth. Incomplete sentences and all.
If the people you are teaching are developers, they should know better. But I'm not all that surprised if many of them don't. People will be people.
You're right, they should know better, but I think a lot of them have gotten away with it because most of them are not expected to produce written material setting out missing assumptions etc. and breaking down the task into more detail before proceeding to work, so a lot have never gotten the practice.
Once people have had the experience of being a lead and having to pass tasks to other developers a few times, most seem to develop this skill at least to a basic level, but even then it's often informal and they don't get enough practice documenting the details in one go, say by improving a ticket.
Because AI doesn't work like this: "make me money" or "make Stardew Valley in space". The hard part is the painful exploration and necessary taste to produce something useful. The number of these kinds of people did not increase with AI.
E.g., AI is a big multiplier, but that doesn't mean it will translate to "more" in the way people think.
It doesn’t need to be useful or a good game to launch on steam. Surely if it was a “big multiplier” 5-10x, it would be noticeably impacting steam launches.
Now if it’s something closer to 20%, we’re seeing exactly what you’d expect.
It comes back down to that whole discussion around intelligence becoming cheaper and more accessible but motivation and agency remaining stable.
I've worked with a few folks who have been given AI tools (like a designer who never coded in his life, or a video/content creator) who have absolutely taken off with creating web apps and various little tools and process improvements for themselves just by vibecoding what they wanted. The key with both these individuals is high agency, curiosity, and motivation. That was innate; the AI tooling just gave them the external means to realise what they wanted to do with more ease.
These kinds of folks are not the majority, and we’re still early into this technological revolution imo (models are improving on a regular basis).
In summary, we've given the masses "intelligence", but creativity and motivation stay the same.
My guess is that the true impact of this will be difficult to measure for a while. Most "single-person start-ups" will probably not be high-visibility VC-backed, YC affairs, and rather solopreneurs with a handful of niche moonlighted apps each making 3-4 digit monthly revenue.
Those would still be launching on places like product hunt though.
Haven't you? I have! In another reply, I noted the avalanche of WisprFlow competitors, as just one example.
95% of all new startups have the word AI in the description, so of course there are lots of new API wrappers and people trying to build off of existing models.
There aren’t noticeably more total startups or projects though.
Huh? Less games launched on steam? First time I hear that. Any source?
But my guess would be: games are closed source and need physics. Which AI is bad at.
Just google “games released on steam by year”.
Many games don’t need physics, and there are a billion hobby projects on GitHub.
https://steamdb.info/stats/releases/
Does not look like less games.
Sorry, I swapped the numbers. It's actually 1447 this year vs 1413 last year, so 34 more games this year. So essentially no growth. Despite there being a clearly accelerating growth trend since 2018.
No. They're firing high paid seniors and replacing them with low pay juniors. This is IBM we're talking about.
The "limits of AI" bit is just smokescreen.
Firing seniors:
> Just a week after his comments, however, IBM announced it would cut thousands of workers by the end of the year as it shifts focus to high-growth software and AI areas. A company spokesperson told Fortune at the time that the round of layoffs would impact a relatively low single-digit percentage of the company’s global workforce, and when combined with new hiring, would leave IBM’s U.S. headcount roughly flat.
New workers will use AI:
> While she admitted that many of the responsibilities that previously defined entry-level jobs can now be automated, IBM has since rewritten its roles across sectors to account for AI fluency. For example, software engineers will spend less time on routine coding—and more on interacting with customers, and HR staffers will work more on intervening with chatbots, rather than having to answer every question.
Where does it say those cuts were senior software developers?
Obviously they want new workers to use AI but I don't really see anything to suggest they're so successful with AI that they're firing all their seniors and hiring juniors to be meatbags for LLMs.
This just doesn't make any sense. Juniors + AI just does not equal seniors, except for prototyping greenfield projects. Who knows about 2 months from now, it moves fast and stuff, but not right now.
> just doesn't make any sense
I suspect the gap is that you don't know enough about IBM's business model.
When something doesn't make sense, a very common cause is a lack of context: many things can be extremely sensible for a business to do; things which appear insane from an outsider's point of view.
You probably aren't going to find a lot of articles discussing how water is wet, either.
LOL great example. There are tons of articles on this topic:
https://www.sciencefocus.com/science/is-water-wet https://centreforinquiry.ca/keiths-conundrums-is-water-wet https://www.theguardian.com/notesandqueries/query/0,5753,-17... http://scienceline.ucsb.edu/getkey.php?key=6097 https://parknotes.substack.com/p/is-water-wet-or-does-it-jus...
...etc. Turns out, it's not a solved question!
No one has built business AI that is flat correct to the standards of a high redundancy human organization.
Individuals make mistakes in air traffic control towers, but as a cumulative outcome it's a scandal if airplanes collide midair. Even in contested airspace.
The current infrastructure never gets there. There is no improvement path from MCP to air traffic control.
It's hard work and patience and math.
Meh, I think a lot of companies just wanted an excuse to do layoffs without the bad press, and AI was convenient.
“AI will steal your job” never made sense. If your company is doing bad, sure maybe you fire people after automating their job. But we’re in a growth oriented economic system. If the company is doing good, and AI increases productivity, you actually will hire more people because every person is that much more of a return on investment
> "if you're so much more productive with AI, what have you built with it"
If my boss asked me a question like this my reply would be "exactly what you told me to build, check jira".
If you want to know if I'm more productive - look at the metrics. Isn't that what you pay Atlassian for? Maybe you could ask their AI...
As a senior engineer, sometimes the system shows I did nothing because I was helping others. Sometimes I get the really hard problem - "the is spelled teh" type bugs are more common than thread race conditions, but a lot faster to solve.
[dead]
[flagged]
Every time someone says something like that there is no link to the product. Maybe because it doesn't exist?
Historically in a lot of niches such as search marketing etc, people would not name their successful projects because the barrier to entry is low.
If someone can use AI to make a $50,000/year project in three months, then someone else can also do so.
Obviously some people hype and lie. But also obviously some people DID succeed at SEO/affiliate marketing/dropshipping etc. AI resembles those areas in that the entry barrier is low.
To get actual reports you often need to look to open source. Simon Willison details how he used it extensively and he has real projects. And here Mitchell Hashimoto, creator of Ghostty, details how he uses it: https://mitchellh.com/writing/my-ai-adoption-journey
Update: OP posted their own project however. Looks nice!
This is definitely the case. I have a project that while not wildly profitable yet, is producing real revenue, but that I will not give details of because the moat is so small. The main moat is that I know the potential is real, and hopefully not enough other people do, yet. I know it will disappear quickly, so I'm trying to make what I can of it while it's there. I may talk about it once the opportunity is gone.
It involves a whole raft of complex agents + code they've written, but that code and the agents were written by AI over a very short span of time. And as much as I'd like to stroke my own ego and assume it's one of a kind, realistically if I can do it, someone else can too.
Still need good taste and judgement to build the thing people actually want to use.
What an awful comment. The person above you is now flagged because of your paranoia. Of course later they post a link to exactly what they built.
I don't even know what flagged means lol
[flagged]
lmfao you're doing great man, keep posting.
[flagged]
He is overwhelmed with customers. Can't risk any more awareness.
Legitimately am. I get daily emails from customers telling me how much they love my product. Go search Google, it's free.
Search for "Rivian Roamer".
Sounds nice, for how many years have you had that annual recurring revenue so far?
I only started charging customers in September. Super-linear growth. I launched annual subscriptions and within less than a week > 15% of customers switched.
I'm with you. I own a business and have created multiple tools for myself that collectively save me hours every month. What were boring, tedious tasks now just get done. I understand that the large-scale economic data are much less clear about productivity benefits; in my individual case they could not be more apparent.
I'm thirding this sentiment!
I run an eComm business and have built multiple software tools that each save the business $1000+ per month, in measurable wage savings/reductions in misfires.
What used to take a month or so can now be spat out in less than a week, and the tools are absolutely fit for purpose.
It's arguably more than that, since I used to have to spread that month of work over 3-6 months (working part time while also doing daily tasks at the warehouse), but now can just take a week WFH and come back with a notable productivity gain.
I will say, to give credit to the anti-AI-hype crowd, that I make sure to roll the critical parts of the software by hand (things like the actual calculations that tell us what to price an item at, for example). I did try to vibecode too much once and it backfired.
But things like UIs, task managers for web apps, simple API calls to print a courier label, all done with vibes.
Understanding when to make something deterministic and not is critical. Taste and judgement is critical.
Has anyone noticed Amazon or AWS shipping features faster than their pre-GenAI baseline? I haven't
I'm noticeably faster at shipping.
The only thing the comments told me is that people lack the judgement and taste to do it themselves. It's not hard: identify a problem that's niche enough and that you can solve.
Stop arguing on HN and get to building.
Every AI hype post is like this. “I’m making $$$ with these tools and you’re ngmi.” I completely understand the joys of a few good months but this is the same as the people working two FAANG jobs at the start of Covid. Illusory and not sustainable.
[flagged]
I built and debugged an embedded stub loader for the RP2350 to program MRAM and validate hardware status for a satellite. About 2.5 hours of my time, a lot of it while supervising students/doing other things.
This would have been a couple day+ unpleasant task before; possibly more. I had been putting it off because scouring datasheets and register maps and startup behavior is not fun.
It didn’t know how to troubleshoot the startup successfully itself, though. I had to advise it on a debugging strategy with sentinel values to bisect. But then once explained it fixed the defects and succeeded.
LLMs struggle in large codebases and the benefit is much smaller now. But that capability is growing fast, and not everything software developers do is large.
I'm not doubting you or anything, but you just proved the point above by saying you have a successful project without even mentioning which project it is.
[flagged]
Cool! Can we see it?
[flagged]
Nice, yeah I feel like there's a big opportunity for tech workers who are product-adjacent to use LLMs to get up to speed building SaaS etc.
Are you worried by any of those claims about SaaS being dead because of AI? lol
[flagged]
Looks cool. Are you a Rivian owner who solved their own problem or did you stumble upon it randomly??
Thanks! I used to own a Tesla and there were similar platforms out there. Bought a Rivian and wanted something like that. I started building this before AI-assisted coding was very popular. But it greatly increased my productivity.
There is that quote "there are cathedrals everywhere for those with the eyes to see". I feel like there is a solid variation with solid business opportunities instead of cathedrals haha.
I've found AI to be a big productivity boost for myself, but I don't really use it to generate much actual code. Maybe it could do more for me, idk, but I also don't feel like I'm being left behind. I actually enjoy writing code, but hate most other programming tasks so it's been nice to just focus on what I like. Feels good to have it generate a UI skeleton for me so I can just fill out the styles and stuff. Or figure out stupid build config and errors. Etc etc.
Anyways congrats on the product. I know a lot of people are negative about productivity claims and I'm certainly skeptical of a lot of them too, but if you asked most programmers 5 years ago if a super-autocomplete which could generate working code snippets and debug issues in a project would boost productivity everyone would say yes lol. People are annoyed that its overhyped, but there should still be room for reasonable hype imo.
First of all, thank you. I've always been told I have a knack for seeing opportunities others don't.
For me, I always had the ideas and even as a competent engineer, the speed of development annoyed me.
I think folks get annoyed when their reality doesn't match other people's claims. But I have friends who aren't engineers who have launched successful SaaS products. I don't know if it's jealousy or what but people are quite passionate about how it doesn't have productivity gains.
Hell, I remember Intellisense in Visual Studio being a big boon for me. Now I can run tasks asynchronously; even if it's not faster, it frees up my time.
Details would help your argument. Since many did the same thing, before the AI wave...
Is the business 3 months old now?
It's not an argument, it's a fact.
It's also a fact my stopped clock will show the correct time two times a day :-)
Fair. I've had super-linear growth since launching in September. Zero marketing outside of a referral program. People genuinely love what I'm building. I get multiple emails per week about how people appreciate the software and how I send out weekly emails about everything I've launched.
The whole point, which you seem to have missed by now, by the third interaction, is how AI was the crux of it...
Perhaps I'm being cynical, but could they be leaving out some detail? Perhaps they're replacing even more older workers with entry level workers than before? Maybe the AI makes the entry level workers just as good-- and much cheaper.
https://archive.today/D6Kyc
Yes, junior candidates lacking the knowledge and wisdom to redirect an LLM, that's who will unlock the mythical AI productivity.
> In the HR department, entry-level staffers now spend time intervening when HR chatbots fall short, correcting output and talking to managers as needed, rather than fielding every question themselves.
The job is essentially changing from "You have to know what to say, and say it" to "make sure the AI says what you know to be right"
I always thought the usual 'they only hire seniors now' was a questionable take. If anything, all you need is a semi warm blooded human to hit retry until the agents get something functional. It's more likely tech will transform into an industry of lowly paid juniors imho, if it hasn't already started. Senior level skill is more replaceable, not just because it's cheaper to hire juniors augmented with mostly AI but because they are more adaptable to the new dystopia since they never experienced anything else. They are less likely to get hung up on some code not being 'best practice' or 'efficient' or even 'correct'. They will just want to get the app working regardless of what goes in the sausage, etc.
Probably not on the IBM jobs site yet, where the number of entry level jobs is low compared to the size of the company (~250k):
https://www.ibm.com/careers/search?field_keyword_18[0]=Entry...
Total: 240
United States: 25
India: 29
Canada: 15
Aren't those general job openings? Like a junior SWE only needs a single generic posting for all positions
Exactly, that's why counting job postings is a terrible proxy for gauging market conditions. Companies may hire anywhere from 0 to 100s of people through the same JD.
[dead]
The article said they called for tripling junior hires but cut 1000 jobs a month later, “so the number of jobs stay roughly the same”.
Certainly they didn't mean 1000 junior positions were cut. So what they really want to say is that they cut senior positions as a way of saving cost/making profit in the age of AI? Totally contrary to what other companies believe? Sounds quite insane to me!
IBM is one of those companies that measures success by complexity. Meaning if it's complicated, they make money with consultants. If it's simple, they bundle it with other complex solutions that require consulting.
I had the chance to try an IBM internal AI. It was a normal chat interface where one could select models up to Sonnet 4.5. I have not seen anything agentic. So there is that.
Brings a new angle on the old joke: "Actually, Indians"
Bold move.
Not because it's wrong, but because it risks initiating the collapse of the AI bubble and the whole "AI is gonna replace all skilled work, any day now, just give us another billion".
Seems like IBM can no longer wait for that day.
Is IBM invested big in LLMs? I don't get the impression they have much to lose there.
They said they're going to invest like $150B over five years. Which is quite a bit smaller than other big tech firms.
They have their Granite family of models, but they're small language models so surely significantly less resources are going into them.
Their CEO already said what he's thinking about all the spending [0].
[0]: https://news.ycombinator.com/item?id=46124324
Good. Nobody needs to rip that bandaid off. Might as well be IBM.
I mean it’s IBM. On average, 70% of their decisions are bad ones. Not sure I’d pay a single bit of attention to what they do.
To a non-technical individual IBM is still seen as a reputable brand (their consulting business would've been bankrupt long ago otherwise) and they will absolutely pay attention.
Yeah, they are only 114 years old. How can they have the knowledge to stay afloat in trying times like this?
Agreed. They could have owned the home computer market, but were outmanoeuvred by a couple of young programmers. They are hardly the company you want to look to for guidance on the future.
Tripling entry-level hiring is a good plan.
> Some executives and economists argue that younger workers are a better investment for companies in the midst of technological upheaval.
IBM, in the midst of a tech upheaval? They are so dysfunctional, it's the core of why I left
What happened with DRAM and NAND flash memory may happen with the workforce: unexpected demand on one side leaving the other sides without enough supply.
Doubt it. Unless we go through another decade of ZIRP tied to a newly invented hyped technology that lacks specialists, and discover new untapped markets, there's not gonna be any massive demand spike for junior labor in tech that can't be met, causing wages to shoot up.
The "learn to code" saga has run its course. Coder is the new factory worker job where I live, a commodity.
The title could be dead wrong; the tripling of junior jobs might not be due to the limits of AI, but because of AI increasing the productivity of juniors to that of a mid or senior (or at least 2-3x-ing the output of juniors), thus making hiring juniors an appealing prospect to increase the company's output relative to competitors who aren't hiring in response to AI tech improvements. Hope this is the case and hope it happens broadly across the economy. While the gutter press fear-mongers about job losses, if AI makes the average employee much more useful (even if it's via newly created roles), it's conceivable there's a jobs/salaries boom, including among those who 'lose their job' and move into a new one!
When you read the comments here just remember there are people using ChatGPT to write code.
And those people probably aren’t developers by trade, just power users who superficially understand the moving parts but who cannot write code themselves.
Huh, weird, another "technological marvel" whose primary effect just seems to be devaluing labour.
Technology's entire job is to make it less work to accomplish something and therefore easier and cheaper. In some cases that will make it possible to do things you couldn't do before, but in many cases it'll just end up causing the value of said labor to fall. The problem isn't change, but the rate of change and the fact it's affecting our own field rather than someone else's.
They hire juniors, give them Claude Code and some specs, and save a mid/senior dev's salary. I believe coding is over for SWEs by the end of 2027, but it will take time to diffuse through the economy, hence they still need some cheap labour for a few years; given the H1-B ban this is one way without offshoring.
If you had a truly thorough QA department, you might get away with that. Sadly, trashing QA is everyone’s second favorite new fad.
I want the big_model take.
These are just the draft tokens.
We are witnessing the Secularization of Code.
IBM has practiced ageism for decades with the same playbook. AI is just the latest excuse. Fire a wide enough swath so it isn’t all old employees and then only hire entry level positions. Often within the same year. Repeat.
AI is not removing entry-level roles — it’s exposing where judgment boundaries actually exist.
What does tripling actually mean in this context?
E.g. If you cut hiring from say 1,000 a year to 10 and now are 'tripling' it to 30 then that's still a nothingburger.
Nooooo how dare you!!! AGI is coming and engineers are obsolete!
Think about the economy and the AI children
An AI model has no drive or desire, or embodiment for that matter. Simply put, they don't exist in the real world and have no requirement or urgency to do anything unless prompted by a human, because, you know, survival under capitalism. Until they have to survive and compete like the rest of us and face the same pressures, they are going to be forever relegated to being mere tools.
> The "AI will replace all junior devs" narrative never accounted for the fact that you still need humans who understand the business domain, can ask the right questions, and can catch when the AI is confidently wrong.
You work with junior devs that have those abilities? Because I certainly don't.
Not many, but junior devs grow into senior devs who do, which is the point. If there are no junior devs there is no one growing into those skill sets.
[dupe] Earlier: https://news.ycombinator.com/item?id=46995146
Thanks - we've merged that thread hither.
The headline needs refactoring: IBM is hoping that juniors (paid less) with AI can be sold as seniors.
Tbh, getting good results from AI requires senior-level intuition. You can be rusty as hell and not even middling in the language being used, but you have to understand data structures and architecture more than ever to get non-shit results. If you just vibe it, you'll eventually end up with a mountain of crap that sort of works, and since you're not doing the coding, you can't really figure it out as you go along. Sometimes it can work to naively make a thing and then have it rewritten from scratch properly, though, so that might be the path.
100% accurate. The architect matters so much more than people think. The most common counterargument I've seen on reddit comes from the vibe coders (particularly in the v0 and lovable subreddits) claiming they built an app that makes $x0,000 over a weekend, so who needs (senior) software engineers and the like? A few weeks later, there's almost always a listing for a technical co-founder or a CTO with experience on their careers page or LinkedIn :)))
If that's true, it sounds like the vibe coders are winning - they're creating products people want and pulling in technical folks as needed to scale.
But the argument is not about market validation, the argument is about software quality. Vibe coders love shitting on experienced software folks until their code starts falling apart the moment there is any real world usage.
And about pulling in devs: you can actually go to indeed.com and filter the listings for co-founders and CTOs. Usually equity only, or barely any pay, since they're used to getting code for free. No real CTO/senior dev will touch anything like that.
For every vibe-coded product, there are a hundred more clones. It's just a red ocean.
This mirrors my experience exactly. Vibe coding straight up does not work for any serious code.
I can't help but feel the only reason to post a comment like this is something akin to Cunningham's Law.
https://meta.wikimedia.org/wiki/Cunningham%27s_Law
Todo web apps aren't serious code, I can buy that, but in your mind, what is? Are compilers "serious code"?
https://www.anthropic.com/engineering/building-c-compiler
Like, I'm sure it's just laundering gcc's source at some level, but if Claude can handle making a compiler, either we have to reframe a compiler as "not serious", or, well, come up with a different definition for what entails "serious" code.
Vibe coding doesn't work for the embedded system code that I am working on, which includes layered state machines, hardware drivers, and wire-level protocol stacks. But supervised AI code generation definitely does work.
You need a highly refined sense of “smell” and intuition about architecture and data design, but if you give good specifications and clear design goals and architectural guidance, it’s like managing a small team but 12x faster iteration.
I'm sometimes surprised by feature scope or minor execution details, but usually when I drill down I'm seeing what I expected to see, even more so than with humans.
If I didn’t have the 4 decades of engineering and management experience I wouldn’t be able to get anything near the quality or productivity.
It’s an ideal tool for seasoned devs with experience shipping with a team. I can do the work of a team of 5 in this type of highly technical greenfield engineering, and I’m shipping better code with stellar documentation… and it’s also a lot less stressful because of the lack of interpersonal dynamics.
But… there's no way I would give this to a person without technical management experience and expect the same results, because the specification and architectural work is critical. The most important skill here is being able to picture the code you know someone else is about to write and anticipate the mistakes they will probably make if you don't warn them away.
In a lot of ways I do fear that we could be pulling up the ladder, but if we completely rethink what it means to be a developer and teach with an emphasis on architecture, data structures, and code/architecture intuition, we might be able to prepare people to step into the role.
Otherwise we will end up with a lot of garbage code that mostly works most of the time and breaks in diabolically sinister ways.
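To make the "specification and architectural guidance" part concrete, here's a stripped-down, hypothetical sketch (Python for brevity; nothing like the real embedded code, and every name is made up). The human writes the layer boundaries and the behavioural contract; the model is only asked to implement against them, not to invent the structure.

    # Hypothetical illustration only -- not project code. The layering, the
    # contracts, and the allowed states come from the human architect; an AI
    # assistant is asked to fill in implementations *behind* these boundaries.

    from abc import ABC, abstractmethod


    class Transport(ABC):
        """Lowest layer: moves raw bytes; knows nothing about framing or messages."""

        @abstractmethod
        def send(self, data: bytes) -> None: ...

        @abstractmethod
        def receive(self, max_bytes: int) -> bytes: ...


    class Framer(ABC):
        """Middle layer: turns a byte stream into discrete payloads and back.

        Contract the implementation must satisfy:
          * feed(encode(p)) eventually yields p unchanged (round-trip safety)
          * feed() tolerates partial input and silently discards garbage
        """

        @abstractmethod
        def encode(self, payload: bytes) -> bytes: ...

        @abstractmethod
        def feed(self, data: bytes) -> list[bytes]:
            """Consume raw bytes; return zero or more complete decoded payloads."""


    class LinkStateMachine:
        """Top layer: connection lifecycle. States and transitions are fixed here."""

        STATES = ("IDLE", "HANDSHAKE", "UP", "ERROR")

        def __init__(self, framer: Framer, transport: Transport) -> None:
            self.framer = framer
            self.transport = transport
            self.state = "IDLE"

        def poll(self) -> None:
            # Per-state handlers are what gets delegated to the model; adding states is not.
            for payload in self.framer.feed(self.transport.receive(4096)):
                pass  # dispatch on self.state; handlers omitted in this sketch

With a skeleton like this written first, review mostly reduces to checking that each generated implementation honours the stated contract, which is exactly the kind of checking a seasoned dev can do quickly.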
Well, Claude is fixing code generation bugs in my Ruby AOT compiler written in Ruby, and it certainly can't launder any source for that.
Ticketing, payroll, point of sale, banking, HFT, e-commerce, warehouse, shipping… how have you not thought of these?
The ones I've thought of, the ones you've thought of, and the ones Ancalagon has in mind are three partially disjoint sets, but there's probably some intersection, which we can then use as a point of discussion. Given that "serious code" isn't a rigorously defined industry term, maybe you could be less rude?
Still a wildly different thesis from the "juniors are fucked, the ladder's been raised" one.
Just to be clear: from my standpoint it's the worst period ever to be a junior in tech. You are not "fucked" if you are a junior, but hard times are ahead of you.
OTOH, as a junior, you haven't learned all the wrong lessons that don't apply anymore, and you have fewer responsibilities than the seniors.
That still doesn’t sound employable
This case has always been made for juniors, but it's almost always the opposite that's true. There's always some fad that the industry is over-indexing on. Senior developers tend to be less susceptible to falling for it, but non-technical staff and junior developers are not.
Whether it's a hot new language, LLMs, or some new framework, juniors like to dive right in, because the promise of getting a competitive edge over people much more experienced than you is too tantalizing. You really want it to be true.
Like what?
Some things that used to take a great deal of time and effort to bring into the world now take very little. So one of the big changes is around whether some things are worth doing at all.
Note: I'm not taking any particular side of the "Juniors are F**d" vs "no they're not" argument.
IMO, I have found that having juniors work with AI is basically just like subscribing to an expensive AI agent.
IMO, with the latest generation (gpt codex 5.3 and claude 4.6) most devs could probably be replaced by AI. They can do stuff that I've seen senior devs fail at. When I have a question about a co-worker's project, I no longer ask them; I immediately let Copilot have a look at the repo, and it is faster and more accurate at identifying the root cause of issues than the humans who actually worked on the project. I've yet to find a scenario where they fail. I'm sure there are still edge cases, but I'm starting to doubt humans will matter in them for long. At this point we really just need better harnesses for these models, but in terms of capability they may as well take over now.
> most devs could probably be replaced by AI. They can do stuff that I've seen senior devs fail at.
When I read these takes I wonder what kind of companies some of you have been working for. I say this as someone who has been using Opus 4.6 and GPT-Codex-5.3 daily.
I think the "senior developer" title inflation created a bubble of developers who coasted on playing the ticket-productivity game, where even small tasks could be turned into points and sprints and charts and graphs so that busywork looked like a lot of work getting done.
They are good at some weird problems, but they also write some really bad code and sometimes come up with wrong answers.
That's why you write tests.
There are whole classes of problems that tests can't catch.
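And even within the classes tests can catch, example-based tests only prove the cases someone thought to write. A minimal, entirely hypothetical sketch (Python, made-up names): the suite below passes while the implementation is still wrong for inputs nobody exercised, which is exactly the failure mode when the same model writes both the code and the tests.

    # Hypothetical sketch: a green test suite that proves less than it seems to.

    def median(values):
        """Return the median of a non-empty list of numbers.

        Bug: for even-length input this returns the upper-middle element
        instead of the mean of the two middle elements.
        """
        ordered = sorted(values)
        return ordered[len(ordered) // 2]


    def test_median():
        # Only odd-length cases -- the happy-path tests that tend to get generated.
        assert median([3]) == 3
        assert median([1, 2, 3]) == 2
        assert median([7, 1, 5]) == 5


    if __name__ == "__main__":
        test_median()  # passes
        # median([1, 2, 3, 4]) should be 2.5 but returns 3 -- untested, so unnoticed.
        print("all tests green, bug still shipped")

And that's before the genuinely hard cases, like race conditions or a misread requirement, where the test simply encodes the same misunderstanding as the code.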
Ehm... it's basically what all the big consultancies have been doing for the last 20 years, and they made tons of money with this model.
Making money consulting doesn't require positive results.
"software engineers will spend less time on routine coding—and more on interacting with customers"
Ahh, what could possibly go wrong!
Why is that bad? You write better code when you actually understand the business domain and the requirement. It's much easier to understand it when you get it direct from the source than filtered down through dozens of product managers and JIRA tickets.
You write more efficient software for the task.
Having had to support many of these systems for sales, automation, or video production pipelines: as soon as you dig under the covers, you realize they are a hot mess of amateur code that _barely_ functions as long as you don't breathe on it too hard.
Software engineering is in an entirely nascent stage. That the industry could even put forward ideas like "move fast and break things" is strong evidence of this. We know how to handle this challenge of deep technical knowledge interfacing with domain-specific knowledge in almost every other industry. Coders were once cowboys; now we're in the Upton Sinclair phase of the industry, and soon we'll enter regular, honest professional engineering, like every other new technology ultimately has.
Engineers and customers often talk past each other. They focus on different things. They use different vocabulary.
Only true for engineers who don't want to bother learning this skill. Those engineers are going to start finding themselves left behind.
Not sure why this is being downvoted. It’s spot on imo. Engineers who don’t want to understand the domain and the customers won’t be as effective in an engineering organization as those who do.
It always baffles me when someone wants to only think about the code as if it exists in a vacuum. (Although for junior engineers it’s a bit more acceptable than for senior engineers).
We're assuming we all somehow have perfect customers with technical knowledge who know exactly what they want, can express it clearly, and gracefully accept pushback over the constraints we raise.
Anyone who's worked in a "bikeshed-sensitive" part of the stack knows how quickly things go off the rails when such customers get direct access to an engineer. Think being a full-stack dev who constantly gets requests about button colors while trying to get the database set up.
Dealing with the occasional pushy customer is way easier than dealing with pushy PMs or designers, who happen to be the majority.
Customers bikeshed WAY less than those two categories.
I'm glad you dealt with some good customers. I can't agree in my experience, though.
It's not luck.
Customers want to save money and see projects finished. That anyone can reason with.
Someone inside the company trying to climb the corporate ladder? Different story.
Okay. I'm glad you're privileged enough to be able to choose your customers: customers who aren't abusive or otherwise out of their league, thinking they know everything just because they have money.
Otherwise, you've never freelanced on the cheap.
Calling me "privileged" or "lucky" feels like a cheap attack on my competence.
I am certain that I went through the same problems you did in the past. Maybe I just have a different way of dealing with them, or maybe I had even worse problems than you did but with a different frame of comparison. We never stopped to compare notes.
All I'm saying is: for me dealing with business owners, end-users, CEOs and CTOs was always way easier than dealing with proxies. That's all.
>I am certain that I went through the same problems you did in the past,
And I'm certain you haven't, if you really never wanted a layer of separation from certain clients over behavioral issues that got in the way of the actual work. On top of that, I'm male, so I'm sure I still have it better than some of the experiences I've only heard about third-hand in my industry.
I don't see it as a cheap attack. Any teacher would love to be in a classroom exclusively made up of motivated honors students so they can focus on teaching and nurturing. Instead, most teachers tend to become parental proxies without the authority to actually discipline children. So they see a chair fly and at best they need to hope a principal handles it. But sometimes the kid is back in class the next day.
It's envy more than anything else.
Once again you are making assumptions about my experience. Can you please stop?
> if you really, never wanted a layer of separation between certain clients over behavioral issues that got in the way of the actual work
Who says I haven’t?
My entire complaint is that the layer of separation is often more difficult to deal with than customers, and doesn't have the same incentives to behave.
>you are making assumptions about my experience. Can you please stop?
You're 2 days into responding to a comment that amounted to "X depends on your experience". Is there something else you wish to get out of this thread?
Your complaint is an opinion. I disagree with that opinion. Unless you wish to ask about my experiences or go into yours, what's there to discuss here? Without that, I feel I said all I could on the topic.
All I'm asking is for you to please stop dismissing my experience as lucky or privileged, or claiming I didn't have problems.
I never said clients aren't difficult or that I only had good customers anywhere. Just that there might be worse problems elsewhere.
>please stop dismissing my experience as lucky or privileged
I'm not gonna harp on it. I'll go to bed and wake up completely forgetting about this thread unless I get another notification.
But you're basically telling me to shut off my feelings. Hard to do. I don't know your experiences, so my feelings can be wrong.
I'm unsure why you are putting so much stock into an uninformed feeling on the internet. It doesn't seem like we want to expand on our stories so there's not much more to go on. And that's fine.
I'm not trying to assert everything about you; I'm just explaining the vibes I got. But that's all my words are: vibes.
>Just that there might be worse problems elsewhere
I love my industry, and in my personal experience I can count on one hand the truly problematic colleagues I've worked with or under. I am lucky in that regard for my industry.
Meanwhile, clients and consumers constantly make me question whether I want to continue this career long-term. My plan was always to focus more on a B2B angle to insulate myself from that, but the way the winds are blowing suggests that angle might not even exist in a decade. So I want to at least have a side hustle ready.
And despite those notions, I'm still on the lucky end compared with the third- and even second-hand accounts I've heard. Diving deeper into that pool is unsettling for me, but it might still be more stable than what's going on right now.
There you go, the second part of your answer is a good way of disagreeing without diminishing or dismissing someone else's experience.
+1, customers want their problem solved but at times they struggle to articulate that.
When a customer starts saying “we need to build X”, first ask what the actual problem is etc. It takes actual effort, and you need to speak their language (understand the domain).
But if you have a PM in the middle, now you just start playing telephone and I don’t believe that’s great for anyone involved.
Exactly. The game of telephone is prone to misinterpretation, and when that happens too often, the response tends to be rigidity and a lack of flexibility, out of fear.
Isn't it a bit of both? When it comes to noticing whether or not code will be a security nightmare, a performance nightmare, an architectural nightmare, etc, haven't experienced developers already learned to watch out for these issues?
Too right. Drilling into the domain from first principles and with critical faculties enabled unlocks so much more value, because the engineer can then see much better ways to solve problems.
Programmers have an unfortunate tendency to be too honest!
Customer interaction has imo always been one of the most important parts in good engineering organizations. Delegating that to Product Managers adds unnecessary friction.
Having spent more hours than I care to count struggling to control my facial expressions in client-facing meetings, I find your assertion that that friction is unnecessary highly questionable. Having a "face man" who's sufficiently tech-literate to ask decent questions manage the soft side of client relations frees up a ton of engineering resources that would otherwise be squandered replying to routine emails.
I’m a people person.
https://www.youtube.com/watch?v=hNuu9CpdjIo
Sounds like we're finally doing agile.