Hacker News

jernestomg
I miss thinking hard (jernesto.com)

gyomu9 hours ago

This March 2025 post from Aral Balkan stuck with me:

https://mastodon.ar.al/@aral/114160190826192080

"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.

When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."

nindalf3 minutes ago

For me it’s a related but different worry. If I’m no longer thinking deeply, then maybe my thinking skills will simply atrophy and die. Then when I really need it, I won’t have it. I’ll be reduced to yanking the lever on the AI slot machine, hoping it comes up with something that’s good enough.

But at that point, will I even have the ability to distinguish a good solution from a bad one? How would I know, if I’ve been relying on AI to evaluate if ideas are good or not? I’d just be pushing mediocre solutions off as my own, without even realising that they’re mediocre.

dwaite11 minutes ago

Supposedly when Michelangelo was asked about how he created the statue of David, he said "I just chipped away everything that wasn’t David.”

Your work is influenced by the medium by which you work. I used to be able to tell very quickly if a website was developed in Ruby on Rails, because some approaches to solve a problem are easy and some contain dragons.

If you are coding in clay, the problem is getting turned into a problem solvable in clay.

The challenge if you are directing others (people or agents) to do the work is that you don't know if they are taking into account the properties of the clay. That may be the difference between clean code - and something which barely works and is unmaintainable.

I'd say in both cases of delegation, you are responsible for making sure the work is done correctly. And, in both cases, if you do not have personal experiences in the medium you may not be prepared to judge the work.

helloplanets9 hours ago

And when programming with agentic tools, you need to actively push for the idea not to regress to the most obvious/average version. The amount of effort you need to expend pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.

There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.

fallous8 hours ago

You just described the burden of outsourcing programming.

onion2k4 hours ago

Outsourcing development and vibe coding are incredibly similar processes.

If you just chuck ideas at the external coding team/tool you often get rubbish back.

If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.

darkwater6 hours ago

With the basic and enormous difference that the feedback loop is 100x or even 1000x faster. Which changes the type of game completely, although other issues will probably arise as we try this new path.

Terr_4 hours ago

That embeds an assumption that the outsourced human workers are incapable of thought, and experience/create zero feedback loops of their own.

Frustrated rants about deliverables aside, I don't think that's the case.

darkwater3 hours ago

No. It just reflects a harsh reality: what's really soul-crushing in outsourced work is having endless meetings to pass down / get back information, and having to wait days/weeks/months to get some "deliverable" back to iterate on. Yes, outsourced human workers are totally capable of creative thinking that makes sense, but their incentive will always be throughput over quality, since their bosses usually quote fixed prices (at least in what I lived through personally).

If you are outsourcing to an LLM, in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts, or go deep into more technical details and tradeoffs. You are "just" not writing the actual code anymore, because another layer of abstraction has been added.

Jagerbizzle17 minutes ago

Also, with an LLM you can tell it to throw away everything and start over whenever you want.

When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.

tomrod7 hours ago

100%! There is significant analogy between the two!

salawat7 hours ago

There is a reason management types are drawn to it like flies to shit.

theshrike795 hours ago

Working with and communicating with offshored teams is a specific skill too.

There are tips and tricks on how to manage them, and not knowing them will bite you later on. Like the basic rule of never asking yes-or-no questions, because in some cultures saying "no" isn't a thing: they'll rather default to yes, and effectively lie, than admit failure.

agumonkey6 hours ago

We need a new word for on-premise offshoring.

On-shoring ;

aleph_minus_one6 hours ago

> On-shoring

I thought "on-shoring" was already commonly used for the process that undoes off-shoring.

saghm5 hours ago

How about "in-shoring"? We already have "insuring" and "ensuring", so we might as well add another confusingly similar sounding term to our vocabulary.

weebull an hour ago

How about we leave "...shoring" alone?

tmtvl44 minutes ago

Rubber-duckying... although a rubber ducky can't write code... infinite-monkeying?

pferde3 hours ago

Corporate has been using the term "best-shoring" for a couple of years now. My best guess is that it means "off-shoring or on-shoring, whichever of the two is cheaper".

intended4 hours ago

Ai-shoring.

Tech-shoring.

johnisgood4 hours ago

Would work, but with "snoring". :D

dzdt3 hours ago

vibe-shoring

heliumtera2 hours ago

We already have a perfect one

Slop;

GCUMstlyHarmls8 hours ago

I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see, the other is frustrating, leaves you with a lot of scratches and ultimately both of you "agreeing" on a marginal compromise.

lambdaone3 hours ago

Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.

kimixa2 hours ago

Yup - I've likened it to working with juniors: often smart, with a good understanding and "book knowledge" of many of the languages and tools involved, but you have to step back and correct things regularly - normally around local details and project specifics. But then the "junior" you work with changes every day, so you have to start again from scratch.

I think there needs to be a sea change in current LLM tech for that to no longer be the case: either massively increased context sizes, so they can hold nearly a career's worth of learning (without the tendency to start ignoring that context, as happens at the larger end of today's still-way-too-small-for-this context windows), or continuous training passes that integrate those "learnings" directly into the weights themselves - which might be theoretically possible today, but requires many orders of magnitude more compute than is available, even ignoring cost.

throwthrowuknow9 minutes ago

Try writing more documentation. If your project is bigger than a one-man team you need it anyway, and with LLM coding you effectively have an infinite-man team.

KptMarchewa2 hours ago

I've never seen a horse that scratches you.

rixed an hour ago

To me it feels a bit like literate programming: it forces you to form a much more accurate idea of your project before you start. Not a bad thing, but it can also be wasteful when you eventually realise, after the fact, that the idea was actually not that good :)

fflluuxx3 hours ago

This is why people think less of artists like Damien Hirst and Jeff Koons: their hands have never once touched the art. They have no connection to the effort. To the process. To the trial and error. To the suffering. They've outsourced it, monetized it, and made it as efficient as possible. It's also soulless.

dkdbejwi3834 hours ago

Fair enough but I am a programmer because I like programming. If I wanted to be a product manager I could have made that transition with or without LLMs.

raw_anon_111118 minutes ago

I’m a programmer (well half my job) because I was a short (still short) fat (I got better) kid with a computer in the 80s.

Now, the only reason I code, and have been coding since the week I graduated from college, is to support my insatiable addictions to food and shelter.

While I like seeing my ideas come to fruition, over the last decade my ideas have been a lot larger than I could reasonably build in 40 hours a week without other people working on the projects I lead. Until the last year and a half, when I could do it myself using LLMs.

Seeing my carefully designed spec - including all of the cloud architecture - get done in a couple of days, with my hands on the wheel, when it would have taken at least a week with me doing some of the work while juggling a couple of other people, is life-changing.

jiveturkey8 hours ago

> need to make it crystal clear

That's not an upside unique to LLM versus human-written code. When writing it yourself, you also need to make it crystal clear. You just do that in the language of implementation.

balamatom4 hours ago

And programming languages are designed for clarifying the implementation details of abstract processes; while human language is this undocumented, half grandfathered in, half adversarially designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.

The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).

But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.

What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.

Der_Einzige7 hours ago

Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".

If you use LLMs at very high temperature with samplers that correctly keep your writing coherent (i.e. min_p, or better ones like top-h or p-less decoding), then "regression to the mean" literally DOES NOT HAPPEN!!!!
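For context, the min_p truncation named above can be sketched in a few lines; the function shape, parameter names, and defaults here are illustrative, not any particular library's API:

```python
import numpy as np

def sample_min_p(logits, temperature=1.0, min_p=0.1, rng=None):
    """Min-p truncation sketch: apply temperature, then keep only tokens
    whose probability is at least `min_p` times that of the top token,
    and sample from the renormalized remainder. High temperature adds
    diversity while the min-p cutoff drops the incoherent tail."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    probs = np.where(probs >= min_p * probs.max(), probs, 0.0)  # cut the tail
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

The idea is that the cutoff scales with the model's own confidence: when one token dominates, almost everything else is excluded regardless of temperature.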

hnlmorg5 hours ago

Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.

LLMs don't "reason" the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature is more likely to increase the odds of unexecutable pseudocode than to produce a valid but more esoteric implementation of a problem.
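The temperature effect described above is just a division of the logits before the softmax; a minimal sketch (logit values purely illustrative):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature divides the logits before the softmax: values above 1
    flatten the distribution (more diverse picks), values below 1
    sharpen it toward the top token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Three tokens: one likely, one plausible, one unlikely (illustrative logits).
logits = [4.0, 2.0, 0.0]
low = softmax_with_temperature(logits, 0.5)   # sharpened distribution
high = softmax_with_temperature(logits, 4.0)  # flattened: tail tokens gain mass
```

At high temperature the unlikely token's share grows substantially, which is why raw high-temperature decoding tends toward incoherence unless a truncation sampler cuts the tail.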

Terr_4 hours ago

To put it another way, a high-temperature mad-libs machine will write a very unusual story, but that isn't necessarily the same as a clever story.

balamatom3 hours ago

So why is this "temperature" not on, like, a rotary encoder?

So you can just, like, tweak it when it's working against your intent in either direction?

bob10294 hours ago

High temperature seems fine for my coding uses on GPT5.2.

Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.

I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
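The feed-errors-back loop described above can be sketched roughly as follows; `generate` and `run_with_feedback` are hypothetical names standing in for the model call and the agent harness, not a real API:

```python
import contextlib
import io
import traceback

def run_with_feedback(generate, max_rounds=3):
    """Sketch of the feed-errors-back loop: `generate` stands in for a
    model call (hypothetical interface) that takes the previous round's
    error text (or None on the first round) and returns source code.
    Each failing round's traceback becomes the next round's feedback."""
    feedback = None
    code = ""
    for _ in range(max_rounds):
        code = generate(feedback)
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(compile(code, "<generated>", "exec"), {})
            return code, buf.getvalue()        # success: code plus its output
        except Exception:
            feedback = traceback.format_exc()  # hand the error back to the model
    return code, None                          # gave up after max_rounds
```

A real agent would run the code in a sandboxed subprocess rather than `exec`, but the control flow is the same: errors are just another prompt input.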

adevilinyc7 hours ago

How do you configure LLM temperature in coding agents, e.g. opencode?

Der_Einzige7 hours ago

You can't without hacking it! That's my point! The only places you can easily are via the API directly, or via "coomer" frontends like SillyTavern, Oobabooga, etc.

Same problem with image generation (lack of support for different SDE solvers, the image equivalent of LLM sampling), but that space has its own "coomer" tools, i.e. ComfyUI or Automatic1111.

yoyohello137 hours ago

Once again, porn is where the innovation is…

dizhn5 hours ago

Please.. "Creative Writing"

socalgal25 hours ago

To me it's all abstraction. I didn't write my own OS. I didn't write my own compiler. I didn't write the standard library. I just use them. I could write them but I'm happy to work on the new thing that uses what's already there.

This is no different than many things. I could grow a tree and cut it into wood but I don't. I could buy wood and nails and brackets and make furniture but I don't. I instead just fill my house/apartment with stuff already made and still feel like it's mine. I made it. I decided what's in it. I didn't have to make it all from scratch.

For me, lots of programming is the same. I just want to assemble the pieces

> When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make

No, your favorite movie is not crap because the creators didn't grind their own lenses. Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok) or their own game engine (plenty of great games use Unreal or Unity).

Krssst5 hours ago

OSes and compilers have a deterministic public interface. They obey a specification developers know, so they can be relied on to write correct software that depends on them, even without knowing their internal behavior. Generative AI does not have those properties.

signatoremo14 minutes ago

> They obey a specification developers know

Which spec? Is there a spec that says if you use a particular set of libraries you'll get a sub-10-millisecond response? You can't even know that for sure if you roll your own code with no 3rd-party libraries.

Bugs are, by definition, issues that arise when developers expect their code to do one thing but it does another, because of an unforeseen combination of factors. Yet we are all OK with that. That's why we accept AI code: it works well enough.

raw_anon_111112 minutes ago

Yes, but developers don't have a deterministic interface either. I still had to be careful about writing out my specs and making sure they were followed. At least I don't have to watch my tone when my two mid-level ticket-taking developers - Claude and Codex - do something stupid. They also do it a lot faster.

refactor_master4 hours ago

But the code you’re writing is guard railed by your oversight, the tests you decide on and the type checking.

So whether you're writing the specced code out by hand or asking an LLM to do it is beside the point, if the code is considered a means to an end - which is what the post above yours was getting at.

skydhash2 hours ago

Tests and type checking are often highway-wide guardrails when the path you want to take is like a tightrope.

Also, the code is not a means to an end. It's going to run somewhere, doing stuff someone wants done reliably and precisely. The overall goal was always to invest some programmer time and salary in order to free up more time for others - not for everyone to start babysitting stuff.

cowboylowrez2 hours ago

When I read discussions about this sort of thing, I often find that folks look hard for similarities and patterns, but once they succeed they ignore the differences. AI discussion in particular is so full of this "pattern matching" style of thinking that the real significance of this tech - how absolutely new and different it is - just gets ignored. Or, even worse, machines get "pattern matched" into humans and folks argue from that point of view. Witness all the "new musicians" who vibe-code disco hits; I'll invariably see the argument that AIs train on existing music just like humans do, so what's the big deal?

But these arguments and the OP's article do reinforce that AI rots brains. Even my sparing use of Google's Gemini and my interactions with the bots here have really dinged my ability to do simple math.

jstanley3 hours ago

> I didn't write my own OS. I didn't write my own compiler. I didn't write the standard library. I just use them. I could write them

Maybe, but beware assuming you could do something you haven't actually tried to do.

Everything is easy in the abstract.

Hendrikto2 hours ago

> No, your favorite movie is not crap because the creators didn't grind their own lens.

But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.

adriand an hour ago

> But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.

Doesn’t that prove the point? You could do that right now, and it would be absolute trash. Just like how right now we are nowhere close to being able to make great software with a single prompt.

I’ve been vibecoding a side project and it has been three months of ideating, iterating, refining and testing. It would have taken me immeasurably longer without these tools, but the end result is still 100% my vision, and it has been a tremendous amount of work.

heliumtera2 hours ago

And if he did, why would I prefer using his prompt instead of mine?

"Write a gangster movie that I like", instead of "...a movie this other guy likes".

But because this is not the case, we appreciate Tarantino more than we appreciate gangster movies. It is about the process.

dropofwill19 minutes ago

This is exactly the process happening in the music space with Suno. Go to their subreddit: they all talk about how they only listen to "their" songs, for the exact reasons you list.

It's bleak out there.

yason4 hours ago

The creative process is not dependent on the abstraction.

> For me, lots of programming is the same. I just want to assemble the pieces

How did those pieces come to be? By someone assembling other pieces, or by someone crafting them out of nothing because nobody else had written them at the time?

Of course you reuse other parts and abstractions for whatever you're not working on, but each time you do something that hasn't been done before you can't help but engage the creative process, even if you're sitting on top of 50 years' worth of abstractions.

In other words, what a programmer essentially has is a playfield. And whether the playfield is a stack of transistors or coding agents, when you program you create something new even if it's defined and built in terms of the playfield.

tonyedgecombe4 hours ago

>I instead just fill my house/apartment with stuff already made and still feel like it's mine.

I'm starting to wonder if we lose something in all this convenience. Perhaps my life is better because I cook my own food, wash my own dishes, chop my own firewood, drive my own car, write my own software. Outwardly the results look better the more I outsource but inwardly I'm not so sure.

On the subject of furnishing your house the IKEA effect seems to confirm this.

https://en.wikipedia.org/wiki/IKEA_effect

globular-toast4 hours ago

There are two stages to becoming a decent programmer: first you learn to use abstraction, then you learn when not to use abstraction.

Trying to find the right level is the art. Once you learn the tools of the trade and can do abstraction, it's natural to want to abstract everything. Most programmers go through such a phase. But sometimes things really are distinct and trying to find an abstraction that does both will never be satisfactory.

When building a house there are generally a few distinct trades that do the work: bricklayers, joiners, plumbers, electricians etc. You could try to abstract them all: it's all just joining stuff together isn't it? But something would be lost. The dangers of working with electricity are completely different to working with bricks. On the other hand, if people were too specialised it wouldn't work either. You wouldn't expect a whole gang of electricians, one who can only do lighting, one who can only do sockets, one who can only do wiring etc. After centuries of experience we've found a few trades that work well together.

So, yes, it's all just abstraction, but you can go too far.

throwaway1324484 hours ago

Well said, great analogy. Sometimes the level of abstraction feels arbitrary - you have to understand the circumstances that led there to see why it's not.

Trasmatta33 minutes ago

Did you not read the post? You're talking from the space of the Builder while neglecting the Thinker. That's fine for some people, but not for others.

jstanley3 hours ago

But you can move a layer up.

Instead of pouring all of your efforts into making one single static object with no moving parts, you can simply specify the individual parts, have the machine make them for you, and pour your heart and soul into making a machine that is composed of thousands of parts, that you could never hope to make if you had to craft each one by hand from clay.

We used to have a way to do this before LLMs, of course: we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.

Even the person making an object from clay is (probably) not refining his own clay or making his own oven.

berkes an hour ago

> we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.

I think this makes a perfect counter-example. Because this structure is an important reason for YC to exist and what the HN crowd often rallies against.

Such large companies generally don't make good products this way. Most, today, just buy companies that built something in the GP's cited vein: a creative process, with pivots, learnings, more pivots, failures, or - when successful - most often success in an entirely different form or area than originally envisioned. Even the large tech monopolies of today originated like that. Zuckerberg never envisioned VR worlds, photo-sharing apps, or chat apps when he started the campus-photobook website. Bezos did not have some 5d-chess blueprint that included the largest internet-infrastructure-for-hire business when he started selling books online.

If anything, this only strengthens the point you are arguing against: a business that operates by a "head" "specifying what they want" and having "something" figure out how to build the parts, is historically a very bad and inefficient way to build things.

i7l an hour ago

And therein lies the crux: some people love to craft each part themselves, whereas others love to orchestrate but not manufacture each part.

With LLMs and engineers often being forced by management to use them, everyone is pushed to become like the second group, even though it goes against their nature. The former group see the part as a means, whereas the latter view it as the end.

Some people love the craft itself and that is either taken away or hollowed out.

ChrisMarshallNY2 hours ago

This is really what it’s about.

As someone that started with Machine Code, I'm grateful for compiled -even interpreted- languages. I can’t imagine doing the kind of work that I do, nowadays, in Machine Code.

I’m finding it quite interesting, using LLM-assisted development. I still need to keep an eye on things (for example, the LLM tends to suggest crazy complex solutions, like writing an entire control from scratch, when a simple subclass, and five lines of code, will work much better), but it’s actually been a great boon.

I find that I learn a lot, using an LLM, and I love to learn.

croes2 hours ago

But we become watchers instead of makers.

There is a difference between cooking and putting a ready meal into the microwave.

Both satisfy your hunger, but only one can give you some kind of pride.

ChrisMarshallNY an hour ago

Eh. I've had pride in my work for over 40 years.

The tools change, but the spirit only grows.

amelius3 hours ago

Yes, but bad ingredients do not make a yummy pudding.

Or, it's like trying to make a MacBook Pro by buying electronics boards from AliExpress and wiring them together.

jstanley3 hours ago

I'd rather have a laptop made from AliExpress components than only have a single artisanal hand-crafted resistor.

i7l an hour ago

That's a false dichotomy, because transistors and ICs are manufactured to be deterministic and nearly perfect. LLMs can never be guaranteed to be like that.

Yes, some things are better when manufactured in highly automated ways (like computer chips), but their design has been thoroughly tested, and before shipping the chips themselves go through lots of checks to make sure they are correct. LLM code is almost never treated that way today.

amelius2 hours ago

Yes, the point is that you can use AI to build bigger things only if you're willing to accept crappy results.

sdoering2 hours ago

To me that seems like a spurious (maybe even false) dichotomy. You can have crappy results without AI, and you can have great results with AI.

Your contrast is an either/or that, in the real world, does not exist.

Take content written by AI, prompted by a human. A lot of it is slop and crap, and there will be more slop and crap with AI than before. But that was also the case when the medium changed from handwritten to printed books, and when paper and printing became cheap we got slop like those 10-cent Western or romance novellas.

We also still had Goethe, still had Kleist, still had Grass (sorry, very German-centric here).

We also have Inception vs. the latest sequel of any Marvel franchise.

I have seen AI-written but human-prompted short stories that made people well up and presented ideas in a light not seen before. And I have seen AI-generated stories that one wants to purge from one's brain.

It isn't the tool - it is the one wielding it.

Question: did Photoshop kill photography? Because honestly, this AI discussion sounds very much like the discussion back then.

weebull an hour ago

> Question: Did photoshop kill photography? Because honestly, this AI discussion to me sounds very much like the discussion back then.

It killed an aspect of it: film processing in the darkroom. Even before digital cameras were ubiquitous, it was standard to get a scan before doing any processing digitally. Chemical processing was reduced to the minimum necessary.

amelius an hour ago

Lightroom killed photography.

mlrtime2 hours ago

I was going to reply defending AI tooling and crappy results, but I think I'm done with it.

I think there's just a class of people who think that you cannot get "MacBook" quality with an LLM. I don't know why I try to convince them; it's not to my benefit.

sfn422 hours ago

It's more like the chess.com vs lichess example in my mind. On the one hand you have a big org with dozens of devs; on the other you have one guy doing a better job.

It's amazing what one competent developer can do, and it's amazing how little a hundred devs end up actually doing when weighed down by bureaucracy. And let's not pretend even half of them qualify as competent, not to mention that they probably don't care either. They get to work and have a 45-minute coffee break, move some stuff around on the Kanban board, have another coffee break, then lunch, then foosball, etc. And when they actually write some code, it's ass.

And sure, for those guys maybe LLMs represent a huge productivity boost. For me it's usually faster to do the work myself than to coax the bot into creating something acceptable.

anymouse12345616 minutes ago

Having a background in fine art (and also knew Aral many years ago!), this prose resonates heavily with me.

Most of the OP article also resonated with me as I bounce back and forth between learning (consuming, thinking, pulling, integrating new information) to building (creating, planning, doing) every few weeks or months. I find that when I'm feeling distressed or unhappy, I've lingered in one mode or the other a little too long. Unlike the OP, I haven't found these modes to be disrupted by AI at all, in fact it feels like AI is supporting both in ways that I find exhilarating.

I'm not sure OP is missing anything because of AI per se, it might just be that they are ready to move their focus to broader or different problem domains that are separate from typing code into an IDE?

For me, AI has allowed me to probe into areas that I would have shied away from in the past. I feel like I'm being pulled upward into domains that were previously inaccessible.

I use Claude on a daily basis, but still find myself frequently hand-writing code as Claude just doesn't deliver the same results when creating out of whole cloth.

Claude does tend to make my coarse implementations tighter and more robust.

I admittedly did make the transition from software only to robotics ~6 years ago, so the breadth of my ignorance is still quite thrilling.

raw_anon_111135 minutes ago

In 30 years across 10 jobs, the companies I’ve worked for have not paid me to “code”. They’ve paid me to use my experience to add more business value than the total cost of employing me.

I’m no less proud of what I built in the last three weeks using three terminal sessions - one with codex, one with Claude, and one testing everything from carefully designed specs - than I was when I first booted a computer, did “call -151” to get to the assembly language prompt on my Apple //e in 1986.

The goal then was to see my ideas come to life. The goal now is to keep my customers happy, get projects done on time and on budget, meet requirements, and continue to have my employer put cash in my account twice a month - and, formerly, AMZN stock in my brokerage account at vesting.

resters28 minutes ago

It's very similar now: you have to surf a swell of selective ignorance that is (or feels?) less reliable than the ignorance one adopts when using a dependency whose source code one hasn't read and understood.

One must be conversant in abstractions that are themselves ephemeral and half-hallucinated. It's a question of what to cling to, what to elevate beyond possibly hallucinated rubbish. At some level it's a much faster version of the meatspace process, and it can be extremely emotionally uncomfortable and anarchic to many.

abhgh6 hours ago

This is an amazing quote - thank you. This is also my argument for why I can't use LLMs for writing (proofreading is OK) - what I write is not produced as a side-effect of thinking through a problem, writing is how I think through a problem.

Cthulhu_5 hours ago

Counterpoint (more devil's advocate): I'd argue it's better that an LLM writes something (e.g. the solution to, or thinking-through of, a problem) than nothing at all.

Counterpoint to my own counterpoint, will anyone actually (want to) read it?

Counterpoint to the third degree, to loop it back around: an LLM might, and I'd even argue an LLM is better at reading and ingesting long text (I'm thinking of architectural documentation, etc.) than humans are. Speaking for myself, I struggle to read attentively through e.g. a long document; I quickly lose interest and skim, or just focus on what I need instead.

yurishimo2 hours ago

I kinda saw this happen in realtime on reddit yesterday. Someone asked for advice on how to deal with a team that was in over their heads shipping slop. The crux of their question was fair, but they used a different LLM to translate their original thoughts from their native language into English. The prompt was "translate this to english for a reddit post" - nothing else.

The LLM added a bunch of extra formatting, emphasis, and structure to what was probably originally a bit of a ramble but obviously human-written. The comments absolutely lambasted the OP for being a hypocrite: complaining about their team using AI, but then seeing little problem with posting an obviously AI-generated question because they didn't deem their own English skills good enough to ask it directly.

I'm not going to pass judgement on this scenario, but I did think the entire encounter was a "fun" anecdote in addition to your comments.

Edit: wrods

vict72 hours ago

I saw the same post and was a bit saddened that all the comments seemed to be focused on the implied hypocrisy of the OP instead of addressing the original concern.

As someone that’s a bit of a fence-sitter on the matter of AI, I feel that using it in the way that OP did is one of the less harmful or intrusive uses.

samusiam3 hours ago

Writing is how I think through a problem too, but that also applies to writing and communicating with an AI coding agent. I don't need to write the code per se to do the thinking.

skydhash2 hours ago

You could write pseudocode as well. But for someone who is familiar with a programming language, it's just faster to use the latter. And if you're really familiar with the language, you start thinking in it.

Cuervo_3 hours ago

I personally have found success with an approach that's the inverse of how agents are being used generally.

I don't allow my agent to write any code. I ask it for guidance on algorithms, and to supply the domain knowledge that I might be missing. When using it for game dev for example, I ask it to explain in general terms how to apply noise algorithms for procedural generation, how to do UV mapping etc, but the actual implementation in my language of choice is all by hand.

Honestly, I think this is a sweet spot. The amount of time I save getting explanations of concepts that would otherwise take a fair bit of digging is huge, but I'm still entirely in control of my codebase.
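For instance, here is a minimal hand-written sketch of the kind of concept I'd ask the agent to explain in general terms (1-D value noise for procedural generation) and then implement myself. All names and the hashing constant are my own illustration, not any particular library's API:

```python
import math
import random

# 1-D value noise: fix a pseudo-random value at each integer lattice
# point, then smoothly interpolate between neighboring points.
def value_noise(x, seed=0):
    def lattice(i):
        # Deterministic pseudo-random value in [0, 1) for lattice point i.
        # The multiplier is an arbitrary large prime for mixing.
        return random.Random(i * 1_000_003 + seed).random()

    i = math.floor(x)
    t = x - i
    t = t * t * (3.0 - 2.0 * t)  # smoothstep easing for smooth blending
    return lattice(i) * (1.0 - t) + lattice(i + 1) * t

# Sample the noise over [0, 2] at quarter-unit steps.
samples = [round(value_noise(x / 4), 3) for x in range(9)]
print(samples)
```

At integer inputs the noise returns the lattice value exactly; in between, it blends the two neighbors, which is what gives procedural terrain its smooth look.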

shsksjan hour ago

Yep, this is the sweet spot. Though I still let it type code a lot - boilerplate stuff I'd be bored out of my mind typing. And I've found it has an extremely high success rate typing that code; on top of that, it's very easy for me to review. No friction at all. Granted, this is often no larger than 100 lines or so (across various files).

If it takes you more than a few seconds or so to understand code an agent generated you’re going to make mistakes. You should know exactly what it’s going to produce before it produces it.

oceanplexian6 hours ago

Coding is not at all like working a lump of clay unless you’re still writing assembly.

You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.

The human element of discovery is still there if a robot stacks the bricks based on a different set of syntax (Natural Language), nothing about that precludes authenticity or the human element of creation.

vaylian5 hours ago

> You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs.

Correct. However, you will probably notice that your solution to the problem doesn't feel right when the bricks available to you don't compose well. The AI will just happily smash bricks together, and at first glance it might seem that the task is done.

Choosing the right abstraction (bricks) is part of finding the right solution. And understanding that choice often requires exploration and contemplation. AI can't give you that.

Cthulhu_5 hours ago

Not yet, anyway; I do trust LLMs for writing snippets or features at this point, but I don't trust them for setting up new applications, technology choices, architectures, etc.

The other day people were talking about metrics - the number of lines of code people vs. LLMs could output in a given time, or the lines of code in an LLM-assisted application - using LOC as a metric for productivity.

But would an LLM ever suggest using a utility or library, or re-architecting an application, over writing its own code?

I've got a fairly simple application that renders a table (and, in future, some charts) with metrics. At the moment all of that is done "by hand"; the last features were things like filtering and sorting the data. But that kind of thing can also be done by a "data table" library. Or the whole application could be thrown out in favor of a workbook (one of those data analysis tools - I'm not at home in that area at all). That'd save hundreds of lines of code plus the maintenance burden.

z3dd2 hours ago

I was creating a Jira/bb wrapper with node recently and Claude actually used plenty of libraries to solve some tasks.

Nab4432 hours ago

Same with GPT, but I felt it was more "hey, everyone uses that, so why not me" than finding the right tool for the job. Can't speak for Claude.

hennell4 hours ago

It depends on what you're doing, not really on what you do it with.

I can do some CRUD apps where it's just data input to data store to output, with little shaping needed. Or I can do apps where there are lots of filters, actions, and logic based on what's inputted, which require some thought to ensure they actually solve the problem they're proposed for.

"Shaping the clay" isn't about the clay, it's about the shaping. If you have to make a ball of clay and also a Lego bridge that a 175 kg human can stand on, you'll learn more about Lego and building with it than you will about clay.

Get someone to give you a Lego instruction sheet and you'll learn far less, because you're not shaping anymore.

satvikpendem6 hours ago

Exactly, and that's why I find AI coding solves this well: I find it tedious to put the bricks together for the umpteenth time when I can just have an AI do it (I will of course verify the code when it's done; I'm not advocating for vibe coding here).

This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.

Jensson5 hours ago

> Coding is not at all like working a lump of clay unless you’re still writing assembly.

Isn't the analogy apt? You can't make a working car using a lump of clay, just a car statue, a lump of clay is already an abstraction of objects you can make in reality.

balamatom3 hours ago

Bingo.

lsy5 hours ago

I think the analogy to high level programming languages misunderstands the value of abstraction and notation. You can’t reason about the behavior of an English prompt because English is underspecified. The value of code is that it has a fairly strong semantic correlation to machine operations, and reasoning about high level code is equivalent to reasoning about machine code. That’s why even with all this advancement we continue to check in code to our repositories and leave the sloppy English in our chat history.

skydhash2 hours ago

Yep. Any statement in Python or other languages can be mapped to something the machine will do. And it will be the same thing every single time (concurrency and race issues aside). There's no English sentence that can be as clear.

We’ve created formal notation to shorten writing. And computation is formal notation that is actually useful. Why write pages of specs when I could write a few lines of code?
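A tiny illustration of that point, with a hypothetical spec of my own invention: the English requirement "return the ten most recent active users, newest first" leaves ties, inactive users, and ordering ambiguous, while a few lines of code pin down every choice.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class User:
    name: str
    signup: datetime
    active: bool

def newest_active(users, n=10):
    # Unambiguous: filter to active users, sort by signup date
    # descending, then take the first n.
    return sorted((u for u in users if u.active),
                  key=lambda u: u.signup, reverse=True)[:n]

users = [
    User("a", datetime(2024, 1, 1), True),
    User("b", datetime(2024, 3, 1), False),
    User("c", datetime(2024, 2, 1), True),
]
print([u.name for u in newest_active(users)])  # ['c', 'a']
```

The function body is shorter than the prose needed to remove the same ambiguities, and it runs the same way every time.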

hnlmorg5 hours ago

You’re both right. It just depends on the problems you’re solving and the languages you use.

I find languages like JavaScript promote the idea of “Lego programming” because you’re encouraged to use a module for everything.

But when you start exploring ideas that haven’t been thoroughly explored already, and particularly in systems languages, which are less zealous about DRY (don’t repeat yourself) methodologies, then you can feel a lot more like a sculptor.

Likewise if you’re building frameworks rather than reusing them.

So it really depends on the problems you’re solving.

For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.

nielsbot5 hours ago

While there is still a market for artisanal furniture, dishes, and clothes, most people buy mass-produced dishes, clothes, and furniture.

I wonder if software creation will end up in a similar place. There still might be a small market for handmade software, but the majority of it will be mass-produced. (That is, made by LLM - or software itself will mostly go away, and people will get their work done via LLMs instead of "apps.")

Cthulhu_5 hours ago

As with furniture, it's supply vs demand, and it's a discussion that goes back decades at this point.

Very few people (even before LLM coding tools) actually did low-level "artisanal" coding; I'd argue the vast majority of software development goes into implementing features in B2B/B2C software - building screens, logins, overviews, detail pages, etc. That requires (required?) software engineers too, and skill/experience/etc., but it was more about assembling existing parts and connecting them.

Years ago there was already a feeling that a lot of software development boiled down to taping libraries together.

Or from another perspective, replace "LLM" with "outsourcing".

pinkgolem5 hours ago

I would argue the opposite..

What you get right now is mass-replicated software: just another copy of SAP/Office/Spotify/whatever.

That software is not made individually for you; you get a copy like millions of other people, and there is nearly no market anymore for individual software.

LLMs might change that. We have a bunch of internal apps now for small annoying things.

They all have their quirks, but they're only accessible internally and make life a little bit easier for the people working for us.

Most of them are one-shot LLM things: throw them away when you no longer need them, or just one-shot again.

Cthulhu_5 hours ago

The question is whether that's a good thing or not; software adages like "Not Invented Here" aren't going to go away. For personal tools / experiments it's probably fine, just like hacking together something in your spare time, but it can become a risk if you, others, or a business start to depend on it (just like spare time hacked tools).

I'd argue that in most cases it's better to do some research and find out if a tool already exists, and if it isn't exactly how you want it... to get used to it, like one did with all other tools they used.

williamcottonan hour ago

> it can become a risk if you, others, or a business start to depend on it (just like spare time hacked tools).

So that Excel spreadsheet that manages the entire sales funnel?

intended4 hours ago

Acceptance of mass production came only after the establishment of quality control.

Skipping over that step results in a world of knock offs and product failures.

People buy Zara or H&M because they can offload the work of verifying quality to the brand.

This was a major hurdle that mass manufacturing had to overcome to achieve dominance.

pixl973 minutes ago

>Acceptance of mass production is only post establishment of quality control.

Hence why a lot of software development is gluing libraries together these days.

CraigJPerry3 hours ago

>> Coding is like

That description is NOT coding, coding is a subset of that.

Coding comes once you know what you need to build; it is the process of expressing that in a programming language, and as you do so you apply all your knowledge, experience, and crucially your taste to arrive at an implementation which does what's required (functionally and non-functionally) AND is open to the possibility of change in the future.

Someone else here wrote a great comment about this the other day, along these lines: take that week of work described in the GP's comment, and on the Friday afternoon delete all the code checked in. Coding is just the part needed to recreate the check-in, which would take a lot less than a week!

All the other time was spent turning you into the developer who could understand why to write that code in the first place.

These tools do not allow you to skip the process of creation. They allow you to skip aspects of coding. If you choose to, they can also elide your taste, but that's not a requirement of using them; they respond well to examples of code and other direction that guides them toward your taste. The functional and non-functional parts they're pretty good at without much steering now, but I always steer for my taste because, e.g., Opus 4.5 defaults to a more verbose style than I care for.

pikzel3 hours ago

It's all individual. That's like saying writing only happens when you know exactly the story you want to tell. I love opening a blank project with a vague idea of what I want to do, and then just starting to explore while I'm coding.

pixl97a few seconds ago

I'm sure some coding works this way, but I'd be surprised if it's more than a small percentage of it.

koliber2 hours ago

Sometimes you want an artistic vase that captures some essential element of beauty, culture, or emotion.

Sometimes you want a utilitarian teapot to reliably pour a cup of tea.

The materials and rough process for each can be very similar. One takes a master craftsman and a lot of time to make and costs a lot of money. The other can be made on a production line and the cost is tiny.

Both are desirable, for different people, for different purposes.

With software, it's similar. A true master knows when to get it done quick and dirty and when to take the time to ponder and think.

bayindirh2 hours ago

> Sometimes you want a utilitarian teapot to reliably pour a cup of tea.

If you pardon the analogy, watch how Japanese make a utilitarian teapot which reliably pours a cup of tea.

It's more complicated and skill-intensive than it looks.

In both realms, making an artistic vase can be simpler than a simple utilitarian tool.

AI is good at making (arguably poor-quality) artistic vases via its stochastic output, not highly refined, reliable tools. The tolerances on the latter are tighter.

koliber34 minutes ago

There is a whole range of variants in between those two "artistic vs utilitarian" points. Additionally, there is a ton of variance around "artistic" vs "utilitarian".

Artisans in Japan might go to incredible lengths to create utilitarian teapots. Artisans who graduated last week from a 4-week pottery workshop will produce a different kind of quality, albeit artisanal. $5.00 teapots from an East Asian mass-production factory will be very different from high-quality, mass-produced upmarket teapots at a higher price. I have things in my house that fall into each of those categories (not all teapots, but different kinds of wares).

Sometimes commercial manufacturing produces worse tolerances than hand-crafting. Sometimes, commercial manufacturing is the only way to get humanly unachievable tolerances.

You can't simplify it into "always" and "never" absolutes. Artisan is not always nicer than commercial. Commercial is not always cheaper than artisan. _____ is not always _____ than ____.

If we bring it back to AI, I've seen it produce crap, and I've also seen it produce code that honestly impressed me (my opinion is based on 24 years of coding and engineering management experience). I am reluctant to make a call where it falls on that axis that we've sketched out in this message thread.

jatora20 minutes ago

Yeah? And then you continue prompting and developing, and go through a very similar iterative process, except now it's faster and you get to tackle more abstract, higher level problems.

"Most developers don't know the assembly code of what they're creating. When you skip assembly you trade the very thing you could have learned to fully understand the application you were trying to make. The end result is a sad simulacrum of the memory efficiency you could have had."

This level of purity-testing is shallow and boring.

ibestvina4 hours ago

This makes no sense to me. There are plenty of artists out there (e.g. El Anatsui), not to mention whole professions such as architects, who do not interact directly with what they are building and yet can have a profound relationship with the final product.

Discovering the right problem to solve is not necessarily coupled to being "hands on" with the "materials you're shaping".

lolive3 hours ago

In my company, [enterprise IT] architects come in two kinds. People with a CV longer than my arm, who know/anticipate everything that could fail and have reached a level of understanding that I personally call "wisdom". And theorists, who read books and norms, focus mostly on the nominal case, and have no idea of [and no interest in] how the real world will be a hard brick wall that challenges each and every idea you invent.

Not being hands-on, and more importantly not LISTENING to the hands-on people and learning from them, is a massive issue in my surroundings.

So thinking hard on something is cool. But making it real is a whole different story.

Note: as Steve used to say, "real artists ship".

darepublic4 hours ago

you think El Anatsui would concur that they didn't interact directly with what they were building? "hands on", "material you're shaping" is a metaphor

ibestvina4 hours ago

I don't see why his involvement, explaining to his team how exactly to build a piece, is any different from a developer explaining to an LLM how to build a certain feature, when it comes to the level of "being hands on".

Obviously I am not comparing his final product with my code, I am simply pointing out how this metaphor is flawed. Having "workers" shape the material according to your plans does not reduce your agency.

skydhash2 hours ago

> I don't see why his involvement, explaining to his team how exactly to build a piece, is any different from a developer explaining to an LLM

Because everyone under him knows that a mistake big enough is a quick way to unemployment or legal actions. So the whole team is pretty much aligned. A developer using an LLM may as well try to herd cats.

ibestvina2 hours ago

First, that's quite a sad view of incentive structures. Second, you can't be serious in thinking that a worker worried about being fired puts the person in charge closer to the "materials" and more "hands on" with the project.

bodge50004 hours ago

"The muse visits during the act of creation, not before. Start alone."

That has actually been a major problem for me in the past where my core idea is too simple, and I don't give "the muse" enough time to visit because it doesn't take me long enough to build it. Anytime I have given the muse time to visit, they always have.

isolli5 hours ago

This is very insightful, thanks. I had a similar thought regarding data science in particular. Writing those pandas expressions by hand during exploration means you get to know the data intimately. Getting AI to write them for you limits you to a superficial knowledge of said data (at least in my case).
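As a small, hypothetical sketch of what "writing those expressions by hand" looks like (the frame and column names are invented for illustration): each expression you type forces you to confront a quirk of the data, here a missing value that a generated query might silently skip over.

```python
import pandas as pd

# A toy dataset standing in for whatever you're exploring.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south", "north"],
    "sales": [10.0, None, 7.0, 12.0, 3.0],
})

# How many sales figures are missing? Writing this yourself makes you
# ask *why* they are missing before aggregating anything.
print(df["sales"].isna().sum())  # 1

# Group means silently skip the NaN - a behavior worth knowing about
# before you trust the numbers.
print(df.groupby("region")["sales"].mean())
```

It's exactly these little detours (why is one value missing? what does `mean` do with it?) that build the intimacy with the data the parent comment describes.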

darepublic4 hours ago

Thanks for the quote, it definitely resonates. Distressing to see many people who can't relate to this, taking it literally and arguing that there is nothing lost the more removed they are from the process.

anonymous3449 hours ago

Yes, this is maybe why I prefer to jump directly to coding, instead of using Canva to draw the GUI and such. I would not know what to draw, because the involvement is not so deep... or something.

leftbehinds3 hours ago

Reminds me of the arguments for hosting a server vs. running stuff in the cloud, or a VPS vs. containers.

Bengalilol4 hours ago

I love Aral, he is so invested.

spacecadet2 hours ago

This is cute, but it's true of ALL activities in life. I have to constantly remind my brother that his job is not unique, and if he took a few moments he might realize that flipping burgers is also molding lumps of clay.

I think the biggest beef I have with engineers is that for decades they more or less reduced the value of other people's lumps of clay, and now they want to throw up their arms when it's their own.

logicprog2 hours ago

This is beautifully written, but as a point against agentic AI coding, I just don't really get it.

It seems to assume that vibe coding - or whatever you call the Gas Town model of programming - is the only option, but you don't have to do that. You don't have to specify upfront what you want and then never change or develop it as you go through the process of building, and you don't have to accept whatever the AI gives you on the other end as final.

You can explore the affordances of the technologies you're using and modify your design and vision for what you're building as you go; if anything, I've found AI coding makes it far easier to change and evolve my direction, because it can update all the various parts of the code that need updating when I change course, as well as keeping the tests, specification, and documentation in sync, easily and quickly.

You also don't need to take the final product as a given, a "simulacrum delivered from a vending machine": build, and then once you've gotten something working, look at it and decide that it's not really what you want, and then continue to iterate and change and develop it. Again, with AI coding, I've found this easier than ever because it's easier to iterate on things. The process is a bit faster for not having to move the text around and looking up API documentation myself, even though I'm directly dictating the architecture and organization and algorithms and even where code should go most of the time.

And with the method I'm describing, where you're in the code just as much as the AI is, just using it to do the text/API/code munging, you can even let the affordances of not just the technologies but the source code and programming language itself affect how you do this: if you care about the quality, clarity, and organization of the code the AI is generating, you'll see when it's trying to brute-force its way past technical limitations, and you can redirect it to follow the grain instead. It just becomes easier and more fluid to do that.

If anything, AI coding in general makes it easier to have a conversation with the machine and its affordances and your design vision and so on, then before because it becomes easier to update everything and move everything around as your ideas change.

And nothing about it means that you need to be ignorant of what's going on; ostensibly you're reviewing literally every line of code it creates and deciding what libraries and languages it uses, as well as the architecture, organization, and algorithms. You are, aren't you? So you should know everything you need to know. In fact, I've learned several libraries and a language just from watching it work - enough that I can work with them without looking anything up, even new syntax and constructs that would have been very unfamiliar in my manual-coding days.

moron4hire3 hours ago

I have no idea who this guy is (I guess he's a fantasy novelist?) but this video came up in my YouTube feed recently and feels like it matches closely with the themes you're expressing. https://youtu.be/mb3uK-_QkOo?si=FK9YnawwxHLdfATv

boredtofears8 hours ago

I dunno; when you've made about 10,000 clay pots, it's kinda nice to skip to the end result - you're probably not going to learn a ton with clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the outset.

I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.

belZaah8 hours ago

Depends on the problem. If the complexity of what you are solving is in the business logic or, generally low, you are absolutely right. Manually coding a signup flow #875 is not my idea of fun either. But if the complexity is in the implementation, it’s different. Doing complex cryptography, doing performance optimization or near-hardware stuff is just a different class of problems.

aleph_minus_one6 hours ago

> If the complexity of what you are solving is in the business logic or, generally low, you are absolutely right.

The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.

So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business-logic programming.

skydhashan hour ago

Coding signup flow #875 should be as easy as using a snippet tool or a code generator. Everyone explaining why using an LLM is a good idea sounds like they're living in the stone age of programming: there are already industrial-strength tools to get these things done faster. Often so fast that describing the task in English feels like wasted time.

boredtofears7 hours ago

In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

Can't speak to firmware code or complex cryptography, but my hunch is that if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

kranner7 hours ago

> my hunch is that if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

Presumably humanity still has room to grow and not everything is already in the training set.

aleph_minus_one6 hours ago

> In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

This rather tells that the kind of performance optimizations that you ask for are very "standard".

charcircuit5 hours ago

Most optimizations are about making sure you don't do unnecessary work, or that you use the hardware effectively. The standard techniques are all you need 99% of the time you are doing performance work. The hard part about performance is dedicating the time to it and not letting it regress as you scale the team. With AI, you can have agents constantly profiling the codebase, identifying and optimizing hotspots as they get introduced.
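A minimal sketch of that kind of profiling pass, using Python's stdlib profiler; the workload and function names here are invented for illustration, not from any real codebase:

```python
import cProfile
import io
import pstats

# A deliberately slow function standing in for a real hotspot.
def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def workload():
    # Stand-in for the code path an agent would exercise.
    return [slow_sum(10_000) for _ in range(50)]

# Profile the workload and report the functions with the largest
# cumulative time - this is where optimization effort should go first.
profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

Running this surfaces `slow_sum` at the top of the cumulative-time ranking; an agent wired to do this on every change could flag newly introduced hotspots the same way.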

aleph_minus_one2 hours ago

> Most optimizations are making sure you [...] use the hardware effectively.

If you really care about using the hardware effectively, optimizing the code is so much more than what you describe.

bravetraveler7 hours ago

import claypot

trillion dollar industry boys

mlvljr5 hours ago

[dead]

CamperBob28 hours ago

Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.

sonofhans7 hours ago

Ironic. The frequency and predictability of this type of response — “This criticism of new technology is invalid because someone was wrong once in the past about unrelated technology” — means there might as well be an LLM posting these replies to every applicable article. It’s boring and no one learns anything.

It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?

hackable_sand2 hours ago

Note that we still have not solved cameras or even cars.

The biggest lesson I am learning recently is that technologists will bend over backwards to gaslight the public to excuse their own myopia.

dwrolvink6 hours ago

Interesting comparison. I remember watching a video on that. Landscape painting, portraiture, etc., were arts that took an enormous nosedive. We, as humans, have missed out on a lot of art because of the invention of the camera. On the other hand, the benefits of the camera need no elaboration. Currently AI has a lot of foot guns, though, which I don't believe the camera had. I hope AI gets to that point too.

jack_pp4 hours ago

The footgun cameras had was exposure time.

1826 - The Heliograph - 8+ hours

1839 - The Daguerreotype - 15–30 Mins

1841 - The Calotype - 1–2 Mins

1851 - Wet Plate Collodion - 2–20 Secs

1871 - The Dry Plate - < 1 Second.

So it took 45 years to perfect the process so you could take an instant image. Yet we complain after 4 years of LLMs that they're not good enough.

AdieuToLogic7 hours ago

> Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.

This is a non sequitur. Cameras have not replaced paintings, assuming that is the inference. Instead, they serve only as an additional medium for the same concerns quoted:

  The process, which is an iterative one, is what leads you 
  towards understanding what you actually want to make, 
  whether you were aware of it or not at the beginning.
Just as this applies to refining a software solution captured in code, and just as a painter discards unsatisfactory paintings and tries again, so too when people say, "that picture didn't come out the way I like, let's take another one."

williamcottonan hour ago

Photography’s rapid commercialisation [21] meant that many painters – or prospective painters – were tempted to take up photography instead of, or in addition to, their painting careers. Most of these new photographers produced portraits. As these were far cheaper and easier to produce than painted portraits, portraits ceased to be the privilege of the well-off and, in a sense, became democratised [22].

Some commentators dismissed this trend towards photography as simply a beneficial weeding out of second-raters. For example, the writer Louis Figuier commented that photography did art a service by putting mediocre artists out of business, for their only goal was exact imitation. Similarly, Baudelaire described photography as the “refuge of failed painters with too little talent”. In his view, art was derived from imagination, judgment and feeling but photography was mere reproduction which cheapened the products of the beautiful [23].

https://www.artinsociety.com/pt-1-initial-impacts.html#:~:te...

CamperBob27 hours ago

Cameras have not replaced paintings, assuming this is the inference.

You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.

Guess what, they got over it. You will too.

lkey7 hours ago

What stole the joy you must have felt, fleetingly, as a child that beheld the world with fresh eyes, full of wonder?

Did you imagine yourself then, as you are now, hunched over a glowing rectangle? Demanding imperiously that the world share your contempt for the sublime. Share your jaundiced view of those who pour the whole of themselves into the act of creation, so that everyone might once again be graced with wonder anew.

I hope you can find a work of art that breaks you free of your resentment.

ceuk6 hours ago

Thank you for brightening my morning with a brief moment of romantic idealism in a black ocean of cynicism

kuerbel6 hours ago

Love your comment.

I took the liberty of pasting it to chatgpt and asked it to write another paragraph in the same style:

Perhaps it is easier to sneer than to feel, to dull the edges of awe before it dares to wound you with longing. Cynicism is a tidy shelter: no drafts of hope, no risk of being moved. But it is also a small room, airless, where nothing grows. Somewhere beyond that glowing rectangle, the world is still doing its reckless, generous thing—colors insisting on being seen, sounds reaching out without permission, hands shaping meaning out of nothing. You could meet it again, if you chose, not as a judge but as a witness, and remember that wonder is not naïveté. It is courage, practiced quietly.

balamatom3 hours ago

Thank you for the AI warning, so I didn't have to read that.

exodust6 hours ago

Plot twist. The comment you love is the cynical one, responding to someone who clearly embraces the new by rising above caution and concern. Your GPT addition has missed the context, but at least you've provided a nice little paradox.

AdieuToLogic7 hours ago

>> Cameras have not replaced paintings, assuming this is the inference.

> You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.

> Guess what, they got over it.

You conveniently omitted my next sentence, which contradicts your position and reads thusly:

  Instead, they serve only to be an additional medium for the 
  same concerns quoted ...
> You will too.

This statement is assumptive and gratuitous.

CamperBob27 hours ago

Username checks out, at least.

AdieuToLogic6 hours ago

> Username checks out, at least.

Thoughtful retorts such as this are deserving of the same esteem one affords the "rubber v glue"[0] idiom.

As such, I must oblige.

0 - https://idioms.thefreedictionary.com/I%27m+rubber%2c+you%27r...

salawat7 hours ago

Logic needs to be shown the door on occasion. Sometimes via the help of an ole Irish bar toss.

kranner7 hours ago

> Guess what, they got over it. You will too.

Prediction is difficult, especially of the future.

cjohnson3186 hours ago

Yeah, and cameras changed art forever.

exodust6 hours ago

people still make clay pots and paint landscapes

navigate83104 hours ago

Creativity is not what one would expect out of the Renaissance

vermilingua8 hours ago

Source?

CamperBob28 hours ago

Art history. It's how we ended up with Impressionism, for instance.

People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.

keyle9 hours ago

I don't get it.

I think just as hard, I type less. I specify precisely and I review.

If anything, all we've changed is working at a higher level. The product is the same.

But these people just keep mixing things up like "wow I got a ferrari now, watch it fly off the road!"

Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!

We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.

Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or... or just programmers, hey. You know why? I write in C, I get machine code. I didn't write the machine code! It was all an abstraction!

Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.

Just chill, it's programming. The tools just got even better.

You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.

We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger less typos today than ever before.

darepublic4 hours ago

>You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.

It's obviously not wrong to fly over the desert in a helicopter. It's a means to an end and can be completely preferable. Myself, I'd prefer to be in a passenger jet even higher above it, at a further remove. But I wouldn't think that doing so makes me someone who knows the desert the way someone who has crossed it on foot does. It's okay to prefer and utilize the power of "the next abstraction", but I think it's rather pig-headed to deny that anything of value is lost to the people mourning the passing of what they gained from intimate contact with the territory. And no, it's not just about the literal typing. The advent of LLMs is not the "end of typing"; that's a more reductionist failure to see the point.

hnfong8 hours ago

I think I understand what the author is trying to say.

We miss thinking "hard" about the small details. Maybe "hard" isn't the right adjective, but we all know the process of coding isn't just typing stuff while the mind wanders. We keep thinking about the code we're typing and the interactions between the new code and the existing stuff, and keep thinking about potential bugs and issues. (This may or may not be "hard".)

And this kind of thinking is totally different from what Linus Torvalds has to think about when reviewing a huge patch from a fellow maintainer. Linus' work is probably "harder", but it's a different kind of thinking.

You're totally right that it's just tools improving. When compilers improved most people were happy, but some people who loved hand-crafting asm kept doing it as a hobby. But in 99+% of cases hand-crafting asm is a detriment to the project even if it's fun, so if you love writing asm yourself you're either out of work, or you grudgingly accept that you might have to write Java to get paid. I think there's a place for lamenting this kind of situation.

jtrn6 hours ago

Spot on. It's the lumberjack mourning the axe while holding a chainsaw. The work is still hard; it's just different. The friction comes from developers who prioritize the 'craft' of syntax over delivering value. It results in massive motivated reasoning. We see people suddenly becoming activists about energy usage or copyright solely to justify not using a tool they dislike. They will hunt for a single AI syntax error while ignoring the history of bugs caused by human fatigue. It's not about the tech; it's about the loss of the old way of working.

And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.

alkonaut5 hours ago

I disagree. It's like the lumberjack working from home watching an enormous robotic forestry machine cut trees on a set of tv-screens. If he enjoyed producing lumber, then what he sees on those screens will fill him with joy. He's producing lots of lumber. He's much more efficient than with both axe and chainsaw.

But if he enjoyed being in the forest, and _doesn't really care about lumber at all_ (Because it turns out, he never used or liked lumber, he merely produced it for his employer) then these screens won't give him any joy at all.

That's how I feel. I don't care about code, but I also don't really care about products. I mostly care about the craft. It's like solving sudokus. I don't collect solved sudokus. Once solved I don't care about them. Having a robot solve sudokus for me would be completely pointless.

> I sense a pattern that many developers care more about doing what they want instead of providing value to others.

And you'd be 100% right. I do this work because my employer provides me with enough sudokus. And I provide value back which is more than I'm compensated with. That is: I'm compensated with two things: intellectual challenge, and money. That's the relationship I have with my employer. If I could produce 10x more but I don't get the intellectual challenge? The employer isn't giving me what I want - and I'd stop doing the work.

I think "You do what the employer wants, produce what needs to be produced, and in return you get money" is a simplification that misses the literal forest for all the forestry.

jstummbillig4 hours ago

But now you are conflating solving problems with a personal preference of how the problem should be solved. This never bodes well (unless you always prefer picking the method best suited to solve the problem.)

alkonaut3 hours ago

Well, as I said, I consider intellectual challenge/stimulus part of my compensation. It's _why_ I do the work to begin with. Or to put it another way: either it's done in a way I like, or it's probably not done at all.

I'm replaceable after all. If there is someone who is better and more effective at solving problems in some objectively good way - they should have my job. The only reason I still have it is because it seems this is hard to find. Employers are stuck with people who solve problems in the way they like for varying personal reasons and not the objectively best way of solving problems.

The hard part in keeping employees happy is that you can't just throw more money at them to make them effective; keeping them stimulated is the difficult part. Sometimes you must accept solving a problem that isn't the most critical one to address, or that's perhaps a bad call business-wise, to keep employees happy, or to keep them at all. I think a lot of the "big rewrites" are in this category, for example: not really a good idea compared to maintenance/improvement, but what if the alternative is maintaining the old one _and_ losing the staff who could do that?

mlvljr5 hours ago

[dead]

chamomeal2 hours ago

> And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.

I use LLMs a lot. They're ridiculously cool and useful.

But I don't think it's fair to categorize anybody as "egotistical". I enjoy programming for the fun puzzley bits. The big puzzles, and even often the small tedious puzzles. I like wiring all the chunks up together. I like thinking about the best way to expose a component's API with the perfect generic types. That's the part I like.

I don't always like "delivering value" because usually that value is "achieve 1.5% higher SMM (silly marketing metric) by the end of the quarter, because the private equity firm that owns our company is selling it next year and they want to get a good return".

latexr5 hours ago

> We see people suddenly becoming activists about energy usage or copyright solely to justify not using a tool they dislike.

Maybe you don’t care about the environment (which includes yourself and the people you like), or income inequality, or the continued consolidation of power in the hands of a few deranged rich people, or how your favourite artists (do you have any?) are exploited by the industry, but some of us have been banging the drum about those issues for decades. Just because you’re only noticing it now or don’t care it doesn’t mean it’s a new thing or that everyone else is being duplicitous. It’s a good thing more people are waking up and talking about those.

Helmut100016 hours ago

I agree. I think some of us would rather deal with small, incremental problems than address the big, high-level roadmap. High-level things are much more uncertain than isolated things that can be unit-tested. This can create feelings of inconvenience and unease.

[deleted]8 hours agocollapsed

[deleted]8 hours agocollapsed

nunez6 hours ago

You _think_ you're thinking as hard. Reading code != writing it. Just like watching someone do a thing isn't the same as actually doing it.

abm536 hours ago

Correct… reading code is a much more difficult and, ultimately, productive task.

I suspect those using the tools in the best way are thinking harder than ever for this reason.

dns_snek5 hours ago

> reading code is a much more difficult

Not inherently, no. Reading it and getting a cursory understanding is easy, truly understanding what it does well, what it does poorly, what the unintended side effects might be, that's the difficult part.

In real life I've witnessed quite a few intelligent and experienced people who truly believe that they're thinking "really hard" and putting out work that's just as good as their previous, pre-AI work, and they're just not. In my experience it roughly correlates to how much time they think they're saving, those who think they're saving the most time are in fact cutting corners and putting out the sloppiest quality work.

William_BB32 minutes ago

Sure. Reading a book is a much more difficult and ultimately, productive, task than writing a book.

rising-sky7 hours ago

I think the more apt analogy isn't a faster car, a la Ferrari; it's more akin to someone who likes to drive but now has to sit and monitor the self-driving car steer and navigate. The Ferrari comparison misses because a Ferrari still demands a similar level of agency from the driver as a <insert slower vehicle>

nunez6 hours ago

This is exactly the right analogy here.

FSD is very very good most of the time. It's so good (well, v14 is, anyway), it makes it easy to get lulled into thinking that it works all the time. So you check your watch here, check your phone there, and attend to other things, and it's all good until the car decides to turn into a curb (which almost happened to me the other day) or swerve hard into a tree (which happened to someone else).

Funny enough, much like AI, Tesla is shoving FSD down people's throats by gating Autopilot 2, a lane keeping solution that worked extremely well and is much friendlier to people who want limited autonomy here and there, behind the $99/mo FSD sub (and removing the option to pay for the package out of pocket).

anavat7 hours ago

It is simple. Continuing your metaphor, I have a choice of getting exactly where I want on a camel in 3 days, or getting to a random location somewhere on the other side of the desert on a helicopter in a few hours.

And being a reasonable person I, just like the author, choose the helicopter. That's it, that's the whole problem.

satvikpendem6 hours ago

Why is that the reasonable choice if it doesn't get you to your destination?

I too did a lot of AI coding but when I saw the spaghetti it made, I went back to regular coding, with ask mode not agent mode as a search engine.

anavat5 hours ago

Because of compound efficiency and technological enablement.

Or, at the risk of beating the metaphor to death: because over a span of time I'll cross many more deserts than I would have on a camel, and because I'll cross deserts that I wouldn't even try crossing on a camel.

satvikpendem5 hours ago

Why does it matter how many deserts you cross if you never get to where you want to go? I similarly can take 10 flights across oceans but never end up in the city I'm trying to visit. Sounds like in your metaphor the person is just crossing deserts because they want to, with no goal or destination in mind.

amelius2 hours ago

Because taking a rental camel from the airport is faster.

nottorpan hour ago

Helicopters are deterministic though :)

augment_me6 hours ago

You did something smart and efficient, using the least amount of energy and time needed. +1 for consciousness being a mistake

tired-turtle8 hours ago

> We're all Linus Torvalds now.

So...where's your OS and SCM?

I get your point that wetware stills matter, but I think it's a bit much to contend that more than a handful of people (or everyone) is on the level of Linus Torvalds now that we have LLMs.

keyle8 hours ago

I should have been clearer. It was a pun, a take, a joke. I was referring to his day-to-day activity now, where he merges code and hardly writes any code for the Linux kernel himself.

I didn't mean to imply most of us can do half the things he's done. That wouldn't be right.

Draiken2 hours ago

Even disregarding what he has done, this is utterly absurd. I almost spit my coffee reading that.

You are going to tell me that the vibe coders care and read the code they merge with the same attention to detail and care that Linus has? Come on...

That's the key for me. People are churning out "full features" or even apps claiming they are dealing with a new abstraction level, but they don't give a fuck about the quality of that shit. They don't care if it breaks in 3 weeks/months/years or if that code's even needed or not.

Someone will surely come say "I read all the code I generate" and then I'll say either you're not getting these BS productivity boost people claim or you're lying.

I've seen people pushing out 40k lines of code in a single PR and have the audacity to tell me they've reviewed the code. It's preposterous. People skim over it and YOLO merge.

Or if you do review everything, then it's not gonna be much faster than writing it yourself unless it's extremely simple CRUD stuff that's been done a billion times over. If you're only using AI for these tasks maybe you're a bit more efficient, but nothing close to the claims I keep reading.

I wish people cared about what code they wrote/merged like Linus does, because we'd have a hell of a lot less issues.

tired-turtle8 hours ago

> his day-to-day activity now, where he merges code

But even then...don't you think his insight into and ability to verify a PR far exceeds that of most devs (LLM or not)? Most of us cannot (reasonably) aspire to be like him.

keyle8 hours ago

Like I said, if you didn't know what you were doing before, you won't know what you're doing today.

Agentic coding in general only amplifies your ability (or disability).

You can totally learn how to build an OS and invest 5 years of your life doing so. The first version of Linux I'm sure was pretty shoddy. Same for a SCM.

I've been doing this for 30 years. At some point, your limit becomes how much time you're willing to invest in something.

DANmode2 hours ago

But some can aspire to be him circa five years ago, while Linus has his own efforts multiplied as well.

fragmede5 hours ago

My hair hasn't turned blonde and I don't suddenly know how to speak Finnish, either.

You might have missed their point.

mohsen12 hours ago

I agree! It's a lot more pleasant than being stuck over figuring out how to use awk properly for hours. I knew what I needed to do then, and I know what I need to do now too. The difference is I get to results faster. Sometimes I even learn that awk was not even the right tool in my situation and learn about a new way of doing things while AI is "thinking" for me

rvz6 hours ago

> We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger less typos today than ever before.

Except Linus understands the code that is being reviewed / merged in since he already built the kernel and git by hand. You only see him vibe-coding toys but not vibe-coding in the kernel.

Today we are going to see gradual skill atrophy as developers over-rely on AI: once something like Claude goes down, they can't do any work at all.

The most accurate picture is that AI is going to rapidly turn lots of so-called 'senior engineers', over-reliant and unable to detect bad AI code, into the equivalent of juniors and interns.

keyle6 hours ago

If you can't rebuke code today, you can't rebuke code tomorrow.

tmtvl36 minutes ago

By induction that means either nobody can rebuke code or someone who can rebuke code can do that from the day they're born.

[deleted]21 minutes agocollapsed

tdstein8 hours ago

> You just fat-finger less typos today than ever before.

My typos are largely admissible.

globular-toast6 hours ago

I get it.

I got excited about agents because I told myself it would be "just faster typing". I told myself that my value was never as a typist and that this is just the latest tool like all the tools I had eagerly added to my kit before.

But the reality is different. It's not just typing for me. It's coming up with crap. Filling in the blanks. Guessing.

The huge problem with all these tools is they don't know what they know and what they don't. So when they don't know they just guess. It's absolutely infuriating.

It's not like a Ferrari. A Ferrari does exactly what I tell it to, up to the first-order effects of how open the throttle is, what direction the wheels face, how much pressure is on the brakes etc. The second-order effects are on me, though. I have to understand what effect these pressures will have on my ultimate position on the road. A normie car doesn't give you as much control but it's less likely to come off the road.

Agents are like a teleport. You describe where you want to be and it just takes you directly there. You say "warm and sunny" and you might get to the Bahamas, but you might also get to the Sahara. So you correct: "oh no, I meant somewhere nice" and maybe you get to the Bahamas. But because you didn't travel there yourself you failed to realise what you actually got. Yeah, it's warm, sunny and nice, but now you're on an island in the middle of nowhere and have to import basically everything. So I prompt again and rewrite the entire codebase, right?

Linus Torvalds works with experts that he trusts. This is like a manic 5 year old that doesn't care but is eager to work. Saying we all get to be Torvalds is like saying we all get to experience true love because we have access to porn.

joseangel_sc8 hours ago

except the thing does not work as expected and it just makes you worse not better

keyle8 hours ago

Like I said that's temporary. It's janky and wonky but it's a stepping stone.

Just look at image generation. Actually factually look at it. We went from horror colours vomit with eyes all over, to 6 fingers humans, to pretty darn good now.

It's only time.

leecommamichael8 hours ago

Why is image generation the same as code generation?

dcw3038 hours ago

It's not. We were able to get rid of six-fingered hands by getting very specific and fine-tuning models with lots of hand and finger training data.

But that approach doesn't work with code, or with reasoning in general, because you would need to exponentially fine-tune everything in the universe. The illusion that the AI "understands" what it is doing is lost.

rvz6 hours ago

It isn't.

Code generation in LLMs still carries a higher objective risk of failure, depending on the experience of the person using it, because:

1. You still cannot trust that the code works (even if it has tests), so it needs thorough human supervision and ongoing maintenance.

2. Hence (1), it can cost you more money than the tokens you spent building it in the first place when it goes horribly wrong in production.

Image generation comes with close to no operational impact, needs far less human supervision, and can often safely be done with none.

mr_freemanan hour ago

> Just look at image generation. Actually factually look at it. We went from horror colours vomit with eyes all over, to 6 fingers humans, to pretty darn good now.

Yes, but you’re not taking into account what actually drove this evolution. At first glance it looks like exponential growth, but then we see OpenAI (as one example) carrying trillions in obligations against 12–13 billion in annual revenue. Meanwhile, tool prices keep rising and hardware demand is surging (RAM shortages, GPUs), yet new and interesting models continue to appear. I’ve been experimenting with Claude over the past few days myself. Still, at some point something is bound to backfire.

The AI "bubble" is real; you don’t need a master’s degree in economics to recognize it. But with mounting economic pressures worldwide and escalating geopolitical tension, we may end up stuck with nothing more than those amusing Will Smith eating pasta videos for a while.

[deleted]8 hours agocollapsed

beebmam8 hours ago

Comments like these are why I don't browse HN nearly ever anymore

w4yai7 hours ago

Nothing new. Whenever a new layer of abstraction is added, people say it's worse and will never be as good as the old way. It's a totally biased take, though; as human beings we just have trouble giving up things we like.

roadbuster5 hours ago

> Whenever a new layer of abstraction is added

LLMs aren't a "layer of abstraction."

99% of people writing in assembly don't have to drop down into manual cobbling of machine code. People who write in C rarely drop into assembly. Java developers typically treat the JVM as "the computer." In the OSI network stack, developers writing at level 7 (application layer) almost never drop to level 5 (session layer), and virtually no one even bothers to understand the magic at layers 1 & 2. These all represent successful, effective abstractions for developers.

In contrast, unless you believe 99% of "software development" is about to be replaced with "vibe coding", it's off the mark to describe LLMs as a new layer of abstraction.

w4yai4 hours ago

> unless you believe 99% of "software development" is about to be replaced with "vibe coding"

Probably not vibe coding, but most certainly with some AI automation

duskdozer5 hours ago

The difference is that LLM output is very nondeterministic.

w4yai4 hours ago

It depends. Temperature is a variable. If you really need determinism, you could run an LLM that way, e.g. greedy decoding at temperature zero. Non-determinism can be a good feature, though.

duskdozer30 minutes ago

How would you do that? If it's possible, it seems strange that someone hasn't done it already.

[deleted]7 hours agocollapsed

[deleted]8 hours agocollapsed

CrimsonRain6 hours ago

That's your opinion and you can not use those tools.

People are paying for it because it helps them. Who are you to whine about it?

nunez6 hours ago

But that's the entire flippin' problem. People are being forced to use these tools professionally at a staggering rate. It's like the industry is in its "training your replacement" era.

CrimsonRain4 hours ago

You don't like it? Find a place that doesn't enforce it. Can't find one? Then either build it, or accept that you want a horse carriage while people want a taxi.

mr_freemanan hour ago

That's Capitalism, baby

thegrim0004 hours ago

You know, I anticipated what the post would say and was prepared to dunk on it and just tell them to stop using AI, but the builder/thinker division they presented got me thinking. How AI/vibe coding fulfills the builder, not the thinker, made me realize that I'm basically 100% thinker, 0% builder, and that's why I don't really care at all about AI for coding.

I'll spend years working on a from scratch OS kernel or a vulkan graphics engine or whatever other ridiculous project, which never sees the light of day, because I just enjoy the thinking / hard work. Solving hard problems is my entertainment and my hobby. It's cool to eventually see results in those projects, but that's not really the point. The point is to solve hard problems. I've spent decades on personal projects that nobody else will ever see.

So I guess that explains why I see all the AI coding stuff and pretty much just ignore it. I'll use AI now as an advanced form of Google, and as a last-ditch effort to get some direction on bugs I truly can't figure out, but otherwise I just completely ignore it. But I guess there are other people, the builders, for whom AI is a miraculous thing, and they're going to crazy lengths to adopt it in every workflow and have it do as much as possible. Those "builder" types of people are just completely different from me.

jatora2 minutes ago

Absolutely nah. I know it feels good to jump into the "thinker" camp and lump the users of AI into a non-thinker group, but this dichotomy fits very poorly. Builders/engineers want a great tool to build faster with. Coders want to write code and find elegance in the prose. Both are thinkers.

electsaudit0qan hour ago

This resonates with me a lot. I've been noticing this pattern where I reach for Claude or ChatGPT for things I used to just... think through? Like debugging something weird: before, I would stare at the code, trace through it mentally, maybe draw some diagrams. Now I just paste it and ask "what's wrong here".

The thing is, those 20 minutes of frustration were when the actual learning happened. When you finally figure out that it's a race condition or whatever, that knowledge sticks because you earned it. When the AI just tells you, it's like reading a spoiler: you know the answer but you didn't really understand the journey.

Not saying AI tools are bad; I use them constantly. But I've started forcing myself to struggle with hard problems for at least 30 minutes before reaching for help. Sometimes I solve it myself and it feels great. Sometimes I don't and the AI helps. But that initial struggle matters, I think.

Anyone else doing something similar? Curious how others are balancing the convenience vs the learning aspect.

nasretdinov30 minutes ago

That reminds me of why I don't think "if err != nil" in Go is actually a problem. While it's annoying to have to pause each time an error can happen, it's actually very useful: it forces you to consider all the possible failure states, and it often lets you discover flaws in your original design while you're typing in the code. This eventually leads to much better outcomes and makes the tools I write much more resilient than they otherwise would be.

Obviously it all goes out the window as soon as AI coding enters the picture, and that's why I learned that I actually _don't_ want AI to generate code for me. I'll only ask it simple questions like "how do I do X in Go" or in some other system, but the implementation I do myself; otherwise I lose this "having to consider every error path" part, which is apparently very helpful when your goal is to write resilient software.
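The "every error path is a forced decision" point reads like this in practice. A minimal sketch (`parsePort` is a made-up example, not from the thread): each `if err != nil` stops you at a distinct failure state you must handle before the happy path continues.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort turns a string into a valid TCP port number.
// Each early return handles one failure state the caller
// was forced to think about while writing this.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		// Failure state 1: input isn't a number at all.
		return 0, fmt.Errorf("port %q is not a number: %w", s, err)
	}
	if n < 1 || n > 65535 {
		// Failure state 2: numeric, but not a legal port.
		return 0, errors.New("port out of range 1-65535")
	}
	return n, nil
}

func main() {
	if _, err := parsePort("eighty"); err != nil {
		fmt.Println("rejected:", err)
	}
	p, err := parsePort("8080")
	if err != nil {
		fmt.Println("unexpected:", err)
		return
	}
	fmt.Println("listening on port", p)
}
```

The design consequence being argued for: the failure states live in the code explicitly, rather than in whatever an exception (or a code generator) happened to do.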

monch1962 7 hours ago

As someone who's been coding for several decades now (i.e. I'm old), I find the current generation of AI tools very ... freeing.

As an industry, we've been preaching the benefits of running lots of small experiments to see what works vs what doesn't, trying out different approaches to implementing features, and so on. Pre-AI, lots of these ideas never got implemented because they'd take too much time for no definitive benefit.

You might spend hours thinking up cool/interesting ideas, but not have the time available to try them out.

Now, I can quickly kick off a coding agent to try out any hare-brained ideas I might come up with. The cost of doing so is very low (in terms of time and $$$), so I get to try out far more and weirder approaches than before when the costs were higher. If those ideas don't play out, fine, but I have a good enough success rate with left-field ideas to make it far more justifiable than before.

Also, it makes playing around with one-person projects a lot more practical. Like most people with a partner & kids, my down time is pretty precious, and tends to come in small chunks that are largely unplannable. For example, last night I spent 10 minutes waiting in a drive-through queue - that gave me about 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the next chunk of development. Absolutely useful to me personally, whereas last year I would've simply sat there annoyed, waiting to be served.

I know some people have an "outsourcing Lego" type mentality when it comes to AI coding - it's like buying a cool Lego kit, then watching someone else assemble it for you, removing 99% of the enjoyment in the process. I get that, but I prefer to think of it in terms of being able to achieve orders of magnitude more in the time I have available, at close to zero extra cost.

raw_anon_1111 29 minutes ago

Exactly. It kills me to see people a lot younger (I'm 51) pining for the good old days, while the coding part of my day-to-day life now is using AI tools to their fullest extent.

lll-o-lll 4 hours ago

> 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the next chunk of development.

How are you doing this via your phone?

samusiam 3 hours ago

Termius + tailscale + tmux is a common setup for mobile coding sessions.

fragmede 3 hours ago

The (iOS) Claude phone app has a Claude code feature which runs "in the cloud". It's pretty handy for getting things done on the bus.

yieldcrv 4 hours ago

Claude can deploy to GitHub Codespaces and modify code for deployment there exclusively via commits and pull requests to the repo.

Claude via the browser and the Claude mobile apps function this way.

But alongside that, people do make tunnels to their personal computers and set up ways to be notified on their phone, or to get the agent unstuck from their phone when it asks for a permission.

marcus_holmes 6 hours ago

Totally agree. I can spend an afternoon trying out an approach to a problem or product (usually while taking meetings and writing emails as well). If it doesn't work, then that's a useful result from my time. If it does work, I can then double-down on review, tests, quality, security, etc and make sure it's all tickety-boo.

fuomag9 5 hours ago

Completely agree. There are so many small projects I'd never have been able to even start in my free time, because I'm NOT a full-stack dev and I'd rather not spend all my evenings fixing or working around all the small changes and quirks of the $currentjsframework.

davidmurdoch an hour ago

I'm wondering if everyone here saying they think harder with LLM agents has never reached "flow state" while programming. I just can't imagine using 100% of my mental focus for hours with an agent. Sure, I think differently when my coding is primarily via agent, but I've never been totally enveloped by my thoughts while doing so.

For those who have found a "flow state" with LLM agents, what's that like?

dysoco 35 minutes ago

I believe you can enter "flow state" with something like Claude Code, from what I've read, but it's mostly reduced to pressing 1 or 2 and typing a few prompts. The reward loop is much tighter now, though, so it's a bit more akin to reaching flow state playing Tetris.

melodyogonna 6 minutes ago

I use Aider because it allows me to retain both personalities and still benefit from AI. It truly is the best assistant I've used.

wendgeabos 5 minutes ago

I love thinking hard, it's genuinely my favorite thing, but ... we get paid to ship.

m0rc 5 hours ago

I think the article has a point. There seem to be two reactions among senior engineers around me these days.

On one side, there are people who have become a bit more productive. They are certainly not "10x," but they definitely deliver more code. However, I do not observe a substantial difference in the end-to-end delivery of production-ready software. This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).

On the other hand, sometimes I observe a genuine degradation of thinking among some senior engineers (there aren't many juniors around, by the way). Meetings, requirements, documents, or technology choices seem to be directly copy/pasted from an LLM, without a grain of original thinking, many times without insight.

The AI tools are great, though. They give you an answer to the question. But often, asking the right question, and knowing when the answer is not correct, is the main issue.

I wonder if the productivity boost that senior engineers actually need is to profit from the accumulated knowledge found in books. I know it is an old technology and it is not fashionable, but I believe it is mostly unexploited if you consider the whole population of engineers :D

nasretdinov 25 minutes ago

> This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).

Yeah, you're certainly not the only one. For me the implementation part has always been a breeze compared to all the "communication overhead" so to speak. And in any mature system it easily takes 90% of all time or more.

urutom 7 hours ago

One thing this discussion made me realize is that "thinking hard" might not be a single mode of thinking.

In grad school, I had what I'd call the classic version. I stayed up all night mentally working on a topology question about turning a 2-torus inside out. I already knew you can't flip a torus inside out in ordinary R^3 without self-intersection. So I kept moving and stretching the torus and the surrounding space in my head, trying to understand where the obstruction actually lived.

Sometime around sunrise, it clicked that if you allow the move to go through infinity (so effectively S^3), the inside/outside distinction I was relying on just collapses, and the obstruction I was visualizing dissolves. Birds were chirping, I hadn't slept, and nothing useful came out of it, but my internal model of space felt permanently upgraded. That's clearly "thinking hard" in that sense.

But there's another mode I've experienced that feels related but different. With a tough Code Golf problem, I might carry it around for a week. I'm not actively grinding on it the whole time, but the problem stays loaded in the background. Then suddenly, in the shower or on a walk, a compression trick or a different representation just clicks.

That doesn't feel "hard" moment to moment. It's more like keeping a problem resident in memory long enough for the right structure to surface.

One is concentrated and exhausting, the other is diffuse and slow-burning. They're different phenomenologically, but both feel like forms of deep engagement that are easy to crowd out.

abcde666777 16 minutes ago

Well, for programming work which is essentially repetition (e.g. making another website not unlike thousands of others), it's no surprise that AI programming can work wonders - you're essentially using a sophisticated form of copy paste.

But there's still a lot of programming out there which requires originality.

Speaking personally, I never was nor ever will be too interested in the former variety.

Fire-Dragon-DoL 9 hours ago

I haven't reduced my thinking! Today I asked AI to debug an issue. It came up with a solution that was clearly correct, but it didn't explain why the code was in that state. I kept steering the AI (which just wanted to fix things) toward figuring out the why, and at some point it dug through git and GitHub issues in a very cool way. Finally it pulled out something that made sense: it was defensive programming introduced to fix an issue somewhere else, which was in turn also fixed, making the defensive code useless.

At that point an idea popped into my mind and I decided to look for similar patterns in the codebase related to the change. I found 3: one was a non-bug, two were latent bugs.

Shipped a fix plus 2 fixes for bugs yet to be discovered.

throwerxyz 7 hours ago

>I haven't reduced my thinking!

You just detailed an example of where you did in fact reduce your thinking.

Managers who tell people what to get done do not think about the problem.

Fire-Dragon-DoL 6 hours ago

I think my message is doing a disservice to explaining what actually happened because a lot of it happens in my head.

    1. I received the ticket; as soon as I read it I had a hunch it was related to some querying ignoring a field that should be filtered by every query (thinking)
    2. I give this hunch to the AI, which goes searching in the codebase in the areas I suggested the problem could be, and that's when it finds the issue and provides a fix
    3. I think the problem could be widespread given there is a method that removes the query filter; it could have been used in multiple places, so I ask the AI to find other usages of it (thinking; this is my definition of "steering" in this context)
    4. The AI reports 3 more occurrences and suggests that 2 have the same bug, but one is ok
    5. I go in, review the code and understand it, and I agree, it doesn't have the bug (thinking)
    6. The AI provides the fix for all the right spots, but I said "wait, something is fishy here, there is a commit that explicitly says it was added to remove the filter, why is that?" (thinking), so I ask the AI to figure out why the commit says that
    7. The AI proceeds to run a bunch of git-history related commands, finds some commit and then does some correlation to find another commit. This other commit introduced the change at the same time to defend against a bug in a different place
    8. I understand what's going on now, I'm happy with the fix, and the history suggests I am not breaking stuff. I ask the AI to write a commit with detailed information about the bug and the fix based on the conversation
    
There is a lot of thinking involved. What's reduced is search tooling: I can be way more fuzzy; rather than `rg 'whatever'` I now say "find this and similar patterns".

glemmaPaul 5 hours ago

Thanks for expanding your comment. But from what you explain here, I think your knowledge and comprehension have still slimmed down a notch. It seems to me that this argument treats thinking as happening along the vertical axis only, but may I say there is a horizontal/broad aspect to it?

E.g. you lose your grip on what is a good combination of framework/language/standards, you lose the abstraction of multiple layers of external and internal APIs, you no longer study the right software pattern for the job, having the AI comprehend the large chunks for you (that's all loss of thinking). You've lost simple querying and digging through the codebase. Gosh, let's even say you've lost a bit of git command knowledge. You catch my drift here?

I am completely for using AI as a tool to do a lot of the boilerplate work with the right directions. Though remembering some changes in the codebase before and letting LLMs do the work is not the same to me as fully owning your system such that you know, you actually know. Old man shouting at screen, so to each their own of course! Cheers

booleandilemma 6 hours ago

Did you use your AI to create that list for you?

phist_mcgee 3 hours ago

That's not very nice. Be nice.

booleandilemma 2 hours ago

Who are you? The morality police?

alexpotato 36 minutes ago

I'm a DevOps/SRE and I've spent the past couple weeks trying to vibecode as much of what I do as possible.

In some ways, it's magical. e.g. I whipped up a web based tool for analyzing performance statistics of a blockchain. Claude was able to do everything from building the gui, optimizing the queries, adding new indices to the database etc. I broke it down into small prompts so that I kept it on track and it didn't veer off course. 90% of this I could have done myself but Claude took hours where it would have taken me days or even weeks.

Then yesterday I wanted to do a quick audit of our infra using Ansible. I first thought: let's try Claude again. I gave it lots of hints on where our inventory is, which ports matter, etc., but it was still grinding away after several minutes. I eventually Ctrl-C'ed it and used a couple of one-liners that I wrote myself in a few minutes. In other words, I was faster than the machine in this case.

After the above, it makes sense to me that people may have conflicting feelings about productivity. e.g. sometimes it's amazing, sometimes it does the wrong thing.

rogerkirkness 33 minutes ago

I think there's an argument that if Claude had the knowledge map of your personal one-liners and a tool for using them, it would often do the right thing in those cases. But it's definitely not yet as able to compress all the entropy of "what can go wrong" operations-wise as it is when composing code.

raw_anon_1111 31 minutes ago

My experience is that with careful specs, Claude or Codex can whip up either CDK, CloudFormation, or Terraform code much quicker than I can, and I've been using IaC for 8 years - developer/consultant specializing in development + cloud architecture.

topspin 9 hours ago

I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.

josephg 9 hours ago

Yeah, but thinking with an LLM is different. The article says:

> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.

The "thinking hard" I do with an LLM is more like management thinking. It's chaotic and full of conversations and context switches. It's tiring, sure. But I'm not spending multiple days contemplating a single idea.

The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.

It's different.

buu700 6 hours ago

YMMV, but I've found that I actually do way more of that type of "thinking hard" thanks to LLMs. With the menial parts largely off my plate, my attention has been freed up to focus on a higher density of hard problems, which I find a lot more enjoyable.

marcus_holmes 6 hours ago

There are a lot of hard problems to solve in orchestration. We've barely scratched the surface on this.

holysoles 9 hours ago

I very much think its possible to use LLMs as a tool in this way. However a lot of folks are not. I see people, both personally and professionally, give it a problem and expect it to both design and implement a solution, then hold it as a gold standard.

I find the best uses, for myself at least, are smaller parts of my workflow where I'm not going to learn anything from doing it:

- build one to throw away: give me a quick prototype to get stakeholder feedback
- straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- tab-completion code-gen
- if I want leads for looking into something (libraries, tools) and Googling isn't cutting it

topspin 8 hours ago

> then hold it as a gold standard

I just changed employers recently in part due to this: dealing with someone who appears to now spend his time coercing LLMs to give the answers he wants, and becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those that live in them. LLMs are replacing blog posts for this purpose.

paladin314159 9 hours ago

I echo this sentiment. Even though I'm having Claude Code write 100% of the code for a personal project as an experiment, the need for thinking hard is very present.

In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.

jernestomg (OP) 9 hours ago

I'm with you; thinking about architecture is generally still a big part of my mental effort. But for me, most architectural problems are solved in short periods of thought and a lot of iteration. Maybe it's a skill issue, but neither now nor in the pre-LLM era have I been able to pre-solve all the architecture with pure thinking.

That said, architectural problems have also become less difficult, for the simple fact that research and prototyping have become faster and cheaper.

ratorx 9 hours ago

I think it depends on the scope and level of solution I accept as "good". I agree that often the thinking for the "next step" is too easy architecturally. But I still enjoy thinking about the global optimum or a "perfect system", even if it's not immediately feasible, and can spend large amounts of time on this.

And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.

I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.

I think local code architecture was a rare domain where "optimality" is actually tractable, with the joy that comes with that, and LLMs are harmful to it - but I don't think there's nothing to replace it with.

Aeolun 9 hours ago

And thinking of how to convey all of that to Claude without having to write whole books :)

MarcelOlsz 9 hours ago

tfw you start expressing your thoughts as code instead, because it's shorter

sodapopcan 9 hours ago

Ya, they are programming languages after all. Language is really powerful when you really know how to use it. Some of us are more comfortable with the natural variety, some of us are more comfy with code ¯\_(ツ)_/¯

exodust 5 hours ago

Agreed. My recent side projects involve lots of thinking over days and weeks.

With AI we can set high bars and do complex original stuff. Obviously boilerplate and common patterns are slapped together without much thinking. That's why you branch into new creative territory. The challenge then becomes visualising the mental map of modular pieces all working nicely together at the right time to achieve your original intent.

lelanthran 3 hours ago

> I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.

Okay, for you that is new - post-LLM.

For me, pre-LLM, I thought about all those things as well as the code itself [1].

IOW, I thought about even more things. Now you (if I understand your claim correctly) think only about those higher level things, unencumbered by stuff like implementation misalignments, etc. By definition alone, you are thinking less hard.

------------------------

[1] Many times the thinking about code itself acted as a feedback mechanism for all those things. If thinking about the code itself never acted as a feedback mechanism to your higher thought processes then ... well, maybe you weren't doing it the way I was.

thrw045 9 hours ago

Reading this comment and other similar comments there's definitely a difference between people. Personally I agree and resonate a lot with the blog post, and I've always found designs of my programs to come sort of naturally. Usually the hard problems are the technical problems and then the design is figured out based on what's needed to control the program. I never had to think that hard about design.

cwnyth 8 hours ago

Aptitude testing centers like Johnson O'Connor have tests for that. There are (relatively) huge differences between different people's thinking and problem solving styles. For some, creating an efficient process feels natural, while others need stability and redundancy. Programmers are by and large the latter.

[1]: https://www.jocrf.org/how-clients-use-the-analytical-reasoni...

allovertheworld 6 hours ago

That's not thinking hard; you are making decisions.

johnfn 9 hours ago

It's certainly a different style of thinking hard. I used to really stress myself over coding - i.e. I would get frustrated that solving an issue would cause me to introduce some sort of hack or otherwise snowball into a huge refactor. Now I spend most of my time thinking about what cool new features I am going to build and not really stressing myself out too much.

gkoberger 9 hours ago

I'd go as far as to say I think harder now – or at least quicker. I'm not wasting cycles on chores; I can focus on the bigger picture.

9rx 9 hours ago

I've never felt more mental exhaustion than after a LLM coding session. I assume that is a result of it requiring me to think harder too.

josephg 9 hours ago

I feel this too. I suspect it's a byproduct of all the context switching I find myself doing when I'm using an LLM to help write software. Within a 10 minute window, I'll read code, debug a problem, prompt, discuss the design, test something, do some design work myself and so on.

When I'm just programming, I spend a lot more time working through a single idea, or a single function. It's much less tiring.

Gigachad 5 hours ago

In my experience it's because you switch from writing code to reviewing code someone else wrote. Which is massively more difficult than writing code yourself.

AlotOfReading 9 hours ago

It wasn't until I read your comment that I was able to pinpoint why the mental exhaustion feels familiar. It's the same kind (though not degree) of exhaustion as formal methods / proofs.

Except without the reward of an intellectual high afterwards.

samusiam 2 hours ago

Personally I do get the intellectual high after a long LLM coding session.

sho_hn 9 hours ago

I think OP's post is an attempt to move us past this stage of the discussion, which is frankly old hat.

The point they are making is that using AI tools makes it a lot harder for them to keep up the discipline to think hard.

This may or may not be true for everyone.

ksymph 9 hours ago

It is a different kind of thinking, though.

amiantos 9 hours ago

I use Claude Code a lot, and it always lets me know the moment I stopped thinking hard, because it will build something completely asinine. Garbage in, garbage out, as they say...

senectus1 9 hours ago

it's how you use the tool... reminds me of that episode of The Simpsons where Homer gets a gun lic... he goes from not using it at all, to using it a little, to using it without thinking about what he's doing and for ludicrous things...

thinking is tiring and life is complicated. the tool makes it easy to slip into bad habits, and bad habits are hard to break even when you recognise it's a bad habit.

Many people are too busy/lazy/self-unaware to evaluate their behaviour and recognise a bad habit.

Aeglaecia 9 hours ago

there's no such thing as right or wrong, so the following isn't intended as any form of judgement or admonition, merely an observation that you are starting to sound like an llm

topspin 9 hours ago

> you are starting to sound like an llm

My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashes, whereas I was once an enthusiastic user of them.

samusiam 2 hours ago

I still use em-dashes. I started using them when my professor lambasted my use of semi-colons. I'm not looking back -- LLM haters be damned!

wnolens 9 hours ago

Yes, if anything I think harder because I know it's on the frontier of whatever I'm building (so i'm more motivated and there's much more ROI)

Sammi 2 hours ago

I'm thinking much more than ever, now that the coding agent is building for me.

I strongly experience that coding agents are helping me think about stuff I wasn't able to think through before.

I very much have both of these builder and thinker personas inside me, and I just am not getting this experience of "lack of thinking" that I'm seeing so many other people write about. I have it exactly the other way around, even if I'm a similar archetype of person. I'm spending less time building and more time thinking than ever.

nunez 7 hours ago

I will never not be upset at my fellow engineers for selling out the ONE thing that made us valuable and respected in the marketplace and trying to destroy software engineering as a career because "Claude Code go brrrrrr" basically.

It's like we had the means of production and more or less collectively decided "You know what? Actually, the bourgeoisie can have it, sure."

rune-dev 7 minutes ago

I find it fascinating how the "true believers" of AI and AGI have essentially been manipulated by capitalists into undermining the value of their own labor.

sph 6 hours ago

The personification of the quote “your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should”

I feel the existential problem for a world that follows the religion of science and technology to its extreme, is that most people in STEM have no foundation in humanities, so ethical and philosophical concerns never pass through their mind.

We have signed a pact with the devil to help us through boring tasks, and no one thought to ask what we would give in exchange.

lstodd 2 hours ago

Yeah, lawyers, politicians and MBA types all usually have a solid foundation in the humanities. Not exactly known for concerning themselves with ethics or philosophy, though.

[deleted] 2 hours ago

oblio 6 hours ago

Money. It's always money. It was always money.

nunez 6 hours ago

Couldn't agree more. AI as it's designed today is very heavy on the "f u; got mine" vibe.

rune-dev 6 minutes ago

Tech in general. I remember when I was younger thinking the tech world was so cool and different.

I still love the work, but to say I’m disillusioned by the industry is an understatement.

s5300 7 hours ago

[dead]

sebastianmestre an hour ago

I did competitive programming seriously between '17 and '24, then kept on coaching people

As a beginner I often thought about a problem for days before finding a solution, but this happened less and less as I improved

I got better at exploiting the things I knew, to the point where I could be pretty confident that if I couldn't solve a problem in a few hours it was because I was missing some important piece of theory

I think spending days "sitting with" a problem just points at your own weakness in solving some class of problems.

If you are making no articulable progress whatsoever, there is a pathology in your process.

Even when working on my thesis, where I would often "get stuck" because the problem was far beyond what I could solve in one sitting, I was still making progress in some direction every time.

juggy69 an hour ago

Do you mean that sitting with the problem for days is a weakness that you should fix since you're wasting time making no progress? Or that it is a necessary practice in order to understand your weaknesses?

sebastianmestre an hour ago

When you're a beginner you're a beginner, no way around it.

But understanding your weaknesses and working on them is huge, and I think most people just don't try to do it.

Being stuck for days is something to be overcome.

The next step would be being slow because you are trying out many different ideas and have no intuition for what the right one is.

postit 16 minutes ago

I usually think hard. I correlate subjects with different areas to find similarities.

What I miss is having other people who like to think and aren't always pushing for shallow results.

DaanDL 23 minutes ago

"I still encounter those occasionally, but the number of problems requiring deep creative solutions feels like it is diminishing rapidly."

Just let it try and solve an issue with your advanced SQLAlchemy query and see it burn. xD

qwertox 28 minutes ago

AI is way less of a problem with regard to thinking than digital media consumption is.

I used to think about my projects in bed; now I listen to podcasts or watch YouTube videos before sleeping.

I think it has a much bigger impact than using our "programming calculator" as an assistant.

levitatorius 2 hours ago

The post resonates deeply with me. I am a health professional in diagnostics, and through the years I have observed different extremes in approaches to solving diagnostic challenges: one extreme is to rely on "knowing," the other on "thinking/reasoning." The former is usually very fast, but not easily explainable - just like pattern recognition. The latter is slow, but can give a solution from "first principles," possibly one not described before. Of course it's a spectrum, and the thinking part requires and includes a deep enough "knowing" part. One usually uses both approaches in daily work, but I have seen some people who relied much more on knowing than on thinking/reasoning, sometimes to the extreme (as in refusing to diagnose a condition on their own because they "have not seen this before").

mastermedo an hour ago

I relate to the post, but I'm not sure it's hitting the nail on the head _for me_.

I like being useful, and I'm not yet sure how much of what I'm creating with AI is _me_, and how much it is _it_. It's hard to derive as much purpose/meaning from it compared to the previous reality where it was _all me_.

If I compare it to a real-world problem: e.g. when I unplug the charging cable from my laptop at my home desk, the charging cable slides off the table. I could order a solution online that fixes the problem and be done with it, but I could also think about how _I_ can solve the problem with what I already have in my spare parts box. Trying out different solutions makes me think, and I'm way more happy with the end result. Every time I unplug the cable now and it stays in place, it reminds me of _my_ labour and creativity (and also the cable not sliding down the table -- but that's beside the point).

r-johnv 10 hours ago

I've found that it's often useful to spend the time thinking about the way I would architect the code (down to a fair level of minutia) before letting the agent have a go.

That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?

Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.

[deleted] 9 hours ago

Insanity an hour ago

Advent of Code (which, given my schedule, runs into January). That's the last time I thought hard about a coding problem; I don't remember exactly if it was day 10 or 11 that had me scratching my head for a while.

I intentionally do not use AI though.

But I sympathize with the author. I enjoy thinking deeply about problems which is why I studied compsci and later philosophy, and ended up in the engineering field. I’m an EM now so AI is less of an “immediate threat” to my thinking habits than the role change was.

That said, I did recently pick up more philosophy reading again just for the purpose of challenging my brain.

chairmansteve2 hours ago

I find I think harder with AI programming. It generates the code, but I have to approve the overall design and then approve every single line. I will edit and rearrange the code until it is correct.

But since the AI is generating a lot of code, it is challenging me. It also allows me to tackle problems in unfamiliar areas. I need to properly understand the solutions, which again is challenging. I know that if I don't understand exactly what the code is doing and have confidence in the design and reliability, it will come back and bite me when I release it into the wild. A lesson learnt the hard way during many decades of programming.

ontouchstartan hour ago

After leaving my previous day job, I have some downtime to get back to thinking and realizing how much I love reading and thinking.

Contemplating the old RTFM, I started a new personal project called WTFM and spend my time writing instead of coding. There is no agenda and no product goals.

https://wtfm-rs.github.io/

There are so many interesting things in human generated computer code and documentation. Well crafted thoughts are precious.

jonahrd42 minutes ago

Dear author, I suggest trying out a job in a niche part of the field like firmware/embedded. Bonus if it's a company with a bunch of legacy devices to maintain. AI just hasn't quite grokked it there yet and thinking still reigns supreme :)

77773322155 hours ago

Seems like a lot of people use AI to code on their private commercial IP and products. How are people not concerned with the fact that all these AI companies have the source code to everything? You're just helping them destroy your job. Code is not worthless; you cannot easily duplicate a complex project with equal features, quality, and stability.

practal5 hours ago

I think that is a very good point. Code is definitely not worthless, but I don't think that capitalism has the right tools for pricing it properly. I think it will become a lot like mathematics in that way.

ninadwritesan hour ago

You're extremely on point. I don't remember the last time I was able to sit around to think because at the back of my mind, I knew AI could help me generate the initial draft of ideas, logic, or structure that would otherwise require hours of my time.

But to be honest, those hours spent structuring thoughts are so important to making things work. Or you might as well get out of the way and let AI do everything, why even pretend to work when all we're going to do is just copy and paste things from AI outputs?

Nevermark38 minutes ago

I have often pondered whether the (sometimes facetiously, sometimes seriously) postulated AI utopia scenario - humans who don't need to work and can devote their time to art and recreational pursuits - might be a hellscape for many industrious people.

This essay captures that.

Even the pure artist, for whom utility may not seem to matter, manufactures meaning not just from creative exploration directly, but also from the difficulty (which can take many forms) involved in doing something genuinely new, and what they learn from that.

What happens to that when we have even “new” on tap?

InfiniteRandan hour ago

This isn’t the point of the piece, but I have found that the thinker often gets in the way of the builder, because there’s always a better way to build, there’s always some imperfect subsystem you just want to tear out and rewrite and then you realize you were all wrong about this and that, etc.

More to the piece itself, I know some crusty old embedded engineers who feel the same way about compilers as this guy does about AI, it doesn’t invalidate his point but it’s food for thought

novoreorx8 hours ago

To be honest, I don't quite understand the author's point. If he believes that agentic coding or AI has a negative impact on being a thinker, or prevents him from thinking critically, he can simply stop using it.

Why blame these tools if you can stop using them, and they won't have any effect on you?

In my case, my problem was often overthinking before starting to build anything. Vibe coding rescued me from that cycle. Just a few days ago, I used openclaw to build and launch a complete product via a Telegram chat. Now, I can act immediately rather than just recording an idea and potentially getting to it "someday later"

To me, that's evolutionary. I am truly grateful for the advancement of AI technology and this new era. Ultimately, it is a tool you can choose to use or not, rather than something that prevents you from thinking more.

ganzsz7 hours ago

For me personally, the problem is my teammates. The ability, or will, to think critically or investigate existing tools in the codebase seems to be disappearing. Too often now I have to send back a PR where something is fixed with a novel implementation instead of a single function call into existing infrastructure.

ccortes8 hours ago

People here seem to be conflating thinking hard and thinking a lot.

Most of the examples of “thinking hard” in the comments sound like thinking about a lot of stuff superficially rather than about one particular problem deeply, which is what OP is referring to.

ghuun7 hours ago

If you actually have a problem worth thinking deeply about, AI usually can’t help with it. For example, AI can’t help you make performant stencil buffers on a Nokia N-Gage for fun. It just doesn’t have that in it. Plenty of such problems abound, especially in domains involving one extreme or another (like high-throughput traffic). Just the other day someone posted a vibe-coded Wikipedia project that took ages to load (despite being “just” 66MB) and insisted it was the best that was possible, whereas Google can load the entire planet (perceptually) in a fraction of a second.

Underqualified4 hours ago

This resonates with me, but I quit programming about a decade ago when we were moving from doing low level coding to frameworks. It became no longer about figuring out the actual problem, but figuring out how to get the framework to solve it and that just didn't work for me.

I do miss hard thinking, I haven't really found a good alternative in the meantime. I notice I get joy out of helping my kids with their, rather basic, math homework, so the part of me that likes to think and solve problems creatively is still there. But it's hard to nourish in today's world I guess, at least when you're also a 'builder' and care about efficiency and effectiveness.

smy200116 hours ago

I miss entering flow state when coding. When vibe coding, you are constantly interrupted and only think very shallowly. I never see anyone enter flow state while vibe coding.

krzat4 hours ago

Same here; waiting for a response destroys any focus I had.

fragmede3 hours ago

The two ways I get into flow state these days are in setting up agentic loops, so I can get out of the way by letting AI check the results for itself, and by doing more things. I've got ~4 Claude Code instances working on problems, per project, and I've got multiple projects I'm working on at the same time.

ChaitanyaSai9 hours ago

I miss the thrill of running through the semi-parched grasslands and the heady mix of terror, triumph, and trepidation as we close in on our meal for the week.

Aeolun9 hours ago

I think that feeling is fairly common across the entire population. Play more tag, it’ll help.

goatlover8 hours ago

There are people who still hunt, fish and run. Some even climb without ropes. It would seem the feeling is missed.

hoppp28 minutes ago

All the time. Been working on my own projects, they all require hard thinking.

phamilton8 hours ago

I think harder because of AI.

I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.

I've thought harder about problems this last year than I have in a long time.

keiferski2 hours ago

This title and the first half of the post (before the AI discussion) just makes me miss the intellectual environment of college.

So I’m tempted to say that this is just a part of the economic system in general, and isn’t specifically linked to AI. Unless you’re lucky enough to grab a job that requires deep intellectual work, your day job will probably not challenge your mental abilities as much as a difficult college course does.

Sad but true, but unfortunately I don’t think any companies are paying people to think deeply about metaphysics (my personal favorite “thinking hard subject” from college.)

nate5 minutes ago

author obviously isn't wrong. it's easy to fall into this trap. and it does take willpower to get out of it. and the AI (christ i'm going to sound like they paid me) can actually be a tool to get there.

i was working for months on an entity resolution system at work. i inherited the basic algo of it: Locality Sensitive Hashing. Basically breaking up a word into little chunks and comparing the chunk fingerprints to see which strings matched(ish). But it was slow, blew up memory constraints, and was full of false negatives (didn't find matches).
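For readers unfamiliar with the technique this comment describes, here is a rough sketch of the chunk-fingerprint idea: character shingles compared exactly via Jaccard similarity, and approximately via MinHash signatures (the core of LSH). The function names and parameters are illustrative, not the commenter's actual system:

```python
import hashlib

def shingles(text, n=3):
    # Break a string into overlapping character n-grams ("little chunks").
    padded = f"#{text.lower()}#"  # pad so very short strings still yield chunks
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def jaccard(a, b):
    # Exact set overlap of the two shingle sets: |A ∩ B| / |A ∪ B|.
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def minhash_signature(shingle_set, num_hashes=32):
    # MinHash fingerprint: for each seeded hash function, keep the minimum
    # hash value over the set. Similar sets tend to agree on these minima.
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingle_set)
        for seed in range(num_hashes)
    ]

def estimated_similarity(a, b, num_hashes=32):
    # The fraction of matching signature positions approximates Jaccard,
    # letting you compare fixed-size fingerprints instead of whole sets.
    sig_a = minhash_signature(shingles(a), num_hashes)
    sig_b = minhash_signature(shingles(b), num_hashes)
    return sum(x == y for x, y in zip(sig_a, sig_b)) / num_hashes

print(jaccard("Acme Corp", "Acme Corporation"))  # well above zero
print(jaccard("Acme Corp", "Globex LLC"))        # zero: no shared chunks
```

A real LSH system would additionally band these signatures into hash buckets so that only candidate pairs sharing a bucket get compared, avoiding the all-pairs comparison that makes the naive approach slow.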

of course i had claude seek through this looking to help me and it would find things. and would have solutions super fast to things that I couldn't immediately comprehend how it got there in its diff.

but here's a few things that helped me get on top of lazy mode. Basically, use Claude in slow mode. Not lazy mode:

1. everyone wants one shot solutions. but instead do the opposite: just focus on fixing one small step at a time, so you have time to grok what the frig just happened.

2. instead of asking claude for code immediately, ask for more architectural thoughts. not claude "plans", but choices. "claude, this sql model is slow and grows out of our memory box. what options are on the table to fix this?" and now go back and forth getting the pros and cons of the fixes. don't just ask "make this faster". of course this is the slower way to work with claude, but it will get you to a solution you more deeply understand, and avoid the hallucinations where it decides "oh just add where 1!=1 to your sql and it will be super fast".

3. sign yourself up to explain what you just built. not just get through a code review, but now you are going to have a lunch and learn to teach others how these algorithms or code you just wrote work. you better believe you are going to force yourself to internalize the stuff claude came up with easily. i gave multiple presentations all over our company and to our acquirers on how this complicated thing worked. I HAD TO UNDERSTAND. there's no way I could show up and be like "i have no idea why we wrote that algorithm that way".

4. get claude to teach it to you over and over and over again. if you spot a thing you don't really know yet, like what the hell is this algorithm doing, make it show you in agonizingly slow detail how the concept works. didn't sink in? do it again. and again. ask it for the 5 year old explanation. yes, we have a super smart, overconfident and naive engineer here, but we also have a teacher we can berate with questions who never tires of trying to teach us something, no matter how stupid we can be or sound.

Were there some lazy moments where I felt like I wasn't thinking? Yes. But using Claude in slow mode I've learned the space of entity resolution faster and more thoroughly than I could have without it, and I feel like I actually, personally, invented within it.

joshpicky10 hours ago

I generally feel the same. But in addition, I also enjoy the pure act of coding. At least for me that’s another big part why I feel left behind with all this Agent stuff.

jernestomgop9 hours ago

I agree, that's another factor. Definitely the mechanical act of coding, especially if you are good at it, gives the type of joy that I imagine an artisan or craftsman has when doing their work.

BoostandEthanol5 hours ago

I’d been feeling this until quite literally yesterday, where I sort of just forced myself to not touch an AI and grappled with the problem for hours. Got myself all mixed up with trig and angles until I got a headache and decided to back off a lot of the complexity. I doubt I got everything right, I’m sure I could’ve had a solution with near identical outputs using an AI in a fraction of the time.

But I feel better for not taking the efficient way. Having to be the one to make a decision at every step of the way, choosing the constraints and where I cut my losses on accuracy, I think has taught me more about the subject than even reading literature would’ve directly stated.

repelsteeltje4 hours ago

I think the heart of the matter is this section in the blog:

> Yes, I blame AI for this.

> I am currently writing much more, and more complicated software than ever, yet I feel I am not growing as an engineer at all. [...] (emphasis added by me)

AI is a force multiplier for accidental complexity in the Brooks sense. (https://en.wikipedia.org/wiki/No_Silver_Bullet)

andyferris7 hours ago

My solution has been to lean into harder problems - even as side projects, if they aren't available at work.

I too am an ex-physicist used to spending days thinking about things, but programming is a gold mine as it is adjacent to computer science. You can design a programming language (or improve an existing one), try to build a better database (or improve an existing one), or many other things that are quite hard.

The LLM is a good rubber duck for exploring the boundaries of human knowledge (or at least knowledge common enough to be in its training set). It can't really "research" on its own, and whenever you suggest something novel and plausible it gets sycophantic, but it can help you prototype ideas and implementation strategies quite fast, and it can help you explore how existing software works and tackles similar problems (or help you start working on an existing project).

lccerina5 hours ago

"Oh no, I am using a thing that no one is forcing me to use, and now I am sad".

Just don't use AI. The idea that you have to ship ship ship 10X ship is an illusion and a fraud. We don't really need more software.

oa3359 hours ago

I feel like AI has given me the opportunity to think MORE, not less. I’m doing so much less mindless work, spending most of my efforts critically analyzing the code and making larger scale architectural decisions.

The author says: “Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.”

The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.

freshbreath4 hours ago

"I don't want to have to write this for the umpteenth time" -- Don't let it even reach a -teenth. Automate it on the 2nd iteration. Or even the 1st if you know you'll need it again. LLMs can help with this.

Software engineers are lazy. The good ones are, anyway.

LLMs are extremely dangerous for us because they can easily become a "be lazy button". Press it whenever you want and get that dopamine hit -- you don't even have to dive into the weeds and get dirty!

There's a fine line between "smart autocomplete" and "be lazy button". Use it to generate a boilerplate class, sure. But save some tokens and fill that class in yourself. Especially if you don't want to (at your own discretion; deadlines are a thing). But get back in those weeds, get dirty, remember the pain.

We need to constantly remind ourselves of what we are doing and why we are doing it. Failing that, we forget the how, and eventually even the why. We become the reverse centaur.

And I don't think LLMs are the next layer of abstraction -- if anything, they're preventing it. But I think LLMs can help build that next layer... it just won't look anything like the weekly "here's the greatest `.claude/.skills/AGENTS.md` setup".

If you have to write a ton of boilerplate code, then abstract away the boilerplate in code (nondeterminism is so 2025). And then reuse that abstraction. Make it robust and thoroughly tested. Put it on github. Let others join in on the fun. Iterate on it. Improve it. Maybe it'll become part of the layers of abstraction for the next generation.

chuliomartinez2 hours ago

I guess it depends on what you build. I feel the most complex part of the deal, the one that makes me think the hardest, is figuring out what to build: e.g. understanding the client and creating a solution that fits between their needs and abilities. The rest is often a technical detail; yes, sometimes you need to deep-dive to optimize. Anyway, if you miss debugging, try debugging people ;)

lukewarmdaisiesan hour ago

Then think hard? Have a level of self-discipline and don't constantly turn to AI to solve your problems. Go to a library if you have to! People act like victims of the machine when it comes to building their thinking muscles with AI, and it confuses me.

practal5 hours ago

I see the current generation of AI very much as a thing in between. Opus 4.5 can think and code quite well, but it cannot do these "jumps of insight" yet. It also struggles with straightforward, but technically intricate things, where you have to max out your understanding of the problem.

Just a few days ago, I let it do something that I thought was straightforward, but it kept inserting bugs, and after a few hours of interaction it said itself it was running in circles. It took me a day to figure out what the problem was: an invariant I had given it was actually too strong, and needed to be weakened for a special case. If I had done all of it myself, I would have been faster, and discovered this quicker.

For a different task in the same project I used it to achieve a working version of something in a few days that would have taken me at least a week or two to achieve on my own. The result is not efficient enough for the long term, but for now it is good enough to proceed with other things. On the other hand, with just one (painful) week more, I would have coded a proper solution myself.

What I am looking forward to is being able to converse with the AI in terms of a hard logic. That will take care of the straightforward but technically intricate stuff that it cannot do yet properly, and it will also allow the AI to surface much quicker where a "jump of insight" is needed.

I am not sure what all of this means for us needing to think hard. Certainly thinking hard will be necessary for quite a while. I guess it comes down to when the AIs will be able to do these "jumps of insight" themselves, and for how long we can jump higher than they can.


cbdevidal3 hours ago

It’s possible to be both.

The last time I had to be a Thinker was because I was in Builder mode. I’ve been trying to build an IoT product but I’ve been wayyyy over my head because I knew maybe 5% of what I needed to be successful. So I would get stuck many, many times, for days or weeks at a time.

I will say though that AI has made the difference in the last few times I got stuck. But I do get more enjoyment out of Building than Thinking, so I embrace it.

rammy12348 hours ago

Great article. The moment I finished reading it, I thought of my time solving a UI menu problem with a lot of items in it, and the algorithm I came up with to handle different screen sizes. It took a solid 2 hrs of walking and thinking. I still remember how excited I was when I had the feeling of cracking the problem. Deep thinking is something everyone has within them; it just varies how fast you can think. With the right environment and time, we've all got it in us. But that's a long time ago. Now I always offload some thinking to AI. It comes up with options and you just have to steer it. It's getting better over time. Just ask it, you know. But I feel like those were the good old days, thinking deeply by yourself. Now I have a partner in AI to think along with me.

getnormality2 hours ago

If you miss challenge, the world has plenty more. Maybe it's not all your comfort zone, but if you try being a little ambitious and maybe use AI to understand a field you're not already deeply familiar with, you can continue to grow.

jsattler6 hours ago

I had similar thoughts recently. I wouldn't consider myself "the thinker", but I simply missed learning by failure. You almost don't fail anymore using AI. If something fails, it feels like it's not your fault but that the AI messed up. Sometimes I even get angry at the AI for failing, not at myself. I don't have a solution either, but I came up with a guideline on when and how to use AI that has helped me still enjoy learning. I'm not trying to advertise my blog and you don't need to read it; the important part is the diagram at the end of "Learning & Failure": https://sattlerjoshua.com/writing/2026-02-01-thoughts-on-ai-.... In summary, when something is important and long-term, I invest heavily in understanding and use an approach that maximizes understanding over speed. Not sure if you can translate it 100% to your situation, but maybe it helps to have some kind of guideline for when to spend more time thinking instead of going directly to an AI for the solution.

bariswheel9 hours ago

Good highlight of the struggle between Builder and Thinker, I enjoyed the writing. So why not work on PQC? Surely you've thought about other avenues here as well.

If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.

ahyangyi2 hours ago

In most research areas, if a few days thinking is good enough to reach a worthwhile conclusion, it's not "thinking hard". It's "low-hanging fruit".

cladopa5 hours ago

I believe the article is wrong in so many ways.

If you think too much you get into dead ends and you start having circular thoughts, like when you are lost in the desert and realise you are in the same place again after two hours because you have walked in a great circle (since one of your legs is dominant over the other).

The thinker needs feedback from the real world. They need constant testing of hypotheses against reality, or else you are dealing with ideology, not critical thinking. They need other people and the confrontation of ideas so the ideas stay fresh and strong and do not stagnate in isolation and personal bias.

That was the most frustrating thing before AI: a thinker could think very fast, but was limited in testing by the ability to build. Usually she had to delegate to people who were better builders, or else be a builder herself, doing what she hates, all the time.

Thanematean hour ago

The crowd that counterpoints with "just don't use it then" misses the point: the general population lacks the ability to judge when they should use it and when they shouldn't. The average person will always lean towards the less effortful option, without awareness of the long-term consequences.

On top of that, the business settings/environments will always lean towards the option that provides the largest productivity gains, without awareness of the long term consequences for the worker. At that environment, not using it is not an option, unless you want to be unemployed.

Where does that leave us? Are we supposed to find the figurative "gym for problem solving" the same way office workers workout after work? Because that's the only solution I can think of: Trading off my output for problem solving outside of work settings, so that I can improve my output with the tool at work.

alex-moonan hour ago

> Are we supposed to find the figurative "gym for problem solving" the same way office workers workout after work?

That's it, yeah. It sucks but it's part of the job. It makes you a better engineer.

You're absolutely right that this isn't sustainable, however. In one of my earlier jobs - specifically, the one that trained me up to become the senior engineer I am now - we had "FedEx Fridays" (same-day delivery, get it?). In short, you had a single work day to work on something non-work-related, with one condition: you had to have a deliverable by the end of the day. I cannot overstate how useful having something like this in place at a business is for junior devs. The trick is convincing tech businesses that this kind of "training" is a legitimate overhead - the kinds of businesses that are run by engineers get this intuitively. The kind that have a non-technical C-suite, less so.

foxmoss9 hours ago

Eventually I always get to a problem I can't solve by just throwing an LLM at it and have to go in and properly debug things. At that point knowing the code base helps a hell of a lot, and I would've been better off writing the entire thing by hand.

dchftcsan hour ago

You can still think hard, but you can offload some parts to an LLM when you're stuck. That leaves space for more hard-won inspiration. When you're faced with a high-stakes decision, evaluating all sorts of possibilities, it's really easy to max out your brain, so in those cases you have plenty of chances to think hard.

lxgr5 hours ago

I've had the completely opposite experience as somebody that also likes to think more than to build: LLMs take much of the legwork of actually implementing a design, fixing trivial errors etc. away from me and let me validate theories much more quickly than I could do by myself.

More importantly, thinking and building are two very different modes of operating, and it can be hard to switch at a moment's notice. I've definitely noticed myself getting stuck in "non-thinking building/fixing mode" at times, only realizing an hour or two in that I've been making steady progress in the wrong direction.

This happens way less with LLMs, as they provide natural time to think while they churn away at doing.

Even when thinking, they can help: They're infinitely patient rubber ducks, and they often press all the right buttons of "somebody being wrong on the Internet" too, which can help engineers that thrive in these kinds of verbal pro/contra discussions.

6mirrors7 hours ago

The sampling rate we use to take in information is fixed. And we always find a way to work with the sampled information, no matter whether the input information density is high or low.

We can play a peaceful game and an intense one.

Now, when we think, we can always find the right level of abstraction to think at. Decades ago a programmer thought in machine code; now we think in high-level concepts, maybe verging on philosophy.

A good outcome always requires hard thinking. We can, and WILL, think hard at the appropriate level.

pyreal2 hours ago

The author clearly loves coding more than the output from coding. I'm thinking harder than ever and so grateful I can finally think hard about the output I really want rather than how to resolve bugs or figure out how to install some new dependency.

danavar8 hours ago

Many people here might be in a similar situation to me, but I took an online masters program that allowed for continuing education following completion of the degree. This has become one of my hobbies; I can take classes at my own expense, not worry about my grades, and just enjoy learning. I can push myself as much as I want and since the classes are hard, just completing 1 assignment is enough to force me to "think". Just sharing my experience for people who might be looking for ways to challenge themselves intellectually.

fattybob2 hours ago

Thinking hard and fast with positive results is like a drug. Ah, those were good and rewarding days in my past; I would jump back into that work framework any time (that was running geological operations in an unusually agile oil exploration programme).

enthus1ast_3 hours ago

When I wrote nimja's template inheritance, I thought about it for multiple days until, during a train commute, it clicked and I had to get out my notebook and write it right there on the train. Then some months later I found out I had the same bug that jinja2 had fixed years ago. So I felt kinda like brothers in hard thinking :)

rc-11409 hours ago

I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.

While this may be an unfair generalization - and apologies to those who don't feel this way - I believe STEM types like the OP are used to problem solving that's linear, in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a "Thinker" who received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it; next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.

Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.

I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.

I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.

ai_critic9 hours ago

> I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.

These people are miserable to work with if you need things done quickly and can tolerate even slight imperfection.

That operating regime is, incidentally, 95% of the work we actually get paid to do.

petterroea4 hours ago

I've missed the same since even before AI, because I've done far too much work that's simple but time-intensive. It's frustrating, and I miss problems that keep me up all night.

Reverse engineering is imo the best way of getting the experience of pushing your thinking in a controlled way, at least if you have the kind of personality where you are stubborn in wanting to solve the problem.

Go crack an old game or something!

jillesvangurp4 hours ago

You can't change the world, you can change yourself. Many people don't like change. So, people get frustrated when the world inevitably changes and they fail to adapt. It's called getting older. Happens to us all.

I'm not immune to that and I catch myself sometimes being more reluctant to adapt. I'm well aware and I actively try to force myself to adapt. Because the alternative is becoming stuck in my ways and increasingly less relevant. There are a lot of much younger people around me that still have most of their careers ahead of them. They can try to whine about AI all they want for the next four decades or so but I don't think it will help them. Or they can try to deal with the fact that these tools are here now and that they need to learn to adapt to them whether they like it or not. And we are probably going to see quite some progress on the tool front. It's only been 3 years since ChatGPT had its public launch.

To address the core issue here. You can use AI or let AI use you. The difference here is about who is in control and who is setting the goals. The traditional software development team is essentially managers prompting programmers to do stuff. And now we have programmers prompting AIs to do that stuff. If you are just a middle man relaying prompts from managers to the AI, you are not adding a lot of value. That's frustrating. It should be because it means apparently you are very replaceable.

But you can turn that around. What makes that manager the best person to be prompting you? What's stopping them from skipping that entirely? Because that's your added value. Whatever you are good at and they are not is what you should be doing most of your time. The AI tools are just a means to an end to free up more time for whatever that is. Adapting means figuring that out for yourself and figuring out things that you enjoy doing that are still valuable to do.

There's plenty of work to be done. And AI tools won't lift a finger to do it until somebody starts telling them what needs doing. I see a lot of work around me that isn't getting done. A lot of people are blind to those opportunities. Hint: most of that stuff still looks like hard work. If some jerk can one shot prompt it, it isn't all that valuable and not worth your time.

Hard work usually involves thinking hard, skilling up, and figuring things out. The type of stuff the author is complaining he misses doing.

erelong8 hours ago

You were walking to your destination which was three miles away

You now have a bicycle which gets you there in a third of the time

You need to find destinations that are 3x as far away as before

tevli2 hours ago

but if you enjoy walking, what then?

theworstname8 hours ago

If it's this easy to convince you to stop being creative, to stop putting in effort to think critically, then you don't deserve the fulfilment that creativity and critical thinking can give you. These vibe coding self pity articles are so bizarre.

charcircuit5 hours ago

If you are thinking hard, I think you are doing software engineering wrong, and that was true even before AI. As an industry, all the different ways of doing things have already played out. Even big refactors or performance optimizations often can't be 100% predicted in their effectiveness. You will want to just go ahead and implement these things rather than spend more time thinking. And as AI gets stronger, the "just try a bunch of approaches" strategy will beat the "think hard" approach by an even bigger margin.

globular-toast4 hours ago

Why do anything at all then? It's all been done before. This line of thinking sounds like depression to me. Why decorate my house? I know I could do it, but it's all been done before, why bother?

charcircuit3 hours ago

Just because we know the best way to add 2 ints together that doesn't mean it's pointless to do addition with ints. We don't need people trying to spend a lot of extra time to come up with alternate ways to do it. The right function using addition may be valuable to a certain population.

noodleweb4 hours ago

I miss this too. I have had those moments of reward where something works and I want to celebrate. That's missing for me now as well.

With AI, the pros outweigh the cons, at least at the moment, given what we've collectively figured out so far. But every day I wonder if it's now possible to be more ambitious than ever and take on much bigger problems with the pretend-smart assistant.

armchairhacker8 hours ago

Personally: on technical problems I usually think for a couple of days at most before I need to start implementing to make progress. But I have background things like future plans, politics, philosophy, and stories, so I always have something to think about. Close-up technical thinking is great, but sometimes you should step back and look at the bigger picture.

I don't think AI has affected my thinking much, but that's probably because I don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand, if not change, most of it: either because I don't trust the AI, or I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), or the code has a leaky abstraction, or the specification was wrong, or the code has a bug, or the code looks like it has a bug (but the problem ends up somewhere else), or I'm looking for a bug, etc. So although the AI more and more often saves time and thinking versus writing the implementation myself, it doesn't let me avoid thinking about the code or treat it as a black box.

harrisonjackson8 hours ago

I believe it is a type of burnout. AI might have accelerated both the work and that feeling.

I found that doing more physical projects helped me: large woodworking and home improvement projects. Built-in bookshelves, a huge butcher-block bar top (with 24+ hours of mindless sanding), rolling workbenches, and lots of cabinets. Learning and trying to master a new skill, using new design software, filling the garage with tools...

nubinetwork3 hours ago

> the number of times I truly ponder a problem for more than a couple of hours has decreased tremendously

Isn't that a good thing? If you're stuck on the same problem forever, then you're not going to get past it and never move on to the next thing... /shrug

tomquirk5 hours ago

The answer to this is to shift left into product/design.

Sure, I'm doing less technical thinking these days. But all the hard thinking is happening on feature design.

Good feature design is hard for AI. There's a lot of hidden context: customer conversations, unwritten roadmaps, understanding your users and their behaviour, and even an understanding of your existing feature set and how this new one fits in.

It's a different style of thinking, but it is hard, and a new challenge we gotta embrace imo.

margorczynski3 hours ago

> Good feature design is hard for AI

For now. Go back a year and take a look how the AI/LLM coding tools looked and worked back then.

tolerance7 hours ago

I’d love to be able to see statistics that show LLM use and reception according to certain socioeconomic factors.

77773322155 hours ago

Anything in particular you expect to see?

phromo8 hours ago

I am thinking harder than ever due to vibe coding. How will markets shift? What will be in demand? How will the consumer side adapt? How do we position? Predicting the future is a hard problem... The thinker in me is working relentlessly since December. At least for me the thinker loves an existential crisis like no other.

est3 hours ago

I wrote a blog post about this as well:

Hard Things in Computer Science, and AI Aren't Fixing Them

https://blog.est.im/2026/stderr-04

felipelalli2 hours ago

Me: I put your text into AI and asked it to summarize it. We really do have a critical problem of mental laziness.

rcvassallo838 hours ago

Thinking harder than I have in a long time with AI assisted coding.

As I'm providing context I get to think about what an ideal approach would look like and often dive into a research session to analyze pros and cons of various solutions.

I don't use agents much because it's important to see how a component I just designed fits into the larger codebase. That experience provides insights on what improvements I need to make and what to build next.

The time I've spent thinking about the composability, cohesiveness, and ergonomics of the code itself has really paid off. The codebase is a joy to work in, easy to maintain and extend.

The LLMs have helped me focus my cognitive bandwidth on the quality and architecture instead of the tedious and time consuming parts.

frgturpwd3 hours ago

It seems like what you miss is actually a stable cognitive regime built around long uninterrupted internal simulation of a single problem. This is why people play strategy video games.

mightymosquito6 hours ago

I see where you are coming from, but I think what has really gone for a toss is the utility of thinking hard.

Thinking hard has never been easier.

I think AI is a boon for an autodidact. Now I suddenly have a teacher who is always accessible and will teach me whatever I want, for as long as I want, exactly the way I want, and I don't have to worry about my social anxiety kicking in.

Learn advanced cryptography? AI. Figure out formal verification? AI. Etc.

johanvts6 hours ago

I don't think LLMs really took away much thinking; for me they replaced searching Stack Exchange to find incantations. Now I can get them instantly, customized to my situation. I miss thinking hard too, but I don't blame that on AI. It's more that, as a dev, you are paid to think the absolute minimal amount needed to solve an issue or implement a feature. I don't regret leaving academia, but being paid to think is something I will always miss.

msephton5 hours ago

I'm not sure I agree. Actually, I don't agree. You only stop thinking hard if you decide to stop thinking hard. Nobody, no tool, is forcing you to stop thinking, pushing, reaching. If the thinking ceiling has changed, which I think it has, then it's entirely up to you to either move with it or stay still.

userbinator7 hours ago

In my experience you will need to think even harder with AI if you want a decent result, although the problems you'll be thinking about will be more along the lines of "what the hell did it just write?"

The current major problem with the software industry isn't quantity, it's quality; and AI just increases the former while decreasing the latter. Instead of e.g. finding ways to reduce boilerplate, people are just using AI to generate more of it.

porcoda9 hours ago

> At the end of the day, I am a Builder. I like building things. The faster I build, the better.

This I can’t relate to. For me it’s “the better I build, the better”. Building poor code fast isn’t good: it’s just creating debt to deal with in the future, or admitting I’ll toss out the quickly built thing since it won’t have longevity. When quality comes into play (not just “passed the tests”, but is something maintainable, extensible, etc), it’s hard to not employ the Thinker side along with the Builder. They aren’t necessarily mutually exclusive.

Then again, I work on things that are expected to last quite a while and aren’t disposable MVPs or side projects. I suppose if you don’t have that longevity mindset it’s easy to slip into Build-not-Think mode.

jbrooks842 hours ago

You are doing something wrong. AI has not taken away thinking hard.

AdieuToLogic9 hours ago

Cognitive skills are just like any other - use them and they will grow, do not and they will decline. Oddly enough, the more one increases their software engineering cognition, the less the distance between "The Builder" and "The Thinker" becomes.

zkmon7 hours ago

When people missed working hard, they turned to fake physical work (gyms). So people now need some fake thinking work.

Except for eating and sleeping, all other human activities are fake now.

soanvig6 hours ago

You forgot about fake sleeping: loaded with fake dopamine hits before sleep AND broken sleep schedules. And eating fake ultraprocessed food instead of whole foods.

zkmon3 hours ago

Thanks for correcting. That completes the fake life pattern.

So, people fake things to get a fake life. Reminds me of a Russian joke about factory workers: "They pretend to pay us, and we pretend to work".

rrvsh6 hours ago

We've always been doing fake thinking work since the beginning, see: puzzles

ertucetin5 hours ago

It’s the journey, not the destination, but with AI it’s only the destination, and it takes all the joy.

tevli4 hours ago

Exactly what I've been thinking. Outsourcing tasks and problem-solving to AI just seems easier these days, and you still get to feel in charge because you're the one giving the instructions.

rozumem6 hours ago

I can relate to this. Coding satisfies my urge to build and ship and have an impact on the world. But it doesn't make me think hard. Two things I've recently gravitated to outside of coding that make me think: blogging and playing chess.

Maybe I subconsciously picked these up because my Thinker side was starved for attention. Nice post.

ccppurcell5 hours ago

In my experience, the so-called 1% are mostly just thinkers and researchers who have dedicated a lot more time from an earlier age to thinking and/or researching. There are a few geniuses out there but it's 1 in millions not in hundreds.

ggm8 hours ago

A lot of productive thinking happens when asleep, in the shower, in flow walking or cycling or rowing.

It's hard to rationalise this as billable time, but they pay for outcome even if they act like they pay for 9-5 and so if I'm thinking why I like a particular abstraction, or see analogies to another problem, or begin to construct dialogues with mysel(ves|f) about this, and it happens I'm scrubbing my back (or worse) I kind of "go with the flow" so to speak.

Definitely thinking about the problem can be a lot better than actually having to produce it.

moorebob2 hours ago

My mindset last year: I am now a mentor to a junior developer

My mindset this year: I am an engineering manager to a team of developers

If the pace of AI improvement continues, my mindset next year will need to be: I am CEO and CTO.

I never enjoyed the IC -> EM transition in the workplace because of all the tedious political issues, people management issues and repetitive admin. I actually went back to being an IC because of this.

However, with a team of AI agents, there's less BS, and less holding me back. So I'm seeing the positives - I can achieve vastly more, and I can set the engineering standards, improve quality (by training and tuning the AI) and get plenty of satisfaction from "The Builder" role, as defined in the article.

Likewise I'm sure I would hate the CEO/CTO role in real life. However, I am adapting my mindset to the 2030s reality, and imagining being a CEO/CTO to an infinitely scalable team of Agentic EMs who can deliver the work of hundreds of real people, in any direction I choose.

How much space is there in the marketplace if all HN readers become CEOs and try to launch their own products and services? Who knows... but I do know that this is the option available to me, and it's probably wise to get ahead of it.

martin19754 hours ago

I've been writing C/C++/Java for 25 years and am now trying to learn disciplined, risk-managed forex trading. It's a whole new level of hard work and thinking.

Dr_Birdbrain9 hours ago

I think this problem existed before AI. At least in my current job, there is constant, unrelenting demand for fast results; "multi-day deep thinking" sounds like an outrageous luxury.

renegade-otter9 hours ago

Which is a reason for software becoming worse across the board. Just look at Windows. The "go go go" culture is ruinous to products.

cpncrunch8 hours ago

Even 30 years ago when I started in the industry, most jobs required very little deep thinking. All of mine has been done on personal projects. That's just the reality of the typical software engineering job.

tbmtbmtbmtbmtbm9 hours ago

this is why productivity is a word that should really just be reserved for work contexts, and personal time is better used for feeding "The Thinker"

zepesm4 hours ago

That's why I'm still pushing bytes on the C64 demoscene (and recommend such a niche as a hobby to anyone). It's great for the sanity in the modern AI-driven dev world ;)

raincole9 hours ago

I really don't believe AI allows you to think less hard. If it did, it would be amazing, but current AI hasn't reached that capability. At best, it forces you to think about different things.

saturatedfat7 hours ago

I think for days at a time still.

I don’t think you can get the same satisfaction out of these tools if what you want to do is not novel.

If you are exploring the space of possibilities for which there are no clear solutions, then you have to think hard. Take on wildly more ambitious projects. Try to do something you don’t think you can do. And work with them to get there.

koakuma-chan9 hours ago

What a bizarre claim. If you can solve anything by thinking, why don't you become a scientist? Think of a theory that unites quantum physics and general relativity.

muyuu4 hours ago

This also used to happen to me: I was in a position that involved a lot of research early on, and then, after the product was a reality and it worked, the work tapered off into small improvements and maintenance.

I can imagine many positions work out this way in startups.

It's important to think hard sometimes, even if it means taking time off to do the thinking. You can do it without the socioeconomic pressure of a work environment.

sbinnee7 hours ago

What the OP wants to say is that they miss the process of thinking hard for days and weeks, and then one day having a brilliant idea pop up in bed before sleep. I lost my "thinking hard" process again today at work, to my pragmatism, or more precisely to my job.

scionni6 hours ago

I have a very similar background and a very similar feeling when i think of programming nowadays.

Personally, I am going deeper in Quantum Computing, hoping that this field will require thinkers for a long time.

Meneth3 hours ago

I knew this sort of thing would happen before it was popular. Accordingly:

Never have I ever used an LLM.

tbmtbmtbmtbmtbm9 hours ago

Make sure you start every day with the type of confidence that would allow you to refer to yourself as an intellectual one-percenter

fatfox6 hours ago

Just sit down and think hard. If it doesn’t work, think harder.

voidUpdate4 hours ago

If you miss the experience of not using LLMs, then just... don't? Is someone forcing you to code with LLM help?

globular-toast4 hours ago

I think a lot of people are struggling with this. Look at the obesity epidemic. Nobody is forcing you to buy ultraprocessed foods. Nobody is forcing you to overeat. You can still cook with fresh vegetables at home. But many/most people in Western countries struggle with their weight.

An even better analogy is the slot machine. Once you've "won" one time it's hard to break the cycle. There's so little friction to just having another spin. Everyone needs to go and see the depressed people at slot machines at least once to understand where this ends.

emsign2 hours ago

I miss hard thinking people.

dhananjayadr7 hours ago

The author's point is: if you use AI to solve the problem, then after the chat gives you the solution you say "oh yes, ok, I understand it, I can do it" (and no, you can't).

Animats9 hours ago

"Sometimes you have to keep thinking past the point where it starts to hurt." - Fermi

z3t47 hours ago

I always search the web, ask others, or read books in order to find a solution. When I do not find an answer from someone else, that's where I have to think hard.

soanvig6 hours ago

That's weird as I do the opposite: think by myself, then look for help if I don't know.

hpone915 hours ago

Just give Umineko a play/readthrough to get your deep thinking gray cells working again.

zatkin9 hours ago

I feel that AI doesn't necessarily replace my thinking, but actually helps to explore deeper - on my behalf - alternative considerations in the approach to solving a problem, which in turn better informs my thinking.

yehoshuapw6 hours ago

have a look at https://projecteuler.net/

for "Thinker" brain food. (It still has the issue of not being a pragmatic use of time, but there are plenty of interesting enough questions, and it at least helps.)
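For a concrete taste of the site, its first problem can be solved in a line or two; a sketch in Rust (the answer 233168 is the well-known result for Problem 1):

```rust
// Project Euler, Problem 1: sum all natural numbers below 1000
// that are multiples of 3 or 5.
fn main() {
    let answer: u32 = (1..1000u32)
        .filter(|n| n % 3 == 0 || n % 5 == 0)
        .sum();
    println!("{}", answer); // prints 233168
}
```

Of course, the early problems are warm-ups; the fun starts when brute force stops fitting in your lifetime.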

thorum5 hours ago

> but the number of problems requiring deep creative solutions feels like it is diminishing rapidly.

If anything, we have more intractable problems needing deep creative solutions than ever before. People are dying as I write this. We’ve got mass displacement, poverty, polarization in politics. The education and healthcare systems are broken. Climate change marches on. Not to mention the social consequences of new technologies like AI (including the ones discussed in this post) that frankly no one knows what to do about.

The solution is indeed to work on bigger problems. If you can’t find any, look harder.

keithnz9 hours ago

I feel like I'm doing much nicer thinking now: I'm doing more systems thinking, and not only that, I'm iterating on system design a lot more because it is a lot easier to change with AI.

sfink9 hours ago

I definitely relate to this. Except that while I was in the 1% in university who thought hard, I don't think my success rate was that high. My confidence at the time was quite high, though, and I still remember the notable successes.

And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:

The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.

I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.

Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)
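As an illustration (my own toy example, not from that chat), a typical quiz question of this kind hinges on the difference between move and copy semantics:

```rust
fn main() {
    // Moves: String owns heap data, so plain assignment transfers ownership.
    let s = String::from("hello");
    let t = s; // `s` is moved into `t`; using `s` after this is a compile error
    println!("{}", t);

    // Copies: i32 implements Copy, so both bindings remain usable.
    let n = 5;
    let m = n;
    println!("{} + {} = {}", n, m, n + m);
}
```

The quiz format works precisely because the compiler is the final arbiter: you predict whether each line compiles, then check your answer against rustc.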

Bengalilol4 hours ago

Cognitive debt lies ahead for all of us.

tietjens6 hours ago

I wish the author would give some examples of what he wants to think hard about.

macmac_mac4 hours ago

Reading this made me realize I used to actually think hard about bugs and design tradeoffs because I had no choice.

spacecadet2 hours ago

Every day man... Thinking hard on something is a conscious choice.

Besibeta10 hours ago

The problem with the "70% solution" is that it creates a massive amount of hidden technical debt. You aren't thinking hard because you aren't forced to understand the edge cases or the real origin of the problem. It used to be the case that you needed to plan 10 steps ahead because refactoring was expensive; now people just focus on the next problem ahead, but the compounding AI slop will blow up eventually.

andsoitis10 hours ago

would you agree that there's more time to think about what problems are worth solving?

woah8 hours ago

Just work on more ambitious projects?

marcus_holmes6 hours ago

I think it's just another abstraction layer, and moves the thinking process from "how do I solve this problem in code?" to "how do I solve this problem in orchestration?".

I recently used the analogy of when compilers were invented. Old-school coders wrote machine code, and handled the intricacies of memory and storage and everything themselves. Then compilers took over, we all moved up an abstraction layer, and we started using high-level languages to code in. There was a generation of programmers who hated compilers because they wrote bad, inelegant, inefficient programs. And for years they were right.

The hard problems now are "how can I get a set of non-deterministic, fault-prone, LLM agents to build this feature or product with as few errors as possible, with as little oversight as possible?". There's a few generic solutions, a few good approaches coming out, but plenty of scope for some hard thought in there. And a generic approach may not work for your specific project.

gethly2 hours ago

Skill issue

capl5 hours ago

That’s funny cause I feel the opposite: LLMs can automate, in a sloppy fashion, building the first trivial draft. But what remains is still thinking hard about the non trivial parts.

jurgenaut233 hours ago

Man, this resonates SO MUCH with me. I have always loved being confronted with a truly difficult problem. And I always had that (obviously misguided, but utterly motivating) feeling that, with enough effort, no problem could ever resist me. That it was just a matter of grinding a bit further, a bit longer.

This is why I am so deeply opposed to using AI for problem solving I suppose: it just doesn’t play nice with this process.

makerdiety2 hours ago

If you don't have to think, then what you're building isn't really news worthy.

So, we have an inflation of worthless stuff being done.

conception7 hours ago

“We now buy our bread… it comes sliced… and sure you can just go and make your sandwich and it won’t be a rustic, sourdough that you spent months cultivating. Your tomatoes will be store bought not grown heirlooms. In the end… you have lost the art of baking bread. And your sandwich making skills are lost to time… will humanity ever bake again with these mass factories of bread? What have we lost! Woe is me. Woe is me.”

chadcmulligan7 hours ago

That is a very good analogy: sliced shop bread is tasteless and not that good for you compared to sourdough. Likewise, awful store-bought tomatoes taste like nothing compared to heirloom tomatoes and arguably have different nutritional content.

Shop bread and tomatoes, though, can be manufactured reliably without any thought about who makes them, and without someone guiding an LLM, which is perhaps where the analogy falls down: we always want them to be the same, but software is different in every form.

mw8888 hours ago

Give the AI less responsibility but more work. Immediate inference is a great example: if the AI can finish my lines, my `if` bodies, my struct instantiations, type signatures, etc., it can reduce my second-by-second work significantly while taking little of my cognitive agency.

These are also tasks the AI can succeed at rather trivially.

Better completions are not as sexy, but amid the pretense that agents are great engineers, they're an amazing feature that often gets glossed over.

Another example is automatic test generation or early correctness warnings. If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of tests can be configured conservatively relative to the AI of the day. Warnings can just be flags in the editors spotting obvious mistakes. Off-by-one errors for example, which might go unnoticed for a while, would be an achievable and valuable notice.
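A contrived sketch of the kind of fencepost bug such a warning could flag (the function names and scenario are my own invention):

```rust
// Intended: sum of the first n odd numbers (which equals n * n).
fn sum_first_n_odds(n: usize) -> usize {
    (0..n).map(|i| 2 * i + 1).sum() // correct: exactly n terms
}

// The off-by-one variant an automated check might catch:
// the range stops one term early, silently producing a wrong total.
fn sum_first_n_odds_buggy(n: usize) -> usize {
    (0..n - 1).map(|i| 2 * i + 1).sum()
}

fn main() {
    assert_eq!(sum_first_n_odds(4), 16);      // 1 + 3 + 5 + 7
    assert_eq!(sum_first_n_odds_buggy(4), 9); // 1 + 3 + 5, one term short
    println!("ok");
}
```

Both versions compile and run without complaint, which is exactly why this class of bug "might go unnoticed for a while" and why a cheap editor-level AI flag would earn its keep.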

Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.

...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.

These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.

duskdozer2 hours ago

This is the way I would consider using them; I just haven't really been able to figure out what I would need to get a reasonably fast and useful local setup without spending a ton of money.

kypro2 hours ago

Maybe this is just me, but I don't miss thinking so much. I personally quite like knowing how to do things and being able to work productively.

For me it's always been the effort that's fun, and I increasingly miss that. Today it feels like I'm playing the same video game I used to enjoy with all the cheats on, or going back to an early level after maxing out my character. In some ways the game play is the same, same enemies, same map, etc, but the action itself misses the depth that comes from the effort of playing without cheats or with a weaker character and completing the stage.

What I miss personally is coming up with something in my head and having to build it with my own fingers with effort. There's something rewarding about that which you don't get from just typing "I want x".

I think this craving for effort is a very human thing, to be honest... It's why we bake bread at home instead of just buying it from a local bakery that realistically will be twice as good. The enjoyment comes from the effort. I personally like building furniture, and although my furniture sucks compared to what you might buy at a store, it's so damn rewarding to spend days working on something and then have a real physical thing that you can use and that you built by hand.

I've never thought of myself as someone who likes the challenge of coding. I just like building things. And I think I like building things because building things is hard. Or at least it was.

larodi8 hours ago

Well, thinking hard is still there if you work on hard abstract problems. I keep thinking very hard, even though 4 CCs pump out code while I do. Besides, being a Garry Kasparov playing on several tables takes thinking.

sergiotapia9 hours ago

With AI, I now think much harder. Timelines are shorter, big decisions are closer together, and more system interactions have to be "grokked" in my head to guide the model properly.

I'm more spent than before where I would spend 2 hours wrestling with tailwind classes, or testing API endpoints manually by typing json shapes myself.

saulpw8 hours ago

The ziphead era of coding is over. I'll miss it too.

anonymous3449 hours ago

Yes, but you solved problems already solved by someone else. How about something that hasn't been solved, or not even noticed yet? That gives the greatest satisfaction.

pixelmelt8 hours ago

Would like to follow your blog, is there an rss feed?

dudeinjapan6 hours ago

If you feel this way, you aren't using AI right.

For me, Claude, Suno, Gemini and AI tools are pure bliss for creation, because they eliminate the boring grunt work. Who cares how to implement OAuth login flow, or anything that has been done 1000 times?

I do not miss doing grunt work!

vasco7 hours ago

You don't have to miss it: buy a differential equations book and do one per day, or play chess on hard mode. I mean, there are so many ways to make yourself think hard daily; this makes no sense.

It's like saying I miss running. Get out and run then.

sublinear8 hours ago

> I have tried to get that feeling of mental growth outside of coding

A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.

I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.

All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.

My point is, you might want to reconsider how much you blame AI.

rustystump9 hours ago

At the day job there was a problem with performance loading data in an app.

Seven months later, after waffling on it on and off, with and without AI, I finally cracked it.

The author isn't wrong, though; I don't hit this as often since AI. I do miss the feeling.

bowsamic5 hours ago

I specifically spend my evenings reading Hegel and other hard philosophy, as well as writing essays, just to force myself to think hard.

hahahahhaah9 hours ago

I think AI didn't do this. Open source, libraries, cloud, frameworks and agile conspired to do this.

Why solve a problem when you can import a library / scale up / use managed Kubernetes / etc.?

The menu is great, and problems needing deep thought seem rare.

There might be deep thought problems on the requirements side of things but less often on the technical side.

foxes5 hours ago

I think I miss my thinking..

bigstrat20039 hours ago

Dude, I know you touched on this but seriously. Just don't use AI then. It's not hard, it's your choice to use it or not. It's not even making you faster, so the pragmatism argument doesn't really work well! This is a totally self inflicted problem that you can undo any time you want.

donatj9 hours ago

Spoken like someone whose company isn't measuring their AI usage and regularly laying people off.

Aeolun9 hours ago

Need to be in the top 5% of AI users while staying in your budget of $50/month!

llmthrow08279 hours ago

If you can't figure out how to game this, you're both not thinking hard and not using AI effectively.

layer86 hours ago

That sucks, but honestly I’d get out of there as fast as possible. Life is too short to live under unfulfilling work conditions for any extended amount of time.

[deleted]8 hours agocollapsed

renewiltord8 hours ago

I have Claude Code set up in a folder with instructions on how to access iMessage. I ask it questions like "What did my wife say I should do next Friday?"

Reads the SQLite db and shit. So burn your tokens on that.
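
For anyone curious, this works because macOS stores iMessages in a plain SQLite file (`~/Library/Messages/chat.db`). A hedged sketch of the kind of query an agent might run; the `message`/`handle` table and column names follow the commonly documented schema and may differ across macOS versions:

```python
import sqlite3

def recent_messages(db_path, sender, limit=10):
    """Return the latest non-empty message texts from one handle (phone/email)."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            """
            SELECT message.text
            FROM message
            JOIN handle ON message.handle_id = handle.ROWID
            WHERE handle.id = ? AND message.text IS NOT NULL
            ORDER BY message.date DESC
            LIMIT ?
            """,
            (sender, limit),
        ).fetchall()
        return [text for (text,) in rows]
    finally:
        con.close()
```

Point it at a copy of `chat.db`; reading the live file typically requires granting Full Disk Access.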

CuriouslyC9 hours ago

It's not hard to burn tokens on random bullshit (see Moltbook). If you really can deliver results at full speed without AI, it shouldn't be hard to keep cover.

d--b6 hours ago

Why not think hard about what to build instead of how to build it?

the_afan hour ago

The article is interesting. I don't know how I feel about it, though I'm both a user of AI (no choice anymore in the current job environment) and vaguely alarmed by it; I'm in the camp of those who fear for the future of our profession, and I know the counterarguments but I'm not convinced.

A couple of thoughts.

First, I think the hardness of the problems most of us solve is overrated. There is a lot of friction, tuning things, configuring things right, reading logs, etc. But are the problems most of us are solving really that hard? I don't think so, except for those few doing groundbreaking work or sending rockets to space.

Second, even thinking about easier problems is good training for the mind. There's that analogy that the brain is a "muscle", and I think it's accurate. If we always take the easy way out for the easier problems, we don't exercise our brains, and then when harder problems come up what will we do?

(And please, no replies of the kind "when portable calculators were invented...").

cranberryturkey7 hours ago

There's an irony here -- the same tools that make it easy to skim and summarize can also be used to force deeper thinking. The problem isn't the tools, it's the defaults.

I've found that the best way to actually think hard about something is to write about it, or to test yourself on it. Not re-read it. Not highlight it. Generate questions from the material and try to answer them from memory.

The research on active recall vs passive review is pretty clear: retrieval practice produces dramatically better long-term retention than re-reading. Karpicke & Blunt (2011) showed that practice testing outperformed even elaborative concept mapping.

So the question isn't whether AI summarizers are good or bad -- it's whether you use them as a crutch to avoid thinking, or as a tool to compress the boring parts so you can spend more time on the genuinely hard thinking.
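
The "generate questions and answer from memory" loop described above is simple enough to sketch; a toy, purely illustrative version (the card contents and the perfect "memory" stand-in are made up):

```python
import random

def retrieval_practice(cards, respond, rng=random.Random(0)):
    """Quiz (question, answer) cards in random order; return fraction recalled.
    `respond` stands in for the human trying to answer from memory."""
    order = list(cards)
    rng.shuffle(order)  # vary the order so recall isn't cued by position
    hits = sum(1 for q, a in order if respond(q).strip().lower() == a.lower())
    return hits / len(order)

# Illustrative cards plus a perfect "memory" (a dict lookup):
cards = [("Capital of France?", "Paris"), ("2 + 2?", "4")]
print(retrieval_practice(cards, {q: a for q, a in cards}.get))  # 1.0
```

The point of the structure is that the check happens *after* the attempt, which is what distinguishes retrieval practice from re-reading.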

kovkol4 hours ago

I mean, I spent most of my career being pressured to move from type 3 to either of the other 2, so I don't blame AI for this (it doesn't help, though, especially if you delegate too much to it).

ares6237 hours ago

Rich Hickey and the Clojure folks coined the term Hammock Driven Development. It was tongue-in-cheek, but IMO it's an ideal to strive towards.

rvz7 hours ago

Great, so does that mean it's time to vibe code our own alternatives to everything, such as the Linux kernel, because the AI is surely 'smarter' than all of us?

I've seen a lot of DIY vibe-coded solutions on this site, and they are just waiting for a security disaster. Moltbook is a notable example.

That was just the beginning.

kamaal8 hours ago

To me, thinking hard involves the following steps:

1. Take a pen and paper.

2. Write down what we know.

3. Write down where we want to go.

4. Write down our methods of moving forward.

5. Make changes to 2, using 4, and see if we are getting closer to 3. Course-correct based on that.

I still do it a lot. LLMs act as an assist, not as a wholesale replacement.
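
The five steps above read like a generic iterative search loop: step 2 is the state, step 3 the goal, step 4 the set of moves, and step 5 the loop. A toy, purely illustrative sketch:

```python
def solve(state, goal, moves, distance, max_steps=100):
    """Greedily apply whichever move brings the state closest to the goal."""
    for _ in range(max_steps):
        if state == goal:
            return state
        # Step 5: change the state (2) using a method (4), course-correcting
        # toward the goal (3) at every iteration.
        state = min((m(state) for m in moves), key=lambda s: distance(s, goal))
    return state

# Toy usage: reach 10 from 0 when the only moves are +1 and +3.
print(solve(0, 10, [lambda x: x + 1, lambda x: x + 3],
            lambda a, b: abs(a - b)))  # 10
```

The pen-and-paper version is better precisely because a human can change the moves and even the goal mid-loop, which this greedy sketch can't.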

drawnwren8 hours ago

"Before you read this post, ask yourself a question: When was the last time you truly thought hard? ... a) All the time. b) Never. c) Somewhere in between."

What?

yieldcrv4 hours ago

Man, setting up worktrees for parallelized agentic coding is hard. Setting up containerized worktrees, so you can run with dangerous permissions on without nuking the host system, is hard.

Deciding whether to use that to work on multiple features in the same codebase, or on the same feature in multiple variations, is hard.

Deciding whether to work on a separate project entirely while all of this is happening is hard and mentally taxing.

Planning all of this for a few hours and then watching it run all at once, autonomously, is satisfying!
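
For readers who haven't tried it: the worktree half of this setup is just a couple of git commands (the containerized, dangerous-permissions half is a separate exercise). A minimal sketch using a throwaway repo and illustrative branch names:

```shell
#!/bin/sh
# One linked worktree per agent/feature, so parallel sessions don't
# clobber each other's checkouts.
set -e
REPO=$(mktemp -d)
WT=$(mktemp -d)
git -C "$REPO" init -q
git -C "$REPO" -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "init"
# Each worktree gets its own branch and its own working directory:
git -C "$REPO" worktree add -q -b feature-a "$WT/feature-a"
git -C "$REPO" worktree add -q -b feature-b "$WT/feature-b"
git -C "$REPO" worktree list   # main checkout + two linked worktrees
```

Each agent then gets pointed at its own `$WT/feature-*` directory; in a container setup you'd mount only that directory into the sandbox.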

tehjoker8 hours ago

Why not find a subfield that is more difficult and requires some specialization then?

ars8 hours ago

I think hard all the time; AI can only solve the problems for me that don't require thinking hard. Give it anything more complex and it's useless.

I use AI for the easy stuff.

Der_Einzige9 hours ago

Instant upvote for a Philipp Mainländer quote at the end. He's the OG "God is dead" guy, and Nietzsche was reacting (very poorly) to Mainländer and other pessimists like Schopenhauer when he followed up with his own, shittier version of "God is dead".

Please read up on his life. Mainländer is the most extreme/radical philosophical pessimist of them all. He wrote a whole book about how you should rationally kill yourself, and then he killed himself shortly after.

https://en.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder

https://dokumen.pub/the-philosophy-of-redemption-die-philoso...

Max Stirner and Mainländer would have been friends and are kindred spirits philosophically.

https://en.wikipedia.org/wiki/Bibliography_of_philosophical_...

[deleted]4 hours agocollapsed

themafia9 hours ago

> Yes, I blame AI for this.

Just don't use it. That's always an option. Perhaps your builder doesn't actually benefit from an unlimited runway detached from the cost of effort.

tayo429 hours ago

> I tried getting back in touch with physics, reading old textbooks. But that wasn’t successful either. It is hard to justify spending time and mental effort solving physics problems that aren’t relevant or state-of-the-art

I tried this with physics and philosophy. I think I want a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.

[deleted]9 hours agocollapsed

defraudbah5 hours ago

Another AI blame/praise/adapt post... you definitely didn't think hard about this one, did you?

okokwhatever2 hours ago

I get it, and I somehow also agree with the division (thinker/builder), but I feel this is only the representation of a new society where fewer humans are necessary to think deeply. No offense here; it's just my own unsatisfied brain trying to adapt to a whole new era.

LoganDark7 hours ago

Every time I try to use LLMs for coding, I completely lose touch with what it's doing, it does everything wrong and it can't seem to correct itself no matter how many times I explain. It's so frustrating just trying to get it to do the right thing.

I've resigned myself to mostly using them for "tip-of-my-tongue" style queries, i.e. "where do I look in the docs". Especially for Apple platforms, where almost nothing is documented except for random WWDC video tutorials that lack associated text articles.

I don't trust LLMs at all. Everything they make, I end up rewriting from scratch anyway, because it's always garbage. Even when they give me ideas, they can't apply them properly. They have no standards, no principles. It's all just slop.

I hate this. I hate it because LLMs give so many others the impression of greatness, of speed, and of huge productivity gains. I must look like some grumpy hermit, stuck in their ways. But I just can't get over how LLMs all give me the major ick. Everything that comes out of them feels awful.

My standards must be unreasonably high. Extremely, unsustainably high. That must also be the reason I hardly finish any projects I've ever started, and why I can never seem to hit any deadlines at work. LLMs just can't reach my exacting, uncompromising standards. I'm surely expecting far too much of them. Far too much.

I guess I'll just keep doing it all myself. Anything else really just doesn't sit right.

duskdozer44 minutes ago

There's clearly a gap in how or for what LLM-enthusiasts and I would use LLMs. When I've tried it, I've found it just as frustrating as you describe, and it takes away the elements of programming that make it tolerable for me to do. I don't even think I have especially high standards - I can be pretty lazy for anything outside of work.

Thanemate3 hours ago

I am one of those junior software developers who always struggled with starting their own projects. Long story short, I realized that my struggle stems from my lack of training in open-ended problems, where there are many ways to go about solving something, and while some ways are better than others, there's no clear-cut answer because the trade-offs may not be relevant to the end goal.

I realized that when a friend of mine gave me Factorio as a gift last Christmas, and I found myself facing the exact same resistance I face while thinking about working on my personal projects. To be more specific, it's a fear, and an urge to close the game and leave it "for later", the moment I discover that I've either done something wrong or that new requirements have been added that will force me to change the way my factories connect with each other (or even their placement). Example: Tutorial 4 introduces players to research and labs, and this feeling appears when I realize that green science requires me to introduce all sorts of spaghetti just to create the materials it needs!

So I've done what any AI user would do and opted to use ChatGPT to push through the parts that are overwhelming, uncertain, too open-ended, or anything in between. The result works, because the LLM has been trained on Factorio guides, and it goes as far as suggesting layouts to save me some headache!

Awesome, no? Except all I've done is outsource the decision of how to go about "the thing" to someone else. And while it's true that I could have done this even before LLMs by simply watching a YouTube video guide, the LLM's help doesn't stop there: it can alleviate my indecisiveness and frustration with open-ended problems for personal projects, recommend a project structure, and generate bullet-pointed lists so I can pretend I work for a company where someone else writes the spec and I just follow it step by step, like a good junior software engineer would.

And yet all I did was postpone the inevitable exercise of a very useful mental habit: to navigate uncertainty, pause and reflect, plan, evaluate a trade-off or two here and there. And while there are other places and situations where I can exercise that behavior, the fact remains that my specific use of LLMs removed that weight from my shoulders. I objectively became someone who builds his project ideas and makes progress in his Factorio playthrough, but the trade-off is that I remain the same person who will duck and run the moment resistance appears, and succumb to the urge to either push "the thing" to tomorrow or ask ChatGPT for help.

I cannot imagine how someone would claim that removing an exercise from my daily gym visit will not result in weaker muscles. There are so many hidden assumptions in such statements, and an excessive focus on results in "the new era where you should start now or be left behind", where nobody thinks about how this affects the person and how they ultimately function in their daily lives across multiple contexts. It's all about output, output, output.

How far are we from the day when people will say, "Well, you certainly don't need to plan a project, a factory layout, or even decide; just have ChatGPT summarize the trade-offs, read the bullet points, and choose"? We're off-loading a portion of the research AND a portion of the execution, thinking we'll surely still be activating the neural pathways in our brains that retain habits, just like someone who lifts 50% lighter weights at the gym expects to maintain muscle mass or burn fat.

IhateAI7 hours ago

I refer to it as "Think for me SaaS", and it should be avoided like the plague. Literally, it will give your brain a disease we haven't even named yet.

It's as if I woke up in a world where half the restaurants worldwide started changing their name to McDonald's and gaslighting all their customers into thinking McDonald's is better than their "from scratch" menu.

Just don't use these agentic tools; they legitimately are weapons whose target is your brain. You can ship just as fast with autocomplete and decent workflows, and you know it.

It's weird; I don't understand why any self-respecting dev would support these companies. They are openly hostile about their plans for the software industry (and many other verticals).

I see it as a weapon being used by a sect of the ruling class to diminish the value of labor. While I'm not confident they'll be successful, I'm very disappointed in my peers who are cheering them on in that mission. My peers are obviously being taken in by promises of being able to join that class, but that's not what's going to happen.

You're going to lose that thinking muscle, and therefore the value of your labor is going to be directly correlated with the quantity and quality of tokens you can afford (or be given, or loaned!?).

Be wary!!!

77773322155 hours ago

Short term thinkers versus long term thinkers. Just look at the end goal of these companies and you'll see why you shouldn't give them anything.

To say it will free people from the boring tasks is so short-sighted...

16bitvoid7 hours ago

I'm with you. It scares me how quickly some of my peers' critical thinking and architectural understanding have noticeably atrophied over the last year and a half.

everyone7 hours ago

Guy complains about his own vibe coding... stop doing it then!! Do you really think it's practical? Your job must be really easy if it is.

ychompinator6 hours ago

[dead]

hicsuntcp3 hours ago

[dead]

utopiah8 hours ago

Pre-processed food consumer complains about not cooking anymore. /s

... OK, I guess. I mean, sorry, but if it's a revelation to you that by using a skill less you hone it less, you were clearly NOT thinking hard BEFORE you started using AI. It sure didn't help, but the problem didn't start then.

eggsandbeer8 hours ago

[dead]

wetpaws8 hours ago

[dead]

bubbi5 hours ago

[dead]

whywhywhywhy3 hours ago

I get tired working with AI much faster than I did when I used to code. Dunno if it's just that I don't really need to think much at all, other than keeping the broad plan in mind and watching the transcript for red flags of a wrong direction. I don't even bother reading the code anymore; since Opus 4.5 I haven't felt the need to.

Manually coding engaged my brain much more and somehow was less exhausting. It kinda feels like getting out of bed and doing something vs. lazing around and ending up feeling more tired despite having done less.

jurgenaut233 hours ago

Something that people underestimate a lot is that we aren't "brains in a jar": elevated states of consciousness, such as "flow", require a deep involvement of the body. As such, manual coding is much more likely to bring you into the zone than irregular interactions with an LLM.

I actually believe there are much better ways to incorporate AI into software development than any of the mechanisms we've seen so far. For instance, it would make a lot more sense for you to write the software manually and get the usual autocomplete suggestions, along with some on-the-fly reviews and extension proposals, such as writing the body of a function that you're calling from the core function you're writing now.
