jackdoe 3 hours ago
God damn metanoia.
I feel like the internet is programming me.
At this point it is impossible to tell if AI writes like people or people write like AI.
jnpnj 3 hours ago
I've personally noticed that I'm starting to use some LLM idioms ("it's not just .. it's ..") and I don't like it. I'm actually trying to stop using computers and read books to replenish my mind with more diverse idioms.
jackdoe 3 hours ago
same, I also try not to read claude's output that much, and I have a copy of Gibson's Mona Lisa and just open it while it is thinking. for music and even for CS stuff, I search with before:2022 on youtube
but the ship has sailed :)
there is no hiding from it
of course the content we consume modifies us, but now everybody "reads" the same book, whatever they read.
jnpnj an hour ago
> before:2022 on youtube
funny trick. similarly when I use LLMs I try to make them emulate people's writing patterns from previous eras.
jackdoe 26 minutes ago
it does, but it doesn't; there is a subtle collapse, i think
wood_spirit 2 hours ago
I write bad, but my text editor is putting little grammar and spelling squiggly lines under everything and I click through them and end up with very AI-like text. My emails even end up with emdash in them. It’s to shrug. You don’t know if text today is completely prompted or is just cleaned up by modern grammar and spell checkers?
throwanem 34 minutes ago
Sure I do. You're almost good enough, at pretending to lousy construction, to have fooled me. Use more words next time; the semiliterate invariably mistake volume for quality.
grebc 3 hours ago
Tim’s definitely artificial.
tra3 4 hours ago
> Agents are opening pull requests, reviewing each other's work, and closing them without a human ever touching the keyboard, with a continuously live log monitoring loop to rapidly fix issues.
I know gas town made a splash here a while back and some colleagues promote software factories, but I haven't seen much real output... have any of you?
I prefer the guided development approach where it’s a pretty detailed dialog with the LLM. The results are good but it’s hardly hands off.
If I squint I can almost see this fully automated development life cycle, so why aren’t there real life examples out there?
Flux159 4 hours ago
I think the reason we're not seeing many examples yet is that the full loop doesn't work completely autonomously yet. There's still a human in the loop at some critical points - specifically testing against a spec (runtime testing if say working on web or mobile app before shipping to users). LLMs can do compile time testing and validation, unit tests, and can write your end to end tests, but if you're shipping software to users, there's still a human somewhere involved. This isn't even mentioning marketing and actually getting your software into the hands of users - which while it can be automated, a lot of marketing with AI is still sloppy.
jcims 4 hours ago
No idea how automated it is but it's clearly accelerated since last Dec.
throwanem 30 minutes ago
How do you know that there aren't? If you had a "robot software factory" that worked, and you were certain it was a source of not just life-changing or generational but potentially centennial wealth - well.
There was a time in my life when I too would give such a thing away free, on the idea that those who might do some good with it make up for the ones who will certainly turn it to great evil. Especially after 30 years' exposure, some consensual, to Bay Area/Silicon Valley "culture," I am no longer so sweetly naïve. Nor so nice.
notpachet 4 hours ago
Counterargument. The author is primarily looking at AI trend lines. Let's say our industry continues moving along alternate, equally compelling, trend lines: increasing global volatility, chaos in the energy markets, growing likelihood of great power conflict this century, climate collapse, mass migration, societal unrest, yada yada.
What happens to all of these AI-native companies if the AI bubble is not able to survive in these conditions? If your current development process is built on the metabolic equivalent of 400kg of leaves per day[0], then when the allegorical asteroid hits, you're going to be outperformed by smaller, nimbler companies with much lower resource requirements. Those companies may be better suited for survival in hostile macro conditions.
In other words, I think a lot of companies believe that they're trimming their metabolic fat by replacing engineers with AI. Lower salary costs! But at the same time, they're also increasing their reliance on brittle energy infrastructure that may not survive this century. (Not to mention the brittleness of the semiconductor fabrication pipeline, RAM availability, etc)
Towaway69 3 hours ago
Predicting the future isn't about being right tomorrow, it's about selling you something today. - read that somewhere
Folks using AI aren't interested in the future, they are interested in buying today and maximizing profits today. If something goes wrong tomorrow, then that's when the problems are dealt with: tomorrow.
AI is an incredibly fragile technology; as you say, it depends on so many things going right that it's amazing it works at all. That fragility includes price: once AI prices go up and developer prices go down, the winds of change might blow again.
AI also forces folks to be online to code; without being online, companies cannot extend their products. Git was the first (open source) version control system that worked offline. We're literally turning back the hands of time with AI.
AI is another vendor lock-in with the big providers being the sole key-holders to the gates of coding heaven. Folks are blindly running into the hands of vendors who will raise prices as soon as their investors demand their money back.
AI is "improving" code bases in ways that make subtle errors and edge cases harder to detect; debugging without using AI will be impossible. Will a human developer actually be able to understand a code base that has been written by an AI? That's a problem for tomorrow; today we're making the profits and pumping up the shareholder value.
AI prompts depend on LLM versions: change the LLM and the same prompt may generate different code. Upgrade LLMs or change prompts and suddenly the generated code degrades without warning. But prompts are single-use, one-way technology: once the generated code is in the code base, there is no need for the prompt, so that's a non-issue, except for auditors.
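The audit point could be handled by stamping generated files with provenance at generation time; a minimal sketch (the function name and header format are illustrative, not from any real tool):

```python
import hashlib

def stamp_generated_code(code: str, prompt: str, model: str) -> str:
    """Prepend a provenance header so auditors can later tell which
    prompt/model pair produced a generated file, even after the
    original prompt has been discarded."""
    prompt_hash = hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]
    header = (
        f"# generated-by: {model}\n"
        f"# prompt-sha256: {prompt_hash}\n"
    )
    return header + code

stamped = stamp_generated_code(
    "print('hello')\n",
    prompt="write hello world in python",
    model="example-model-v1",  # hypothetical model identifier
)
print(stamped.splitlines()[0])  # the model line of the header
```

Since only the prompt's hash is stored, the prompt itself stays out of the code base, but an auditor can still verify which archived prompt produced a given file.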
Having come from levers, to punch cards, to transistors, to keyboards, to mice and finally AI, programming has fundamentally forgotten there is a second dimension. Most fields have moved to visual representations of data - graphs, photos, images, plans etc. Programming is fundamentally a single-dimension activity with lines and lines of algorithmic code: hard to understand and harder to visualize (see UML). Now AI comes along and entrenches this dependency on text-based programming, as if the keyboard is the single most (and only) important tool for programming.
It's a lack of imagination in exploring alternatives for programming that has led us here. Having non-understandable AI tools generating subtly failing code that we blindly deploy to our servers is not an approach that promises long-term stability.
grebc 4 hours ago
The one thing that’s true in that article is the output of bad coders/programmers/developers/engineers is certainly increasing.
Good luck to anyone cleaning up the mess.
tacker2000 2 hours ago
Actually I know an engineer at a startup who was hired to clean up the original slopcoded MVP. So there is also opportunity in this space.
dingdongditchme an hour ago
learned a new word today: "slopcoded MVP", love it! Thanks.