Hacker News

speckx
AI makes you boring (marginalia.nu)

aeturnum an hour ago

I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write," and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text, and when it goes well, other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak), but it's inelegant and boring. I do think AI is great at letting you skip the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing, because it is not innovative itself.

Uehreka an hour ago

> Writing and programming are both a form of working at a problem through text…

Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).

I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.

fhd2 42 minutes ago

Users typically don't read code, developers (of the software) do.

If it's not worth reading something the writer didn't take the time to write, then by extension nobody reads the code.

Which means nobody understands it, beyond the external behaviour they've tested.

I'd have some issues with using such software, at least where reliability matters. Blackbox testing only gets you so far.

But I guess as opposed to other types of writing, developers _do_ read generated code. At least as soon as something goes wrong.

tptacek 38 minutes ago

Developers do not in fact tend to read all the software they use. I have never once looked at the code for jq, nor would I ever want to (the worst thing I could learn about that contraption is that the code is beautiful, and then live out the rest of my days conflicted about my feelings about it). This "developers read code" thing is just special pleading.

hexaga 17 minutes ago

You're a user of jq in the sense of the comment you're replying to, not a developer. The developer is the developer _of jq_, not developers in general.

fhd2 10 minutes ago

Yes, that's exactly how I meant it. I might _rarely_ peruse some code if I'm really curious about it, but by and large I just trust the developers of the software I use and don't really care how it works. I care about what it does.

tptacek 17 minutes ago

We're talking about Show HN here.

arscan 35 minutes ago

> But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do).

It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever).

Most people's time and attention are precious, their habits are ingrained, and they are fundamentally pretty lazy.

And people who don't fall into the 'most people' I just described probably won't want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it's something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn't what we're talking about here, I don't think.

mindcrime 27 minutes ago

> probably won't want to use software you had an LLM write up when they could have just done it themselves to meet their exact need

Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and on-going, even if the cost may be much less than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to look at it from a "I don't care how you wrote it, as long as it works" mindset.

> came from a bit of innovation that LLMs are incapable of.

I think you're making an overly binary distinction on something that is more of a continuum, vis-a-vis "written by human vs. written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff with something like SpecKit or OpenSpec still spend a lot of time up front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand-tune some of the generated code. So should we reject their projects just because they used an LLM at all? I don't know. At least for me, that might be a step further than I'd go.

arscan 7 minutes ago

> There's a middle ground of "written by human and LLM together".

Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness.

rubslopes 9 minutes ago

I agree with your sentiment, and it touches on one of the reasons I left academia for IT. Scientific research is preoccupied with finding the truth, which is beautiful but very stressful. If you're a perfectionist, you're always questioning yourself: "Did I actually find something meaningful, or is it just noise? Did I gaslight myself into thinking I was just exploring the data when I was actually p-hacking the results?" This took a real toll on my mental health.

Although I love science, I'm much happier building programs. "Does the program do what the client expects with reasonable performance and safety? Yes? Ship it."

1shooner 43 minutes ago

>Code has a pretty important property that ordinary prose doesn’t have

But isn't this the distinction that language models are collapsing? There are 'prose' prompt collections that certainly make (programmatic) things happen, just as there is significant concern about the effect of LLM-generated prose on social media, influence campaigns, etc.

JohnMakin an hour ago

Sometimes (or often) things with horrible security flaws "work", but not in the way that they should, and they expose you to risk.

rescripting 44 minutes ago

If you refuse to run AI generated code for this reason, then you should refuse to run closed source code for the same reason.

JohnMakin 36 minutes ago

I don't see how the two correlate - commercial, closed source software usually has teams of professionals behind it with a vested and shared interest in not shipping crap that will blow up in their customers' faces. I don't think the motivations of "guy who vibe coded a shitty app in an afternoon" are the same.

And to answer you more directly: generally, in my professional world, I don't use closed source software often, for security reasons, and when I do, it's from major players with oodles more resources and capital expenditure than "some guy with a credit card who paid for a Gemini subscription."

pixl97 44 minutes ago

Hell, I'd read an instruction manual that AI wrote, as long as it describes things accurately.

I see a lot of these discussions where a person gets mad about something and suddenly a lot of black-and-white thinking starts happening. I guess that's just part of being human.

exit 11 minutes ago

similarly, i think that something that someone took the time to proof-read/verify can be of value, even if they did not directly write it.

this is the literary equivalent of compiling and running the code.

uean an hour ago

> I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up.

Amen to that. I am currently cc'd on a thread between two third parties, each hucking ever-longer LLM-generated emails at the other. I don't think either of them is reading or thinking about the responses they are sending at this point.

overtone1000 an hour ago

Honest conversation in the AI era is just sending your prompts straight to each other.

rkomorn an hour ago

It's bad enough they didn't bother to actually write it, but often it seems like they also didn't bother to read it either.

cranberryturkey 39 minutes ago

This is the dark comedy of the AI communication era — two LLMs having a conversation with each other while their human operators have already checked out. The email equivalent of two answering machines leaving messages for each other in the 90s.

The real cost isn't the tokens, it's the attention debt. Every CC'd person now has to triage whether any of those paragraphs contain an actual decision or action item. In my experience running multiple products, the signal-to-noise ratio in AI-drafted comms is brutal. The text looks professional, reads smoothly, but says almost nothing.

I've started treating any email over ~4 paragraphs the same way I treat Terms of Service — skim the first sentence of each paragraph and hope nothing important is buried in paragraph seven.

supriyo-biswas 29 minutes ago

> the signal-to-noise ratio in AI-drafted comms is brutal

This is also the case for AI-generated projects, btw: the backend projects I've been looking at often contain reimplementations of common functionality that already exists elsewhere, such as in-memory LRU caches when they should have just used a library.
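
To make that concrete, here's a minimal Python sketch (fetch_user is a hypothetical stand-in for an expensive call): the standard library covers the common case in one decorator, which is exactly the wheel these generated projects keep reinventing.

    from functools import lru_cache

    @lru_cache(maxsize=1024)
    def fetch_user(user_id: int) -> dict:
        # Stand-in for an expensive lookup (DB query, HTTP call, ...).
        return {"id": user_id}

    fetch_user(42)  # computed once
    fetch_user(42)  # served from the in-memory LRU cache, no recomputation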

techblueberry an hour ago

What's interesting is how AI makes this problem worse but not actually "different", especially if you want to go deep on something. Listicles were always plentiful, even before AI, but inferior to someone on Substack going deep on a topic. AI-generated music will be the same way: there's always been an excessive abundance of crap music, and now we'll just have more of it. The weird thing is how it will hit the uncanny valley: potentially "better" than the crap that came before it, but significantly worse than what someone who cares will produce.

DJing is an interesting example. Compared with, say, composition, beatmatching is "relatively" easy to learn, and it was even solved by CD turntables that can beatmatch themselves, yet it has nothing to do with the taste you have to develop to be a good DJ.

madcaptenor an hour ago

The short version of "I am not interested in reading something that you could not be bothered to actually write" is "ai;dr"

enobrev an hour ago

I feel like dealing with robocalls for the past couple of years has led me to this conclusion a bit before this boom in AI-generated text. When I answer my phone, if I hear a recording or a bot of some sort, I hang up immediately with the thought "if it were important, a human would have called". I've adjusted this slightly for my kid's school's automated notifications, but otherwise, I don't have the time to listen to robots.

CuriouslyC 27 minutes ago

The truth now is that mostly nobody will bother to read anything you write, AI or not; creating things is like buying a lottery ticket in terms of audience. Creating something lovingly by hand and pouring countless hours into it is like a golden lottery ticket with 20x odds, but if it took 50x longer to produce, you're still getting significantly outperformed by people who just spam B+ content.

furyofantares 42 minutes ago

> "I am not interested in reading something that you could not be bothered to actually write"

At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full.

doomslayer999 32 minutes ago

Exactly. I think Perplexity had the right idea of where to go with AI (though it obviously fumbled the execution): essentially creating more advanced primitives for information search and retrieval. So it can be great at things we have stored and need to perform second-order operations on (writing boilerplate, summarizing text, retrieving information).

UltraSane 25 minutes ago

It actually makes a lot more sense to share the LLM prompt you used than the output because it is less data in most cases and you can try the same prompt in other LLMs.

giancarlostoro 41 minutes ago

Except it's not. What's a programmer without a vision? Code needs vision. The model is taking your vision. With writing a blog post, comment, or even a book, I agree.

mupuff1234 an hour ago

Good code is boring code.

JohnMakin an hour ago

> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had

Honestly, I agree, but the rash of "check out my vibe coded solution for perceived $problem I have no expertise in and built in an afternoon" posts, and the flurry of domain experts responding "wtf, no one needs this", is kind of schadenfreude, and I feel a little guilty for enjoying it.

ghostbrainalpha an hour ago

Don't you think there is an opposite of that effect too?

I feel like I can breeze past the easy, time-consuming infrastructure phase of projects and spend MUCH more time getting to high-level, interesting problems.

JohnMakin an hour ago

I am saying that a lot of the time these types of posts address a nonexistent problem, a problem that is already solved, or a "problem" that isn't really a problem at all and results from a lack of understanding.

The most recent one I remember commenting on, the poor guy had a project that basically tried to "skip" IaC tools, and his tool basically went nuts in the console (or API, I don't remember) in one account, then exported it all to another account for reasons that didn't make any sense at all. These are already solved problems (in multiple ways) and it seemed like the person just didn't realize terraformer was already an existing, proven tool.

I am not trying to say these things don't allow you to prototype quickly or get tedious, easy stuff out of the way. I'm saying that if you try to solve a problem in a domain that you have no expertise in with these tools and show other experts your work, they may chuckle at what you tried to do because it sometimes does look very silly.

rickandmorty99 26 minutes ago

I'm building an education platform. 95% is vibe coded. What isn't vibe coded, though, is the content. AI is really uninspiring when it comes to teaching technical subjects. Also, the full UX? I do that. Marketing plan? 90% is me.

But AI does the code. Well... usually.

People call my project creative. Some are actually using it.

I feel many technical things aren't really technical things; they are simply problems where "have a web app" is part of the solution, but the real part of the solution is in the content and the interaction design, not in how you solved the challenge technically.

dormento 24 minutes ago

> or just thinking about a "problem" that isn't really a problem at all and results from a lack of understanding

You might be on to something. Maybe it's self-selection (as in, people who want to engage deeply with a certain topic but lack domain expertise might be more likely to go for "vibecodable" solutions).

JohnMakin 12 minutes ago

I compare it to a project I worked on when I was very junior, a long time ago: I built by hand a complicated harness of scripts to deploy VMs on bare metal and do things like create customizable, on-the-fly test environments for the devs on my team. It worked fine, but it was a massive time sink with lots of code, it was extremely difficult to maintain, and it often exhibited weird behavior or bad assumptions.

I made it because at that point in my career I simply didn't know that ansible existed, or that there were very cheap cloud solutions for doing the same thing. I spent a crazy amount of effort on something that ansible probably could have done for me in an afternoon. That's what these projects sometimes feel like to me: a solution looking for a problem, a lot of the time.

I just scanned through the front page of Show HN and quickly eyeballed several of these types of things.

josefresco an hour ago

While I agree overall, I'm going to do some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues" AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.

logicprog an hour ago

This is precisely it. If anything, AI gives me more freedom to think about more novel ideas, both at the implementation level and the final design level, because I'm not stuck looking up APIs and dealing with already-solved problems.

ryandrake an hour ago

It's kind of freeing to put a software project together and not have to sweat the boilerplate and rote algorithm work. Boring things that used to dissuade me. Now, I no longer have that voice in my head saying things like: "Ugh, I'm going to have to write yet another ring buffer, for the 14th time in my career."
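
For concreteness, the kind of rote code I mean, sketched in Python (a toy illustration, not code from any particular project):

    from collections import deque

    class RingBuffer:
        """Fixed-capacity buffer; when full, new writes evict the oldest item."""

        def __init__(self, capacity: int):
            self._buf = deque(maxlen=capacity)

        def push(self, item) -> None:
            self._buf.append(item)  # deque(maxlen=...) drops the oldest automatically

        def items(self) -> list:
            return list(self._buf)  # snapshot, oldest to newest

    rb = RingBuffer(3)
    for i in range(5):
        rb.push(i)
    print(rb.items())  # [2, 3, 4]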

doublerabbit an hour ago

The boring parts are where you learn. "Oh, I did that, this is now not that and it does this! But it was so boring building a template parser." You've learnt.

Boring is supposed to be boring for the sake of learning. If you're bored then you're not learning. Take a look back at your code in a week's time and see if you still understand what's going on. The top level, maybe, but the deep-down cogs of the application's engine? I doubt it. Not to preach, but that's what I've discovered.

Unless you already have the knowledge, then fine: "here's my code, make it better". But if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own coding?

logicprog 21 minutes ago

If learning about individual cogs is what's important, and once you've done that it's okay to move on and let AI do it, then you can build the specific thing you want to learn about in detail in isolation, as a learning project — like many programmers already do, and many CS courses already require — perhaps on your own, or perhaps following along with a substantial book on the matter; then once you've gained that understanding, you can move on to other things in projects that aren't focused on learning about that thing.

jen729w 33 minutes ago

I think this is a terrible argument.

I assume you use JavaScript? TypeScript or Go perhaps?

Pfft, amateur. I only code in Assembly. Must be boring for you using such a high-level language. How do you learn anything? I bet you don't even know what the cog of the engine is doing.

logicprog 31 minutes ago

> The boring parts are where you learn.

It's okay not to memorize everything involved in a software project. Sometimes what you want to learn or experiment with is elsewhere, and so you use the AI to handle the parts you're less interested in learning at a deep and intimate level. That's okay. This mentality that you absolutely have to manually implement everything, every time, even when it's not related to what you're actually interested in, want to do, or your end goal, just because it "builds character", is understandable, and it can increase your generality, but it's not mandatory.

Additionally, if you're not doing vibe coding, but sort of pair-programming with the AI in something like Zed, where the code is collaboratively edited and it's very code-forward — so it doesn't incentivize you to stay away from the code and ignore it, the way agents like Claude Code do — you can still learn a ton about the deep technical processes of your codebase, and how to implement algorithms, because you can look at what the agent is doing and go:

"Oh, it's having to use a very confusing architecture here to get around this limitation of my architecture elsewhere; it isn't going to understand that later, let alone me. Guess that architectural decision was bad."

"Oh, shit, we used this over complicated architecture/violated local reasoning/referential transparency/modularity/deep-narrow modules/single-concern principles, and now we can't make changes effectively, and I'm confused. I shouldn't do that in the future."

"Hmm, this algorithm is too slow for this use-case, even though it's theoretically better, let's try another one."

"After profiling the program, it's too slow here, here, and here — it looks like we should've added caching here, avoided doing that work at all there, and used a better algorithm there."

"Having described this code and seeing it written out, I see it's overcomplicated/not DRY enough, and thus difficult to modify/read, let's simplify/factor out."

"Interesting, I thought the technologies I chose would be able to do XYZ, but actually it turns out they're not as good at that as I thought / have other drawbacks / didn't pan out long term, and it's causing the AI to write reams of code to compensate, which is coming back to bite me in the ass, I now understand the tradeoffs of these technologies better."

Or even just things like

"Oh! I didn't know this language/framework/library could do that! Although I may not remember the precise syntax, that's a useful thing I'll file away for later."

"Oh, so that's what that looks like / that's how you do it. Got it. I'll look that up and read more about it, and save the bookmark."

> Unless you already have the knowledge, then fine: "here's my code, make it better". But if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own coding?

There are a lot of reasons one might not be able to, or want to, use existing dependencies.

Refreeze5224 15 minutes ago

> The boring parts are where you learn.

Exactly this. Finding that annoying bug that took 15 browser tabs and a deep dive into some library you're using, digging into where your code is not performant, looking for alternative algorithms or data structures: this is where learning and experience happen. This is why you don't hire a new grad for a senior role; they haven't had time to bang their heads on enough problems.

You get no sense of how or why when using AI to crank something out for you. Your boss doesn't care about either; he cares about shipping and profits, which is the true goal of AI. You are an increasingly unimportant cog in that process.

xenadu02 21 minutes ago

Like any tool it can be put to productive use or it can be used to crank out absolute garbage.

lapetitejort an hour ago

Yesterday I had two hours to work on a side project I've been dreaming about for a week. I knew I had to build some libraries and that it would be a major pain. I started with AI first, which created a script to download, extract, and build what was needed. Even with the script I did encounter problems, but I blitzed through each one until the libraries were built and I could focus on my actual project, which was not building libraries! I actually reached a satisfying conclusion instead of stalling halfway through compiling something I don't care about.

j2kun 14 minutes ago

Can you elaborate on the implied claim that you've never built a project that you spent more than two months thinking about? I could maybe see this being true of an undergraduate student, but not a professional programmer.

jtr1 an hour ago

I tend to agree; this has been my experience with LLM-powered coding, especially more recently with the advent of new harnesses around context management and planning. I've been building software for over ten years, so I feel comfortable looking under the hood, but it's been less of that lately and more talking with users and trying to understand and effectively shape the experience, which I guess means I'm being pushed toward product work.

biophysboy an hour ago

That's the key: use AI for labor substitution, not labor replacement. Nothing necessarily wrong with labor saving for trivial projects, but we should be using these tools to push the boundaries of tech/science!

imiric an hour ago

You don't fit the profile OP is complaining about. You might not even be "vibe" coding in the strictest sense of that word.

For every person like you who puts in actual thought into the project, and uses these tools as coding assistants, there are ~100 people who offload all of their thinking to the tool.

It's frightening how little collective thought is put into the ramifications of this trend not only on our industry, but on the world at large.

tptacek an hour ago

That may be, but it's also exposing a lot of gatekeeping: the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is. It wasn't the idea that was interesting; it was, well, the hazing ritual of having to bloody your forehead getting it to work.

AI for actual prose writing is a different story, no question: don't let a single word an LLM generates land in your document; even if you like it, kill it.

mjr00 an hour ago

> That may be, but it's also exposing a lot of gatekeeping

"Gatekeeping" became a trendy term for a while, but in the post-LLM world people are recognizing that "gatekeeping" is not the same as "having a set of standards or rules by which a community abides".

If you have a nice community where anyone can come in and do whatever they want, you no longer have a community, you have a garbage dump. A gate to keep out the people who arrive with bags of garbage is not a bad thing.

gary_0 9 minutes ago

I would argue the term "gatekeeping" is being twisted around when it comes to AI. I see genuine gatekeeping when people with a certain skill or qualification try to discourage newcomers by making their field seem mysterious and only able to be done by super special people, and intimidating or making fun of newbies who come along and ask naive questions.

"Gatekeeping" is NOT when you require someone to be willing learn a skill in order to join a community of people with that skill.

And in fact, saying "you are too stupid to learn that on your own, use an AI instead" is kind of gatekeeping on its own, because it implicitly creates a shrinking elite who actually have the knowledge (that is fed to the AI so it can be regurgitated for everyone else), shutting out the majority who are stuck in the "LLM slum".

tptacek 7 minutes ago

Making ham radio operators learn Morse Code was "requiring someone to be willing to learn a skill". Also pure gatekeeping.

strogonoff an hour ago

While at first glance LLMs do help expose and even circumvent gatekeeping, often it turns out that gatekeeping might have been there for a reason.

We have always relied on superficial cues to tell us about some deeper quality (good faith, willingness to comply with a code of conduct, and so on). This is useful and a necessary shortcut; if we had to assess everyone and everything from first principles every time, things would grind to a halt. Once a cue becomes unviable, the "gate" is not eliminated (except perhaps briefly); the cue is just replaced with something else that is more difficult to circumvent.

I think that brief time after the Internet enabled global communication and before LLMs devalued communication signals was pretty cool; now it seems like there are more and more closed, private, or paid communities.

kakamadafuka 2 minutes ago

Really? You think LLMs are a bigger shift in how internet communities work than big corporations like Google, Facebook, etc.? I personally see much less change in the last few years than I did 15 years ago.

matheusmoreira 4 minutes ago

Nothing wrong with some degree of gatekeeping though. A measured amount of elitism is a force for good.

c22 an hour ago

Most ideas aren't interesting. Implementations are interesting. I don't care if you worked hard on your implementation or not, but I do care if it solves the problem in a novel or especially efficient way. These are not the hallmarks of AI solutions.

enraged_camel an hour ago

In the vast majority of contexts I don’t want “novel” and “interesting” implementations, I want boring and proven ones.

matthewowen an hour ago

I think that having some difficulty and having to "bloody your forehead" acts as a filter that you cared enough to put a lot of effort into it. From a consumer side, someone having spent a lot of time on something certainly isn't a guarantee that it is good, but it provides _some_ signal about the sincerity of the producer's belief in it. IMO it's not gatekeeping to only want to pay attention to things that care went into: it's just normal human behavior to avoid unreasonable asymmetries of effort.

kspacewalk2 an hour ago

It's not a hazing ritual, it's a valuable learning experience. Yes, it's nice to have the option of foregoing it, but it's a tradeoff.

pwython an hour ago

Did you just "it's not x, it's y" me?

tptacek an hour ago

So the point of a "Show HN" is to showcase your valuable learning experience?

discreteevent an hour ago

What the article is saying is:

"the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had. The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective."

tptacek an hour ago

Right, so it's about the person and how they've qualified themselves, and not about what they've built.

I feel like I've been around these parts for a while, and that is not my experience of what Show HN was originally about, though I'm sure there was always an undercurrent of status hierarchy and approval-seeking, like you suggest.

discreteevent an hour ago

It's not about status. It's about interest. A joiner is not going to have an interesting conversation about joinery with someone who has put some flat-pack furniture together.

tptacek 7 minutes ago

Oh, is that what Show HN is? A community of craftspeople discussing their craft? I hadn't realized.

overgard 33 minutes ago

I think the valuable learning experience can be what makes a Show HN worth viewing, if it's worth viewing. (I don't feel precious about it though... I didn't think Show HN was particularly engaging before AI either.)

iainctduncan 10 minutes ago

gatekeeping is just a synonym for curation by people who don't like the curator's choices.

And we are going to need more curation so goddamned badly....

bcrosby95 an hour ago

What if the AI produces writing that better accomplishes my goal than writing it myself? Why do you feel differently about these two acts?

For what it's worth, the unifying idea behind both is basically a "hazing ritual", or more neutrally phrased, skin in the game. It takes time and energy to look at things people produce. You should spend time and energy making sure I'm not looking at a pile of shit. Doesn't matter if it's a website or prose.

Obviously some people don't. And that's why the signal to noise ratio is becoming shit very quickly.

tptacek an hour ago

It doesn't, is the problem. If it did, I would feel differently.

bondarchuk an hour ago

It's not about having to put in effort for its own sake; the point is that by building something by hand you gain insight into the problem, and that insight then becomes a valuable contribution.

overgard an hour ago

Gatekeeping can be a good thing -- if you have to put effort into what you create, you're going to be more selective about which ideas you invest in. I wouldn't call that "bloodying your forehead"; I'd call it putting work into something before demanding attention.

AstroBen an hour ago

> what was interesting about a "Show HN" post was that someone had the technical competence to put something together

Wouldn't the masses of Show HN posts that have gotten no interest pre-AI refute that?

oytis an hour ago

Some people here enjoy solutions to difficult technical problems? It's not Product Hunt.

almostdeadguy an hour ago

I can't believe the mods at /r/screenprinting took down my post on the CustomInk shirt I ordered.

kouru225 17 minutes ago

This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.

Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.

We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.

But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)

The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.

serf 38 minutes ago

AI doesn't make people boring; boring people use AI to make projects they otherwise never would have.

Non-boring people are using AI to make things that are ... not boring.

It's a tool.

Other things we wouldn't say because they're ridiculous at face value:

"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."

An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe coded project?

lasgawe an hour ago

The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.

embedding-shape an hour ago

More interesting question than what? And also, say you have an answer to that question, what insight do you have now that you didn't have before?

igor47 an hour ago

Well, the claim was that AI makes you boring. The counter is that interesting people remain interesting; it's just that a flood of people who were already boring is pouring into tech. We could make some predictions that depend on how you model this. For instance, the absolute number of interesting projects posted to HN could increase or decrease, and likewise for the number relative to total projects. You might expect different outcomes.

swiftcoder an hour ago

AI enables the stereotypical "idea guy" to suddenly be a "builder". Of course, they are learning in realtime that having the idea was always the easy part...

cyanydeez an hour ago

I'm going to guess the same way Money makes rich people turn into morons, AI will turn idiots into...oh...no

jcalvinowens an hour ago

Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.

The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.

ryandrake an hour ago

> The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.

I remember in the early days of LLMs this was the joke meme. But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?

zinodaur 12 minutes ago

Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.

I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.

The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct

max8539 26 minutes ago

Before vibe coding, I was always interested in trying different new things. I’d spend a few days researching and building some prototypes, but very few of them survived and were actually finished, at least in a beta state. Most of them I left non-working, just enough to satisfy my curiosity about the topic before moving on to the next interesting one.

Now, these days, it’s basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.

LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.

So it's not that LLMs make programming boring; they've allowed boring projects to survive. They've also boosted the production of non-boring ones, but those are just rarer in the overall pool of products.

BiraIgnacio an hour ago

One of the downsides of Vibe-Coded-Everything that I am seeing is that it reinforces the "just make it look good" culture: just create the feature that the user wants and move on. It doesn't matter if, the next time you need to fix a typo in that feature, it costs 10x as much as it should.

That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.

Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short-lived. Not sure if I truly buy that, but if everything vibe coded becomes throwaway, I wouldn't be surprised.

taude an hour ago

AI writing will make people who write worse than average better writers. It'll also make people who write better than average worse writers. Know where you stand, and have the taste to use it wisely.

EDIT: also, just as you'd create AGENT.md files to help AI write code your way for your projects, if you're going to be doing much writing, you should have your own prompt that helps preserve your voice and style. Don't be lazy just because you're leaning on LLMs.
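
Something like this, say (a made-up sketch of the kind of standing style prompt I mean, not any tool's canonical format):

    You are editing my drafts, not rewriting them. Keep my voice:
    - short sentences, first person, contractions are fine
    - no em-dashes, no "it's not X, it's Y" constructions
    - cut hedging filler ("arguably", "it's worth noting")
    - prefer one concrete example over a string of adjectives
    - flag sentences that read generic instead of silently rewriting them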

latexr an hour ago

> AI writing will make people who write worse than average, better writers.

Maybe it will make them output better text, but it doesn’t make them better writers. That’d be like saying (to borrow the analogy from the post) that using an excavator makes you better at lifting weights. It doesn’t. You don’t improve, you don’t get better, it’s only the produced artefact which becomes superficially different.

> If you're going to be doing much writing, you should have your own prompt that can help with your voice and style.

The point of the article is the thinking. Style is something completely orthogonal. It’s irrelevant to the discussion.

pseudosavant 23 minutes ago

A table saw doesn’t make you a better carpenter. It makes you faster - for better or worse.

LLMs and agents work the same way. They’re power tools. Skill and judgment determine whether you build more, or lose fingers faster.

parpfish an hour ago

i think a lot of people that use AI to help them write want it specifically BECAUSE it makes them boring and generic.

and that's because people have a weird sort of stylistic cargo-culting that they use to evaluate their writing, rather than asking "does this communicate my ideas efficiently?"

for example, young grad students will always write the most opaque and complicated science papers. from their novice perspective, EVERY paper they read is a little opaque and complicated so they try to emulate that in their writing.

office workers do the same thing. every email from corporate is bland and boring and uses far too many words to say nothing. you want your style to match theirs, so you dump it into an AI machine and you're thrilled that your writing has become just as vapid and verbose as your CEO.

AstroBen an hour ago

Here's my definition of good writing: it's efficient and communicates precisely what you want to convey in an easy to understand way

AI is almost the exact opposite. It's verbose fluff that's only superficially structured well. It's worse than average

(waiting for someone to reply that I can tell the AI to be concise and meaningful)

taude 17 minutes ago

Here's AI responding to you:

"You're describing the default output, and you're right — it's bad. But that's like judging a programming language by its tutorial examples.

The actual skill is in the prompting, editing, and knowing when to throw the output away entirely. I use LLMs daily for technical writing and the first draft is almost never the final product. It's a starting point I can reshape faster than staring at a blank page.

The real problem isn't that AI can't produce concise, precise writing — it's that most people accept the first completion and hit send. That's a user problem, not a tool problem."

wagwang an hour ago

Highly doubt that, since it's the complete opposite for coding. What's missing for people of all skill levels is that writing helps you organize your thoughts, but maybe that can happen at prompt time?

notahacker an hour ago

Good code is marked by productivity, conformance to standards, and an absence of bugs. Good writing is marked by originality and personality, and by not leaning on the rhetorical crutches AI overuses to try to seem engaging.

buu700 41 minutes ago

Time, effort, and skill being equal, I'd suggest that AI access generally improves the quality of any given output. The issue is that AI use is only identifiable when at least one of those inputs is low, which makes it easy to have a knee-jerk reaction against AI in general. AI is just a tool; it's up to humans how to use it.

runarberg an hour ago

This claim sounds plausible, but it is also testable. Do you know whether this has actually been tested in an experimental setting?

PaulHoule an hour ago

After telling Copilot to lose the em-dash, never say "It's not A, it's B", and avoid alternating one-sentence and long paragraphs, it had the gall to tell me it wrote better than most people.

TheDong an hour ago

We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.

daxfohl an hour ago

And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"

tonymet an hour ago

you can prompt it to stop doing that, and to behave exactly how you need it. my prompts say "no flattery, no follow up questions, PhD level discourse, concise and succinct responses, include grounding, etc"

nemomarx an hour ago

I've seen a few people use ai to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.

skissane an hour ago

I sometimes go in the opposite direction - generate LLM output and then rewrite it in my own words

The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice

ExtremisAndy an hour ago

This is exactly how I use them too! What I usually do is give the LLM bullet points or an outline of what I want to say, let it generate a first attempt at it, and then reshape and rewrite what I don’t like (which is often most of it). I think, more than anything, it just helps me to quickly get past that “staring at a blank page” stage.

jonpurdy an hour ago

I do something similar: give it a bunch of ideas I have or a general point form structure, have it help me simplify and organize those notes into something more structured, then I write it out myself.

It's a fantastic editor!

taude an hour ago

that's a perfect use, imho, of AI-assisted writing. Someone (er, something) to help you bounce ideas and organize....

PaulHoule an hour ago

I had to write a difficult paragraph that I talked through with Copilot. I think it made one sentence I liked, but I found GPTZero caught it. I wound up with 100% sentences I wrote, but ones I reviewed extensively with Copilot and two people.

embedding-shape an hour ago

Yeah, if anything it might make sense to do the opposite: use LLMs to do research, ruthlessly verify everything, validate references, and let them help guide you on structure, but then actually write your own words manually, with your little fingers and using your brain.

add-sub-mul-div an hour ago

Are you joking? The facts and references are the part we know it will hallucinate.

PaulHoule an hour ago

You can check the references.

quijoteuniv an hour ago

I have an opinion of people that have opinions on AI

baal80spam an hour ago

It's not them, it's you.

overgard 35 minutes ago

Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around. I.e., you don't wait for inspiration and then go do the work; you start doing the work and eventually you become inspired. You rarely just "have a great idea"; it comes from immersing yourself in a problem, being surrounded by constraints, and finding a way to solve it. AI completely short-circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force; it probably just means you run out of ideas, or your ideas kind of suck.

glitchc an hour ago

It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.

Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.

The silver lining is that good authors could also use LLMs to hide their identity while voicing controversial opinions. On an internet that's increasingly deanonymized, a potentially new privacy-enhancing technique for public discourse is a welcome addition.

Kalpaka an hour ago

The boring part isn't AI itself. It's that most people use AI to produce more of the same thing, faster.

The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?

I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.

The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.

Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.

minimaxir an hour ago

> Not more blog posts, more emails, more boilerplate — but something structurally new?

This is a point that often results in bad faith arguments from both AI enthusiasts and AI skeptics. Enthusiasts will say "everything is a remix and the most creative works are built on previous works" while skeptics will say "LLMs are stochastic parrots and cannot create anything new by technical definition".

The truth is somewhere in the middle, which unfortunately invokes the Golden Mean Fallacy that makes no one happy.

pelagicAustral an hour ago

I 100% agree with the sentiment, but as someone who has worked on government systems for a good amount of time, I can tell you: boring can be just about right sometimes.

In an industry that does not crave bells and whistles, the ability to refactor, or to bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.

dang 44 minutes ago

Recent and related:

Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)

discreteevent an hour ago

> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.

There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"

Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."

Of course computers are useful. But he meant that they are useless for a creative. That's still true.

[1] https://news.ycombinator.com/item?id=47059206

adverbly 37 minutes ago

Whoa there. Let's not oversimplify in either direction here.

My take:

1. AI workflows are faster - saving people time

2. Faster workflows involve people using their brain less

3. Some people use their time savings to use their brain more, some don't

4. People who don't use their brain are boring

The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.

jrmg an hour ago

The issue with the recent rise in Show HN submissions, from the perspective of someone on the 'being shown' side, is that they are, from many different perspectives, lower quality than they used to be.

They're solving small problems, or problems that don't really exist, usually in naive ways. The things being shown are 'shallow', and it's patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.

The rise of Vibe Coding is definitely a 'cause' of this, but there's also a social thing going on: the 'bar' for what a Show HN 'is' is lower, even if most submissions still meet the letter of the guidelines.

fredliu an hour ago

We are in a transition period where we'll see a lot of these, because the effort of creating "something impressive" is dramatically reduced. But once it stabilizes (which I think is already starting to happen, and this post is an example), and people are "trained" to recognize the real effort behind creating something, even with AI help, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.

ossa-ma 42 minutes ago

Sorry to hijack this thread to promote but I believe it's for a good and relevant cause: directly identifying and calling out AI writing.

I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/

Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

Lmk if you find it useful, will likely ShowHN it once polished.

minimaxir 28 minutes ago

This is a very bad idea unless you have 100% accuracy in identifying AI-generated writing, which is impossible. Otherwise, your tool will more often be used to harass people who use those tropes organically, without AI.

This behavior has already been happening with Pangram Labs which supposedly does have good AI detection.

ossa-ma 16 minutes ago

I agree about the risks. However, the primary goal of the site is educational, not accusatory. I mostly want people to be able to recognise these patterns.

The WIP features measure breadth and density of these tropes, and each trope has frequency thresholds. Also I don't use AI to identify AI writing to avoid accusatory hallucinations.

I do appreciate the feedback though and will take it into consideration.

minimaxir 10 minutes ago

> However the primary goal of the site is educational not accusatory.

How then is it different from the Wikipedia page you linked?

imiric 32 minutes ago

The obvious question is: was it vibe coded? :)

As much as I'd like to know whether a text was written by a human or not, I'm saddened by the fact that some of these writing patterns have been poisoned by these tools. I enjoy, use, and find many of them to be an elegant way to get a point across. And I refuse to give up the em dash! So if that flags any of my writing—so be it.

ossa-ma 25 minutes ago

Absolutely vibe coded; I'm sure I disclosed it somewhere on the site. As much as I hate using AI for creative endeavours, I have to agree that it excels at nextjs/vercel/quick projects like this. I was mostly focused on the curation of the tropes and examples.

Believe me, I've had to adjust my writing a lot to avoid these tells; even academics I know are second-guessing everything they've ever been taught. It's quite sad, but I think it will result in a more personable internet as people try to distinguish themselves from the bots.

imiric 9 minutes ago

> It's quite sad but I think it will result in a more personable internet as people try to distinguish themselves from the bots.

I applaud your optimism, but I think the internet is a lost cause. Humans who value communicating with other humans will need to retreat into niche communities with zero tolerance for bots. Filtering out bot content will likely continue to be impossible, but we'll eventually settle on a good way to determine if someone is human. I just hope we won't have to give up our privacy and anonymity for it.

iambateman an hour ago

We are going to have to find new ways to correct for low-effort work.

I have a report that I made with AI on how customers leave our firm… The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.

As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.

pbmango an hour ago

Along these same lines, I have been trying to get better at knowing when my work could benefit from reverting to the "boring", general mean, and when outsourcing thought or planning would cause a reversion to the mean (downwards).

This echoes the comments here about enjoying not writing boilerplate. The catch is that our minds are programmed to offload work when we can, and redirecting all the saved boilerplate time into going even deeper on the parts of a problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and the incentives of service providers increase this.

darod16 minutes ago

"You don’t get build muscle using an excavator to lift weights. You don’t produce interesting thoughts using a GPU to think." Great line!

stopachka44 minutes ago

> Original ideas are the result of the very work you’re offloading on LLMs.

I largely agree that if someone put less work into making a thing than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.

Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact, if you use Deep Research et al., you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.

fdefitte29 minutes ago

The article nails it but misses the flip side. AI doesn't make you boring, it reveals who was already boring. The people shipping thoughtless Show HN projects with Claude are the same people who would have shipped thoughtless projects with Rails scaffolding ten years ago. The tool changed, the lack of depth didn't.

mym1990an hour ago

Most ideas people have are not original. I have epiphanies multiple times a day; the chance that they are something no one has come up with before is basically 0. They are original to me, and that feels like an insightful moment, and that's about it. There is a huge case for having good taste to drive the LLMs toward a good result, and an original voice is quite valuable, but I would say most people don't hit those 2 things in a meaningful way (with or without LLMs).

spijdaran hour ago

Most ideas people have aren't original, but the original ideas people do have come after struggling with a lot of unoriginal ideas.

> They are original to me, and that feels like an insightful moment, and thats about it.

The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.

daxfohlan hour ago

I want to say this is even more true at the C-suite level. Great, you're all-in on racing to the lowest-common-denominator, most-likely-next-token AI-generated corporate vision, and you want your engineering teams to behave likewise.

At least this CEO gets it. Hopefully more will start to follow.

jihadjihadan hour ago

> Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates. Prompting an AI model is not articulating an idea.

I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.

Retr0idan hour ago

This aligns with an article I titled "AI can only solve boring problems"[0].

Despite the title I'm a little more optimistic about agentic coding overall (but only a little).

All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.

[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...

acjohnson55an hour ago

This is too broad of a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some font of potential creativity.

To take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate this to agents. But it's also very possible that I now have the opportunity to think creatively about other aspects of my work.

We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.

dabedee44 minutes ago

Anecdotally, I haven't been excited by anything published on Show HN recently (with the exception being the barracuda compiler). I think it's a combination of what the author describes: surface-level solutions, and mostly vibe-coded projects whose authors haven't actually thought that hard about what real problem they are solving.

matsemannan hour ago

If you spent 3 hours on a Show HN before, people most likely wouldn't appreciate it, as it's honestly not much to show. The fact that you can now have a more polished product in the same timeframe thanks to AI doesn't really change that; it just changes the baseline for what's expected. This goes for other things as well, like writing or art. If you normally spent 2 hours on a blog post and can now do it in 5 minutes, that most likely means it's a boring post to read. Spend the 2 hours anyway; with the help of AI, the post should now be better.

ahmeneeroe-v2an hour ago

This is a great way to think about it. Put in the same effort and get farther.

AI is a bicycle, not a motorcycle.

Forgeties79an hour ago

AI is great for getting my first bullet points into paragraph form, rephrasing/seeing different ways of writing to keep me moving when I'm stuck, and just general copy-editing (grammar, really). Like you said, it generally doesn't save me a ton of time, but I get quality copy done maybe a little bit faster, and I find it keeps me working on something rather than constantly stopping and starting when I hit a mental wall. Sometimes I just need/want to get it done, and for that LLMs can be great.

solarisosan hour ago

This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.

crawshawan hour ago

It is a good theory, but does it hold up in practice? I was able to prototype and thus argue for and justify building exe.dev with a lot of help from agents. Without agents helping me prove out ideas I would be doing far more boring work.

grimgrinan hour ago

I land on this thread to ctrl-f "taste" and will refresh and repeat later

That is for sure the word of the year, true or not. I agree with it, I think

redwood9 minutes ago

AI is groupthink. Groupthink makes you boring. But then the same can be said about mass culture. Why do we all know Elvis, Frank Sinatra, Marilyn Monroe, the Beatles, etc., when there were countless others who came before and after them? Because they happened to emerge at the right time in our mass culture.

Imagine how dynamic the world was before radio, before TV, before movies, before the internet, before AI. Imagine a small-town theater, musician, comedian, or anything else before we had all homogenized into mass culture. It's hard to know what it was like, but I think that's what makes for the great appeal of things like Burning Man and other contexts that encourage you to tune out the background and be in the moment.

Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.

How do we avoid groupthink in the AI age? The same way as in every other age: by making room for people to think and act differently.

minimaxir38 minutes ago

> Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.

This is reductive to the point of being incorrect. One of the misconceptions about working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just shipped the one-shot output.

As counterexamples, here are two sets of prompts I used for my projects, each of which articulates an idea in the first prompt with very intentional constraints/specs and then iterates on those results:

https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)

https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)

It's the iteration that is the true engineering work, as it requires enough knowledge to a) know what's wrong and b) know whether the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done... but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work rather than watching the agent generate code.

[deleted]an hour ago

notatoadan hour ago

i think about this a lot with respect to AI-generated art. calling something "derivative" used to be a damning criticism. now we've got tools whose whole purpose is to make things that are very literally derivative of the work that came before them.

derivative work might be useful, but it's not interesting.

elifan hour ago

And AI denial makes you annoying.

Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"

There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.

imiric43 minutes ago

I think there should be 10x more hardcore AI denialists and doomers to offset the obnoxiousness and absurdity of the other side. As usual, the reality is somewhere in the middle, perhaps slightly on the denialist side, but the pro-AI crowd has completely lost the plot.

elif40 minutes ago

I'm all for well-founded arguments against premature AGI empowerment...

But what I'm replying to, and the vast majority of the AI denial I see, is rooted in a superficial, defensive, almost aesthetic knee-jerk rejection of unimportant aspects of human taste and preference.

imiric20 minutes ago

The article does not fit the description of blind AI denialism, though. The author even acknowledges that the tool can be useful. It makes a well-articulated case that by not putting any thought into your work and words, and allowing the tool to do the thinking for you, the end product is boring. You may agree or disagree with this opinion, but I think the knee-jerk rejection is coming from you.

gAIan hour ago

I'm self-aware enough to know that AI is not the reason I'm boring.

hnlmorgan hour ago

I've been bashing my head against the wall with AI this week because the models have utterly failed to even get close to solving my novel problems.

And that's when it dawned on me just how much of the AI hype has been around boring, seen-many-times-before technologies.

This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side-projects that clearly won’t be around in 6 months time.

Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person's dull project, even though those authors spent next to zero effort themselves in creating it.

It’s so dumb.

Oarchan hour ago

Just earlier I received a spew of LLM slop from my manager as "requirements". He clearly hadn't even spent two minutes reviewing whether any of it made sense, was achievable or even desirable. I ignored it. We're all fed up with this productivity theatre.

WolfeReaderan hour ago

"productivity theatre" is a brilliant phrase. Thank you!

ryandrake43 minutes ago

Honestly, most people are boring. They have boring lives, write boring things, consume boring content, and, in the grand scheme of things, have little-to-no interesting impact on the world before they die. We don't need AI to make us boring; we're already there.

turnsoutan hour ago

I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.

Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.

Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.

Sol-an hour ago

The headline should be qualified: Maybe it makes you boring compared to the counterfactual world where you somehow would have developed into an interesting auteur or craftsman instead, which few people in practice would do.

As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?

I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements by the firms, there's not much to say to convince you anyway.)

argeean hour ago

In other words, AI raises the floor. If you were already near the ceiling, relying on it can (and likely will) bring you down. In areas where raising the floor is exceptionally good value (such as bespoke tools for visualizing data, or assistants that intelligently write code boilerplate, or having someone to speak to in a foreign language as opposed to talking to the wall), AI is amazing. In areas where we expect a high bar, such as an editorial, a fast and reliable messaging library, or a classic novel, it's not nearly as useful and often turns out to be a detriment.

latexran hour ago

> As someone who is fairly boring

As determined by whom?

> conversing with AI models and thinking things through with them certainly decreased my blandness

Again, determined by whom?

I'm being genuine. Are those self-assessments? Because those specific judgements are something for other people to decide.

Sol-an hour ago

I think I can observe the world and my relative state therein, no? I know I am unfortunately less ambitious, driven, and outgoing than others, traits which are commonly associated with being interesting. And I don't complain about it; the word has a meaning, after all, and I'll not delude myself into changing its definition.

Definitely at a certain threshold it is for others to decide what is boring and not, I agree with that.

In any case, my simple point is that AI can definitely raise the floor, as the other comment more succinctly expressed. Irrelevant for people at the top, but good for the rest of us.

latexr12 minutes ago

> I think I can observe the world and my relative state therein, no?

Yes, to an extent. You can, for example, evaluate if you’re sensitive or courageous or hard working. But some things do not concern only you, they necessitate another person, such as being interesting or friendly or generous.

A good heuristic might be “what could I not say about myself if I were the only living being on Earth?”. You can still be sensitive or hard working if you’re alone, but you can’t be friendly because there’s no one else to be friendly to.

Technically you could bore yourself, but in practice that’s something you do to other people. Furthermore, it is highly subjective, a D&D dungeon master will be unbearably boring to some, and infinitely interesting to others.

> I know I am unfortunately less ambitious, driven and outgoing than others

I disagree those automatically make someone boring.

I also disagree with LLMs improving your situation. For someone to find you interesting, they have to know what makes you tick. If what you have to share is limited by what everyone else can get (by querying an LLM), that is boring.

BurningFrogan hour ago

OK, but maybe we only notice the mediocre uses of AI, while the smart uses come across as brilliant people having interesting insights.

palmotea31 minutes ago

> OK, but maybe we only notice the mediocre uses of AI, while the smart uses come across as brilliant people having interesting insights.

Then prove it. Otherwise, you're just assuming AI use must be good, and making up things to confirm your bias.

logicprogan hour ago

I think this is generally a good point if you're using an AI to come up with a project idea and elaborate it.

However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas — AI now makes that possible.

Once that prototype is out, I can look at it, inspect it from all angles, tweak it to understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn the technologies I'm using by demonstration: I can see their limitations and affordances by noticing what is easy and concise for the AI to implement and what requires brute-forcing with hacks and huge reams of code, what's performant and what isn't, and what leads to confusing architectures versus clean ones.

I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, and Data-Oriented Design, plus articles on CSP, fibers, compilers, type systems, and ECS, and writing down notes and ideas.

So really it seems to me that boring people who aren't deeply interested in a subject use AI to do all of the design and ideation for them. Of course that ends up boring, and you're just seeing more of it because AI lowered the barrier to entry. If you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature (with or without AI help) and then pair programming with it to explore the problem space, the result still ends up interesting.

Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages about, dozens of them, in my notebooks.

apexalphaan hour ago

Meh.

Being 'anti AI' is just hot right now and lots of people are jumping on the bandwagon.

I'm sure some of them will actually hold out. Just like those people still buying vinyl because Spotify is 'not art' or whatever.

Have fun all; meanwhile, I built 2 apps this weekend purely for myself. That would've taken me weeks a few years ago.

nickysielickian hour ago

> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow, surface-level understanding than to deeply contemplating the problem.

notahackeran hour ago

Have to admit I'm really struggling with the idea that the Wright brothers didn't do much thinking because they were self-taught, never mind the idea that figuring out aeronautics by reading every publication they could get their hands on, intuiting wing warping, and experimenting by hand-building mechanical devices looks much like asking Claude to make a CRUD app...

nickysielicki30 minutes ago

That's not what I'm saying. My point is that expertise, as in credentials, institutional knowledge, and accepted wisdom, was actively harmful to solving flight. The Wrights succeeded because they built a tool that made iteration cheap (the wind tunnel), tested 200 wing shapes without deference to what the existing literature said should work (Lilienthal's tables were wrong, and everyone with "expertise" accepted them uncritically), and closed the loop with reality by actually flying.

That's the same approach as vibe coding. Not "asking Claude to make a CRUD app," but using it to cheaply explore solution spaces that an expert's priors would tell you aren't worth trying. The wind tunnel didn't do the thinking for the Wrights; it just made thinking and iterating cheap. That's what LLMs do for code.

The blog post's argument is that deep immersion is what produces original ideas. But what history shows is that deeply immersed experts are often totally wrong, and the outsiders who iterate cheaply and empirically take the prize. The irony is that LLM haters think LLM use falls victim to the Einstellung effect [1]. The exact opposite is true: LLMs make it cheap to iterate on solutions we previously dismissed as suboptimal or broken, which makes it possible to discover simpler, more efficient methods. It is humans who uniquely fall victim to the Einstellung effect, not LLMs.

[1]: https://en.wikipedia.org/wiki/Einstellung_effect

[deleted]an hour ago

tonymetan hour ago

When apps were expensive to build, developers at least had the excuse that they were too busy to build something appealing. Now they can cope by pretending to be artisanal hand-built software engineers, and still fail at making anything appealing.

If you want to build something beautiful, nothing is stopping you except your own cynicism.

"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.

AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity, performance, security, and the areas that I am embarrassed to have overlooked on previous projects.

himata4113an hour ago

I've actually run into a few blogs that were incredibly shallow while sounding profound.

I think when people use AI to, for example, compare Docker to k8s without ever having used k8s, that's how you get horrible articles that sound great but, to anyone with experience of both, are complete nonsense.

hhsueyan hour ago

Another clickbait title produced by a human. Most of your premises could easily be countered; every comment here is essentially an example.

add-sub-mul-divan hour ago

Also sounds likely that it's the mediocre who gravitate to AI in the first place.

[deleted]30 minutes ago

clintan hour ago

Yet another boring, repetitive, unhelpful article about why AI is bad. Did the 385th iteration of this need to be written by yet another person? Why did this person think it was novel or relevant to write? Did they think it espouses some kind of unique point of view?

sarmasamosarma25 minutes ago

[dead]

ghostclaw-csoan hour ago

[dead]

ai4prezidentan hour ago

[dead]

wagwangan hour ago

Isn't this just flat-out untrue, since bots can pass Turing tests?

wsowensan hour ago

People are often boring in conversation. Therefore, an AI agent doesn't need to be interesting to seem human enough in a Turing test.

saalweachteran hour ago

"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."

recursivean hour ago

The Turing test isn't designed to select for the most interesting individuals. Some people are less interesting than other people. If the machine acts like a boring human, it can make you more boring and still pass the test.

hyperhelloan hour ago

The bot doesn’t pass, the human fails.

stuckinhellan hour ago

I mean, can't you just… prompt engineer your way out of this? A writer friend of mine literally just vibes with the model differently and gets genuinely interesting output.

guywithahatan hour ago

I think the point of the article is that on sites like HN, people used to need domain expertise to answer questions. Their answers were based on unique experience, and even if they maybe weren't optimal, they were unique. Now a lot of people just check ChatGPT and answer the question without actually knowing what they're talking about. Worse, the bar to submit something to Show HN has gotten lower, and people are vibe coding projects in an afternoon that nobody wants or cares about. I don't think the article is really about writing style.

elliotbnvlan hour ago

I was onboard with the author until this paragraph:

> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

The author comes off as dismissive of, rather than open-minded about, the potential benefits of the interactions between users and LLMs. This is a degree of myopia that causes me to retroactively question the rest of his conclusions.

There's an argument to be made that rubber-ducking, just having a mirror to help you navigate your thoughts, is ultimately more productive and produces more useful thinking than operating in a vacuum. LLMs are particularly good at telling you when your own ideas are unoriginal, because they are good at doing research (and also have the median of ideas already baked into their weights).

The author also strawmans usage of LLMs:

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Who says you aren't spending time thinking about a problem with LLMs? The same users that don't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.

I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.

nickjjan hour ago

Look at the world Google is molding.

Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.

Traffic to his site was fine to sustain his business this whole time, up until about 2-3 years ago, when AI took over search results and his site stopped ranking.

He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays, and he went from being ghosted to being back at the top of the first page of results.

He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.

NOTE: I had never seen him in my YouTube feed until the other day, but his story resonated a lot with me because I've had a technical blog for 11 years and was able to sustain an online business for a decade, until the last 2 years or so, when traffic to my site nosedived. That took a very satisfying lifestyle business to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.

Search engines want you to remove your personal take on things and write in a very machine-oriented, keyword-stuffed way.

arduanika7 minutes ago

An earlier wave of tech did it a bit, and so there's no chance that this new wave does it a lot.
