Hacker News

JohnMakin
Ask HN: How should I hire a Junior Developer in today's market?

TL;DR: My question to you, HN, is: how should I hire a junior developer right now? This post is inspired by a flagged post that briefly appeared at the top of HN, which speculated that we had seen the death of the junior developer (for reasons it never specified) and then pitched a shitty product that was essentially a thin wrapper around ChatGPT.

I would argue that junior developers are strengthened by the underlying technology, but maybe not by the way it is currently packaged in AI coding assistants that promise far more expertise than they actually contain. A large part of my day is spent convincing people who could probably be helped by responsible use of AI coding tools to avoid them, lest they infect my codebase with ill-advised changes they don't fully understand.

What would you have me do, almighty market? We need to build coding assistants that explain why and how they came to a conclusion, rather than feeding people who do not know what they are doing answers they cannot evaluate as acceptable or safe. It is a looming crisis. I don't want to find myself, in five years, explaining to an army of supposedly senior engineers how basic things work because some model version lost sight of them a few years earlier. My question to you, HN, is: how should I hire a junior developer right now?


rozenmd · 4 days ago

Have them perform a task similar to what you want to hire them for.

In my case, I had them build a feature in a React app. One candidate solved it faster than I could, while explaining their approach and trade-offs live, so we hired them.

solardev · 4 days ago

Maybe a contrary opinion, but who cares if they use AI? We shouldn't expect juniors to use a different / artificially handicapped workflow than the rest of us.

Test them on the actual work, whatever it is. Make multiple example questions or one big take-home project that's similar to the real work they'd be doing, and let them solve it organically, as they would on the job. If they pass it with AI, cool, they managed to use it effectively. If they fail it with AI, that's on them. You don't need to go out of your way to purposely confuse the AI OR the human. Working together is going to be the new norm, and the AI is just another assistant or teammate.

IMO you're not testing "is this person super smart and able to recite LeetCode from memory without any internet access or reference materials?" You're testing "is this person going to work well on our team and contribute meaningfully?" You wouldn't disrespect your current coworkers by asking "how much AI did you use in this PR?" if it otherwise works, passes tests, and seems readable and maintainable, etc. (and if it doesn't, that's a code quality concern, not a question of human vs AI provenance).

The juniors should have the same ability to use the tooling available to the rest of us. If they choose to never understand AI output more deeply, well, the same could be said of any third-party lib or Stack Overflow post. It's usually not a deal-breaker in run-of-the-mill business apps anyway. In fact, there's an argument to be made for adopting good-enough popular solutions over reinventing the wheel and adding more tech debt for minimal gains.

sircastor · 3 days ago

> We shouldn't expect juniors to use a different / artificially handicapped workflow than the rest of us.

When I was in film school, our first-year project was required to be shot on film. This was 2001: digital cameras were newish, but the school had several that students could use.

The reason they required film was for the students' benefit. Film is expensive and unforgiving: you need lots of light, and you have to cut and tape it together by hand. It makes you slow down and think about what you're doing and why.

AI is a tool, but developers need to understand their work without it. I think it dangerously enables development without thinking.

jjice · 3 days ago

I've seen this kind of take get a lot of hate online, but I do agree (ignoring any of the parent post's specifics). I think using AI is fine, but if you get output and you don't have the ability to validate it or look at it critically, you're just going to produce code with subtle bugs.

I worked with a junior at my last job (great guy) who really got into ChatGPT when it came out. There were _multiple_ times I hopped on a call to debug with him, or reviewed a PR of his, and had questions about very clear bugs. They were always AI-generated, but unvalidated.

One specific case that stands out was an .htaccess file (which sucks, to be fair). He asked me why his changes didn't work the way he wanted, and when I looked at the file, it wasn't even close to doing what was intended. I asked him where he got the basis for these changes, and he pulled up a ChatGPT prompt.

I understand that an LLM likely has far less .htaccess knowledge than JavaScript knowledge, but the mindset of taking what it says and pasting it in without assessment isn't a very productive one. The same goes for Stack Overflow, except there you can at least get some feedback from other people via votes and comments.

He also had a lot of imposter syndrome, and I can't imagine that relying on LLMs helped much with that either.

smeej · 3 days ago

This reminds me of the arguments against teaching kids to use calculators in school, at least at the elementary ages. Yes, it's now probably safe to assume they will have one in their pockets their whole lives, but understanding the principles of what's actually happening is still important for the student's grasp of the subject.

sky2224 · 3 days ago

I know you're on the side of discouraging AI tools (at least that's how I'm interpreting your response), but I just want to highlight that I think the calculator argument is different from the AI one.

Yes, they're both new tools, but the calculator argument didn't come with the caveat that 2+2 will sometimes not output 4. LLM tools do have that issue, and it's important that junior devs don't overlook it and are able to function on their own.

smeej · 3 days ago

I think I'm on the side of "use tools to speed up your work, once you actually know how the work ought to be done."

Calculators spit out wrong answers all the time; it's just due to user error, like fat-fingering an operation button or adding a zero somewhere. It's easy to tell when that happens if you understand how it should work, because you're not just using the output blindly.

Same idea with AI tools. It's important that you already know what the output should be, at least well enough to notice when it's wrong. But you probably don't need to type out every line of code and deal with your typos and whatever.

csbbbb · a day ago

A more constrained test might be justifiable in an interview, but it has to be clear that it still doesn't directly translate to on-the-job skills.

This might be more of a question about what you expect from a "junior developer" role; are they building skills at a cost to the business, like a student or intern, or doing the best they can do to justify a paycheck?

muzani · 4 days ago

I just treat ChatGPT more like a framework than anything. Don't interview for problems that can be solved with a framework. You wouldn't test a web dev by asking them to set up a blog.

Red, green, refactor. Make sure ChatGPT fails the test. Make sure a human can pass the test. Then adjust accordingly. If you can't do this, then you should pay for a senior who can.

euvin · 4 days ago

I would think the answer is to peer into the applicant's thought process through conversation and decide if they're capable enough for the job, LLM or not.

Are you looking for specific criteria on how to judge junior applicants?

JohnMakin (OP) · 4 days ago

The interview process has famously failed to perform here, even in the pre-LLM, remote-interview age.

Yea, I am asking for that.

austin-cheney · 4 days ago

In big corporate software, most developers are beginners; it's just that some have remained beginners for 8-12 years. That is what you want to avoid.

Expect non-juniors to be able to write software on their own. This is a colossal ask, because most developers cannot do it. Instead they bullshit with tech stacks and configurations and expect open-source tools to do their jobs for them. When real problems occur, they are worthless.

So, expect non-juniors to actually write software. For junior developers, you are looking for potential: the potential to independently write software in the future.

I would look for people who can read, write, and follow instructions. Be extremely critical about this. Can they figure things out, or do they require tremendous hand-holding? It's not about what they already know, but what they can do and what they can figure out.

Hopefully software will figure this out. It's why I stopped developing. I got tired of working with people who are grossly incompetent, fully reliant on tech stacks, and extremely superficial and insecure: a bunch of expert developers inventing unnecessary complexity to justify their many years of inexperience in this line of work.

savorypiano · 3 days ago

Is this true?

I've only had one software job, but there was nobody there like what you describe, not even the real juniors who had only had one software job.

austin-cheney · 2 days ago

I did that corporate software thing for about 15 years. In that time, the people I worked with were generally Java developers or JavaScript developers. Here are the patterns I saw repeated across various employers:

* QAs were more likely female and developers were more likely male. The QAs, irrespective of sex identity, almost always demonstrated superior communication skills; the difference wasn't even close. I have worked with developers who could barely write two sentences, but the good QAs could draft a precise set of instructions at a moment's notice.

* About 15% of the developers could really develop software with confidence. The rest wrote glue code, put text on screen, handled administrative responsibilities, and so forth. The most important thing for the less confident developers was retaining employment, advancing, or just generally seeking admiration. Delivering products or solving real problems is less interesting because it requires more work with less immediate gratification. These less confident developers are the ones most likely to frequently jump employers, because that artificially drives up wages, resets their quest for admiration, and prevents exposure of their inability to write original software.

* 10x developers are a real thing. These are people who are 10x or more as productive as their peers. I have been a 10x developer. These people are not necessarily smarter or even better at writing software. The thing that stands out about them is that they ask better questions and do things that aren't asked for, such as writing personal tools to automate away manual effort.

* Not everybody is capable of performing software architecture. Software architecture is a test of organizational competence. Higher intelligence is absolutely beneficial for this, but far less so than superior conscientiousness. Conscientiousness has a negative correlation with intelligence of around -0.27, which means the smartest guy in the room is most likely not the right person for this job.

* Almost everybody has opinions about what to do, but almost nobody measures anything. The mere idea of measuring things scares the shit out of that 85% of less competent developers. If you want to propose stellar ideas, have stellar evidence, but even then be prepared for rejection, merely because 85% of your peers are cowards who cannot deviate from their tiny capability set.
