Unpopular Opinion: ChatGPT is no substitute for learning core coding concepts

When ChatGPT churns out boilerplate code and ready-made snippets for your projects, it's easy to fall into the trap of thinking "I am building this" or "I am more productive now". But in the greater scheme of things, "ChatGPT knows" is still no different from "Google knows", "Wikipedia knows", or "Stack Overflow knows".

At the end of the day, we have just replaced one kind of "reference monster" with another that feels somewhat interactive and intimate, is good at searching and filtering, and gives all information through one interface.

But eventually, you must still learn the technical concepts the hard, old-school way. AI is no substitute for that, and it won't be even if AGI ever arrives.

In some ways, an LLM is more deceptive than Google or Wikipedia because it gives you the false sense that you've achieved something or know something when, in the strict technical sense, you haven't.

8 points | by pyeri 7 hours ago

3 comments

  • pols45 6 hours ago
    It's not comparable to the past. The difference is that you can go back and forth with ChatGPT. And I push the probationers I supervise to do that, because people are very different.

    Some are curious, some are anxious about getting things wrong, some are doing the bare minimum, and yeah, some are full of themselves. Each one exploits the tool differently. But I get them to ask it about these things too ("are there things I am missing, because I want to get this right" / "why is my boss saying I am full of myself" / "my boss told me he doesn't have time to teach me about X, but if I'm curious that's something I should look into", etc.), which you couldn't do with them in the past.

  • rzzzwilson 7 hours ago
    Agree.

    - retired programmer, student of Computer Science, 40 years experience

  • davydm 7 hours ago
    In addition, if you don't know the concepts, you won't catch ChatGPT out when it hallucinates bugs (often subtle ones) into the code.

    Personally, I stay away from AI-generated code because I don't want to debug someone else's code when I could just write my own. In my experience, AI-generated solutions fall into only a handful of categories:

    1. boiler-plating

    IMO, this should be done by bespoke tools crafted by someone who understands the domain - that's how it's been done for ages. If you can get accurate boilerplating from AI generation and there's no tooling to help, go for it - but you'll miss out on understanding what that boilerplate does if you don't already know.

    2. trivial solutions

    I.e., solutions that are obvious, like "write me a function which determines if a number is even". Sure, this prompt comes from someone who doesn't understand their own language and tooling, so it's already coming from a low bar. But the solution here is obvious, doesn't require a generated function, and is literally built into the language.
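    To illustrate the point about trivial solutions: in Python, evenness is a one-expression check using the built-in modulo operator, so generating a whole function for it adds nothing (a minimal sketch, not from the thread):

```python
# Checking evenness needs no generated helper; the modulo
# operator is built into the language.
def is_even(n: int) -> bool:
    return n % 2 == 0

print(is_even(4))  # True
print(is_even(7))  # False
```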

    3. completely wrong solutions

    The amount of time I've wasted reading AI synopses when googling a technical issue... egad. They've all been a waste. I spent half an hour with ChatGPT trying to resolve an issue with webpack, and despite me telling it several times that I was on webpack 4, it kept offering incompatible solutions that only work in webpack 5. This is just one example, but the number of times I've looked at AI output and just gone "wtf, no" is not insignificant. These may sometimes be less harmful than the next case, because even people with low skill in the area can sometimes spot the problems, or the solutions simply don't compile, i.e. they break before they have a chance to wreak havoc.

    4. subtly-wrong solutions

    This was my experience when people presented to me, saying "look what this model does". The problem is that others in the room didn't spot the bugs until I pointed them out, so I'm not sure whether that's due to overconfidence in the output or just a skill issue. Subtly wrong solutions, without someone to correct them, are the worst, because they will appear to work most of the time and only present weird bugs sporadically. One output I saw involved doing something based on the day of the week, and it simply skipped Sundays, for no good reason (and counter to the requirements).
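    A hypothetical reconstruction of that kind of subtle day-of-week bug (the original code isn't in the thread; the function names and the `weekday() != 6` filter here are assumptions for illustration): Python's `date.weekday()` maps Monday to 0 and Sunday to 6, so one stray comparison silently drops Sundays while passing every other test.

```python
from datetime import date, timedelta

def buggy_days(start: date, n: int) -> list[date]:
    # Subtly wrong: the `weekday() != 6` filter silently excludes
    # Sundays, which was counter to the requirements.
    return [start + timedelta(days=i)
            for i in range(n)
            if (start + timedelta(days=i)).weekday() != 6]

def correct_days(start: date, n: int) -> list[date]:
    # What the requirements asked for: every day, Sundays included.
    return [start + timedelta(days=i) for i in range(n)]

monday = date(2024, 1, 1)               # 2024-01-01 is a Monday
print(len(correct_days(monday, 7)))     # 7
print(len(buggy_days(monday, 7)))       # 6 - Sunday is silently missing
```

    Runs that never span a Sunday behave identically in both versions, which is exactly why this class of bug only surfaces sporadically.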

    None of these outcomes is useful to me, and in my experience they eclipse, by far, the maybe 1% of useful outputs. My conclusion is that it's more time-efficient to do the work myself, because I understand the long tail of debugging and extension and all the things the AI subtly cocks up.