LLMs killing comp sci.

my response to: https://x.com/KevinNaughtonJr/status/2005127715251446202?s=20

i would like to present my thoughts here.

i’ve started using llms for coding fairly recently, maybe about 2 months back. it’s a great tool, that’s for sure, but at times i feel kind of dumb for not knowing the details of the things being used or the bug that was “bugging” me.

before that, i was using it as kind of a mentor: asking “dumb” questions, going over topics again, then again in more detail, telling it to explain without worrying about its own word limits (i still do), and reading the sources it fetched from.

i started using chatgpt 7-8 months after it launched. i wasn’t much aware of it at the time.

earlier, i would read the error logs, google them, and read articles about them. sometimes the things were not directly related, and it would require reading up to ~5 articles, going through discussions and long threads on stackoverflow or superuser, spending days, or heck, even a whole week, and then connecting the dots. (the longest discussion i’ve read so far was ~900 comments)

i still read error logs, but instead of pasting only the meaningful part of the error into google, i paste the whole error with context into the llm (i’m using GH copilot)

i’ve especially hit walls when dealing with non-mainstream issues related to kernels, swap files, drivers, operating systems, etc., where not much is being asked and answered. (remember when linux used to break quite often?)

now that the llms are being trained more and more, it’s becoming easy to quickly debug and move on, which is a nice thing; saving time and putting it where it actually matters (hopefully)

the community i was learning from advised me not to use llms while in the learning phase, as it would hamper the “figuring out things on your own” aspect. i can see that now. banging my head over why it’s not working, facing a wall and not instantly giving up, breaking down the issue and tackling it one piece at a time, building it back up and solving it. that would not have been possible if i had outsourced my thinking to the llm.

it built perseverance in me, and without that i might have given up on many things that seemed futile at first glance. because when the system breaks and llms can’t solve it, you have to step in and take matters into your own hands.

there are still things i’m trying to figure out about how to use these agents in my workflow, gaslighting them into behaving like a senior programmer.

the current best i’m doing is instructing it to explain its approach from first principles and learning along the way, rather than just auto-completing my way out with a weak understanding of stuff.