Using LLMs effectively.

LLMs have limits. I have used Claude 3 (via Claude Pro) and GitHub Copilot, and I can now say with confidence that LLMs have a ceiling and won't be useful as a sole tool for a very long time.

So what do we do in the meantime? Treat these tools as what they are: tools. One limit of this tool is context. Chatbots are still very good at offering solutions to tough problems in the code, but there is only so much value they can pack into a single response. So if one part of the output is really good and seems thoughtful, there is a high chance that another part of the file will have flaws.

Furthermore, don't just hand these LLMs a big file and say "here, fix it." That is not a good strategy.

Put plainly, what I have noticed is that an LLM has a ceiling: a limit on how good a project can get if you just blindly apply the LLM's suggestions.

So, if you're working in a very well-maintained codebase with lots of complexity (necessary complexity), using these LLMs blindly or lazily will drag your repo down toward the maximum potential of that LLM, which isn't very high.


  • Code should have strict, well-defined responsibilities. Using LLMs can get the programmer into the bad habit of thinking "if it works, it can't be too bad." This thinking ignores how good code should be written: with responsible, deliberate logic.


  • The best use is for the scaffolding of the app. This scaffolding code should be treated as only a little better than the output of a macro or, say, a BASH script.
  • The other use is for the LLM to offer suggestions. These suggestions should be reviewed and added line by line.
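To make the first bullet concrete, here is a sketch of the macro-grade scaffolding an LLM is reliably good at. Everything in it (the `myapp` name, the directory layout, the file contents) is hypothetical; the point is that the output is boilerplate you could just as easily have generated from a shell script:

```shell
#!/bin/sh
# Hypothetical scaffold: the kind of boilerplate an LLM produces well.
# Project name and layout are illustrative, not prescriptive.
set -eu

APP=myapp
mkdir -p "$APP/src" "$APP/tests"

# A trivial entry point; real logic still has to be written by you.
cat > "$APP/src/main.py" <<'EOF'
def main():
    print("hello")

if __name__ == "__main__":
    main()
EOF

cat > "$APP/README.md" <<'EOF'
# myapp
Scaffolded starter project.
EOF

echo "scaffolded $APP"
```

Treat whatever the LLM gives you at this stage the same way you would treat this script's output: a starting layout to review and own, not finished code.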

We get paid to do work that hasn't been done before (for the most part), so we cannot expect these LLMs, trained on data that already exists, to autopilot our way to success in our coding ventures.
