Tips and Advice for AI Developers
June 25, 2025
Using LLMs (Large Language Models such as ChatGPT, Claude, and Gemini) during software development isn’t just becoming standard; it’s already here. Automated code generation (codegen) is quickly becoming a core part of modern workflows, and like any new tool it brings a set of quirks that developers need to manage effectively.
At appsquire, we’ve invested time into integrating LLMs into every phase of our development cycle — from researching technologies and planning architecture, to hands-on coding, bug fixing, and deployment. Along the way, we’ve developed a set of habits and reflexes that help us get the most out of these tools. We’re sharing some of that here, in the hope it helps streamline your own journey with agent-powered workflows.
When debugging, code agents like Cursor, Claude Code, and Zencoder tend to assume that solving a problem means adding more code. They stack edits on top of one another instead of removing unused components, and over time this leads to technical debt that can resurface later.
Worse, half-finished or abandoned fixes often end up being the source of new issues. But since the agent just added them, it tends to keep moving forward — attempting new solutions rather than cleaning up its own trail.
Advice #1: Monitor changes and commit often
Revertability is essential. You might be tempted to let the agent undo its own mess if you haven’t committed recently. That can work — but often leads to incomplete reversions or the wrong code getting deleted.
Agents with terminal access can break things twice as fast. Manual environment resets aren’t fun, especially if you’re doing them every few minutes. Paying attention to changes, and reading the explanations provided, helps avoid the feeling of being dropped into an unstable or unknown state every time something breaks.
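The habit above can be sketched with plain git. This is a minimal, self-contained illustration in a throwaway repo; the branch, file, and commit names are placeholders, not a prescription:

```shell
set -e
# Work in a temporary repo so the sketch is self-contained
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Keep agent experiments off your main branch
git checkout -q -b agent-session

# Snapshot before handing control to the agent
echo "original" > app.txt
git add -A
git commit -q -m "checkpoint: before agent fix"

# Suppose the agent's fix made things worse...
echo "broken by agent" > app.txt
git add -A
git commit -q -m "agent attempt 1"

# ...rolling back is one command, instead of asking
# the agent to undo its own edits
git reset -q --hard HEAD~1
cat app.txt   # back to the pre-agent state
```

The point is cadence: a checkpoint commit after every reviewed change keeps any misstep one `git reset` away from undone.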
Advice #2: Be specific when prompting
You can just say, “I have this error: {insert error}... fix it,” and surprisingly, it works more often than not. But it can also lock the model into a loop of trial-and-error.
Instead, offer your best guess at the issue and how to approach it:
“I have this error: {insert error}. I think it’s related to {insert suspected area}. Can you try {insert possible fix}?”
If everything is a mess, passing the code (with context) to a different model — ChatGPT works well here — can give you a second opinion. You can then format the response into a prompt your main agent can work with.
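The pattern above can be captured in a small helper. Everything here (the function name, the f-string shape) is just an illustrative sketch, not any particular agent's API:

```python
def debug_prompt(error: str, suspected_area: str, possible_fix: str) -> str:
    """Build a specific debugging prompt instead of a bare 'fix it'."""
    return (
        f"I have this error: {error}. "
        f"I think it's related to {suspected_area}. "
        f"Can you try {possible_fix}?"
    )

# Hypothetical usage with a made-up error and suspected cause
prompt = debug_prompt(
    error="TypeError: 'NoneType' object is not iterable",
    suspected_area="the API response parsing in fetch_users()",
    possible_fix="guarding against an empty response before iterating",
)
print(prompt)
```

Even a wrong guess in `suspected_area` is useful: it gives the model a concrete hypothesis to confirm or rule out, rather than a blank slate.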
In development, it’s common to encounter errors that seem to appear without reason. When presented with these errors, LLMs usually respond in one of two ways: they either solve the problem quickly (if it’s a well-known issue) or they get stuck repeating the same few generic fixes, none of which work. The latter often results in fragmented, unfinished solutions that accumulate into technical debt.
Advice #1: Ask for multiple approaches
As a preventative measure, request multiple solution paths from agents that support this. LLMs will typically outline the pros and cons of each, giving you control over which path to follow. This helps avoid dead ends and minimizes the need for cleanup afterwards.
Advice #2: Ask for an explanation
If the agent is already looping, avoid repeating vague “fix this” prompts. Instead, ask for an explanation of the error and possible causes. This shifts the agent’s focus from patching symptoms to analyzing the broader context — often leading to more thoughtful and effective solutions.
These systems are often trained on older datasets. Even with internet access, dramatic changes in tools or frameworks can cause them to mix outdated practices with new ones — leading to solutions that almost work, but not quite.
Advice: Provide explicit documentation
Most code agents support context injection or custom documentation. Supplying the latest resources doesn’t fully prevent confusion, but it helps reduce errors when you specify exactly which version or pattern should be trusted.
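One way to do this is to pin the version and paste the authoritative snippet directly into the prompt. The sketch below is hypothetical (the helper name and prompt wording are ours); the React detail it cites — `createRoot` replacing `ReactDOM.render` in React 18 — is real:

```python
# Excerpt you might paste from the docs you actually trust
DOCS_EXCERPT = """\
(from the React 18 upgrade guide)
createRoot(container).render(<App />)  # replaces ReactDOM.render
"""

def versioned_prompt(task: str, library: str, version: str, docs: str) -> str:
    """Tell the model exactly which version and which docs to trust."""
    return (
        f"We are on {library} {version}. Trust ONLY the documentation below; "
        f"ignore patterns from older versions.\n\n{docs}\nTask: {task}"
    )

p = versioned_prompt("migrate our entry point", "React", "18", DOCS_EXCERPT)
print(p)
```

Being explicit about what to ignore matters as much as what to include; otherwise the model may blend the old and new patterns.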
Sometimes too much context overwhelms an agent. Other times, it latches onto the wrong piece of information and forgets what you actually wanted to highlight.
Advice: Provide useful context
Be intentional. Don’t drop in full logs or files unless they’re directly relevant. This becomes easier as you get more familiar with your stack — you’ll be able to focus the prompt and tailor context more effectively.
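Trimming context before pasting it can be as simple as keeping only the lines around the failure. A minimal sketch, assuming your logs mark failures with keywords like `ERROR` or `Traceback`:

```python
def relevant_log_lines(log: str,
                       keywords=("ERROR", "Traceback", "Exception"),
                       context=2) -> str:
    """Keep only lines near a failure, with a little surrounding context,
    instead of pasting the whole log into the agent's prompt."""
    lines = log.splitlines()
    keep = set()
    for i, line in enumerate(lines):
        if any(k in line for k in keywords):
            # include a few lines before and after each hit
            keep.update(range(max(0, i - context),
                              min(len(lines), i + context + 1)))
    return "\n".join(lines[i] for i in sorted(keep))

# Hypothetical log: 50 routine lines, one failure near the end
log = "\n".join(f"INFO step {i}" for i in range(50))
log += "\nERROR: db timeout\nINFO shutting down"
trimmed = relevant_log_lines(log)
print(trimmed)
```

The agent now sees four lines instead of fifty-two, and all of them matter.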
LLMs build reasoning step by step, and in the process, key details can get lost. Code agents in particular tend to re-generate functions, styles, or utilities if you don’t remind them of existing project structure. This can lead to disorganized, duplicated, or conflicting code.
Even non-codegen tools can go off-track — referencing incorrect OS details, outdated frameworks, or incompatible libraries — if they lose sight of your current setup.
Advice #1: Add context and tell it to use it
For example, if you use a global styling file and want a new page created, mention that file explicitly and instruct the agent to place all styles there. These systems can be forgetful — repeating important rules saves you hours of rework.
Advice #2: Implement memory files
Most modern agents support some form of persistent memory. Cursor, for instance, has a “Rules” feature that lets you define project-wide behaviours and reminders. Other platforms offer similar systems. Using these keeps your codebase cleaner and your tooling more consistent.
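For instance, a project rules file might persist reminders like the following. The exact filename and syntax depend on your agent (Cursor keeps these in its Rules feature), so treat this as a hypothetical sketch of the kind of content worth writing down:

```
# Project rules (hypothetical example)
- All styling lives in src/styles/global.css; never create per-page style files.
- We are on React 18; use createRoot, not ReactDOM.render.
- Prefer editing existing utilities in src/utils/ over adding new helpers.
- When a fix fails, remove the abandoned attempt before trying a new approach.
```

Rules like these turn the reminders you would otherwise repeat in every prompt into standing instructions.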
As language models continue to reshape development, best practices will evolve alongside them. It’s up to us — the developers — to guide these tools effectively and make the most of what they offer.
In the coming years, we expect these systems to handle much larger contexts, integrate more seamlessly with live documentation, and reduce the need for constant repetition or correction. Their impact won’t stop at software — they’ll ripple into every field where reasoning and automation matter.
At appsquire, we’re embracing this shift and evolving with the future of agentic development. Keep adapting, keep sharpening your knowledge, and keep building.
Do you need help with your project? Get in touch — we can help you build smarter, faster, and with less technical debt.