Repeating patterns, time well saved
I've seen some pretty wild agent prompts, long and complex, with equally wild claims of great success. That doesn't sit well with my experience when the goal is reliable performance in a professional setting. LLMs make a best guess about what to do or say next; if you give them a wide range of options or a loosely defined complex task, there is typically a fair amount of guessing between uncertain options, leading to unexpected results. For exploring new methods or chasing the dopamine hit, that works great. What I'm focused on, though, is getting things done in a repeatable manner.
One of the most effective ways to reliably save time with LLMs is applying a well-defined change across code that repeats the same pattern. The objective is not to redesign behavior, but to execute a mechanical transformation consistently and efficiently.
This kind of work was traditionally handled with search-and-replace, regular expressions, or custom scripts. LLMs now handle much of this with less upfront effort, particularly when minor variations exist across files.
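For comparison, the traditional route often looks something like this regex-based sketch (the names here are illustrative, not from any real codebase):

```javascript
// A regex-based codemod: rename a prop across source text.
// Illustrative names only — not from any real codebase.
const renameProp = (source) => source.replace(/\btitleText=/g, "label=");

// Fine for the uniform case:
renameProp('<Card titleText="Hi" />'); // → '<Card label="Hi" />'
// But every minor variation — aliased props, different attribute order,
// attributes split across lines — needs yet another pattern, which is
// exactly where handwritten scripts start to cost more than they save.
```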
Example: normalizing links in blog components
Consider a blog where components are written in MDX or JSX. Internal links are created using a framework <Link> component, often with inline customization via an sx prop:
<Link
  href="/about"
  sx={{
    color: "primary",
    textDecoration: "underline",
    "&:hover": { color: "secondary" },
  }}
>
  About
</Link>
Over time, these inline styles spread across the blog. A shared component—such as <BlogLink>—lets you centralize styling, accessibility decisions, and future behavior changes, instead of re-implementing link styling component-by-component.
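As a sketch, such a shared component might look like the following — assuming a Theme UI–style Link with an sx prop, as in the snippet above; adjust the import to whatever your framework provides:

```jsx
// BlogLink.jsx — one place to change link styling and behavior.
import { Link } from "theme-ui"; // assumption: swap in your framework's Link

export const BlogLink = ({ href, children, ...props }) => (
  <Link
    href={href}
    sx={{
      color: "primary",
      textDecoration: "underline",
      "&:hover": { color: "secondary" },
    }}
    {...props}
  >
    {children}
  </Link>
);
```

Call sites then shrink to <BlogLink href="/about">About</BlogLink>, and any future styling or accessibility change happens in one file.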
Using an LLM to apply the pattern
- Constrain scope within the workspace. Limit the change to a specific package, folder, or set of files so review stays manageable and the model has less room to wander.
- Describe the requirement (or show examples when needed). If the change is small, a clear description is usually enough. For more complex changes, provide before/after samples or have the model modify one or two instances to establish the pattern.
- Start with a single location. Ask the model to make the update in one place to observe how it handles that variation. Adjust your prompt or example until the change is correct and minimal.
- Apply to the rest, explicitly. Once the pattern is validated, ask it to update the remaining codebase, or provide a concrete list of files to modify.
- Review each diff. Go through each change individually, as the model will sometimes “help” by making unsolicited “improvements” beyond what you requested.
Conclusion
LLMs are most effective as force multipliers for repetition. Once a pattern is validated, they can apply it across a constrained surface area quickly, while you retain control through tight scope and careful review.