Like many people, I have found AI agents to be increasingly useful. Some of the biggest power-ups I have found come from giving the agent a good environment. It works best when it:
- has access to an LSP so it can quickly check type errors (or any error, really)
- can quickly run lint checks
- has an existing test suite it can incrementally add new tests to
  - in particular, I have found integration tests the most useful: they let the agent go wild without getting bogged down in a particular implementation, which it should not be scared to change
  - web dev tip: give it a browser it can use in the integration tests
- has a working formatter
The beautiful thing is that most agents auto-run the tests, the code formatter, and the linters after every change.
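For a TypeScript project, that loop can be wired up through npm scripts so the agent has one obvious command for each check. This is an illustrative sketch, not the setup from my app, and it assumes these tools are installed:

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "lint": "eslint .",
    "format": "prettier --write .",
    "test": "playwright test"
  }
}
```

With named scripts like these, the agent does not have to guess at tool flags; it just runs `npm run typecheck` (and friends) after each edit.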
My two cents above are based purely on my own experience.
So I have been "vibe coding" an app to help me keep track of my running niggles/injuries. I haven't truly been 100% vibe coding, though... I still look at the code it writes, and based on that I either:
- completely undo the change and ask it to have another go, maybe slightly tweaking my prompt, or
- give it a follow-up prompt.
The agents really came alive when I made them write some integration tests using an actual browser, e.g.:

```typescript
import { test, expect } from "@playwright/test";

test("homepage shows a just-added diagnosis", async ({ page }) => {
  await page.goto("/");
  await expect(page).toHaveTitle("Current Diagnoses");
  // locate by visible text rather than a brittle CSS selector
  await expect(page.getByText("diag 1")).toBeVisible();
});
```
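For `page.goto("/")` to resolve, Playwright also needs a `baseURL` and a running app server. A minimal config sketch might look like the following; the dev-server command and port are assumptions for illustration, not the actual values from my app:

```typescript
// playwright.config.ts -- minimal sketch; adjust the command/port to your app
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    // lets tests call page.goto("/") with a relative path
    baseURL: "http://localhost:3000",
  },
  webServer: {
    command: "npm run dev",           // hypothetical dev-server command
    url: "http://localhost:3000",
    reuseExistingServer: true,        // don't restart if it's already up
  },
});
```

The `webServer` block is what makes the tests agent-friendly: the agent can run `npx playwright test` cold and Playwright starts (or reuses) the app for it.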