“Why should I use AI?”
Ask our AI pundits this question, and you’ll get the same answer (with drastically different inflections):
“You don’t have a choice.”
They’re not wrong. But they’re also not entirely right. We have a choice. It’s just less about why and more about how.
Every generation of programmers has lived through one of tech's tectonic shifts. It comes with the territory: stick around long enough, and you'll need to constantly evolve, add skills, tweak workflows, and retool your toolkit.
Prior to 2022, Natural Language Processing (NLP) and Machine Learning (ML) hinted at what was possible in machine-assisted software development. But the cost of learning and using those tools was prohibitive enough that smaller teams, indie devs, and really any developer not working at a Big Tech company were unlikely to interact with them regularly.
Then ChatGPT hit: the first foreshock of another earthquake in tech. Financial barriers to entry crumbled. APIs made LLMs cheap and easy to access. And yet, using AI in programming still felt like something some devs chose to do and others didn't, and that was just fine.
But in 2025, it started to feel different.
Throughout that year, a creeping feeling spread through the developer community that things weren’t changing, but had already, irrevocably, changed. We started to pick up on what had been hiding between the lines of the not-so-gentle threat: “If you’re not using AI, you’ll be left behind.” Pressure mounted to upskill, reskill, comply, adopt. By the end of the year, devs were seeing that they could create actual functioning code with LLM agents, not just use AI for helpful tab completion.
2025 also brought a proliferation of SaaS tools promising to let non-programmers build software with the help of an LLM. Developers now have to contend with that reality.
At Tighten, we've been using LLMs in our development workflows and in client work for years, but nothing made it clearer that LLMs were here to stay than when clients started coming to bizdev conversations with questions and expectations about AI. We learned to sort these clients into two types: those whose problems genuinely call for AI, and those who arrive wanting AI whether or not their problem needs it.
Now, it’s not exactly a new phenomenon for clients to come to an agency with a solution already in mind. Every software developer has been asked at some point, by someone who is not a dev, to “build this software, this exact way.” But when, two out of three times, a client doesn’t actually need AI to solve their problem, how do we help them understand why that’s the case?
In situations like this, we simplify the problem by returning to first principles. Our pitch at Tighten has been the same since day one (and will be until we close our Slack for the final time): we’re best able to bring our full expertise to the table when we understand our clients’ problems, not just their desired solutions. Clients bring the expertise on their industry and customers. We bring the expertise in tech and products. When we combine the two, we can articulate the problem in the simplest, clearest terms.
Then—and only then—do we reach for the right tools to solve it.
We need to underscore this point: the biggest change AI has made to our workflow is that it has shifted where we deploy our expertise. It has not replaced the need for that expertise.
We're going to assume you're starting from your unique place of expertise. So then: Why should you use LLMs in your dev workflow?
Every day as a programmer is some combination of solving novel problems and doing the same thing over and over. LLMs struggle with the former but excel at the latter. Executing repetitive tasks, upgrading dependencies, building basic templates, taking a pattern you defined with expertise and replicating it throughout the codebase: these are the sorts of tasks you can offload, freeing you and your team up for the work LLMs struggle with, the thoughtful exploration of creative and novel ideas.
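To make that concrete, here's a minimal sketch of what offloading pattern replication can look like when scripted against an LLM API. Everything in it is illustrative: the file path, the model name, and the Invoice example are hypothetical stand-ins, and the only real dependency is the `openai` Python package with an API key in your environment.

```python
# replicate_pattern.py: a hypothetical sketch, not a Tighten tool.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# The pattern you defined with expertise: an existing, already-reviewed file.
pattern = Path("app/Http/Resources/UserResource.php").read_text()

prompt = f"""Here is a resource class that follows our team's conventions:

{pattern}

Write an equivalent resource class for an Invoice model with
id, total, and issued_at fields, following the same conventions."""

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
)

# The offloading ends here: a human still reviews before anything ships.
print(response.choices[0].message.content)
```

The specific files and field names don't matter; the shape does. An expert-defined example goes in, and a human review still happens on the way out.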
If you understand programming but don't have time to sit at your computer, prompting an LLM lets you produce code without being tethered to your keyboard. High-quality, innovative code may not be where AI excels, especially when you want it produced faster and with less input than an expert dev would need. But if you have great ideas and not enough time to code them, you may find AI a particularly exciting option.
We all have reason to work outside our comfortable tech stack from time to time, whether by requirement or desire. LLMs can't yet produce code at a quality you can trust without review, so we won't tell you to purely vibe code apps whose code you don't know how to review.
However, an LLM can often be the aid that helps you become productive in a new language or environment faster than you would have without it. You still need to review (and understand) the code you're producing, but an LLM can be a much gentler learning aid than documentation alone.
And we've often found that senior-level development experience builds a sort of eye for correct code. If you're adept enough to understand what "correct" looks like at a high level, you're more likely to be able to review code produced in an environment where you don't know the exact syntax.
Solo devs don't get code review. No one is pointing out your blind spots, questioning your assumptions, or asking "why did you do it this way?" LLMs can simulate this friction in situations where it otherwise wouldn't exist at all.
Not quite a staff engineer, not quite a rubber duck either, but somewhere in between: LLMs are good proxies for your own thought. Getting lost in your own architecture? "Explain this code back to me like I'm a junior dev" is a great way to see the gap between your intent and what you actually wrote.
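If you want to make that prompt a repeatable habit, a tiny script is enough. This is a sketch under the same assumptions as before (the `openai` package, an API key in your environment, a hypothetical model name); it just wraps the "explain it back to me" prompt around whatever file you point it at.

```python
# rubber_duck.py: a sketch of the "explain it back to me" check above.
# Usage: python rubber_duck.py path/to/some_file.py
import sys
from pathlib import Path

from openai import OpenAI

client = OpenAI()
code = Path(sys.argv[1]).read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{
        "role": "user",
        "content": (
            "Explain this code back to me like I'm a junior dev. "
            "Flag anything where the structure seems at odds with "
            "what the code appears to be trying to do:\n\n" + code
        ),
    }],
)

# The gap between this explanation and your intent is the review signal.
print(response.choices[0].message.content)
```

Run it against something you wrote last week; the distance between the model's explanation and what you meant to build is where the useful review conversation starts.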
That said: Don't let the obsequiousness of LLMs fool you. If a junior dev wouldn't catch what's architecturally unsound, neither will your LLM.
Every dev has a graveyard of ideas and projects that never got beyond "someday." The long road from idea to prototype has snuffed out a lot of fun and funky ideas we've had over the years. LLMs lower the cost (in time, money, and energy) of finding out whether an idea actually holds up and is worth pursuing through to production.
The immediate benefits of AI seem so dazzling that it’s easy to miss the risk creeping like black mold into the walls of your codebase.
So why shouldn't devs use LLMs? Your mileage will vary on how manageable or acceptable each of the following issues is. In isolation, maybe you can tolerate one or two. Taken together? The short-term gains of LLM use have to be weighed against the compounding long-term costs these concerns introduce. (Note: The following is not intended to be a comprehensive list. We’ve simply included the concerns our team raises most frequently.)
Unfortunately, we can't separate “AI, a neat and useful tool” from “AI, the hyperscaling, trillion-dollar, generational darling of the free market.” If we maintain the current pace of acceleration, AI will demand resources, infrastructure, and societal reshaping at an unprecedented scale.
You know the drill: The hyperscaling of AI is straining electricity grids, depleting drinking water, producing millions of tons of e-waste, poisoning air and water in low-income communities, degrading search, entrenching platform lock-in, and laundering copyrighted data on the way to settling major lawsuits—with every indication that the answer to "who's going to stop them?" is nobody.
That doesn't even begin to cover everything. The takeaway is this: Billion-dollar corporations and the folks writing legislation on their behalf are doing the very thing Dr. Ian Malcolm warned us not to do. We're not going to solve these history-altering concerns here. But it's myopic, irresponsible, and callous to dismiss them as fearmongering.
AI‑assisted software development emphasizes speed over fundamentals, and output volume over architectural solidity. LLMs crank out code that looks right but crumbles under real-world pressure. What appears to be a structurally sound app is revealed to be built on a brittle foundation, one that will cost a company time and money in the long term, once it has to bring in human experts to do a gut rehab on its LLM-ravaged app. And if no one on the team can review human-written code for accuracy, no one can review LLM code either.
When you build with an LLM, you outsource core app logic to models that are optimized for plausible correctness and are “in an important way, indifferent to the truth of their outputs.” Guess what happens next?
Horror stories. We read new ones every day: vibecoding disasters where AI‑generated apps ship with publicly exposed S3 buckets leaking driver’s licenses, or misconfigured databases dumping thousands of private emails. OX Security’s October 2025 report calls this the “Army of Juniors” problem: “AI‑generated code exhibits the characteristics of talented junior developers: highly functional, syntactically correct, but systemically lacking in architectural judgment and security awareness.” The reality is that no AI model is currently able to consistently generate secure, stable, production-ready code without a human expert prompting it on the other side of the keyboard.
Remove that safeguard, and there’s nothing stopping users from wandering into an app rife with sinkholes and rotted floorboards.
The paradox: for a tool that’s meant to maximize output, there’s a surprising gap between perceived and actual productivity gains in software development. A January 2026 report from Oxford Economics concluded, “If AI were already replacing labour at scale, productivity growth should be accelerating. Generally, it isn’t.” A report from OpenAI itself showed that ChatGPT Enterprise users report saving only “40–60 minutes per day.” And another report, from METR, found that “when developers use AI tools, they take 19% longer [to complete tasks] than without.” These studies don’t all agree, but that’s the point: the productivity payoff is far from guaranteed. Even where AI does offload menial work, that sounds like a win until you remember it’s the menial work that builds the expertise to do the hard work well. There’s no money‑back guarantee of increased speed with AI.
There's also the issue of feature spamming. The elimination of friction in AI‑assisted development makes it trivial to generate new features on a whim. That doesn’t mean the features are good, let alone necessary. This is what we mean when we talk about being intentional in our use of LLMs: it's not a productivity gain to spend less time writing code if you use all that freed-up time to heap on features just because you can, as if they were toppings at a froyo shop.
“LLMs are great resources for coding when you already know what you are doing,” wrote Andrew Heiss, an assistant professor at Georgia State University, in an open letter to his class. “If you don’t know what you’re doing, they’re actually really, really, really detrimental to learning.” Why? Because old‑fashioned, active, tactile engagement is how humans build skills—not being handed an answer without understanding the problem.
It’s okay to use AI to explore things we don’t understand, as long as we’re using it with the intention of learning. But new developers who lean on LLMs for code they don’t understand will never build the muscle memory, intuition, or debugging reflexes that come from wrestling with problems head‑on. For senior devs, the risk is different but just as insidious: the more you outsource the thinking, the more your judgment and architectural instincts atrophy. You're not going to sustain a career if you're delivering code that you don’t understand. Again: menial work builds expertise.
Already at Tighten, we’ve experienced the frustration of building a proof‑of‑concept on endpoints deprecated mere months later. To satisfy investor demands, AI giants have no choice but to chase every new idea in the AI arms race, which means they’re likely to drop entire features once they’re no longer strategically useful or good for business.
There’s a reason an estimated 70–85% of AI projects fail: the ground keeps shifting. If you’re a developer, that means rebuilding the same integration every time the stack changes. And right now, the whole ecosystem is propped up by government grants, VC cash, and hyperscaler loss‑leaders: billions in subsidies that won’t last forever. When the free money dries up, someone’s going to foot the bill, and if the last few years have taught us anything, it’s going to be the users and businesses locked into these platforms, not the platforms themselves.
Wait for the lava to cool before you build skyscrapers on top of it.
The endless, choose-your-own-adventure way we interface with LLMs induces the euphoria of possibility: “What could I build right now?” But if that feeling follows you once you're away from the keyboard, it turns into something more like anxiety: a feeling that you aren't doing enough, that you should be doing more. The core promise of AI is that it “does the work for you.” But that promise can blur work time and personal time to the point that you’re never really letting yourself off the clock.
Prompt, wait, review. Prompt, wait, review. This intermittent‑dopamine cycle isn’t just external pressure from “you have to keep up”—it’s internal pressure generated by the tool itself. Over time, that boundary‑blurring can erode focus, creativity, and long‑term stamina. If you've found yourself thinking about refactoring components while at the dinner table with your kids, you may want to consider setting some firm boundaries in your relationship with your LLM.
Again, none of these "shouldn'ts" is prohibitive alone. Just as none of the "shoulds" is singularly convincing. As devs, we need to weigh the benefits against the risks before we do a full rebuild of our entire workflow.
That gets us through the why. What about the how? How do we incorporate AI into our workflow in a way that amplifies our capabilities and expertise? That’s where we’re heading next: How to Expertly Use LLMs in Development Workflows.