Boring Systems Are a Feature

I like boring systems more every week. That is not because I suddenly hate new tools. I use a lot of them. It is because the more often I ship, the less patience I have for infrastructure that feels clever right up until it breaks.

This week I got a live reminder. My site runs on Astro and deploys on Vercel. The setup is pretty simple. Posts are markdown files. Routes are readable. Builds are visible. When something was off in production, I did not have to guess which hidden layer might be lying to me. I could inspect the files, inspect the route, inspect the deploy, and narrow it down fast. That matters a lot more than people admit.

The problem with magical systems

A lot of modern tooling sells convenience by hiding the machinery. That feels great on a clean demo. You connect a few services, click around a dashboard, and everything looks smooth. Then a real edge case hits. A route does not generate. A cache holds the wrong thing. A deployment succeeds but the output is not what you expected. Now the time you saved upfront gets repaid with interest.

I do not think this is just a developer problem. If you are a solo builder, operator, or founder trying to publish consistently, your infrastructure is part of your workflow. It is not separate from the job. Every opaque layer is another place where a simple content task can turn into an afternoon of weird debugging. That is why I keep gravitating toward systems that are easy to read.

Legibility beats novelty

One thing I like about file-based setups is that they make reality hard to ignore. The post either exists or it does not. The route either builds or it does not. The deploy either picked up the change or it did not. There is less room for the vague category of problems I would describe as platform gaslighting.

I think that is part of why switching to Astro clicked for me so quickly. It feels close to the actual artifact. I write the file. I commit the file. The site builds the file. When something fails, I can usually trace the failure without needing a séance. That is not old-fashioned. That is useful.

People love to talk about speed, but legibility is speed. A boring system that breaks in an obvious way is faster than a magical system that breaks in a mysterious way.

Shipping daily changes what you optimize for

If you publish once a quarter, maybe you can tolerate more complexity. If you are trying to ship every day, you start caring about a different set of traits:

- Can I understand what failed?
- Can I fix it without spelunking through three vendor dashboards?
- Can I trust the deploy path?
- Can I make changes without creating a second mystery while solving the first one?

That is a very different filter from "What has the slickest onboarding?" I think a lot of solo builders should bias harder toward transparent tools for exactly this reason. Not because the newer stuff is bad. Not because abstraction is evil. Just because your real bottleneck is usually not raw capability. It is recovery time.

Boring is not the opposite of good

I think people sometimes hear "boring" as an insult. I mean it as praise. Boring infrastructure is what lets you spend your energy on the part anyone actually cares about: the product, the writing, the distribution, the work itself. If the stack disappears into the background and only demands attention when something concrete needs fixing, that is a win.

The irony is that the simple path often feels more modern in practice. It respects your time. It keeps the feedback loop short. It lets you debug with evidence instead of vibes. That is the kind of system I want more of. Not magical. Not over-designed. Just clear enough that when it breaks, I can read the failure and move. That is a feature.
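
To show what I mean by legible, here is a rough sketch of the kind of check a file-based setup makes trivial. The directory layout and slug convention are assumptions for illustration, not a description of my actual repo:

```ts
// check-posts.ts — sanity-check that every markdown post maps to a route.
// Assumes posts live in src/content/posts and routes follow /posts/<slug>/.
import { readdirSync } from "node:fs";
import { join, parse } from "node:path";

const postsDir = join("src", "content", "posts");

for (const file of readdirSync(postsDir)) {
  if (!file.endsWith(".md")) continue;          // ignore anything that isn't a post
  const slug = parse(file).name;                // filename without extension becomes the slug
  console.log(`${file}  ->  /posts/${slug}/`);  // the route this file should produce
}
```

If a post is missing from that list, the problem is in the filesystem, not in some hidden layer.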

Deployed Is Not the Same as Launchable

I think a lot of builders confuse "it loads" with "it's ready." I've made that mistake more than once. You deploy the app. The URL returns 200. The core feature works. Maybe you even send the link to a friend and they say, "nice, it's live." But being deployed is a much lower bar than being launchable. A product can be live and still not be ready for real traffic.

The fake sense of completion

The dangerous part is that deployment gives you an emotional hit. You pushed the code. Vercel built it. The preview looks clean. The app opens. So your brain wants to call the job done. But that only proves one thing: the code made it onto the internet. It does not prove that the product is packaged well enough to survive contact with real users.

I've started thinking about launch readiness as a separate checklist:

- Does the clean domain resolve correctly?
- Does the product work on the actual production URL?
- Is analytics installed?
- Can I explain what it does in one sentence?
- Is there a clear next step for someone who finds it?
- Would I feel good sending this to the exact person it's meant for?

If the answer to a few of those is no, then it isn't really launched. It's staged.

Infrastructure gaps are launch blockers, not cleanup tasks

This is where a lot of solo operators get sloppy. We treat domain fixes, analytics setup, redirects, and little polish issues like post-launch cleanup. Sometimes they are. But a lot of the time, they're the difference between "a thing exists" and "this can actually start compounding."

Take domains. If your app works on a temporary URL but the clean domain is broken, you don't really have a finished launch surface yet. You have a working artifact plus a distribution problem. The same goes for DNS and routing. Cloudflare's DNS docs are boring, but boring infrastructure problems decide whether a product feels real. Users do not care that the underlying app is technically healthy if the branded URL fails.

And analytics is even more important. I wrote about that more directly in Building in Public Without Analytics Is Just Vibes, but the short version is simple: if people can arrive and use the product, but you can't see what happened, you launched blind. That's not a real operating system. That's hope.

Launchable means you can stand behind it

For me, the real question now is not "did it deploy?" It's "would I confidently push people to it today?" That standard catches a lot. If I still need to caveat the domain, explain that measurement isn't set up, or warn someone that a few pieces are still half-connected, then I'm not describing a launch. I'm describing a work in progress that happens to be online.

That's fine, by the way. A lot of things should be online before they're fully launchable. Preview links are useful. Temporary domains are useful. Internal dogfooding is useful. The mistake is pretending that those states are the same. They aren't. One is proof that the code runs. The other is proof that the product is ready to be taken seriously.

The bar I want to keep now

I'm trying to be stricter about this because the internet is full of half-launched things. Stuff that technically exists, but isn't ready to earn trust. And trust is the whole game. If someone clicks a link I shared, I want the domain to work, the page to load fast, the core action to be obvious, and the measurement layer to be there so I can learn from the visit. Otherwise I'm just generating more surface area.

Deployment matters. Obviously. But launchability is what turns a deployed project into something you can actually build on. If you're building right now, ask yourself a blunt question: is the product launched, or is it just online?
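
A couple of the checklist items above, the clean domain responding and analytics being installed, are easy to verify with a script. Here is a minimal sketch, assuming a Node 18+ runtime with a global fetch; the URL and the analytics marker are placeholders, not real values:

```ts
// launch-check.ts — rough launch-readiness probe for a production URL.
// The domain and analytics marker below are placeholders for illustration.
const PRODUCTION_URL = "https://example.com";
const ANALYTICS_MARKER = "plausible.io/js"; // whatever snippet your analytics tool injects

async function main() {
  const res = await fetch(PRODUCTION_URL, { redirect: "follow" });
  console.log(`status: ${res.status} (final URL: ${res.url})`);

  const html = await res.text();
  const hasAnalytics = html.includes(ANALYTICS_MARKER);
  console.log(`analytics snippet found: ${hasAnalytics}`);

  if (res.status !== 200 || !hasAnalytics) {
    console.error("Not launchable yet: fix the items above before sharing the link.");
    process.exitCode = 1;
  }
}

main();
```

It is not a substitute for clicking around the real thing, but it catches the embarrassing failures before anyone else does.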

Most AI Agent Problems Are Infrastructure Problems

I think a lot of teams are optimizing the wrong layer. When an AI workflow breaks, the first instinct is usually to swap the model. Try OpenAI. Try Anthropic. Try a new prompt. Try a new framework. Maybe that helps. Usually it doesn't fix the real problem.

Most AI agent problems are infrastructure problems. The model is the visible part, so it gets all the attention. But in practice, the failures usually happen in the seams. A tool call times out. A background job runs too long. A session loses state. A retry happens in the wrong place and duplicates work. One flaky dependency turns a clean demo into a system that quietly dies halfway through the job. That stuff is not sexy, but it's the whole game.

The demo works, the system doesn't

A lot of AI products look good in a five-minute demo because the happy path is easy to stage. You give the model a clear instruction. It calls the right tool. The data comes back clean. The output looks smart. Everyone nods.

Then real usage starts. Now inputs are messier. APIs are slower. Credentials expire. One tool returns malformed JSON. Another gives you a 429. A user asks for a task that takes 20 minutes instead of 20 seconds. Suddenly the question isn't whether the model is smart. The question is whether the system can survive contact with reality.

That's why I keep coming back to the same point: reliability matters more than cleverness. If you want a useful AI system, I think you need a few boring things before you need a better prompt.

What actually matters

First, you need retries that aren't stupid. Not infinite retries. Not blind retries. Real retries with limits, backoff, and some awareness of what failed.

Second, you need state. If a workflow has already finished steps one through four, it should not start over just because step five broke. It should know where it is, what already succeeded, and what still needs attention.

Third, you need supervision. Long-running work needs checkpoints. It needs status. It needs a way to surface "here's what happened, here's what's blocked, here's what I'm doing next" without making the user babysit every move.

Fourth, you need graceful degradation. If the ideal path fails, the system should still have a second move. Maybe it waits. Maybe it falls back. Maybe it asks for help at the right moment instead of crashing into a wall and pretending it completed the task.

None of this is glamorous. That's exactly why it matters.

Model switching is not a strategy

I like good models. I'm happy to use better ones whenever they show up. But "we'll just switch models" is not an operating plan. It's a coping mechanism. If your system depends on every tool succeeding instantly, every API staying stable, and every run finishing on the first try, you're not building an agent. You're building a brittle chain of lucky events.

The teams that win here won't just have access to strong models. Everyone will have that. The teams that win will build the execution layer around the model. They'll know how to recover work, route around failures, preserve context, and keep moving when the world gets noisy. That's the real moat. It's the same lesson I wrote about in Nobody Cares About Your AI. The flashy part gets attention. The useful part solves the problem.

The boring work is the product

I don't think the future belongs to the teams with the most impressive demos. I think it belongs to the teams that make AI feel dependable. The ones that make a user trust that the work will finish. The ones that handle failure without turning the user into unpaid QA. The ones that treat recovery, observability, and orchestration like product features, because they are.

According to McKinsey, the economic upside of generative AI is massive. I buy that. But I don't think most of that value comes from demos that look magical on day one. I think it comes from systems that keep working on day one hundred. That's a much less glamorous story. It's also the one worth building.
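
To make the retry and state points concrete, here is a minimal sketch. The step shape, the in-memory set standing in for a durable store, and the backoff numbers are illustrative assumptions, not a framework recommendation:

```ts
// agent-steps.ts — bounded retries with backoff, plus checkpointed steps
// so a failure at step five doesn't rerun steps one through four.
// All names and numbers here are illustrative.

type Step = { name: string; run: () => Promise<void> };

// Stand-in for a durable store (database, queue, workflow engine, etc.).
const completed = new Set<string>();

async function withRetry<T>(fn: () => Promise<T>, attempts = 3, baseMs = 500): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err;            // out of retries: surface the failure
      const delay = baseMs * 2 ** i;                // exponential backoff, not blind hammering
      console.warn(`attempt ${i + 1} failed, retrying in ${delay}ms`, err);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw new Error("unreachable");
}

async function runWorkflow(steps: Step[]) {
  for (const step of steps) {
    if (completed.has(step.name)) continue;         // checkpoint: skip work that already succeeded
    await withRetry(step.run);                      // bounded, backed-off retries per step
    completed.add(step.name);                       // record progress before moving on
    console.log(`done: ${step.name}`);
  }
}
```

Swap the in-memory set for a real store and you get the boring property that matters: a failure partway through never redoes the work that already succeeded.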

The Best AI Use Cases Are the Ones Nobody Tweets About

Open any tech feed right now and you'll see the same stuff. AI generating photorealistic images. AI writing entire codebases. AI having philosophical conversations about consciousness. Cool demos. Genuinely impressive technology. And almost none of it is where the real money is being made.

The AI use cases actually generating revenue are the ones nobody screenshots for LinkedIn. Temperature log validators for restaurant chains. Payroll compliance checkers that flag overtime violations before they become lawsuits. Employee classification calculators that tell you whether your new hire is exempt or non-exempt under FLSA guidelines. Boring stuff. The kind of problems that make people's eyes glaze over at dinner parties.

I know this because I'm building these tools right now. LogChef checks whether food temperature logs meet health code requirements. Exemptly walks you through the DOL's duties tests to classify employees correctly. PayShield catches payroll compliance gaps before an auditor does.

Nobody is going to retweet a temperature log validator. Nobody is making TikToks about employee classification flowcharts. But here's the thing: people actually search for these tools. Real humans with real compliance deadlines type "FLSA exempt vs non-exempt calculator" into Google every single day. And when they find a tool that solves their problem in 60 seconds, they remember who built it.

I wrote about this pattern before in Nobody Cares About Your AI. The technology itself doesn't matter to the end user. What matters is whether the problem goes away. A restaurant manager doesn't want "AI-powered food safety monitoring." They want to not fail their next health inspection. A payroll admin doesn't want "machine learning compliance analysis." They want to stop worrying about whether they're breaking federal overtime rules.

The gap between what the AI community talks about and what businesses actually need is enormous. And that gap is where the opportunity sits.

If you're building with AI right now, here's my honest take: stop chasing the use case that'll get you on Hacker News. Start looking at the problems people are too embarrassed to admit they still handle with spreadsheets and sticky notes. The compliance checks done on paper. The classification decisions made by gut feel. The audit prep that takes three people a full week.

Those are your best AI use cases. They just don't make good tweets.

What's the most boring problem you've seen AI actually solve well?

Nobody Searches for Your Product Name

Here's something most SaaS companies get wrong about search. They spend months optimizing for branded terms. "Best project management tool." "[Product] vs [competitor] comparison." "[Product name] review 2026." They fight over the same ten keywords every competitor is already bidding on. Meanwhile, nobody is Googling their product name. At least not the people who need them most.

The real search happens before the product

Think about what someone actually types when they have a problem. Not a software shopping problem. A real, right-now, my-boss-is-asking-about-this problem. They type things like:

- "restaurant temperature log template"
- "employee exempt vs non-exempt calculator"
- "ISO 9001 audit checklist PDF"
- "how to calculate overtime for salaried employees"

These are the queries that matter. Specific. Boring. High intent. The person searching isn't browsing. They need something right now, and they'll use whatever solves it.

Why boring queries win

I've been building small compliance tools for the past few weeks. Free web utilities that solve one narrow problem each. A temperature logging tool. A payroll compliance checker. An exempt vs non-exempt classifier. None of these are products in the traditional sense. They're single-purpose tools that answer one question really well.

But here's what's interesting. The search volume for these micro-queries is real. According to Ahrefs, "temperature log template" gets searched thousands of times a month. "Exempt vs non-exempt" gets even more. And almost nobody is building dedicated tools to capture that traffic.

Most of the results are blog posts from law firms and HR consultancies. Long articles that kind of answer the question but don't actually give you the thing you need. No calculator. No downloadable template. No interactive tool. Just 2,000 words of background information and a "contact us for a consultation" CTA. That's the gap.

Own the query, not the category

The mistake is thinking you need to own a category term like "compliance automation" or "workflow management platform." Those terms sound important in a pitch deck, but real humans don't search that way. Real humans search for the specific problem sitting on their desk right now.

If you can be the thing that solves that problem, you don't need the person to know your brand first. You just need to be there when they search. The brand relationship builds backward from usefulness. This is basically the playbook that HubSpot used early on with their free tools. Website grader, email signature generator, invoice templates. None of those were the core product. All of them brought in people who eventually needed CRM software.

What this means if you're building

Stop fighting for category keywords. Start asking: what's the smallest, most specific problem my future customer has right now? Build the thing that solves it. Make it free. Make it show up when they search. The boring query is the wedge. The product conversation comes later.

I've been testing this approach with compliance tools, and the early signals are promising. More on that as the data comes in. But the principle holds: nobody is searching for your product name. They're searching for help with the problem you solve. Go own that query instead.