Nobody Cares About Your AI

I've been building compliance tools for the past few weeks. Temperature logs, food safety checklists, inspection prep workflows. Boring stuff by any AI startup's standards.

Here's what I've learned: restaurant owners do not care about AI. Not even a little. They care about passing their next health inspection. They care about not getting fined. They care about making sure the walk-in cooler didn't die overnight and ruin $3,000 worth of product.

If you walked into a kitchen and said "I built an AI-powered temperature monitoring solution with real-time anomaly detection," the chef would look at you like you just spoke Klingon. If you said "this tells you when your fridge breaks before your food goes bad," now you're talking.

The Gap Nobody Tweets About

The AI industry has a massive blind spot. We're obsessed with capability and completely uninterested in context. Every week there's a new benchmark, a new model, a new agent framework. Meanwhile, the people who would benefit most from better software are still using paper logs and spreadsheets because nobody bothered to meet them where they are.

I'm not saying AI isn't powerful. It is. But power without packaging is just a science project.

What Actually Works

The compliance tools I've been shipping don't mention AI anywhere. Not in the copy, not in the UI, not in the pitch. They're just tools that solve specific problems for specific people. A temperature log generator that creates the exact form a restaurant needs for their daily checks. A payroll compliance calculator that tells you whether your overtime policy matches your state's rules. A 503 error page builder for when your site goes down and you need a professional page in 30 seconds.

None of these are technically impressive. All of them solve a problem someone actually has today.

The Boring Niche Advantage

McKinsey estimates that generative AI could add $2.6 to $4.4 trillion annually in value across industries. But here's the thing: most of that value will come from boring applications that make existing workflows slightly less painful. Not from chat interfaces. Not from copilots. From small tools that remove friction from tasks people already do.

The market for "AI that sounds smart" is crowded. The market for "tool that solves this one annoying problem" is wide open in thousands of niches.

Build for the Problem

If you're building something right now, try this exercise: describe what you're making without using the words AI, machine learning, model, or agent. If you can't explain the value without those words, you might be building a solution looking for a problem.

The best technology disappears. Stripe doesn't sell "AI-powered payment processing." They sell "accept payments online." The AI is in there somewhere. Nobody cares. It just works.

That's the standard. Build something that just works for someone who has a real problem today. Let the AI be the how, not the what.

What are you building that nobody would describe as "AI" even though it is?

The Tools I Actually Use Daily as a One-Person Operation

The AI and productivity tools that are actually worth my time—and why most of the hype is noise.

I've tried dozens of AI tools. Most of them are forgettable. Not because they're bad, but because they don't stick. They solve a problem I don't have, or they create more friction than they remove. Here's what's actually in my daily rotation—and what I've learned about separating signal from noise.

What's Actually in My Stack

Claude is my primary workhorse. I use it for writing, coding, research, and thinking through problems. It's not perfect, but it's consistent in ways other tools aren't. The context window matters more than I expected.

Cursor has replaced VS Code for most of my development work. AI-assisted coding isn't about replacing thinking—it's about removing the mechanical friction that slows down experimentation. I still write plenty of code manually. Cursor just handles the boring parts faster.

Notion remains my system of record. I've tried Obsidian, Roam, and a dozen alternatives. Notion wins because it's good enough at everything and excellent at nothing. That sounds like criticism, but it's actually the point. I don't want to optimize my note-taking system. I want to take notes.

Process Street (full disclosure: I work here) handles my recurring workflows and SOPs. The value isn't the software itself—it's the discipline of documenting processes that would otherwise live in my head. Most people skip this step. That's a mistake.

Zapier connects everything else. I have maybe 15 active Zaps. Most are simple: new form submission → Slack notification, new blog post → social share, etc. The magic isn't in complexity. It's in not having to remember to do repetitive tasks.

What I've Stopped Using

ChatGPT Plus—I let my subscription lapse. It's not worse than Claude, but I don't need two general-purpose AI assistants. Pick one. Use it well.

Most "AI writing" tools—If a tool promises to "write blog posts for you," it's probably producing generic content that sounds like everyone else. I use AI to think and draft, not to replace my voice.

Complex automation setups—I used to build elaborate multi-step workflows. Now I default to simple. If a Zap has more than 3 steps, I question whether I'm solving the right problem.

The Pattern

The tools that stick share a few traits:

They remove friction, not add it. If I have to think about using the tool, I won't.
They integrate with my existing workflow. I don't want to rebuild my life around software.
They have clear failure modes. When they break, I know immediately and can fix them.

What I'm Testing Now

I'm experimenting with a few tools that might earn a permanent spot:

Perplexity for research—still deciding if it's better than Claude for this use case
Replit for quick prototyping—interesting, but not sure it beats local development yet
Various image generation tools—mostly for blog headers and social content

The bar for adding a new tool is high. It needs to solve a real problem I have today, not a hypothetical problem I might have someday.

The Real Lesson

The best tool is the one you'll actually use. Not the one with the most features. Not the one that gets the most hype on Twitter. I've seen people spend more time optimizing their productivity stack than doing actual work. Don't be that person. Pick simple tools. Use them consistently. Move on.

What's in your actual daily stack? Not what you think you should use—what you actually open every day. I'd genuinely love to know.
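For what it's worth, the simple Zaps mentioned above (form submission to Slack notification) are conceptually just webhook relays. Here's a minimal sketch using only the standard library; the webhook URL and form field names are placeholders, not a real endpoint:

```python
import json
import urllib.request

# Placeholder Slack incoming-webhook URL; substitute your own.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def form_to_slack_payload(form: dict) -> dict:
    """Turn a form submission into a Slack message payload."""
    return {
        "text": f"New form submission from {form.get('name', 'unknown')}: "
                f"{form.get('message', '')}"
    }

def notify_slack(form: dict) -> None:
    """POST the payload to Slack's incoming-webhook endpoint."""
    data = json.dumps(form_to_slack_payload(form)).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

The point of a tool like Zapier is that you never have to host, monitor, or debug even this much code yourself.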

Most AI Agents Aren't Actually Agents

Everyone's building "AI agents" right now. The timeline is full of them. Companies are raising millions to ship them. The problem? Most of them aren't actually agents. They're chatbots with API access. That's it.

What People Call Agents

Here's the pattern I see everywhere:

User types a message
LLM decides which function to call
Function returns some data
LLM formats a response
Done

That's not agency. That's function calling with a conversational wrapper. The LLM picks a tool, the tool runs, the result comes back. If it works, great. If it breaks, the conversation dies. If the user needs three things done in sequence, they're manually prompting through each step. This is useful. It's even impressive sometimes. But it's not an agent.

What Real Agents Need

Real agentic systems operate with autonomy. They handle the messy parts without constant human supervision. That means:

Error recovery. When something breaks (and it will), the agent doesn't just apologize and give up. It retries with backoff. It falls back to alternative approaches. It routes around failures without making the user debug what went wrong.

State management. The agent needs to remember what it's doing across multiple tool calls. Not just "what did the user ask for?" but "what have I tried, what worked, what's left to do, and what's blocking me right now?"

Retry logic. APIs time out. Rate limits hit. Sometimes data isn't ready yet. A real agent knows when to try again, when to wait, and when to give up.

Supervision and checkpointing. For multi-step work, the agent should be able to pause, show you what it's done so far, and resume if something goes sideways. You don't want it to redo 20 steps because step 21 failed.

Context persistence. If the system restarts, the agent should be able to pick up where it left off. Not "sorry, you'll need to start over."

Graceful degradation. When a preferred tool is unavailable, the agent should try another approach. When data is incomplete, it should work with what it has or ask for the missing pieces.

This is infrastructure work. It's not fun. It's not what people demo. But without it, you don't have an agent. You have a chatbot that calls APIs.

The Infrastructure Problem

The hard part of building agents isn't the LLM. That's the easy part. The hard part is everything around it.

You need a task queue that can handle retries. You need a way to checkpoint progress so work doesn't get lost. You need monitoring so you know when an agent is stuck. You need logging so you can debug failures after the fact.

You need to handle rate limits from every API your agent touches. You need to deal with inconsistent error responses. You need to decide what to do when a tool returns malformed data or no data at all.

You need a way to supervise long-running workflows. You need to surface status updates without spamming the user. You need to decide when to ask for help and when to keep trying.

None of this is LLM work. It's systems engineering.

What I'm Seeing in Practice

I build agents daily. The pattern is always the same. I spend 10% of my time writing prompts and configuring LLM calls. I spend 90% of my time on infrastructure:

Handling tool failures
Managing state across multiple turns
Implementing retry logic
Building supervision layers
Writing recovery flows for when things go wrong

The prompt is never the problem. The problem is making the system robust enough to actually finish the job. When I look at "AI agent" demos online, I see polished function calling. I don't see error handling. I don't see state management. I don't see retry logic. That's fine for demos. It's not fine for production.

The Real Opportunity

If most "AI agents" are just chatbots with API access, there's a huge opportunity for anyone willing to build the infrastructure. The companies that win won't be the ones with the best prompts. They'll be the ones with the most resilient execution layers.

They'll build systems that:

Recover from failures without human intervention
Maintain state across sessions and restarts
Coordinate multi-step workflows reliably
Degrade gracefully when things break
Surface meaningful status without overwhelming users

This is less glamorous than training models or writing clever prompts. But it's what separates working agents from chatbots.

Links

OpenClaw - agent infrastructure I'm actively building with
LangGraph documentation - one approach to stateful agent workflows
Modal - infrastructure for long-running agent workloads

Building agents that actually work means caring more about the infrastructure than the LLM. The wrapper matters more than the model. Most people aren't ready for that conversation yet.
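To make the retry-with-backoff and checkpointing requirements above concrete, here's a minimal sketch. It's an illustration, not production code, and the names (`with_retries`, `run_steps`) are hypothetical rather than from any particular framework:

```python
import json
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying on failure with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: give up instead of looping forever
            time.sleep(base_delay * (2 ** attempt))

def run_steps(steps, checkpoint_path="checkpoint.json"):
    """Run named steps in order, checkpointing after each one so a
    restart resumes work instead of redoing finished steps."""
    try:
        with open(checkpoint_path) as f:
            state = json.load(f)
    except FileNotFoundError:
        state = {"done": [], "results": {}}
    for name, step in steps:
        if name in state["done"]:
            continue  # finished before a crash or restart; skip it
        state["results"][name] = with_retries(step)
        state["done"].append(name)
        with open(checkpoint_path, "w") as f:
            json.dump(state, f)
    return state["results"]
```

On restart, `run_steps` reloads the checkpoint file and skips anything already marked done, which is the "pick up where it left off" behavior in miniature. Real systems layer monitoring, rate-limit handling, and human supervision on top of exactly this skeleton.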

The Best Tools I Use Aren't AI Tools

I spend a lot of time in AI tools. It's part of my job. But the truth is, most of my actual work happens in tools that have nothing to do with AI.

The Boring Stack

Here's what I use every single day: VSCode for writing. Google Sheets for tracking. Git for version control. Terminal for everything else. No AI. No fancy automation. Just basic tools that do one thing well.

When I need to draft something, I open VSCode. When I need to track data, I open Sheets. When something breaks, I check Git. AI tools are great for specific tasks. Claude helps me think through problems. ChatGPT speeds up research. But they're supplements, not replacements.

Why Basic Tools Win

They're fast. They're reliable. They don't break when the API goes down. They don't require a monthly subscription or a complex setup. They just work.

Most work still needs basic infrastructure. You still need a place to write. You still need a way to organize data. You still need version control. AI can help with some of those tasks. But it can't replace the fundamental infrastructure.

The AI Hype Problem

Everyone wants to talk about AI workflows and AI-first companies. But the reality is that most work still happens in text editors and spreadsheets.

If you're building something, don't assume AI will solve everything. Start with the basic tools that work. Add AI where it actually helps. If you're choosing tools, prioritize reliability over novelty. The boring stack exists for a reason.

The Bottom Line

AI is useful. But it's not the foundation. The foundation is still text files, spreadsheets, and version control. Build on that. Everything else is optional.

For more thoughts on building with practical tools, check out my other posts. And if you're looking for workflow automation that actually works, take a look at Process Street.

The Money Is in the Boring Problems

I spent this week building SafeRounds, a free restaurant temperature logging tool. It's not going to get me on Product Hunt's front page. Nobody's going to write a thinkpiece about it. It doesn't use the latest LLM to generate anything. It's just a simple web form where restaurant staff can log fridge temps, freezer temps, and hot hold temps twice a day. That's it.

But here's the thing: restaurant owners actually need this. Health inspectors require it. Failing to maintain proper logs can shut you down. And right now, most restaurants are either using paper clipboards (that get lost) or clunky spreadsheets that don't enforce the rules.

I see so many people building with AI chasing interesting problems. Translation tools. Creative writing assistants. Novel interfaces for information retrieval. All cool. All technically impressive. But when I look at what actually converts, it's the boring stuff. The compliance checklists. The required documentation. The forms you have to fill out to stay legal.

Why? Because these aren't nice-to-haves. They're must-haves. You don't shop around for temperature logs because you're excited about innovation. You need them because the alternative is failing your health inspection. That's a different kind of market. Lower browse time. Higher intent. Immediate utility.

And the boring niches are still wide open. Nobody's racing to build better HACCP documentation tools. There's no VC-funded startup disrupting restaurant compliance logs. It's not sexy enough. Which means if you actually solve the problem well, you win by default.

I'm planning to build a few more of these. Not because they'll get me followers. Because they'll solve real problems for real businesses. And that compounds differently than viral content. The next one is LogChef — a recipe costing calculator for commercial kitchens. Also boring. Also needed.
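The "enforce the rules" part of a tool like this is mostly simple range checks. A sketch, assuming the US FDA Food Code thresholds (cold holding at or below 41°F, hot holding at or above 135°F, freezers at or below 0°F); local health codes vary, so treat the numbers as placeholders:

```python
# Assumed safe ranges in degrees F; verify against your local health code.
SAFE_RANGES = {
    "fridge": (None, 41.0),     # cold holding: at or below 41F
    "freezer": (None, 0.0),     # frozen storage: at or below 0F
    "hot_hold": (135.0, None),  # hot holding: at or above 135F
}

def check_temp(location: str, temp_f: float) -> tuple[bool, str]:
    """Return (ok, message) for one temperature log entry."""
    low, high = SAFE_RANGES[location]
    if low is not None and temp_f < low:
        return False, f"{location} at {temp_f}F is below the {low}F minimum"
    if high is not None and temp_f > high:
        return False, f"{location} at {temp_f}F is above the {high}F maximum"
    return True, f"{location} at {temp_f}F is in range"
```

A paper clipboard can't run this check; even a trivial web form can, and can flag the out-of-range entry the moment it's logged.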
If you're building something right now, consider this: what's the most boring version of your idea that someone would actually pay for? Start there.

Learn more about building small useful tools on my blog, or check out Process Street's approach to workflow automation.