The Automation Balance

A simple framework to navigate risk and volume, avoiding the "tool trap" while building a truly autonomous business.

What To Automate And What To Keep Human

By now you have real pieces of an autonomous business in place: clearer flows, simple playbooks, decision rules, ownership, and a weekly rhythm. This is the point where many founders ask:

“Where should I use automation or AI, and where should I keep things entirely human?”

It is a good question. Use tools in the wrong places, and you create confusion. Use them in the right places, and life gets noticeably easier. This week is about a simple way to decide.

The problem in plain language

There are two common traps.

Trap 1: Tool first, system later
You hear about a new tool, connect it to your inbox or chat, and hope it will clean things up. Instead, it adds more noise, because the underlying flows and rules were unclear.

Trap 2: Delay forever
You keep everything human because you are worried about quality, so you stay buried in tasks that do not really need your level of attention.

Both traps keep you stuck.

You need a way to look at your work and say, with some confidence:

→This should stay human.
→This can be supported by AI or simple automation.
→This could eventually be designed as “AI first.”

A tactical framework: Volume and risk

A practical way to decide is to look at each part of your work through two lenses: Volume and Risk.

Ask two questions about a step in a flow:

→How often does this happen?
→What happens if it goes wrong?

You can think about three broad categories:

→High volume, low risk: routine steps that happen often and are easy to fix if something goes wrong.
→High risk or sensitive: situations where a mistake or a tone-deaf reply is costly, however rarely they happen.
→Everything in between: steps that happen regularly and carry some risk, where tools can help as long as a person stays in the loop.

This simple view gives you a starting point. High volume, low risk steps are your first places to bring in tools and AI. High risk, complex situations stay human, with AI as a quiet assistant if you choose.
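If it helps to see this lens as a rule rather than a picture, here is a rough sketch in code. The thresholds, labels, and the suggest_mode function are made up for illustration; the only point is that every step gets asked the same two questions.

```python
# Illustrative sketch of the volume-and-risk lens as a decision rule.
# Thresholds and labels are placeholders; pick numbers that match your business.

def suggest_mode(times_per_week: int, risk: str) -> str:
    """Suggest how to handle a step, given how often it happens and what failure costs."""
    if risk == "high":
        # Serious consequences, whatever the volume: keep a human in charge.
        return "human only (AI may quietly assist)"
    if times_per_week >= 20 and risk == "low":
        # Happens constantly and is easy to correct: first candidate for tools and AI.
        return "AI supported, later AI first"
    # The middle ground: let AI draft or suggest, a person reviews.
    return "AI supported with human review"

steps = [
    ("password reset question", 60, "low"),
    ("refund above policy limit", 3, "high"),
    ("billing address change", 25, "low"),
]

for name, volume, risk in steps:
    print(f"{name}: {suggest_mode(volume, risk)}")
```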

Traditional, AI supported, and AI first

Once you have this view, you can think of three levels for each flow:

→Traditional: a person does the step, following the playbook.
→AI supported: AI drafts, suggests, or summarizes, and a person reviews before anything goes out.
→AI first: AI handles the step on its own within clear rules, while a person watches patterns and handles exceptions.

For example, an AI helper might handle the initial response to common support questions using clear rules. The human support owner sees summaries, handles exceptions, and adjusts rules over time.

Designing “AI first” flows makes sense only after you have tested the work in “traditional” and “AI-supported” modes and are confident in your rules.

A simple example: support in a small product company

Imagine a small product company that already has a support flow documented, rules for refunds and common questions, a support owner, and a weekly review.

They review their support flow using the volume and risk view. They see:

→Many questions are about login, billing addresses, or simple features. High volume, low risk.
→Some questions relate to data loss or security. Low volume, high risk.
→A few are long, emotional notes from customers about difficult experiences. Low volume, sensitive.

They decide:

→Simple questions: move toward AI-supported, then AI first.
→Sensitive or complex issues: stay human, with AI only helping draft or summarize.

They start with AI supported. They connect an AI helper to their help center articles. For each new ticket, the AI suggests a category and a draft reply. The support owner reviews, edits if needed, and sends. Tickets about data or serious complaints are tagged “human only,” and AI only helps summarize.
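If you want to see that routing spelled out, here is a rough sketch of the triage step in code. The ai_suggest helper, the topic names, and the data shapes are placeholders for whatever AI tool and help desk you actually use, not a real API.

```python
# Sketch of the AI-supported triage described above.
SENSITIVE_TOPICS = {"data loss", "security", "serious complaint"}  # illustrative list

def ai_suggest(ticket_text: str) -> tuple[str, str]:
    # Stand-in for the real AI helper that reads your help center articles.
    return "billing", f"Draft reply based on help center: {ticket_text[:40]}..."

def triage(ticket_text: str, topic: str) -> dict:
    if topic in SENSITIVE_TOPICS:
        # Tagged "human only": AI may summarize, but never drafts the customer reply.
        return {"route": "human only", "ai_summary_allowed": True}
    category, draft = ai_suggest(ticket_text)
    # AI supported: the support owner reviews and edits the draft before sending.
    return {"route": "human review", "category": category, "draft": draft}

print(triage("I can't log in after changing my password", "login"))
print(triage("I think my account data was exposed", "security"))
```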

Over time, they notice that for basic questions, the AI draft is usually correct with minor edits. They tune their rules and help content based on the weekly review.

After a while, they decide to let the AI helper send replies automatically for a small set of straightforward questions, such as password resets, while keeping human review for others.

That part of the support flow is now closer to “AI first,” but still under the control of the support owner, who watches patterns and adjusts.
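In code terms, the change is small: a short, explicit allowlist of question types that may be sent automatically, with everything else still queued for a person. The intent names below are illustrative.

```python
# Sketch of the "AI first" step: only an explicit allowlist is ever sent automatically.
AUTO_SEND_INTENTS = {"password_reset", "billing_address_change"}  # illustrative allowlist

def route_reply(intent: str, draft: str) -> str:
    if intent in AUTO_SEND_INTENTS:
        return f"sent automatically: {draft}"
    return f"queued for human review: {draft}"

print(route_reply("password_reset", "Here is how to reset your password..."))
print(route_reply("refund_request", "About your refund request..."))
```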

The founder is not in the middle. The system is clear. Tools serve the system.
