
Task Routing and Delegation

What is Task Routing?

In a multi-agent system, incoming tasks rarely arrive pre-labeled. A user might say "fix the login bug" or "write a blog post about our launch." Task routing is the process of classifying each request and dispatching it to the agent best equipped to handle it.

Without routing, you either send everything to one overloaded generalist agent (slow, expensive) or manually assign tasks yourself (defeats the purpose of automation). A good router acts as an intelligent dispatcher that matches work to specialists in milliseconds.

Building a Router

The simplest router uses pattern matching. You define rules that map keywords or regex patterns to agent types:

  • Regex patterns: Match "/fix|bug|error|crash/" to the Coder agent, "/write|draft|blog|copy/" to the Writer agent
  • Keyword scoring: Count how many keywords from each category appear in the task description. The category with the highest count wins
  • Confidence scores: Assign weights to each match. "fix the bug" scores 0.95 for Coder but only 0.1 for Writer

Start with keyword matching. It covers 80% of use cases and is trivially debuggable. Only add LLM-based classification when simple rules fail.
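A keyword-scoring router can fit in a few lines. This is a minimal sketch, assuming a hypothetical `KEYWORDS` table and `route` function (two example agents only; the raw hit ratio stands in for a real confidence score):

```javascript
// Hypothetical keyword table — the agent names and keyword lists are illustrative.
const KEYWORDS = {
  coder:  ["fix", "bug", "error", "crash"],
  writer: ["write", "draft", "blog", "copy"],
};

function route(task) {
  const words = task.toLowerCase().split(/\W+/);
  let best = { agent: null, score: 0 };
  for (const [agent, keywords] of Object.entries(KEYWORDS)) {
    const hits = keywords.filter((k) => words.includes(k)).length;
    const score = hits / keywords.length; // crude confidence in [0, 1]
    if (score > best.score) best = { agent, score };
  }
  return best;
}

console.log(route("fix the login bug")); // { agent: 'coder', score: 0.5 }
```

Dividing by the keyword-list length keeps scores comparable across categories of different sizes; a production router would weight individual keywords instead of treating them equally.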

Confidence Thresholds

Every routing decision should produce a confidence score between 0 and 1. This score determines what happens next:

  • High confidence (0.8 and above): Route directly to the matched agent. No human review needed
  • Medium confidence (0.5 up to 0.8): Route to the agent but flag for review. The agent proceeds, but a human can override
  • Low confidence (below 0.5): Escalate. Either ask the user to clarify, or pass to a generalist agent that can triage

The threshold values are not universal. Tune them based on the cost of misrouting in your domain. A misrouted code fix is annoying. A misrouted financial transaction is catastrophic.
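The three-band policy above can be sketched as a small dispatcher. The `dispatch` function and its default thresholds are illustrative, not a fixed API — the point is that thresholds are parameters you tune, not constants:

```javascript
// Map a routing decision { agent, score } to an action; thresholds are tunable.
function dispatch(decision, { high = 0.8, low = 0.5 } = {}) {
  if (decision.score >= high) {
    return { action: "route", agent: decision.agent };             // direct hand-off
  }
  if (decision.score >= low) {
    return { action: "route", agent: decision.agent, flag: true }; // proceed, flagged for review
  }
  return { action: "escalate" };                                   // clarify with user or triage
}
```

Passing the thresholds in as options makes per-domain tuning a configuration change rather than a code change.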

Fallback Chains

What happens when no agent matches? You need a fallback chain — an ordered list of increasingly general handlers:

  • Level 1: Specialist agent (e.g., Coder, Writer, Researcher)
  • Level 2: Generalist agent that can handle broad categories
  • Level 3: Human escalation queue with context summary
  • Level 4: Graceful rejection with a clear message explaining what the system cannot do

Never silently drop a task. Every request must either be handled or explicitly rejected with an explanation.
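The chain above can be modeled as an ordered list of handlers tried in sequence. The `canHandle`/`handle` interface and the handler names are assumptions for illustration; note that the final return implements level 4, so no task is ever dropped silently:

```javascript
// Try each handler in order until one accepts the task; reject explicitly if none do.
function runWithFallback(task, chain) {
  for (const handler of chain) {
    if (handler.canHandle(task)) return handler.handle(task);
  }
  // Level 4: graceful rejection with an explanation, never a silent drop.
  return { status: "rejected", reason: "No agent can handle this task." };
}

const chain = [
  { name: "coder",      canHandle: (t) => /fix|bug/.test(t), handle: () => ({ status: "done", by: "coder" }) },
  { name: "generalist", canHandle: () => true,               handle: () => ({ status: "done", by: "generalist" }) },
];
```

A real chain would insert the human escalation queue between the generalist and the rejection branch.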

Dynamic Routing with Learning

Static rules get you started, but production routers should improve over time. Track two metrics for every routing decision:

  • Task completion rate: Did the routed agent successfully finish the task?
  • Reroute frequency: How often does a task get reassigned after initial routing?

When an agent consistently fails a task type, lower its confidence weight for that pattern. When an unexpected agent succeeds at a new task type, add that pattern to its routing rules. Over weeks, the router naturally adapts to your actual workload.
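One minimal way to implement this feedback loop is a per-agent weight nudged by task outcomes. The update rule, step size, and clamp range below are assumptions, not a prescribed algorithm:

```javascript
// Per-agent confidence weights; multiply raw match scores by these when routing.
const weights = { coder: 1.0, writer: 1.0 };

function recordOutcome(agent, succeeded, step = 0.05) {
  // Nudge the weight up on success, down on failure, clamped to [0.1, 1.0]
  // so no agent is ever ruled out entirely by a streak of bad luck.
  const delta = succeeded ? step : -step;
  weights[agent] = Math.min(1.0, Math.max(0.1, weights[agent] + delta));
}

recordOutcome("coder", false); // a failed code task lowers coder's weight to 0.95
```

The lower clamp matters: an agent whose weight decays to zero can never win a route again, so it can never demonstrate that it has improved.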

Practical Exercise

Build a router.cjs that handles 8 task types: code, review, test, docs, research, design, deploy, and support. For each incoming task string, the router should:

  • Score it against all 8 agent categories using keyword matching
  • Return the top match with its confidence score
  • Fall back to a "generalist" agent if no score exceeds 0.4
  • Log every routing decision with the input, selected agent, and confidence

Test it with 20 sample tasks and verify that at least 16 route correctly. Misroutes reveal gaps in your keyword lists.
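A starting skeleton for the exercise might look like the following. The keyword lists are deliberately sparse placeholders — expanding them until your 20 samples route correctly is the point of the exercise:

```javascript
// router.cjs skeleton — category keyword lists are placeholders to extend.
const CATEGORIES = {
  code:     ["implement", "refactor", "fix"],
  review:   ["review", "approve"],
  test:     ["test", "verify"],
  docs:     ["document", "readme"],
  research: ["investigate", "compare"],
  design:   ["mockup", "layout"],
  deploy:   ["release", "ship"],
  support:  ["help", "ticket"],
};

function routeTask(task) {
  const words = task.toLowerCase().split(/\W+/);
  let best = { agent: "generalist", score: 0 };
  for (const [agent, keywords] of Object.entries(CATEGORIES)) {
    const score = keywords.filter((k) => words.includes(k)).length / keywords.length;
    if (score > best.score) best = { agent, score };
  }
  if (best.score <= 0.4) best = { agent: "generalist", score: best.score };
  console.log(JSON.stringify({ task, ...best })); // log every routing decision
  return best;
}
```

Run it with `node router.cjs` after appending your 20 sample tasks; the logged misroutes point directly at the keyword gaps to fill.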
