If you've read our overview of MCP for advertising, you know the high-level story: AI agents call standardized tools instead of clicking dashboards. This post is the practical follow-up — what does it actually look like to ship one?

By the end of this walkthrough you'll have a working AI ad agent that can create campaigns, check performance, pause underperformers, and reallocate budget — all triggered by natural language. We'll use Python and the Anthropic SDK, but the concepts apply to any LLM that supports tool calls.

Architecture Overview

An MCP-based ad agent has four moving parts:

  1. Tool definitions — JSON schemas describing what the agent can do (create_campaign, get_metrics, pause_adset, etc.)
  2. Tool implementations — actual functions that hit the Meta Marketing API
  3. Agent loop — sends user input + tool results to the LLM, executes any tool calls it requests
  4. Guardrails — spend caps, approval flows, audit logging

The LLM doesn't need to know how the Meta API works. It only sees the tool descriptions and decides which to call.

Step 1: Define Your Tools

Start with the smallest useful surface area. Here's a minimal toolset that covers most day-to-day ad management:

tools = [
    {
        "name": "list_campaigns",
        "description": "List all active campaigns in the ad account.",
        "input_schema": {
            "type": "object",
            "properties": {
                "status": {"type": "string", "enum": ["ACTIVE", "PAUSED", "ALL"]}
            }
        }
    },
    {
        "name": "get_campaign_metrics",
        "description": "Get spend, impressions, clicks, conversions, and ROAS for a campaign over a date range.",
        "input_schema": {
            "type": "object",
            "properties": {
                "campaign_id": {"type": "string"},
                "date_preset": {
                    "type": "string",
                    "enum": ["today", "yesterday", "last_7d", "last_30d"]
                }
            },
            "required": ["campaign_id"]
        }
    },
    {
        "name": "pause_campaign",
        "description": "Pause a campaign. Use when ROAS is consistently below target.",
        "input_schema": {
            "type": "object",
            "properties": {"campaign_id": {"type": "string"}},
            "required": ["campaign_id"]
        }
    },
    {
        "name": "update_campaign_budget",
        "description": "Increase or decrease daily budget on a campaign.",
        "input_schema": {
            "type": "object",
            "properties": {
                "campaign_id": {"type": "string"},
                "daily_budget_cents": {"type": "integer"}
            },
            "required": ["campaign_id", "daily_budget_cents"]
        }
    }
]

Notice the tool descriptions read like instructions to a junior media buyer. The LLM uses these descriptions to decide when to call each tool, so write them carefully.

Step 2: Implement the Tools

Each tool is a thin wrapper around the Meta Marketing API. Here's one:

import requests, os

ACCESS_TOKEN = os.environ["META_ACCESS_TOKEN"]
AD_ACCOUNT = os.environ["META_AD_ACCOUNT_ID"]

def get_campaign_metrics(campaign_id: str, date_preset: str = "last_7d") -> dict:
    url = f"https://graph.facebook.com/v19.0/{campaign_id}/insights"
    params = {
        "access_token": ACCESS_TOKEN,
        "date_preset": date_preset,
        "fields": "spend,impressions,clicks,actions,purchase_roas"
    }
    r = requests.get(url, params=params, timeout=15)
    r.raise_for_status()
    rows = r.json().get("data", [])
    data = rows[0] if rows else {}  # insights returns an empty list when there's no delivery
    roas_rows = data.get("purchase_roas") or [{}]
    return {
        "spend": float(data.get("spend", 0)),
        "impressions": int(data.get("impressions", 0)),
        "clicks": int(data.get("clicks", 0)),
        "roas": float(roas_rows[0].get("value", 0))
    }

Keep these functions deterministic and side-effect-aware. Anything that changes account state (pause, budget update, campaign creation) should log who/what/when before executing.

Step 3: The Agent Loop

Now wire the LLM to the tools. The pattern is: send a message, check for tool calls, execute them, send results back, repeat until the LLM stops calling tools.

from anthropic import Anthropic

client = Anthropic()

# Map tool names to the implementations from Step 2.
TOOL_REGISTRY = {
    "list_campaigns": list_campaigns,
    "get_campaign_metrics": get_campaign_metrics,
    "pause_campaign": pause_campaign,
    "update_campaign_budget": update_campaign_budget,
}

def run_agent(user_message: str):
    messages = [{"role": "user", "content": user_message}]

    while True:
        response = client.messages.create(
            model="claude-sonnet-4-5",
            max_tokens=4096,
            tools=tools,
            messages=messages
        )

        if response.stop_reason != "tool_use":
            # No more tool calls — return the final text answer
            return "".join(b.text for b in response.content if b.type == "text")

        # Execute any tool calls
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                result = TOOL_REGISTRY[block.name](**block.input)
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": str(result)
                })

        messages.append({"role": "assistant", "content": response.content})
        messages.append({"role": "user", "content": tool_results})

That's the whole agent. The LLM plans the work, calls tools, observes results, and decides what to do next.

Step 4: Try It

Now you can prompt the agent in natural language:

run_agent("""
Check the last 7 days of performance across all active campaigns.
Pause anything with ROAS below 1.5.
For campaigns with ROAS above 3, increase daily budget by 25%
(but never above $200/day per campaign).
Then summarize what you did.
""")

The agent will:

  1. Call list_campaigns(status="ACTIVE")
  2. For each, call get_campaign_metrics(campaign_id, "last_7d")
  3. Decide which to pause and which to scale
  4. Call pause_campaign and update_campaign_budget as needed
  5. Return a plain-English summary

Step 5: Guardrails

Before pointing this at a real ad account, add the things that will save you when (not if) the LLM does something unexpected.

Spend caps

Wrap update_campaign_budget with a hard maximum. If the LLM tries to set $5,000/day on a campaign that should be $50, you want a hard stop, not a Slack apology.
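A minimal sketch of such a wrapper. The stub `update_campaign_budget` stands in for the raw API wrapper from Step 2, and the cap value is illustrative — set it per account:

```python
MAX_DAILY_BUDGET_CENTS = 20_000  # $200/day hard cap; tune per account

class SpendCapExceeded(Exception):
    pass

def update_campaign_budget(campaign_id: str, daily_budget_cents: int) -> dict:
    # Stand-in for the raw Meta API wrapper from Step 2.
    return {"campaign_id": campaign_id, "daily_budget_cents": daily_budget_cents}

def safe_update_campaign_budget(campaign_id: str, daily_budget_cents: int) -> dict:
    # Hard stop *before* the API call, no matter what the LLM asked for.
    if daily_budget_cents > MAX_DAILY_BUDGET_CENTS:
        raise SpendCapExceeded(
            f"Refusing to set {daily_budget_cents} cents/day "
            f"(cap is {MAX_DAILY_BUDGET_CENTS})."
        )
    if daily_budget_cents <= 0:
        raise ValueError("daily_budget_cents must be positive")
    return update_campaign_budget(campaign_id, daily_budget_cents)
```

Register `safe_update_campaign_budget` (not the raw wrapper) in your tool registry, so there is no code path from the LLM to an uncapped budget change.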

Confirmation for destructive actions

Pause is reversible; delete usually isn't. For destructive operations, return a "needs confirmation" response from the tool and require a second prompt before executing.

Audit log

Log every tool call to a database with: timestamp, prompt that triggered it, tool name, arguments, result. When you eventually need to answer "why did the agent pause this campaign on Tuesday," you'll have the trace.
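A minimal SQLite version might look like this — call it from the agent loop around every `TOOL_REGISTRY` dispatch:

```python
import sqlite3, json
from datetime import datetime, timezone

conn = sqlite3.connect("audit.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tool_calls (
        ts TEXT, prompt TEXT, tool TEXT, args TEXT, result TEXT
    )
""")

def log_tool_call(prompt: str, tool: str, args: dict, result) -> None:
    # One row per tool call: when, why (the triggering prompt), what, and outcome.
    conn.execute(
        "INSERT INTO tool_calls VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), prompt, tool,
         json.dumps(args), str(result)),
    )
    conn.commit()
```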

Rate limiting

Meta's Marketing API has rate limits per ad account. Wrap your client with backoff and queue logic, especially for accounts running many campaigns.
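A simple retry wrapper with exponential backoff and jitter might look like the sketch below. Note that Meta also signals throttling via error codes in the response body and usage headers, which a production version would inspect; this version only looks at HTTP status:

```python
import time, random, requests

def call_with_backoff(fn, *args, max_retries: int = 5, base_delay: float = 1.0, **kwargs):
    # Retry transient failures (throttling / server errors) with
    # exponential backoff plus jitter; re-raise anything else.
    for attempt in range(max_retries):
        try:
            return fn(*args, **kwargs)
        except requests.HTTPError as e:
            status = e.response.status_code if e.response is not None else None
            if status not in (429, 500, 503) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

Wrap each registry function once (`lambda **kw: call_with_backoff(get_campaign_metrics, **kw)`) so the agent loop itself stays oblivious to rate limits.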

From Local to MCP

What we built above is a single-process agent — the tools live in the same Python codebase as the LLM client. To go full MCP, you wrap the same tool implementations in an MCP server, then any MCP-aware client (Claude Desktop, your custom app, another agent) can connect to it.

The MCP server is essentially a JSON-RPC interface over stdio or HTTP. The tool definitions are the same; only the transport changes. Once you have an MCP server running, your agent works in any MCP-aware host without code changes.
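To make the transport concrete, here's a toy dispatch for a single `tools/call` request. This is heavily simplified — a real MCP server also handles the `initialize` handshake and `tools/list`, and wraps results in content blocks, so in practice you'd build on the official `mcp` Python SDK rather than hand-rolling this. The stub registry entry stands in for the Step 2 implementations:

```python
import json

# Stub registry — in the real server these are the Step 2 implementations.
TOOL_REGISTRY = {
    "get_campaign_metrics": lambda campaign_id, date_preset="last_7d": {"spend": 0.0},
}

def handle_request(raw: str) -> str:
    # One JSON-RPC 2.0 request in, one JSON-RPC 2.0 response out.
    req = json.loads(raw)
    if req.get("method") == "tools/call":
        params = req["params"]
        result = TOOL_REGISTRY[params["name"]](**params.get("arguments", {}))
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "error": {"code": -32601, "message": "Method not found"}})
```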

Skip the Boilerplate

Everything in this walkthrough is what Ads Agents handles for you out of the box: the Meta Marketing API wrapper, MCP server, guardrails, audit logging, and CAPI integration. Our REST API exposes the same surface so you can plug it into your own agent or LLM stack without writing the integration layer from scratch.

If you're building this yourself, the walkthrough above will get you to a working prototype in an afternoon. The hard part isn't the agent — it's the production hardening.

Ready to automate your ads?

Let AI manage your Facebook & Instagram campaigns. Start free, upgrade when you're ready.

Get Started Free →