AI Marketing Framework: From Pile of Parts to Working System

A working thesis for building AI marketing systems in the agentic age.

Last updated 21 January 2026

In Could AI Replace An Entire Marketing Team?, I explored whether AI could take over the marketing function. I found that AI can reason like marketers, a capability that has evolved from prompting technique to native architecture to autonomous agents.

But having AI capabilities is like having engine parts. Parts don’t make an engine. You need architecture.

This article presents the architecture that emerged: a framework thesis for building AI marketing systems that actually work. It’s for CMOs, growth leaders, and marketing operators architecting AI-driven workflows, not tool shopping.

I’m actively building parts of this framework at growthsetting.com.

Why Architecture Matters Now

Marketing budgets remain flat at 7.7% of revenue while 59% of CMOs report insufficient budget to execute strategy.

This pressure is forcing hard choices: Gartner’s 2025 CMO Spend Survey found 39% plan to reduce agency spend and 39% plan to reduce labor costs.

The promise of AI is clear. Twenty-two percent of CMOs said GenAI has enabled them to reduce reliance on external agencies.

But here’s the gap: Gartner’s October 2025 research found that 45% of martech leaders say AI agents don’t meet expectations. Half cite lack of technical stack readiness. Half cite talent shortages.

The tools exist. The architecture to connect them does not.

The Coordination Challenge

Most organisations deploy AI in silos. The content team uses ChatGPT for drafts. The analytics team uses AI for reporting. The paid media team uses AI for ad copy.

Each team optimises locally, in isolation. The result: 15 to 20 different AI applications without systematic orchestration.

The 2025 Marketing Technology Landscape now includes 15,384 solutions, with 77% of new additions being AI-native. More tools haven’t solved the problem.

According to McKinsey’s State of AI 2025, only 6% of organisations qualify as “AI high performers” (those attributing 5% or more EBIT impact to AI use). These high performers are more than three times more likely to redesign workflows and pursue transformative change.

The difference isn’t the tools. It’s the operational layer that connects AI marketing outputs.

The Three Foundational Layers

Every effective marketing system requires three universal capabilities, regardless of methodology. I identified these by analysing established frameworks from STP to AARRR to the HubSpot Flywheel. All share common foundations.

  • Layer 1: FOUNDATION (Intelligence). Define → Understand → Position. Source frameworks: STP, Marketing Mix, Porter’s Five Forces.
  • Layer 2: EXECUTION (Automation). Create → Convert → Amplify → Nurture. Source frameworks: AIDA, RACE, Flywheel, ABM, Bullseye.
  • Layer 3: OPTIMISATION (Adaptation). Measure → Listen → Optimise → Grow. Source frameworks: AARRR, PLG Flywheel.
  • A feedback loop runs from Optimisation back to Foundation.
Figure 1: The three foundational layers. Each layer builds on the previous, with optimisation feeding back to foundation.

Each layer builds on the previous. You cannot execute effectively without a foundation. You cannot optimise what you don’t measure.

| Framework | Origin | Primary Layer |
| --- | --- | --- |
| STP (Segmentation, Targeting, Positioning) | Kotler & Armstrong | Foundation |
| 4Ps / 7Ps Marketing Mix | McCarthy / Booms & Bitner | Foundation |
| AIDA (Attention, Interest, Desire, Action) | E. St. Elmo Lewis, 1898 | Execution |
| RACE (Reach, Act, Convert, Engage) | Smart Insights | Execution |
| HubSpot Flywheel | HubSpot | Execution |
| AARRR / Pirate Metrics | Dave McClure, 500 Startups | Optimisation |
| Product-Led Growth Flywheel | ProductLed.org | Optimisation |

The 11 Marketing Engines

I mapped marketing functions across the three layers and identified 11 distinct engines. Each engine represents a core marketing function that can be systematised with AI.

Think of each engine as a container for the atomic and composite jobs-to-be-done I introduced in Part 1.

Foundation Layer: DEFINE, UNDERSTAND, POSITION

Foundation engines set the strategic context that informs everything else.

  • DEFINE establishes brand identity and messaging frameworks.
  • UNDERSTAND builds audience intelligence through behavioural analysis and segmentation.
  • POSITION monitors competitive landscape and manages differentiation, including how your brand appears in AI search results (AIO).

Without Foundation, Execution produces generic output at scale.

Execution Layer: CREATE, CONVERT, AMPLIFY, NURTURE

Execution engines produce and distribute marketing assets.

  • CREATE handles content production across formats.
  • CONVERT builds landing pages and conversion paths.
  • AMPLIFY manages paid media and distribution.
  • NURTURE maintains relationships through email sequences and lifecycle marketing.

These engines benefit most from AI automation because their outputs are measurable and their workflows are repeatable.

Optimisation Layer: MEASURE, LISTEN, OPTIMISE, GROW

Optimisation engines create the feedback loops that make the system intelligent.

  • MEASURE tracks performance and attribution.
  • LISTEN monitors brand mentions and market signals across channels, including AI platforms.
  • OPTIMISE runs tests and interprets results.
  • GROW identifies expansion opportunities.

These engines feed insights back to Foundation, creating compounding improvement over time.

| Layer | Engine | Function | AI-Native Operations |
| --- | --- | --- | --- |
| Foundation | DEFINE | Brand identity, messaging, voice | Voice training, messaging frameworks, style enforcement |
| Foundation | UNDERSTAND | Customer research, ICP development | Behavioural analysis, segment identification, preference mapping |
| Foundation | POSITION | Competitive analysis, differentiation, AIO | Market monitoring, positioning gap analysis, narrative development, AI search optimisation |
| Execution | CREATE | Content production across formats | Brief-to-content workflows, format adaptation, quality assurance |
| Execution | CONVERT | Landing pages, CTAs, conversion paths | Copy variations, form design, A/B test generation |
| Execution | AMPLIFY | Paid media, distribution, reach | Ad creative generation, audience targeting, budget allocation |
| Execution | NURTURE | Email, sequences, lifecycle marketing | Sequence design, personalisation, send-time optimisation |
| Optimisation | MEASURE | Analytics, attribution, reporting | Automated insights, anomaly detection, performance summaries |
| Optimisation | LISTEN | Social listening, brand monitoring, AI search | Sentiment analysis, trend detection, mention categorisation |
| Optimisation | OPTIMISE | Testing, iteration, improvement | Hypothesis generation, test design, result interpretation |
| Optimisation | GROW | Expansion, scaling, new opportunities | Market identification, channel discovery, growth modelling |
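
To make the mapping concrete, here is a minimal sketch of the layer-and-engine model as a data structure. It is illustrative only: the class names and the abbreviated operation lists are mine, not part of any product.

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    FOUNDATION = 1    # intelligence: strategic context
    EXECUTION = 2     # automation: asset production and distribution
    OPTIMISATION = 3  # adaptation: feedback loops

@dataclass
class Engine:
    name: str
    layer: Layer
    operations: tuple[str, ...]  # AI-native operations, abbreviated

ENGINES = (
    Engine("DEFINE", Layer.FOUNDATION, ("voice training", "messaging frameworks")),
    Engine("UNDERSTAND", Layer.FOUNDATION, ("behavioural analysis", "segmentation")),
    Engine("POSITION", Layer.FOUNDATION, ("market monitoring", "AI search optimisation")),
    Engine("CREATE", Layer.EXECUTION, ("brief-to-content workflows", "format adaptation")),
    Engine("CONVERT", Layer.EXECUTION, ("copy variations", "A/B test generation")),
    Engine("AMPLIFY", Layer.EXECUTION, ("ad creative generation", "budget allocation")),
    Engine("NURTURE", Layer.EXECUTION, ("sequence design", "send-time optimisation")),
    Engine("MEASURE", Layer.OPTIMISATION, ("automated insights", "anomaly detection")),
    Engine("LISTEN", Layer.OPTIMISATION, ("sentiment analysis", "trend detection")),
    Engine("OPTIMISE", Layer.OPTIMISATION, ("hypothesis generation", "test design")),
    Engine("GROW", Layer.OPTIMISATION, ("market identification", "growth modelling")),
)

# Group engines by layer, mirroring the table above.
by_layer = {layer: [e.name for e in ENGINES if e.layer is layer] for layer in Layer}
```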

How the System Connects

This isn’t 11 independent engines running in isolation. It’s a coordinated system where each layer informs the others.

The Foundation layer defines strategy and success criteria. The Execution layer produces assets aligned with that strategy. The Optimisation layer analyses performance and market signals. Then it feeds back to Foundation for strategic refinement.

Here’s an example: the LISTEN engine detects a competitor narrative gaining traction. It feeds the POSITION engine, which refines differentiation. That informs the DEFINE engine, which adjusts messaging. This shapes what the CREATE engine produces. The MEASURE engine tracks whether it’s working.
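
As a sketch, that loop can be expressed as a pipeline in which every engine reads from and writes to shared strategic state. The function bodies below are stubs and the field names are mine; the point is the data flow, not the implementations.

```python
# Stubbed sketch of the LISTEN → POSITION → DEFINE → CREATE → MEASURE loop.
# None of these functions are a real API; each stands in for one engine.

def listen(state):
    """LISTEN: detect market signals (stubbed)."""
    return [{"type": "competitor_narrative", "topic": "pricing"}]

def position(state, signals):
    """POSITION: refine differentiation in response to signals (stubbed)."""
    return {"differentiator": "transparent pricing", "evidence": signals}

def define(state):
    """DEFINE: translate positioning into messaging points (stubbed)."""
    return {"headline": state["positioning"]["differentiator"]}

def create(state):
    """CREATE: produce assets aligned with current messaging (stubbed)."""
    return [f"Article anchored on: {state['messaging']['headline']}"]

def measure(assets):
    """MEASURE: track whether the response is working (stubbed)."""
    return {"assets_shipped": len(assets)}

def run_cycle(state):
    """One pass of the loop. Because MEASURE writes back into the shared
    state, the next cycle starts better informed than the last."""
    signals = listen(state)
    state["positioning"] = position(state, signals)
    state["messaging"] = define(state)
    state["metrics"] = measure(create(state))
    return state

state = run_cycle({})
```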

POSITION now has an external dimension that didn’t exist five years ago: AIO (Artificial Intelligence Optimisation). Your brand narrative must be cited correctly not just in human minds but in the training data and RAG systems of external AI models like Perplexity, ChatGPT, and Gemini.

These AI engines are becoming gatekeepers to your customers. LISTEN monitors how your brand appears in AI-generated responses. POSITION ensures you’re shaping that narrative proactively.

This creates a compounding loop where each cycle is more intelligent than the last.

It’s also why the “pile of parts” approach fails. Content generators (CREATE engine) without strategic context from DEFINE and UNDERSTAND engines produce generic output. Analytics dashboards (MEASURE engine) that don’t feed back to POSITION engine waste insights.

The value isn’t the engines themselves. It’s how they connect.

The Autonomy Progression: L1 to L5

Architecture answers what connects to what. It doesn’t answer: how much should AI control at each point?

Should LISTEN run autonomously 24/7 or wait for human prompts? Should CREATE draft content for approval or publish directly? Should AMPLIFY adjust ad spend on its own or flag every change?

These aren’t binary questions. They’re a spectrum. Different engines belong at different points on that spectrum.

| Level | Name | Description | Human Role |
| --- | --- | --- | --- |
| L1 | Prompt Assistant | Single prompts, full human review | Creator |
| L2 | Workflow Automation | Chained prompts with logic | Reviewer |
| L3 | Supervised Autonomy | AI executes, human approves key decisions | Approver |
| L4 | Guided Autonomy | AI proposes and executes within guardrails | Monitor |
| L5 | Goal-Based Orchestration | AI determines strategy from objectives | Director |
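
One way to make this spectrum operational is to encode each engine’s current level as an approval gate. A minimal sketch, assuming a simple synchronous workflow; the level assignments and the gating rule are illustrative, not prescriptive.

```python
# Illustrative approval gate: actions run freely only when the engine's
# autonomy level permits it; otherwise a human is pulled into the loop.

AUTONOMY = {"CREATE": 3, "AMPLIFY": 3, "DEFINE": 2}  # current level per engine

def execute(engine, action, *, key_decision=False):
    """Run an engine action, pausing for human sign-off when required.

    L1/L2: human reviews everything. L3: human approves key decisions.
    L4+: AI acts within guardrails (not modelled in this sketch).
    """
    level = AUTONOMY.get(engine, 1)
    needs_approval = level <= 2 or (level == 3 and key_decision)
    if needs_approval and input(f"[{engine} @ L{level}] approve? (y/n) ") != "y":
        return None  # rejected: do not execute
    return action()

# Usage: a key CREATE decision at L3 requires explicit approval.
execute("CREATE", lambda: "publish article", key_decision=True)
```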

Feasibility by Engine

Not all engines should operate at the same autonomy level.

Foundation engines (DEFINE, POSITION) and the expansion engine (GROW) cap at L4: strategic judgment remains human. UNDERSTAND is the exception. It reaches L3 today through tools like synthetic user panels and AI-powered behavioural analysis that execute research autonomously, with humans approving methodology and insights.

Execution and Optimisation engines can reach L5 because they’re more tactical, measurable, and bounded.

| Engine | Achievable Today | Emerging (Agentic) | Future |
| --- | --- | --- | --- |
| DEFINE | L1 to L2 | L3 | L4 |
| UNDERSTAND | L1 to L3 | L4 | L5 |
| POSITION | L1 to L2 | L3 | L4 |
| CREATE | L1 to L3 | L4 | L5 |
| CONVERT | L1 to L3 | L4 | L5 |
| AMPLIFY | L2 to L3 | L4 | L5 |
| NURTURE | L2 to L3 | L4 | L5 |
| MEASURE | L2 to L3 | L4 | L5 |
| LISTEN | L2 to L3 | L4 | L5 |
| OPTIMISE | L2 to L3 | L4 | L5 |
| GROW | L1 to L2 | L3 | L4 to L5 |

Autonomy in Practice: 2025 Evidence

These autonomy levels aren’t theoretical. They’re validated by current platform capabilities.

L3 (Supervised Autonomy): Google’s AI Max represents the clearest example. The system manages bidding, targeting, and ad creation within a unified campaign structure.

But it includes steering controls: campaign-level negative keywords, brand exclusions, and search themes. The AI drives, but humans can grab the wheel.

Meta’s Advantage+ operates similarly with full automation and minimal overrides, guided by an Opportunity Score that gamifies human alignment with machine recommendations.

L3 for individual operators: The enterprise examples above require significant investment. But L3 is achievable today for solo operators and small teams.

I test-built a CREATE engine that operates at L3 autonomy. It’s a self-contained, LLM-agnostic system that takes brand guidelines, design tokens, and ICP profiles as inputs. It switches between content templates (listicles, how-to guides, comparisons) and outputs production-ready HTML with schema markup and FAQ sections built in.

No external automation tools. No complex integrations. One operator, one LLM, one systematic workflow.

The CREATE engine in action: from brand guidelines to production-ready HTML
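
My build isn’t reproduced here, but the pattern is compact enough to sketch. Assume a placeholder llm() wired to whichever chat-completion provider you use; the template text and approval gate below are illustrative, not my production code.

```python
# Sketch of the L3 CREATE pattern: strategic context in, reviewed HTML out.

TEMPLATES = {
    "how_to": "Write a how-to guide on {topic}.",
    "listicle": "Write a listicle on {topic}.",
    "comparison": "Write a comparison of {topic}.",
}

OUTPUT_SPEC = "Output semantic HTML with FAQPage schema markup and an FAQ section."

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to any chat-completion provider")

def create_article(topic, template, brand_guidelines, icp_profile):
    """Brief-to-content workflow. Injecting brand and ICP context is what
    keeps the output from being generic at scale; the input() gate is the
    L3 step where a human approves before anything ships."""
    prompt = (
        f"Brand guidelines:\n{brand_guidelines}\n\n"
        f"Audience (ICP):\n{icp_profile}\n\n"
        f"{TEMPLATES[template].format(topic=topic)} {OUTPUT_SPEC}"
    )
    while True:
        draft = llm(prompt)
        print(draft)
        if input("Publish? (y/n) ") == "y":
            return draft  # approved
        # rejected: regenerate (a real build would feed reviewer notes back in)
```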

L4 (Guided Autonomy): Braze’s acquisition of OfferFit for $325 million in 2025 signals the arrival of true L4 systems.

OfferFit uses reinforcement learning to autonomously experiment with message, timing, and channel. Unlike A/B testing, which requires humans to generate hypotheses, these agents design their own experiments and learn continuously.

You give it a KPI like “maximise renewal rate” and a set of allowed actions. It iterates toward that goal independently, personalising over 100 characteristics simultaneously.
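
OfferFit’s internals aren’t public, but the core idea can be shown with a toy epsilon-greedy bandit: you define the KPI (renewal) and the allowed actions (channel and send time), and the policy learns which combination wins. A deliberately simplified sketch:

```python
import random

# Toy epsilon-greedy bandit, not OfferFit's implementation: the operator
# supplies the allowed actions and the KPI; the policy does the learning.

ACTIONS = [("email", "morning"), ("email", "evening"), ("sms", "morning")]
stats = {a: {"trials": 0, "renewals": 0} for a in ACTIONS}

def choose_action(epsilon=0.1):
    """Mostly exploit the best-known action; sometimes explore."""
    untried = [a for a in ACTIONS if stats[a]["trials"] == 0]
    if untried or random.random() < epsilon:
        return random.choice(untried or ACTIONS)
    return max(ACTIONS, key=lambda a: stats[a]["renewals"] / stats[a]["trials"])

def record_outcome(action, renewed):
    """Feed the KPI ('maximise renewal rate') back into the policy."""
    stats[action]["trials"] += 1
    stats[action]["renewals"] += int(renewed)

# Each customer touch: pick an action, observe the outcome, learn.
a = choose_action()
record_outcome(a, renewed=True)
```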

L5 (Goal-Based Orchestration): This level remains largely theoretical for enterprise marketing in 2025. While visions of universal agentic orchestration exist, the fragmentation of the martech landscape (15,384 solutions) creates an integration barrier.

Agents cannot yet communicate seamlessly across the entire stack. L5 is best characterised as a 5-to-10-year horizon dependent on API standardisation and agent protocol development, not an imminent reality.

The Agentic Divide

The framework describes an optimal path. Reality is more uneven. BCG’s Build for the Future 2025 report identifies a sharp divide in AI maturity.

The Vanguard (5%): These “future-built” organisations treat AI agents as their operating system, not as tools. They’ve redesigned end-to-end workflows so agents own outcomes.

They generate 1.7 times more revenue growth and 1.6 times higher EBIT margins than laggards. For them, this framework describes current operations.

The Experimenters (35%): These organisations are scaling AI but admit they’re not moving fast enough. Most have AI in pilot programmes but haven’t achieved systematic deployment.

The Majority (60%): These organisations report minimal gains and lack the foundational capabilities for scaling. Fragmented pilots, weak data infrastructure, and insufficient governance block progress.

The implication: jumping from L1 to L4 isn’t a technology purchase. It’s an organisational transformation.

Companies need what might be called a “Level 0.5” prerequisite phase covering data hygiene, workflow mapping, and governance frameworks before the higher autonomy levels become achievable.

Data Readiness as a Gate

Gartner’s 2025 AI Hype Cycle identifies AI-ready data as one of the two fastest-advancing technologies, sitting at the Peak of Inflated Expectations.

The reason: 57% of organisations estimate their data is not AI-ready.

This creates a hard dependency. L4 autonomy is technically possible but operationally blocked for most companies.

Through 2026, Gartner predicts organisations will abandon 60% of AI projects unsupported by AI-ready data. The framework should be treated as data-dependent at L3 and above.

Enterprise vs SME: Different Frameworks

The framework operates differently by company size. S&P Global’s 2025 research on AI workforce impact found a significant divergence:

Large Enterprises (-4% net staffing balance): For enterprises, this framework is a replacement model. AI consolidates roles and drives efficiency. Workforce reductions are expected, particularly in the US, Germany, and France.

The greater automation risk reflects the fact that enterprise roles are typically more specialised and therefore easier to automate.

SMEs (+7% to +11% net staffing balance): For smaller companies, this framework is an enablement model. AI lowers the barrier to sophisticated marketing.

One operator can wield capabilities that previously required a 50-person department. SMEs are hiring more humans to manage newfound AI capabilities, not fewer.

This distinction matters for positioning and implementation. The same framework serves different strategic purposes depending on organisational context.

The Framework in Action

Here are three examples showing how layers, engines, and autonomy levels work together for different marketing scenarios.

Each example uses the Jobs To Be Done (JTBD) framework from Part 1: Composite JTBDs are complex workflows; Atomic JTBDs are the discrete tasks within them.

Example 1: Product Launch

Composite JTBD: “Execute product launch campaign”

DEFINE (Foundation) → CREATE (Execution) → AMPLIFY (Execution) → MEASURE (Optimisation)

Linear flow: Foundation sets strategy, Execution produces and distributes, Optimisation tracks results.

| Layer | Engine | Atomic JTBD | Autonomy |
| --- | --- | --- | --- |
| Foundation | DEFINE | Write launch messaging | L2 |
| Execution | CREATE | Produce launch video | L2 to L3 |
| Execution | AMPLIFY | Generate ad variations | L3 |
| Optimisation | MEASURE | Set up tracking dashboard | L2 |

Example 2: Competitive Response

Composite JTBD: “Respond to competitor positioning shift”

LISTEN (Optimisation) → POSITION (Foundation) → DEFINE (Foundation) → CREATE (Execution)

Feedback loop: Optimisation detects the signal, Foundation refines strategy, Execution responds.

| Layer | Engine | Atomic JTBD | Autonomy |
| --- | --- | --- | --- |
| Optimisation | LISTEN | Detect competitor narrative change | L3 |
| Foundation | POSITION | Update differentiation framework | L2 |
| Foundation | DEFINE | Adjust messaging points | L2 |
| Execution | CREATE | Produce updated content | L2 to L3 |

Example 3: Lead Nurture Campaign

Composite JTBD: “Convert MQLs to SQLs through nurture sequence”

UNDERSTAND (Foundation) → NURTURE (Execution) → CONVERT (Execution) → OPTIMISE (Optimisation)

Execution-heavy: Foundation segments, Execution nurtures and converts, Optimisation refines continuously.

| Layer | Engine | Atomic JTBD | Autonomy |
| --- | --- | --- | --- |
| Foundation | UNDERSTAND | Segment leads by behaviour | L2 to L3 |
| Execution | NURTURE | Design email sequence | L2 to L3 |
| Execution | CONVERT | Optimise landing page copy | L2 to L3 |
| Optimisation | OPTIMISE | Test subject lines and timing | L2 |

Each example shows a different pattern.

  • Product Launch flows linearly from Foundation through Execution to Optimisation.
  • Competitive Response creates a feedback loop starting in Optimisation and cycling back through Foundation.
  • Lead Nurture is execution-heavy with continuous optimisation.
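
In data terms, each composite JTBD is just an ordered decomposition into atomic jobs, each tagged with its engine, layer, and target autonomy. Here is Example 2 sketched that way; the structure and field names are illustrative.

```python
# Example 2 (competitive response) as a decomposed composite JTBD.
competitive_response = {
    "composite_jtbd": "Respond to competitor positioning shift",
    "atomic_jobs": [
        ("Optimisation", "LISTEN", "Detect competitor narrative change", "L3"),
        ("Foundation", "POSITION", "Update differentiation framework", "L2"),
        ("Foundation", "DEFINE", "Adjust messaging points", "L2"),
        ("Execution", "CREATE", "Produce updated content", "L2 to L3"),
    ],
}

# Executing the composite means running its atomic jobs in order,
# each at its own autonomy level.
for layer, engine, job, autonomy in competitive_response["atomic_jobs"]:
    print(f"{engine:>8} [{layer}, {autonomy}]: {job}")
```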

The Business Case

Can this framework deliver ROI? The data suggests yes.

McKinsey’s State of AI 2025 found that AI high performers (the 6% seeing 5% or more EBIT impact) are more than three times more likely to have senior leaders owning AI initiatives. They’re also more advanced in redesigning workflows around AI capabilities.

BCG’s research shows effective AI agents can accelerate business processes by 30% to 50%. Their guidance: concentrate roughly 80% of effort on end-to-end workflow redesign rather than deploying AI broadly.

McKinsey’s agentic AI analysis reinforces the architecture imperative: “Agents enhance value creation when used to improve end-to-end processes and journeys through automation and coordination. Their power is limited when used to improve isolated steps.”

This is exactly why this framework matters. But you cannot get there by bolting agents onto legacy processes. You need architecture and orchestration.

The Reality Check

These gains aren’t automatic. Gartner’s October 2025 research found 45% of AI agent implementations don’t meet expectations. Half cite technical stack readiness. Half cite talent gaps.

The talent gap demands a new profile: the Pi-Shaped Marketer. The T-shaped generalist (broad knowledge, one deep specialty) is no longer sufficient.

Operators need two vertical depths: deep domain expertise in marketing and deep technical fluency in AI systems. Without both, you’re either building the wrong things or unable to build at all.

The difference between the 6% of high performers and the rest? It’s not the tools. It’s the architecture that connects them and the talent capable of orchestrating it.

Getting Started

Four principles that distinguish high-performing AI implementations from the rest.

1. Foundation Before Automation.

Don’t automate execution without solid foundation engines. McKinsey’s research shows organisations creating enterprise-level value are more likely to have implemented data strategies before attempting AI deployment.

AI content without AI-informed audience understanding produces generic output at scale.

2. Start Focused, Prove Value, Then Scale.

Begin with one or two engines at L2. Demonstrate ROI. Expand once trust is established.

BCG’s AI Radar survey found the quarter of executives who created significant value did so by focusing on a small set of AI initiatives and scaling them swiftly. The trap: trying to implement all 11 engines at once.

3. Design for Agents, Not Around Them.

McKinsey’s agentic AI research is direct: “Success calls for designing processes around agents, not bolting agents onto legacy processes.”

If you’re planning for L4 to L5, build architecture now with agent-native workflows in mind.

Beware the Integration Tax: an Operator managing 50 disconnected indie agents is less efficient than a human team. Prioritise orchestration platforms that share a universal data layer over isolated best-in-class tools.

4. Measure Capability, Not Just Activity.

McKinsey found high performers are distinguished by tracking KPIs for AI solutions and embedding AI into business processes.

Track autonomy progression by engine. “We’re at L3 for CREATE but L1 for POSITION” is useful. “We use AI” is not.
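
A capability scorecard can be as small as a per-engine map of current versus target autonomy. A minimal sketch; the numbers are illustrative.

```python
# Autonomy progression as a KPI: per-engine current vs target level.
current = {"CREATE": 3, "POSITION": 1, "MEASURE": 2}
target = {"CREATE": 4, "POSITION": 2, "MEASURE": 3}

for engine, level in current.items():
    print(f"{engine}: L{level} -> target L{target[engine]} (gap {target[engine] - level})")
```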

The Opportunity

The 2025 data tells a clear story.

There’s 88% adoption yet only 49% utilisation of martech capabilities according to Gartner. The martech landscape has 15,384 solutions with 77% being AI-native. Yet 45% of AI agents don’t meet expectations and only 6% of organisations are AI high performers.

The gap between AI’s promise and delivery is an architectural problem. The 6% who are winning have figured this out. The 94% who are struggling are still accumulating parts.

The question isn’t whether AI will transform marketing. It’s whether you’ll architect that transformation intentionally or stumble into another pile of parts.

Key Concepts

Each term links to a dedicated definition page with full context, symptoms, and solutions. See the complete AI Marketing Definitions glossary.

| Term | Definition |
| --- | --- |
| AI Marketing Framework | An architecture for building AI marketing systems that organises marketing functions into 11 engines across 3 layers (Foundation, Execution, Optimisation) with defined autonomy levels. |
| Foundation Layer | The strategic base of the framework containing DEFINE, UNDERSTAND, and POSITION engines. Sets strategy, audience intelligence, and market positioning that inform all execution. |
| Execution Layer | The operational core containing CREATE, CONVERT, AMPLIFY, and NURTURE engines. Produces assets, captures demand, distributes content, and maintains relationships. |
| Optimisation Layer | The feedback loop containing MEASURE, LISTEN, OPTIMISE, and GROW engines. Tracks performance, monitors signals, refines execution, and identifies expansion opportunities. |
| L1 to L5 Autonomy | A maturity model for AI marketing systems. L1 is Prompt Assistant. L2 is Workflow Automation. L3 is Supervised Autonomy. L4 is Guided Autonomy. L5 is Goal-Based Orchestration. |
| Agentic Divide | The gap between AI high performers (5% of organisations) and the majority (60%) who report minimal gains. High performers redesign workflows around agents rather than bolting AI onto legacy processes. |
| AI-Ready Data | Data that meets the requirements for AI systems to function effectively. 57% of organisations estimate their data is not AI-ready, creating a hard dependency for L3+ autonomy. |
| Operator Function | The strategic orchestration role that connects atomic jobs into coherent workflows. Determines how AI agents communicate, what they’re allowed to do, and how outputs connect to business outcomes. |
| Goal-Based Orchestration | L5 autonomy where AI determines strategy from objectives. Example: “Increase MQLs 20%” and AI selects channels, content, and timing autonomously. |
| Pile of Parts Problem | The disconnect between AI/martech adoption and strategic integration. Accumulating tools without architecture that connects them to measurable marketing outcomes. |
| AIO (Artificial Intelligence Optimisation) | Ensuring your brand narrative is cited correctly in the training data and RAG systems of external AI models like Perplexity, ChatGPT, and Gemini. AIO is to AI search what SEO is to traditional search. |
| Integration Tax | The hidden cost of managing disconnected AI tools. An Operator managing 50 indie agents without a universal data layer spends more time on integration than execution, negating productivity gains. |
| Pi-Shaped Marketer | The talent profile required for AI marketing systems. Unlike T-shaped generalists (broad knowledge, one deep specialty), Pi-shaped marketers have two vertical depths: deep domain expertise in marketing and deep technical fluency in AI systems. |

FAQ

What is the AI Marketing Framework?

The AI Marketing Framework is an architecture for building AI marketing systems. It organises marketing functions into 11 engines across 3 layers: Foundation (Define, Understand, Position), Execution (Create, Convert, Amplify, Nurture), and Optimisation (Measure, Listen, Optimise, Grow). Each engine can operate at different autonomy levels, from L1 (Prompt Assistant) to L5 (Goal-Based Orchestration).

Why do most AI marketing implementations fail?

According to Gartner, 45% of martech leaders say AI agents fail to meet expectations. Half cite lack of technical stack readiness. Half cite talent shortages. The problem is not the tools but the missing orchestration layer that connects AI marketing outputs. Only 6% of organisations qualify as AI high performers according to McKinsey.

What are the 5 autonomy levels for AI marketing?

L1 is Prompt Assistant where humans create and review everything. L2 is Workflow Automation with chained prompts. L3 is Supervised Autonomy where AI executes and humans approve key decisions. L4 is Guided Autonomy where AI operates within guardrails. L5 is Goal-Based Orchestration where AI determines strategy from objectives.

Which marketing functions can reach full AI autonomy?

Execution engines like Create, Convert, Amplify, and Nurture can reach L5 autonomy because they are tactical, measurable, and bounded. Foundation engines like Define and Position cap at L4 because strategic judgment remains human. Similarly, the Grow engine requires human oversight for expansion decisions.

How much can AI marketing systems reduce costs?

BCG research shows effective AI agents can accelerate business processes by 30% to 50%. McKinsey projects productivity improvements of 3% to 5% annually with potential growth lift of 10% or more. Marketing teams using AI report 44% higher productivity and save an average of 11 hours per week.

What is the difference between AI high performers and everyone else?

McKinsey defines AI high performers as organisations attributing 5% or more EBIT impact to AI use. They represent only 6% of organisations. These high performers are 3.6 times more likely to target enterprise-wide transformation, and most are redesigning workflows rather than bolting AI onto legacy processes.

P.S.

I’m a full-stack marketer. Hands-on with AI. I build and orchestrate marketing systems that drive results.

I’m actively building the remaining engines at growthsetting.com together with Maciej Wisniewski.

Now exploring marketing roles, leadership or hands-on. Let’s talk.

Contact · LinkedIn