Take-Home Exercise · Head of Demand

An inbound strategy for
Sweet Security.

A 90-day, $120K plan built for how cyber buyers actually buy in 2026, not the 2019 demand-gen playbook.

Dana Ephrat · Head of Demand candidate
The thesis

Security demand isn't a
lead-gen problem.

It's a

01 trust
+
02 timing
+
03 pain recognition

problem.

Which is why a 90-day inbound plan isn't a channel list. It's a sequence designed to take the buyer from passive awareness to technical curiosity to operational relevance to demo-worthy urgency.

Buyers don't search to buy. They search to solve a pain.

Key insight

6 to 20 touches across peer, technical, and analyst surfaces before a demo feels rational. Gated PDFs don't earn that. Seeing yourself in the problem does.

01

Trigger

K8s migration, alert fatigue, a board question

02

Self-diagnosis

"What are we missing?" Sizing a blind spot

03

Peer research

Communities, podcasts, attack walkthroughs

04

Problem search

"How to detect runtime threats in K8s"

05

Demo

Only after "this is us"

Three motions, not one.

01 · Create

Demand Creation

Plant pain recognition before "category" exists in the buyer's head. Attack walkthroughs, runtime blind-spot framing, AI-workload risk narratives.

Where: LinkedIn, YouTube, podcasts, Reddit, technical editorial
02 · Identify

Demand Identification

Find accounts whose buying committee is already in motion. Through signals, not forms.

Where: Common Room, intent providers, multi-stakeholder LinkedIn engagement
03 · Capture

Demand Capture

Be the obvious answer when intent surfaces. Mid-funnel: problem-led queries, comparison pages. Bottom-funnel: branded terms, retargeting.

Where: Google Search (MOFU + branded), retargeting, comparison pages
The unlock for Sweet

The product itself can be the demand engine.

Adding hands-on experiences to the site creates the "this is us" moment that earns a demo. For example: a runtime exposure check, an attack simulation walkthrough, a posture benchmark against industry peers, or a sandbox environment where buyers explore the product on their own terms.

Four moves that make the spend work.

01

Hands-on experiences on site

Interactive product moments: runtime exposure check, attack simulation, posture benchmark. The buyer sees their own risk, not a brochure.

02

Talk to a Cloud Security Expert

A mid-funnel CTA that is explicitly not sales. A practitioner conversation about the buyer's architecture and pain. Builds trust, surfaces real problems, lowers the cost of the eventual demo.

03

Podcasts & webinars as owned trust

Appear on practitioner podcasts, run quarterly technical webinars with researchers. Repeated exposure across trusted surfaces compounds credibility.

04

Common Room as the signal layer

Detect buying committees forming before they raise a hand. GitHub activity, community presence, peer conversations. Which accounts are in motion right now?

And that's Sweet's sweet spot: four moves that make three motions work as one system, driven by product experience.

Build the audience in M1. Harvest it in M3.

Start TOF-heavy to warm the pool, then shift spend to MOF + BOF as retargeting audiences build. By month 3, over half the budget targets mid + bottom funnel, where pipeline converts.

Month 1 · $33K · Audience build
TOF · LinkedIn · $11K
TOF · Technical distribution · $8K
TOF · YouTube + Reddit · $5K
MOF · LinkedIn + Google · $2K
BOF · LinkedIn + Google · $2K
ABM · Influ2 · $5K

Month 2 · $42K · Pool warms
TOF · LinkedIn · $9K
TOF · Technical distribution · $8K
TOF · YouTube + Reddit · $4K
MOF · LinkedIn + Google · $9K
BOF · LinkedIn + Google · $7K
ABM · Influ2 · $5K

Month 3 · $45K · Harvest
TOF · LinkedIn + YouTube + Reddit · $8K
MOF · LinkedIn + Google · $16K
BOF · LinkedIn + Google · $16K
ABM · Influ2 · $5K
Total spend by funnel stage · $120K
TOF $53K · 44%
MOF $27K · 23%
BOF $25K · 21%
ABM $15K · 12%

$800K pipeline. ~$200K projected ARR.

High-intent sessions · MOF + BOF traffic across 3 months
High-intent leads · 0.45% session → lead
MQLs · 35% lead → MQL
SQLs · 8 · 35% MQL → SQL
Cumulative pipeline · $800K · $100K ACV, 8 SQLs in motion
Projected closed-won ARR · ~$200K · 20% win rate, includes lagged Q2 conversion

Monthly ramp

Month · SQLs · New pipeline · Cumulative
M1 · 1 · $100K · $100K
M2 · 3 · $300K · $400K
M3 · 4 · $400K · $800K

Key assumptions

Session → Lead · 0.45%
Lead → MQL · 35%
MQL → SQL · 35%
Win rate · 20%
ACV · $100K

Revenue lags pipeline by 3 to 9 months. The ~$200K includes Q1 SQLs that close in Q2/Q3. Q1 success = qualified pipeline that progresses, not deals that close.
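The model above is simple multiplication of the stated rates. A quick sketch of the arithmetic, using only the assumptions in the table (variable names are mine; the deck's ~$200K ARR figure adds lagged Q2 closes on top of the in-quarter 20% win rate shown here):

```python
# Funnel math from the plan's stated assumptions.
SESSION_TO_LEAD = 0.0045   # 0.45% session → lead
LEAD_TO_MQL = 0.35         # 35% lead → MQL
MQL_TO_SQL = 0.35          # 35% MQL → SQL
WIN_RATE = 0.20            # 20% SQL win rate
ACV = 100_000              # $100K average contract value
TARGET_SQLS = 8            # Q1 SQL target

# Compound conversion from a high-intent session to an SQL.
session_to_sql = SESSION_TO_LEAD * LEAD_TO_MQL * MQL_TO_SQL

# Work backward: how much MOF + BOF traffic must the 90 days produce?
sessions_needed = TARGET_SQLS / session_to_sql
pipeline = TARGET_SQLS * ACV
in_quarter_arr = pipeline * WIN_RATE  # before lagged Q2/Q3 closes

print(f"High-intent sessions needed: {sessions_needed:,.0f}")  # → 14,512
print(f"Pipeline: ${pipeline:,.0f}")                           # → $800,000
print(f"ARR at 20% win rate: ${in_quarter_arr:,.0f}")          # → $160,000
```

The backward calculation is the useful check: roughly 14.5K high-intent sessions over three months is the traffic bar the MOF + BOF spend has to clear for the 8-SQL target to be realistic.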

What proves this is working, and by when.

Weeks 1 to 4

Early validation

  • High-intent traffic on product + architecture pages
  • Engagement from ICP accounts (not raw volume)
  • Technical content engagement depth
  • Growth in branded search volume
Weeks 5 to 12

Evaluation behavior

  • Demo requests from ICP accounts
  • MQL → SQL conversion rate growth
  • Higher conversion from interactive experiences
  • Multiple stakeholders from same account

I don't shut campaigns down based on short-term conversion metrics alone. In cybersecurity, intent formation takes time. The order is always scale → optimize → kill, never the reverse.

Scale

Double down on what's clearly working.

  • SQLs and qualified demo requests from ICP
  • Fast movement: awareness → evaluation
  • Higher demo conversion from retargeting + product experiences
  • Shorter time-to-demo from target accounts
Optimize

Before killing anything, fix it first.

  • Refine targeting and ICP filters
  • Iterate creative and landing experiences
  • Reroute mid-funnel to interactive CTAs
Kill

Only after optimization has had a fair shot.

  • Low ICP-match traffic, persistently
  • Demos without technical urgency
  • High CPL with no downstream progression
  • No evaluation behavior in 60 to 90 days

Account progression, not last click.

sweet.demand · pipeline progression
Q1 · By channel · By account
SQLs (ICP) · 8 · ▲ on plan
Pipeline · $800K · ▲ +$400K in M3
MQL → SQL · 35% · ▲ vs. baseline
Engaged accounts · 42 · ▲ 3+ touches
Top accounts by progression
Account · Stage · Touches · First / last touch · Pipeline
Stratus Cloud · Opp · 14 · LinkedIn / Google · $120K
Helix AI · SQL · 9 · Podcast / LinkedIn · $100K
NorthSky Data · SQL · 11 · LinkedIn / ABM · $100K
Pipeline by month · chart: SQLs vs. cumulative pipeline
✓ Signals I trust
  • SQL creation from ICP accounts
  • Multi-stakeholder engagement within target accounts
  • Repeat high-intent behavior across sessions & channels
  • Pipeline influenced by inbound touchpoints
  • Opportunity creation & pipeline growth
✕ Signals I distrust
  • Raw lead volume
  • MQL counts without downstream progression
  • Vanity metrics (impressions, clicks) without pipeline impact
  • Single-touch attribution models

Two defaults to deliberately skip.

Skip

Non-intent scale channels as primary engine

Meta and TikTok aren't where security buying committees research. The buyer signal isn't there. Spending here optimizes for the wrong audience.

Skip

Pushing cold audiences straight to "Book a demo"

A demo pulls in architects, SOC leads, procurement. Asking before trust exists positions the brand as aggressive. Lower-commitment, higher-credibility entry points first.

Instead: surface the real conversation

"Talk to a Cloud Security Expert" as a mid-funnel CTA.

A practitioner conversation, not sales. A real cyber expert listens to the buyer's architecture and pain, brainstorms possible solutions, including ones that aren't Sweet.

It builds trust and credibility, and dramatically lowers the activation energy for the eventual demo.

Three risks. Three assumptions. Three week-1 tests.

Top 3 risks
01

Sales cycle outruns the quarter

Enterprise cyber cycles can exceed 3 months, limiting visible SQL-to-close conversion in Q1.

02

Smaller-than-expected in-market pool

Active runtime security buyers may be a narrow slice. Pipeline ceiling becomes a market-size problem.

03

Weak retargeting intent depth

Remarketing audiences may not reach evaluation-stage intent fast enough for M3 BOF efficiency.

Assumptions + week-1 tests
A1

Active search demand exists for runtime security

If search volume is too low, the whole capture layer fails.

Test: Google search volume analysis + small paid test. Does real market pull exist?
A2

Interactive experiences beat static content

If not, the product-led motion collapses.

Test: A/B of static CTA vs. interactive product experience. Measure engagement + demo progression.
A3

SQL conversion is truly multi-touch

If single-touch dominates, the attribution model needs rethinking.

Test: HubSpot + SFDC journey analysis. Compare single- vs. multi-touch SQL paths.
A short personal note

What was hardest,
and what I'd want to know in week 1.

"

The hardest part of the brief was balancing short-term pipeline expectations with the reality of how enterprise cybersecurity buyers actually buy. Building a traditional SaaS media plan around lead volume is easy. Building one around trust formation, technical validation, and long evaluation cycles, inside a 3-month window, is not.

So I spent most of the time thinking less about channels and more about buyer behavior: where runtime security demand actually starts, what creates enough urgency for a demo, and how to prioritize signals of real evaluation intent rather than passive engagement.

In week 1, before scaling spend, I'd want to understand:

  • Where inbound currently breaks down
  • Which channels historically influenced real opportunities
  • How technical buyers engage before becoming SQLs
  • Whether the main gap is awareness, trust, or conversion efficiency

That context turns a generic B2B demand playbook into a plan that optimizes around how Sweet's buyers actually buy.

Thank you.

Excited to dig in on the questions. The assumptions are where this gets interesting.

Dana Ephrat · Head of Demand candidate