A 90-day, $120K plan built for how cyber buyers actually buy in 2026, not the 2019 demand-gen playbook.
It's a problem.
Which is why a 90-day inbound plan isn't a channel list. It's a sequence designed to take the buyer from passive awareness to technical curiosity to operational relevance to demo-worthy urgency.
It takes 6 to 20 touches across peer, technical, and analyst surfaces before a demo feels rational. Gated PDFs don't earn that. Seeing yourself in the problem does.
The trigger: a K8s migration, alert fatigue, a board question.
The buyer's question: "What are we missing?" Sizing a blind spot.
Where they research: communities, podcasts, attack walkthroughs.
What they search: "How to detect runtime threats in K8s."
When a demo ask lands: only after "this is us."
Plant pain recognition before "category" exists in the buyer's head. Attack walkthroughs, runtime blind-spot framing, AI-workload risk narratives.
Find accounts whose buying committee is already in motion. Through signals, not forms.
Be the obvious answer when intent surfaces. Mid-funnel: problem-led queries, comparison pages. Bottom-funnel: branded terms, retargeting.
Adding hands-on experiences to the site creates the "this is us" moment that earns a demo. For example: a runtime exposure check, an attack simulation walkthrough, a posture benchmark against industry peers, or a sandbox environment where buyers explore the product on their own terms. The buyer sees their own risk, not a brochure.
A mid-funnel CTA that is explicitly not sales. A practitioner conversation about the buyer's architecture and pain. Builds trust, surfaces real problems, lowers the cost of the eventual demo.
Appear on practitioner podcasts, run quarterly technical webinars with researchers. Repeated exposure across trusted surfaces compounds credibility.
Detect buying committees forming before they raise a hand. GitHub activity, community presence, peer conversations. Which accounts are in motion right now?
And that's Sweet's sweet spot: four moves that make three motions work as one system, driven by product experience.
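The signal-detection move above can be sketched as a simple per-account roll-up. Everything in this sketch — the signal names, the weights, and the example accounts — is a hypothetical illustration; the plan itself doesn't specify a scoring model.

```python
# Hypothetical intent-signal roll-up: which accounts look "in motion"?
# Signal names and weights are illustrative assumptions, not part of the plan.

WEIGHTS = {
    "github_activity": 3,      # e.g. starring/forking runtime-security repos
    "community_presence": 2,   # practitioner Slack/Discord/forum threads
    "peer_conversations": 2,   # review sites, analyst inquiries
}

def in_motion_score(signals: dict) -> int:
    """Weighted sum of observed signal counts for one account."""
    return sum(WEIGHTS.get(name, 0) * count for name, count in signals.items())

# Hypothetical accounts with observed signal counts.
accounts = {
    "acme-fintech": {"github_activity": 2, "community_presence": 1},
    "globex-retail": {"peer_conversations": 1},
}

# Rank accounts by score, highest first: these get MOF/BOF attention now.
ranked = sorted(accounts, key=lambda a: in_motion_score(accounts[a]), reverse=True)
```

The point of the sketch is the prioritization logic, not the weights: signals replace form fills as the trigger for spend.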
Start TOF-heavy to warm the pool, then shift spend to MOF + BOF as retargeting audiences build. By month 3, over half the budget targets mid + bottom funnel, where pipeline converts.
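As a back-of-the-envelope check on that phasing, here's a sketch with assumed monthly splits. The only constraints taken from the plan are the $120K total, the TOF-heavy start, and MOF + BOF exceeding half of spend by month 3; the specific percentages are mine.

```python
# Illustrative budget phasing for a $120K / 90-day plan.
# Monthly splits are assumed for illustration, not figures from the plan.

BUDGET = 120_000

# (TOF, MOF, BOF) share of each month's spend -- hypothetical values.
phasing = {
    "month_1": (0.60, 0.30, 0.10),  # TOF-heavy: warm the pool
    "month_2": (0.45, 0.35, 0.20),  # shift as retargeting audiences build
    "month_3": (0.35, 0.40, 0.25),  # MOF + BOF = 65%, over half of spend
}

monthly = BUDGET / 3  # assumes an even $40K per month
for month, (tof, mof, bof) in phasing.items():
    print(f"{month}: TOF ${monthly*tof:,.0f}  MOF ${monthly*mof:,.0f}  BOF ${monthly*bof:,.0f}")
```

Any split satisfying the two stated constraints works; the sketch just makes the shift from awareness to capture spend explicit.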
Revenue lags pipeline by 3 to 9 months. The ~$200K target includes Q1 SQLs that close in Q2/Q3. Q1 success = qualified pipeline that progresses, not deals that close.
I don't shut campaigns down based on short-term conversion metrics alone. In cybersecurity, intent formation takes time. The order is always scale → optimize → kill, never the reverse.
Scale: double down on what's clearly working.
Optimize: before killing anything, fix it first.
Kill: only after optimization has had a fair shot.
Meta and TikTok aren't where security buying committees research. The buyer signal isn't there. Spending here optimizes for the wrong audience.
A demo pulls in architects, SOC leads, procurement. Asking before trust exists positions the brand as aggressive. Lower-commitment, higher-credibility entry points first.
A practitioner conversation, not sales. A real cyber expert listens to the buyer's architecture and pain, brainstorms possible solutions, including ones that aren't Sweet.
It builds trust and credibility, and dramatically lowers the activation energy for the eventual demo.
Enterprise cyber cycles can exceed 3 months, limiting visible SQL-to-close conversion in Q1.
Active runtime security buyers may be a narrow slice. Pipeline ceiling becomes a market-size problem.
Remarketing audiences may not reach evaluation-stage intent fast enough for M3 BOF efficiency.
If search volume is too low, the whole capture layer fails.
If the interactive experiences don't create that "this is us" moment, the product-led motion collapses.
If single-touch dominates, the attribution model needs rethinking.
The hardest part of the brief was balancing short-term pipeline expectations with the reality of how enterprise cybersecurity buyers actually buy. Building a traditional SaaS media plan around lead volume is easy. Building one around trust formation, technical validation, and long evaluation cycles, inside a 3-month window, is not.
So I spent most of the time thinking less about channels and more about buyer behavior: where runtime security demand actually starts, what creates enough urgency for a demo, and how to prioritize signals of real evaluation intent rather than passive engagement.
In week 1, before scaling spend, I'd want answers to a few key questions.
That context turns a generic B2B demand playbook into a plan that optimizes around how Sweet's buyers actually buy.
Excited to dig in on the questions. The assumptions are where this gets interesting.