/*
* braze_inc — live agent context
*
* this is not a document with data in it.
* this is a manifest that tells an agent HOW to get data,
* HOW to interpret it, and WHAT to do with it.
*
* nothing here is cached. every section describes a live query.
* the agent runs the tools at task time, not at build time.
*
* the wiki doesn't store knowledge — it stores the recipe
* for acquiring knowledge.
*/
<account-context id="braze_inc" mode="live">
<!-- ============================================
SECTION 1: DATA SOURCES
what tools exist, what they return, when to use each
============================================ -->
<sources>
<source name="endgame" type="mcp" authority="primary">
<!-- Endgame is the source of truth for engagement data.
NEVER use Salesforce LastActivityDate — it lies. -->
</source>
<source name="gong" type="cli" authority="transcripts">
<!-- Gong is the source of truth for what was SAID.
Use for quotes, sentiment, commitments, objections. -->
</source>
<source name="salesforce" type="cli" authority="pipeline_only">
<!-- Salesforce is authoritative ONLY for opportunity data
(amounts, stages, close dates, owners).
DO NOT trust SF for activity dates or engagement.
DO NOT use LastActivityDate. Ever. -->
</source>
</sources>
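the routing rules above can be sketched as a small lookup. a minimal sketch in Python; the source names come from the manifest, but the "need" keys and helper names are illustrative, not a real API:

```python
# authoritative source per kind of data need (illustrative keys)
AUTHORITY = {
    "engagement": "endgame",     # counts, last touch (never Salesforce)
    "transcripts": "gong",       # quotes, sentiment, commitments, objections
    "pipeline": "salesforce",    # amounts, stages, close dates, owners
}

# fields a source must never be trusted for
FORBIDDEN = {("salesforce", "LastActivityDate")}

def route(need: str) -> str:
    """Return the authoritative source for a data need."""
    return AUTHORITY[need]

def allowed(source: str, field: str) -> bool:
    """Reject fields a source is not authoritative for."""
    return (source, field) not in FORBIDDEN
```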
<!-- ============================================
SECTION 2: VERIFICATION PROTOCOL
how to validate data before presenting it
============================================ -->
<verification>
<!-- MANDATORY. run all 3 before presenting ANY deal data.
this exists because we got burned: showed Acronis as
"21 days dark" when Alex had a meeting 4 days prior. -->
<step order="1" name="aggregates">
endgame.query_data → get interaction_count_30d, interaction_count_90d,
latest_interaction, open_opportunity_count, total_pipeline_value
</step>
<step order="2" name="meetings">
endgame.search_account_meetings → get actual meeting dates
with participants. this is ground truth for last touch.
</step>
<step order="3" name="interactions">
endgame.get_account_interaction_history → get explicit dates
per interaction. cross-reference with steps 1 and 2.
</step>
<rule>if ANY query contradicts another, investigate. do not pick
the one that fits your narrative.</rule>
<rule>NEVER compute days_since_last_activity from Salesforce.
compute it from Endgame's latest_interaction timestamp.</rule>
</verification>
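the three-step protocol reduces to one cross-check: does any explicit meeting or interaction date contradict the aggregate? a minimal sketch, assuming illustrative field and argument names standing in for the real endgame.query_data / search_account_meetings / get_account_interaction_history outputs:

```python
from datetime import date

def verify(aggregates: dict, meeting_dates: list, interaction_dates: list,
           today: date) -> dict:
    """Cross-check the three mandatory queries before presenting deal data."""
    latest = aggregates.get("latest_interaction")
    observed = max(meeting_dates + interaction_dates, default=None)
    if latest is None or observed is None:
        raise ValueError("incomplete data: do not present")
    if observed > latest:
        # a meeting or interaction is newer than the aggregate claims;
        # the queries contradict each other, so investigate rather than
        # picking the one that fits the narrative
        raise ValueError(f"contradiction: {observed} vs {latest}")
    last_touch = max(latest, observed)
    # days_since_last_activity comes from Endgame data, never Salesforce
    return {"last_touch": last_touch,
            "days_since_last_activity": (today - last_touch).days}
```

this is exactly the check that would have caught the Acronis incident: the meeting 4 days prior would have raised the contradiction instead of shipping "21 days dark".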
<!-- ============================================
SECTION 3: STATIC KNOWLEDGE
things that DON'T come from tools — institutional
memory, org structure, strategic context.
this is the only part that can go stale.
============================================ -->
<static last_verified="2026-04-16" stale_after="14d">
<identity>
Braze Inc. Customer engagement platform. NYSE: BRZE.
~1,164 employees. ~$500M revenue. Martech. New York.
Competes with Iterable, Klaviyo, SFMC.
</identity>
<org-map>
<!-- this is the only thing you can't get from a tool call.
it comes from human observation across many calls. -->
Devang Desai (SVP GTM Ops) ← exec_buyer
├─ Scott Freifeld (VP RevOps) ← exec_sponsor
│ ├─ Mark Rodriguez (RevOps Mgr) ← technical
│ └─ Kade (RevOps) ← technical
├─ Amy Weil (VP GTM Enablement) ← champion
│ └─ (enablement team — unmapped)
└─ Eric Sanders (SVP Global Sales) ← NOT ENGAGED
└─ James Browne (AE) ← power_user
<!-- KEY INSIGHT you can't get from queries:
Devang + Eric are PEER SVPs.
Eric controls sales budget. Devang controls ops budget.
Expansion needs BOTH. This came from call analysis,
not from any structured data source. -->
</org-map>
<institutional-memory>
<!-- things agents learned from calls/context that aren't
queryable from any tool. this is the valuable part. -->
<memory source="gong:2026-03-06">
James Browne volunteered as champion UNPROMPTED. He wasn't asked.
This matters because organic champions convert at 3x the rate
of recruited ones. Use his story as the hook for Eric.
</memory>
<memory source="gong:2026-02-12">
Amy Weil is migrating FROM Seismic. This means she already decided
to leave a competitor — she's not evaluating, she's executing.
Don't treat her like a prospect. Treat her like a partner.
</memory>
<memory source="gong:2026-04-03">
Mark's "after AI summit" is a soft commitment. In my experience
with Mark, he delivers but on his own timeline. Don't push —
ask for a date and let him own it.
</memory>
<memory source="gong:2026-01-16">
Jacob asked for retail-specific prompts. This is product feedback
from a user, not a complaint. But it's been 3 months with no
response. Silence on user feedback = churn signal.
</memory>
<memory source="pattern_recognition">
The strategic read: this account's risk is PEOPLE, not PRODUCT.
The product is selling itself through James and Amy. The risk
is that Eric (SVP Sales) has never talked to us and Scott
(exec sponsor) went dark. The fix is relationships, not features.
</memory>
</institutional-memory>
</static>
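the static block carries its own freshness contract in its attributes. a minimal sketch of the staleness check; the "Nd" suffix parsing is an assumption about the attribute format:

```python
from datetime import date, timedelta

def static_is_stale(last_verified: str, stale_after: str, today: date) -> bool:
    """Check the <static> block's freshness from its own attributes,
    e.g. last_verified="2026-04-16", stale_after="14d"."""
    limit = timedelta(days=int(stale_after.rstrip("d")))
    return today - date.fromisoformat(last_verified) > limit
```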
<!-- ============================================
SECTION 4: INTERPRETATION RULES
how to reason about data AFTER you get it from tools
============================================ -->
<interpretation>
<rule name="engagement_health">
if interaction_count_30d > 5: healthy
if interaction_count_30d is 1-5: monitor
if interaction_count_30d = 0: risk — investigate immediately
CONTEXT: Braze was at 68/month in Feb, so <10 = declining trend
</rule>
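the thresholds above can be sketched as a classifier. a minimal sketch; the baseline default encodes the Feb context (68/month), under which a single-digit count also signals a declining trend:

```python
def engagement_health(count_30d: int, baseline_30d: int = 68) -> str:
    """Classify interaction_count_30d per the engagement_health rule."""
    if count_30d == 0:
        return "risk"              # investigate immediately
    if count_30d <= 5:
        return "monitor"
    if count_30d < 10 and baseline_30d >= 10:
        return "healthy, declining trend"
    return "healthy"
```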
<rule name="stakeholder_freshness">
for each person in org-map, compute days since last interaction:
<30 days: active
30-60 days: cooling — flag for re-engagement
60-90 days: cold — include in risk assessment
>90 days: dark — P0 action item
</rule>
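the freshness buckets above, as a sketch (boundary days fall into the less severe bucket, an assumption the rule leaves open):

```python
def stakeholder_freshness(days_since_last: int) -> str:
    """Bucket a stakeholder by days since last interaction."""
    if days_since_last < 30:
        return "active"
    if days_since_last <= 60:
        return "cooling"   # flag for re-engagement
    if days_since_last <= 90:
        return "cold"      # include in risk assessment
    return "dark"          # P0 action item
```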
<rule name="expansion_readiness">
expansion is ready when ALL of:
☐ at least one champion (not just a user)
☐ exec buyer engaged in last 60 days
☐ no exec-level blind spots
☐ technical blockers resolved
☐ competitive threats contained
FOR BRAZE: champion ✓, exec buyer ✓, blind spots ✗ (Eric),
technical ✗ (Snowflake), competitive ✓ → NOT READY
</rule>
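the checklist is a conjunction: every gate must pass. a minimal sketch with invented gate names mirroring the checklist items:

```python
EXPANSION_GATES = ("champion", "exec_buyer_60d", "no_exec_blind_spots",
                   "technical_clear", "competitive_contained")

def expansion_ready(checks: dict) -> bool:
    """Ready only when every gate on the checklist passes."""
    return all(checks.get(gate, False) for gate in EXPANSION_GATES)

# the Braze state from the rule above: the Eric blind spot and the
# Snowflake technical blocker are still open, so the account is NOT READY
braze = {"champion": True, "exec_buyer_60d": True,
         "no_exec_blind_spots": False, "technical_clear": False,
         "competitive_contained": True}
```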
<rule name="quote_salience">
when pulling gong transcripts, prioritize:
1. commitments ("we will...", "I'll prioritize...")
2. objections ("the problem is...", "we're concerned...")
3. results ("record meetings", "increased by X%")
4. competitive mentions (any vendor name)
discard: pleasantries, scheduling logistics, small talk
</rule>
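the salience ordering can be sketched as a tiered pattern match. the patterns below are the rule's own examples, not a complete taxonomy:

```python
import re

# tier 1 = most salient; anything unmatched is discardable small talk
SALIENCE_TIERS = [
    (1, re.compile(r"\bwe will\b|\bI'll prioritize\b", re.I)),        # commitments
    (2, re.compile(r"\bthe problem is\b|\bwe're concerned\b", re.I)),  # objections
    (3, re.compile(r"\brecord meetings\b|\bincreased by\b", re.I)),    # results
    (4, re.compile(r"\bGlean\b|\bHighspot\b|\bSeismic\b")),            # competitors
]

def quote_salience(utterance: str):
    """Return the tier (1 = most salient) or None for discardable talk."""
    for tier, pattern in SALIENCE_TIERS:
        if pattern.search(utterance):
            return tier
    return None
```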
</interpretation>
<!-- ============================================
SECTION 5: PLAYBOOKS
task-specific instruction sets.
each playbook is a sequence of tool calls + reasoning.
============================================ -->
<playbooks>
<playbook trigger="meeting_prep">
<!-- someone says "prep me for a Braze meeting" -->
1. CALL endgame.search_account_meetings(Braze, 30d)
→ find what meetings happened recently, who was there
2. CALL endgame.query_data(account_overview WHERE braze)
→ get current health, pipeline, engagement counts
3. CALL gong calls(Braze, 30d) → find recent call IDs
4. CALL gong transcript(most_recent_call_id)
→ read what was discussed, find open items
5. REASON using org-map: who is in this meeting?
what's their role? what do they care about?
6. REASON using institutional-memory: what context
isn't in the data? (Eric gap, Amy migration, etc.)
7. OUTPUT meeting brief with: recent activity summary,
open items from last call, stakeholder context,
recommended talking points, risks to flag
</playbook>
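the playbook above, as an orchestration sketch. `endgame` and `gong` stand in for the real clients; the method names mirror the numbered steps but their exact signatures are assumptions:

```python
def meeting_prep(account, endgame, gong, static_ctx):
    """Run the meeting_prep playbook and return a brief."""
    meetings = endgame.search_account_meetings(account, days=30)        # step 1
    overview = endgame.query_data(f"account_overview WHERE {account}")  # step 2
    call_ids = gong.calls(account, days=30)                             # step 3
    last_call = gong.transcript(call_ids[0]) if call_ids else None      # step 4
    return {                                                            # steps 5-7
        "recent_activity": meetings,
        "health": overview,
        "open_items": last_call,
        "stakeholders": static_ctx.get("org_map"),
        "context": static_ctx.get("institutional_memory"),
    }
```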
<playbook trigger="write_nudge">
<!-- someone says "nudge Sean about Braze" -->
1. RUN verification protocol (all 3 steps)
→ establish current state, don't assume from memory
2. REASON what's the MOST IMPORTANT thing right now?
→ for Braze: Eric Sanders engagement (P0)
3. REASON what proof points support the nudge?
→ pull a gong quote if relevant
4. OUTPUT 3-4 sentence nudge, specific, with one ask
5. DO NOT nudge about renewal (too far out, healthy)
</playbook>
<playbook trigger="risk_scan">
<!-- someone says "any risk on Braze?" -->
1. RUN verification protocol (all 3 steps)
2. APPLY stakeholder_freshness rule to each person
3. APPLY expansion_readiness checklist
4. CALL gong search("concern" OR "problem" OR "frustrated", 60d)
→ look for negative sentiment in recent calls
5. REASON cross-reference live data with institutional memory
→ are the known risks (Eric, Scott) still open?
6. OUTPUT risk assessment with: live data, static context,
severity levels, recommended actions
</playbook>
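step 4's matching logic as a local filter sketch. in practice the search runs server-side in Gong; this just shows what counts as a hit:

```python
NEGATIVE_TERMS = ("concern", "problem", "frustrated")

def negative_sentiment_hits(utterances):
    """Surface utterances matching the risk_scan search terms."""
    return [u for u in utterances
            if any(t in u.lower() for t in NEGATIVE_TERMS)]
```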
<playbook trigger="weekly_review">
<!-- automated weekly account check -->
1. CALL endgame.query_data → compare interaction_count_30d
to previous known value. is it up, down, flat?
2. CALL endgame.search_account_meetings(7d)
→ what happened this week?
3. CALL gong calls(7d) → any new calls to review?
4. REASON did anything change on the known risks?
5. REASON should any static knowledge be updated?
6. OUTPUT if changes detected: summary to Alex
if no changes: log and skip
</playbook>
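step 1's comparison, sketched as up/down/flat against the previous known value. the 20% tolerance is an assumed threshold, not from the manifest:

```python
def weekly_delta(current_30d: int, previous_30d: int,
                 tolerance: float = 0.2) -> str:
    """Compare interaction_count_30d to the previous known value."""
    if previous_30d == 0:
        return "up" if current_30d > 0 else "flat"
    change = (current_30d - previous_30d) / previous_30d
    if change > tolerance:
        return "up"
    if change < -tolerance:
        return "down"
    return "flat"
```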
</playbooks>
<!-- ============================================
SECTION 6: WHAT TOOLS CAN'T TELL YOU
the gap between queryable data and understanding.
this is why the wiki exists — tools give you data,
the wiki gives you judgment.
============================================ -->
<judgment>
<!-- these are things no MCP tool or CLI will return.
they come from pattern recognition across many calls,
relationship mapping, and strategic reasoning.
this is the irreducible human-like layer. -->
<insight>
Eric Sanders' silence isn't just a data gap — it's a POLITICAL
gap. He and Devang are peer SVPs. If Devang goes to budget
review asking for $200K for Endgame and Eric says "never heard
of it," the deal dies. The tool will tell you "0 interactions."
The judgment is: this is an existential risk for the expansion.
</insight>
<insight>
James Browne is more valuable as a BRIDGE than as a proof point.
Yes, "record meetings" is a great quote. But James reports to
Eric's org. He's the natural path to get Eric engaged. The tools
will show you James's usage metrics. The judgment is: use James
to open the door to Eric, not just as a case study.
</insight>
<insight>
Amy's Seismic migration is a one-way door. Once she's moved
content to Endgame, switching cost goes way up. But if the
migration stalls halfway, she has content in TWO systems and
will blame Endgame for the mess. The tools show you migration
status. The judgment is: this is time-sensitive and binary.
</insight>
<insight>
The engagement volume decline (68 → 24) looks scary in a chart
but is actually fine. Feb was a spike from Amy's integration
push + James's adoption wave. Current 24/month is healthy for
a stable customer. Don't flag this as risk — flag the exec
gaps instead.
</insight>
</judgment>
<competitors>
<!-- no tool gives you competitive context. this is manual. -->
<competitor name="Glean" threat="medium">evaluating. search overlap.</competitor>
<competitor name="Highspot" threat="medium">enablement. risk if Seismic migration stalls.</competitor>
<competitor name="Seismic" threat="low">departing. Amy migrating away.</competitor>
<!-- TO REFRESH: gong search("Glean" OR "Highspot" OR "Seismic", 90d) -->
</competitors>
</account-context>
/*
* architecture note:
*
* v1-v5 of this wiki tried to be the data.
* v6 tried to be the prompt.
* v7 is neither — it's the INTERFACE SPECIFICATION.
*
* data lives in Endgame, Gong, Salesforce. it's queried live.
* the wiki stores three things tools can't give you:
* 1. org structure and relationship dynamics
* 2. interpretation rules (how to reason about raw data)
* 3. judgment (the "so what" that no query returns)
*
* everything else is a tool call away.
*
* this means the wiki never goes stale on DATA
* (engagement counts, pipeline, meetings — always live).
* it only goes stale on JUDGMENT
* (org changes, strategic shifts, new competitive intel).
*
* that's a much smaller surface area to maintain.
*
* — sasha, schema 0.7.0
*/