# Menra (Full)

> Menra is a brand visibility platform for tracking how AI engines (ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews, Copilot, Grok, DeepSeek, Meta AI) cite, mention, and represent your brand. Menra turns AI search visibility into measurable, actionable insight for marketing teams. Subscription is $69/month with kontör-based usage metering across 9 AI platforms.

## Pricing

Menra ships a single $69/month subscription. Each subscription includes 1 brand, 5 prompts, 3 AI platforms, 1 region, 1 seat, and roughly 100 kontör per month. Kontör is the metered usage unit — every AI search invocation, citation extraction, or content-audit pass spends kontör. When the in-plan budget runs out, customers top up via the kontör tier ladder:

- T1 — $10 / 100 kontör
- T2 — $46 / 500 kontör (most popular)
- T3 — $85 / 1,000 kontör
- T4 — $200 / 2,500 kontör
- T5 — $375 / 5,000 kontör
- T6 — $700 / 10,000 kontör

The legacy three-tier plans (STARTER / PRO / BUSINESS) were retired; the new model is one subscription + kontör. Existing legacy subscribers are grandfathered for a defined window per the billing-resolver service. Top-ups are available for extra platforms (Google AI Overviews, Claude, Copilot, Grok, DeepSeek, Meta AI). Pricing source: https://menra.ai/pricing.

## Brand Personality

Trustworthy, modern, smart. Menra is a reliable data partner that empowers marketers and creators with clear, actionable AI visibility insights. The product stays out of the data's way: generous whitespace, soft shadows, legible typography, subtle motion. Menra avoids skeuomorphism, retro/brutalist cosplay, and playful-cartoon aesthetics.

## Pillar — What is GEO?

Source: https://menra.ai/guides/what-is-geo

Generative Engine Optimization (GEO) is the practice of structuring your brand, content, and technical surface so that generative AI systems — ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews, Copilot — mention, cite, and position your brand correctly when users ask them questions in your category. It overlaps with SEO at the foundation (entity coverage, structured data, internal linking, page authority) but diverges sharply at the ranking and measurement layers.

The clearest mental model: search engines rank ten pages; answer engines pick three sources and synthesize them into one paragraph. The unit of work shifts from URL-on-SERP to passage-in-answer. The metric shifts from keyword position to citation share. The cadence shifts from weekly crawl to daily prompt scan.

GEO vs SEO — four structural differences:

1. Measurement unit. SEO measures keyword position on a 1-100 scale. GEO measures citation share (% of AI answers on a prompt that mention your brand), citation position (lead source vs. footnote), and answer sentiment (a minimal calculation sketch follows this list).
2. Data cadence. SEO tools crawl weekly; GEO requires daily — sometimes hourly — prompt scans because AI answers shift within 24 hours.
3. Synthesis vs. ranking. Google ranks ten pages; an AI engine retrieves passages, picks three to five sources, and synthesizes them into one paragraph.
4. Per-prompt performance. SEO clusters keywords; GEO clusters prompts. "Best CRM for startups" has twenty user-phrased variants, each with its own citation profile.
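To make the measurement unit in difference 1 concrete, here is a minimal sketch of the citation-share and citation-position math over one day of scan results. The `ScanResult` shape, the brand names, and the numbers are invented for illustration; this is not Menra's API or data model.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    """One engine's answer to one prompt on one scan (hypothetical shape)."""
    engine: str               # e.g. "chatgpt", "perplexity"
    prompt: str               # e.g. "best CRM for startups"
    brands_cited: list[str]   # brands mentioned in the answer, in order

def citation_share(results: list[ScanResult], brand: str) -> float:
    """Percent of answers that mention the brand at all."""
    if not results:
        return 0.0
    hits = sum(1 for r in results if brand in r.brands_cited)
    return 100.0 * hits / len(results)

def lead_source_rate(results: list[ScanResult], brand: str) -> float:
    """Percent of citing answers where the brand is the lead source, not a footnote."""
    citing = [r for r in results if brand in r.brands_cited]
    if not citing:
        return 0.0
    leads = sum(1 for r in citing if r.brands_cited[0] == brand)
    return 100.0 * leads / len(citing)

# One day of scans on one prompt across three engines (all values invented):
day = [
    ScanResult("chatgpt", "best CRM for startups", ["Acme", "YourBrand"]),
    ScanResult("perplexity", "best CRM for startups", ["YourBrand", "Acme"]),
    ScanResult("gemini", "best CRM for startups", []),
]
print(round(citation_share(day, "YourBrand"), 1))    # 66.7 (cited in 2 of 3 answers)
print(round(lead_source_rate(day, "YourBrand"), 1))  # 50.0 (lead source in 1 of 2)
```

Citation position is simplified here to first-mention order; Menra's reports distinguish lead source vs. footnote, which this sketch only approximates.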
The four signals AI engines weight: entity coverage (knowledge-graph clarity, schema.org markup, About-page entity declaration), direct-answer density (FAQPage schema, H2-as-question structure, 60-90 word direct answers), citation source authority (G2, Capterra, Reddit, industry pubs — not your own blog), and freshness (six-month refresh cycle on top content).

## Pillar — What is AEO?

Source: https://menra.ai/guides/what-is-aeo

Answer Engine Optimization (AEO) is the discipline of structuring content so AI answer engines cite you when they synthesize an answer. AEO is the slice of GEO focused on answer engines specifically: the engines that retrieve, synthesize, and cite. Inside the GEO umbrella, AEO sits alongside concerns like AI training-data presence and AI-overview optimization.

The retrieval mechanics: most teams build content for the page, not the passage. Answer engines retrieve passages — usually 100 to 500 words pulled out of a longer page — and feed those passages to the synthesis model. Page-level signals (backlinks, domain authority, title tag) decide whether the page is in the candidate pool. Passage-level signals (direct-answer formatting, entity disambiguation, FAQ structure) decide which 200 words inside the page get cited. A 4,000-word "ultimate guide" with the answer buried in section seven loses to a 1,200-word post with the answer in the first paragraph after every H2. The retrieval model needs a cleanly delimited passage; FAQPage schema and H2-as-question structures explicitly mark those boundaries.

The four AEO signals: direct-answer density, entity coverage, citation source authority, and crawler hygiene. The crawler-hygiene piece means GPTBot, OAI-SearchBot, ClaudeBot, and PerplexityBot must be allowed in robots.txt with crawl-delay 0; ship a hand-curated llms.txt so AI agents can pin context without crawling 200 marketing pages.

Common mistakes:

- Optimizing the page, not the passage.
- Skipping FAQPage schema.
- Treating one engine as representative (ChatGPT and Perplexity disagree often; Claude and Gemini cite different source pools).
- Ignoring source-of-citation work (AI pulls names from G2, Capterra, Reddit, and industry pubs first; your blog second).

## Pillar — How to track AI mentions of your brand

Source: https://menra.ai/guides/track-ai-mentions

A five-step instrumentation playbook. Each step maps to the real Menra product flow.

Step 1 — Identify your target prompts. Pick fifteen to thirty buyer-intent prompts your customers actually ask AI engines when they're shopping in your category. Skip top-of-funnel definitions; focus on bottom-funnel comparisons and recommendations where citation share converts.

Step 2 — Connect AI sources. Configure Menra to scan ChatGPT and Perplexity as the universal floor, then add Claude, Gemini, Google AI Overviews, Copilot, Grok, DeepSeek, and Meta AI as add-on platforms based on where your audience is. Each subscription includes three platforms; the rest are kontör add-ons.

Step 3 — Set monitoring frequency. Daily is the minimum useful cadence — AI answers shift within 24 hours. Hourly is overkill for most teams. Schedule scans during low-traffic windows so kontör spend is predictable. Menra runs daily by default.

Step 4 — Read citation reports. Each weekly report shows citation share by engine, citation position (lead vs. footnote), sentiment, and the source URLs the AI used to back its mention. Reading the source list is where the actionable insight lives — those are your PR and review-generation targets.
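Step 4's source list is where the PR targets come from. A minimal sketch of that aggregation, assuming a hypothetical exported report shape rather than Menra's actual schema: tally the domains behind every citation and rank them.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical weekly-report rows: (engine, prompt, source URLs backing the
# mention). Invented for illustration; not Menra's export schema.
report = [
    ("chatgpt", "best CRM for startups",
     ["https://www.g2.com/categories/crm", "https://yourbrand.example/blog/crm"]),
    ("perplexity", "best CRM for startups",
     ["https://www.g2.com/categories/crm", "https://www.reddit.com/r/startups/"]),
    ("gemini", "top CRM tools",
     ["https://www.capterra.com/crm-software/"]),
]

# Tally the domains the engines cite; the head of this list is the
# PR / review-generation target list the step describes.
domains = Counter(
    urlparse(url).netloc
    for _engine, _prompt, sources in report
    for url in sources
)

for domain, count in domains.most_common():
    print(f"{domain:24} {count}")
# www.g2.com               2
# yourbrand.example        1
# www.reddit.com           1
# www.capterra.com         1
```

The top of that tally is the outreach list that Step 5 acts on.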
Step 5 — Act on competitive gaps. Identify the prompts where competitors win and your brand is absent. The fix splits three ways: restructure top URLs around direct-answer density, earn mentions on the high-authority sources AI is pulling from, and refresh stale content. Re-scan after each sprint and compare deltas.

A typical first month: signup and prompt configuration take about 60 minutes. The first useful baseline lands after 7 days of scans. The first restructuring sprint takes about two weeks of marketing-engineering time. By month two, the loop is in production; by month three, citation-share movement is consistent and reportable to leadership.

## Pillar — Menra vs the alternatives

Source: https://menra.ai/compare

The AI visibility category has six or seven serious tools as of April 2026. They all solve the same core problem — measure how AI engines are talking about your brand — but differ on platform breadth, content scoring, pricing model, and the buyer they're built for.

Menra differentiates on three things: nine AI platforms in one product (most alternatives ship a smaller core set and charge for parity at enterprise tier), content AEO scoring in the same tool (most alternatives focus on visibility only), and transparent kontör pricing ($69/mo base + public T1-T6 ladder; no enterprise-sales gating).

Per-competitor pages: /compare/profound-alternative, /compare/peec-alternative, /compare/aiclicks-alternative, /compare/athena-alternative, /compare/scrunch-alternative, /compare/otterly-alternative.

How to pick:

- Which engines you need (APAC needs DeepSeek and Meta AI; Microsoft-heavy needs Copilot; developer-focused needs Claude).
- Whether you want content AEO scoring in the same tool or a separate one.
- Self-serve vs. procurement-driven buying.
- Kontör/credit model preference.

## How Menra Works

A user defines a brand, a list of prompts (e.g. "best CRM for startups"), and the AI platforms to monitor. Menra runs those prompts daily across the configured engines, captures the responses, extracts citations, ranks share-of-voice against competitors, and produces a weekly report. Customers see in real time which AI engines mention their brand, where competitors win, and which content surfaces the AI cites.

## Recent Blog Posts

### Menra v1.2 — Live Today

Source: https://menra.ai/blog/menra-launches-v1-2 — published 2026-04-28

Menra v1.2 ships the Hub creator pool surface (creators discover briefs from brands and submit work that earns stablecoin payouts proportional to citations generated), the GEO/AEO crawlability bundle (per-AI-bot robots.txt rules, llms.txt, JSON-LD on pricing/FAQ/pillar pages), and the single $69/month subscription replacing the legacy three-tier model. ZeroBug release discipline was preserved across the four-week run-up: every commit passed turbo typecheck + lint + test before landing on main.

### How GPTBot is Quietly Replacing Googlebot

Source: https://menra.ai/blog/how-gpt-bot-changes-seo — published 2026-04-26

Most coverage of AI search talks about what users see (ChatGPT answer box, Perplexity citations, Gemini AI Overview). Less is written about the part that matters for whether your content shows up: the crawlers. The new crawler set includes GPTBot, OAI-SearchBot, and ChatGPT-User (OpenAI); ClaudeBot, Claude-Web, and Claude-User (Anthropic); PerplexityBot; Google-Extended; and Applebot-Extended. Allow them explicitly in robots.txt with crawl-delay 0, ship a hand-curated llms.txt, and restructure top URLs around direct-answer density.
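That crawler-hygiene advice is easy to verify mechanically. A minimal sketch using only Python's standard-library `urllib.robotparser`; the robots.txt content below is an illustrative example of the recommended shape, not Menra's or any real site's file.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt in the shape the post recommends (AI crawlers
# explicitly allowed, crawl-delay 0). Invented content for the sketch.
ROBOTS_TXT = """\
User-agent: GPTBot
User-agent: OAI-SearchBot
User-agent: ClaudeBot
User-agent: PerplexityBot
Allow: /
Crawl-delay: 0
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Confirm each AI crawler can fetch a representative page and sees delay 0.
for bot in ("GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot"):
    ok = rp.can_fetch(bot, "https://example.com/guides/what-is-geo")
    print(f"{bot:15} allowed={ok} crawl_delay={rp.crawl_delay(bot)}")
```

Pointing the same check at your production robots.txt (via `RobotFileParser(url)` plus `read()`) is a cheap pre-deploy guard against accidentally blocking an AI crawler.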
### Answer Engine vs Search Engine — Why Your Content Strategy Just Changed

Source: https://menra.ai/blog/answer-engine-vs-search-engine — published 2026-04-22

The fastest way to get AEO wrong is to assume it's SEO with extra steps. The query model is different, the ranking signal is different, and the unit of measurement is different. Search engines rank ten pages; answer engines pick three sources and synthesize them into one paragraph. AEO measurement clusters prompts (not keywords); the practical loop is: restructure top URLs for direct-answer density, ship FAQPage schema where it fits (see the JSON-LD sketch at the end of this file), audit entity coverage, and stop chasing keyword volume.

## Public Data

- Sitemap: https://menra.ai/sitemap.xml
- robots.txt: https://menra.ai/robots.txt
- llms.txt: https://menra.ai/llms.txt

## Optional

- Menra Hub leaderboard: https://hub.menra.ai/leaderboard
- Menra Hub pools: https://hub.menra.ai/pools
- Pricing: https://menra.ai/pricing
- All pillar guides: https://menra.ai/guides/what-is-geo, https://menra.ai/guides/what-is-aeo, https://menra.ai/guides/track-ai-mentions
- Compare aggregator: https://menra.ai/compare
- Blog index: https://menra.ai/blog
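To ground the FAQPage recommendation that recurs through the guides and posts above: a minimal sketch of the standard schema.org FAQPage JSON-LD shape, generated in Python. The question and answer text is hypothetical; only the `@context`/`@type`/`mainEntity` structure is the standard schema.org one.

```python
import json

# Hypothetical Q&A pairs in the recommended shape: H2-as-question plus a
# short direct answer (the guides suggest 60-90 words; these are trimmed
# for brevity). Only the FAQPage/Question/Answer structure is standard.
faqs = [
    ("What is citation share?",
     "Citation share is the percentage of AI answers on a tracked prompt "
     "that mention your brand."),
    ("How often do AI answers change?",
     "AI answers can shift within 24 hours, so daily prompt scans are the "
     "minimum useful monitoring cadence."),
]

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(faq_page, indent=2))
```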