Cortex connects you to 15 AI providers, routes each request to the best model for the job, and lets you build a private knowledge base that makes every answer smarter over time.
Managing multiple AI providers, building RAG pipelines, and evaluating model quality takes months. Cortex gives you all of it on day one.
Cortex combines multi-provider model access with a fully managed knowledge pipeline, so your AI gets smarter with every document you add.
Cortex manages the full lifecycle, from document ingestion to model evaluation, so answers keep improving as your knowledge base grows.
Upload documents, paste text, or point Cortex at URLs and sitemaps. Bulk ingestion handles up to 100 files at once with automatic chunking.
Choose from hybrid, semantic, keyword, conversational, agentic, multi-namespace, or streaming search. Each mode is tuned for different query patterns.
Build golden test sets, run evaluations with NDCG and MRR metrics, and A/B test models to ensure retrieval quality improves over time.
See how your knowledge connects through graph visualizations. Identify gaps, score relationship strength, and fill blind spots.
Most AI marketplaces stop at API access. Cortex goes further with a full RAG pipeline, knowledge graph analytics, and continuous model evaluation built in.
Documents are indexed at multiple levels of abstraction, so Cortex can answer both high-level strategy questions and granular detail queries.
Every ingested document is projected into a knowledge graph, enabling relationship-aware retrieval and gap analysis.
Golden test sets and intelligent optimization ensure your retrieval quality keeps improving over time.
15 connected providers with automatic failover
From model routing to knowledge management, Cortex covers the full stack.
AWS Bedrock, OpenAI, Anthropic, Google, Cohere, Mistral, Groq, and 8 more. Cortex selects the best model per request based on use case, cost, and latency.
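Selection on use case, cost, and latency can be pictured as a weighted score over candidate models. The sketch below is purely illustrative: the weights, provider names, and pricing figures are made-up examples, not Cortex's actual routing logic or rates.

```python
# Hypothetical sketch of per-request model routing: score each candidate
# on use-case fit, cost, and latency, then pick the best. All values and
# weights below are illustrative assumptions, not Cortex internals.
from dataclasses import dataclass

@dataclass
class Candidate:
    provider: str
    model: str
    fit: float          # 0..1, how well the model suits the use case
    cost_per_1k: float  # price per 1k tokens
    p50_latency_ms: float

def route(candidates, w_fit=0.6, w_cost=0.2, w_latency=0.2):
    """Return the candidate with the best weighted score (higher is better)."""
    max_cost = max(c.cost_per_1k for c in candidates)
    max_lat = max(c.p50_latency_ms for c in candidates)
    def score(c):
        # Cheaper and faster candidates earn a larger share of their weight.
        return (w_fit * c.fit
                + w_cost * (1 - c.cost_per_1k / max_cost)
                + w_latency * (1 - c.p50_latency_ms / max_lat))
    return max(candidates, key=score)

best = route([
    Candidate("openai", "gpt-4o", fit=0.9, cost_per_1k=0.01, p50_latency_ms=900),
    Candidate("groq", "llama-3-70b", fit=0.7, cost_per_1k=0.001, p50_latency_ms=200),
])
```

With these example weights, the much cheaper and faster model wins despite a lower fit score, which is the cost/latency trade-off the copy describes.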
Hybrid, semantic, keyword, conversational, agentic, multi-namespace, and streaming search. Each mode returns citations with source attribution.
One credit balance works across every provider. Search costs range from 1 to 15 credits depending on complexity, with full per-query tracking.
Upload files, paste raw text, scrape URLs, or ingest entire sitemaps. Chunking strategies include auto, fixed, semantic, and recursive options.
Provider health is monitored continuously. If one goes down, traffic reroutes to the next best option automatically, with no manual intervention.
Fine-tune retrieval models from user feedback, run quality evaluations, and A/B test configurations to continuously improve accuracy.
Organize knowledge into separate namespaces by topic, team, or project. Search across one or many namespaces with Reciprocal Rank Fusion.
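Reciprocal Rank Fusion merges ranked result lists by position alone, so results from differently scored namespaces can be combined without score calibration. A minimal sketch, using the standard RRF formula with the conventional constant k=60 (Cortex's internal parameters may differ, and the document names are invented):

```python
# Minimal sketch of Reciprocal Rank Fusion (RRF): each list contributes
# 1 / (k + rank) per document, and fused results are sorted by total score.
def rrf_merge(ranked_lists, k=60):
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A document ranked well in both namespaces beats one that appears in only one.
merged = rrf_merge([
    ["contracts.docx", "venues.pdf", "playbook.txt"],   # namespace A
    ["contracts.docx", "budget.xlsx", "venues.pdf"],    # namespace B
])
```

Because only ranks matter, RRF works even when one namespace uses keyword scores and another uses vector similarities.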
Publish your curated knowledge namespaces for other teams to subscribe to, with automatic wallet-based split payments.
Cortex powers AI across every stage of event planning and delivery.
Ask questions across your venue database and past event reports. Get culturally aware recommendations for international events. Compare AI-generated proposals from multiple models.
Ingest standard operating procedures, vendor contracts, and playbooks. Build team-specific knowledge namespaces with access control. Visualize knowledge connections and identify coverage gaps.
Register AI model assets with benchmarks and cost-per-token pricing. Link marketplace listings to white-label portals for branded AI tools. Track token usage, latency, and cost per model in real time.
Create golden test sets to benchmark retrieval accuracy. Run intelligent optimization to find the best parameters. A/B test model configurations with statistical significance.
A closer look at the capabilities that make Cortex more than a model directory.
A purpose-built stack for AI-powered event intelligence.
Four chunking strategies (auto, fixed, semantic, recursive) ensure documents are split at natural boundaries for optimal retrieval.
Hierarchical indexing creates multiple levels of abstraction per document, enabling both overview summaries and granular detail retrieval.
Every ingested document is projected into a knowledge graph, enabling relationship-aware queries and entity resolution.
Measure retrieval quality with NDCG@5, NDCG@10, MRR, MAP, precision, recall, and coverage metrics against golden test sets.
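Two of the listed metrics can be sketched directly: MRR rewards ranking a relevant document early, and NDCG@k rewards placing high-gain documents near the top relative to the ideal ordering. This is a generic textbook implementation, not Cortex's evaluation code, and the relevance judgments would come from your golden test sets.

```python
import math

# Sketch of MRR and NDCG@k against a golden test set of relevance judgments.
def mrr(ranked, relevant):
    """Reciprocal rank of the first relevant result (0 if none retrieved)."""
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / i
    return 0.0

def ndcg_at_k(ranked, gains, k):
    """NDCG@k with graded relevance `gains` (doc_id -> gain)."""
    dcg = sum(gains.get(d, 0) / math.log2(i + 1)
              for i, d in enumerate(ranked[:k], start=1))
    ideal = sorted(gains.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 1) for i, g in enumerate(ideal, start=1))
    return dcg / idcg if idcg else 0.0
```

A perfect ranking scores NDCG@k of 1.0; any inversion of a high-gain and low-gain document pulls it below that, which is what makes the metric useful for tracking regressions between configurations.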
Adaptive algorithms automatically find the best retrieval parameters for your specific data and query patterns.
Every search result includes source citations with document references. Blockchain-based verification is available for audit-sensitive environments.
Cortex is not a standalone tool. It powers AI across the entire platform.
AWS Bedrock, OpenAI, Anthropic, Google AI, Cohere, Foundation Model Partners, Groq, Azure OpenAI, Replicate, Together AI, DeepSeek, Search AI Partners, Fireworks AI, Hugging Face, Anyscale
ZAR AI Chat, Cypher Co-Pilot, Portal Registry, Wallet & Credits, Search Service, Analytics
File Upload (PDF, DOCX, TXT), URL Scraping, Sitemap Ingestion, Raw Text Input, Bulk Upload (100 files)
Knowledge Graph, Vector Search, Caching, Event Streaming
Cortex runs on the same multi-tenant, row-level-security infrastructure as the rest of EventZR. Your data stays yours.
Every knowledge namespace, AI asset, and usage record is isolated at the database level with row-level security policies.
Set credit budgets per team or per project. Every AI operation, from search to ingestion, deducts from the assigned wallet with full audit trails.
Admin, user, and super-admin roles control who can ingest documents, run evaluations, publish namespaces, or manage AI assets.
We are rolling out Cortex to teams who want to bring serious AI infrastructure to their event operations. Early access includes onboarding support and priority feature input.
Every AI operation costs a set number of credits. You always know what you are spending and where.
Credit costs per operation:
Keyword or semantic search: 1 credit
Hybrid search: 2-5 credits
Conversational search: 3 credits
Multi-namespace search: 2-5 credits
Agentic search: up to 15 credits
Evaluation run: 5 credits
Optimization run: 10-50 credits
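The per-operation prices above can be expressed as a lookup table for budgeting. Ranged costs (such as hybrid's 2-5) are modeled here as (min, max) tuples; how Cortex settles on a value within a range is an assumption not covered by the pricing copy, so the sketch budgets for the worst case.

```python
# The published per-operation credit costs as (min, max) ranges.
# The agentic minimum is an assumption; the copy only states "up to 15".
CREDIT_COSTS = {
    "keyword": (1, 1),
    "semantic": (1, 1),
    "hybrid": (2, 5),
    "conversational": (3, 3),
    "multi_namespace": (2, 5),
    "agentic": (1, 15),
    "evaluation": (5, 5),
    "optimization": (10, 50),
}

def worst_case_budget(operations):
    """Upper-bound credit spend for a planned batch of operations."""
    return sum(CREDIT_COSTS[op][1] * count for op, count in operations.items())

# e.g. 20 hybrid searches plus one evaluation run: at most 20*5 + 5 = 105 credits
cost = worst_case_budget({"hybrid": 20, "evaluation": 1})
```

Budgeting against the maximum of each range is the conservative choice when setting per-team credit limits.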
Common questions about AI marketplaces and knowledge management.
EventZR Cortex connects to 15 AI providers including AWS Bedrock, OpenAI, Anthropic, Google AI, and 11 others. The routing engine automatically selects the best model for each request based on use case, cost, and latency, with unified billing through a single credit system.
Cortex uses a credit-based system where every operation has a defined cost. You can set credit budgets per team or project, track consumption in real time, and view detailed analytics showing which operations cost the most. All spending is managed through wallet-based budgets with automatic enforcement.
Cortex provides a complete ingestion pipeline. Upload files, scrape URLs, ingest entire sitemaps, or paste raw text. Documents are automatically chunked, indexed at multiple abstraction levels, and projected into a knowledge graph. You can organize knowledge into separate namespaces and search across one or many with seven different search modes.
Yes. Cortex includes an evaluation suite that measures NDCG, MRR, MAP, precision, recall, and coverage against golden test sets. On higher tiers, intelligent optimization automatically finds the best retrieval parameters. You can also A/B test configurations to ensure quality improves continuously.
Cortex monitors provider health continuously. If one provider goes down or exceeds latency thresholds, traffic automatically reroutes to the next best option based on your configured priority list. This happens transparently with no manual intervention required.
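Failover down a configured priority list can be sketched as: walk the list in order and take the first provider that is both up and within its latency threshold. The provider names, health-record shape, and thresholds below are illustrative assumptions, not Cortex's actual health-check format.

```python
# Sketch of health-based failover: try providers in priority order,
# skipping any that is down or over its latency threshold.
def pick_provider(priority_list, health):
    """Return the first healthy provider; raise if every provider is unavailable."""
    for name in priority_list:
        status = health.get(name, {})
        within_latency = status.get("p95_ms", 0) <= status.get("max_ms", float("inf"))
        if status.get("up") and within_latency:
            return name
    raise RuntimeError("no healthy provider available")

chosen = pick_provider(
    ["bedrock", "openai", "groq"],
    {
        "bedrock": {"up": False},                                   # down
        "openai": {"up": True, "p95_ms": 2400, "max_ms": 2000},     # over threshold
        "groq": {"up": True, "p95_ms": 180, "max_ms": 2000},        # healthy
    },
)
```

Because the walk happens per request against fresh health data, recovery is equally automatic: once the first-priority provider is healthy again, traffic returns to it without intervention.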
No. All knowledge namespaces in Cortex are tenant-isolated with row-level security. Your documents are never shared with AI providers for training purposes. Data stays within your namespace and is only accessible to authorized users in your organization.
Start with 100 free credits and your first knowledge namespace. No credit card required.