Multi-Agent System Coordination · MCP Server Live · Listed on Official MCP Registry

VerifiMind PEAS

The Coordination Layer for Multi-Agent Systems

Powered by the Genesis Prompt Engineering Methodology. Orchestrate any combination of AI agents — Manus AI, Claude Code, Cursor, Codex, Perplexity, and more — through structured multi-model validation under human direction.

v0.5.12 Polar Integration — 312 tests, 2,389+ verified engagement hours. 90.7% Value Confirmation Rate | 1,876+ endpoints | Register as Early Adopter →

VerifiMind PEAS - Trust Layer for the Agentic Web
Verified from GCP Log Analysis

Verified Service Metrics

Last updated: 2026-04-09 • Scrapers excluded • Conservative rounding

2,389+
Verified Engagement Hours
All-Time (scrapers excluded)
90.7%
Value Confirmation
Sessions with follow-up
10
MCP Tools
4 Core + 3 Coordination + 3 Template
7
LLM Providers
Multi-Model Validation (Gemini, Groq, Anthropic BYOK, + 4 more)

Service Analytics Dashboard

Verified from GCP logs

Adoption Trajectory (Flying Hours ✈️)

W03: 19h · W04: 60h · W05: 66h · W06: 95h · W07: 120h · W08: 180h · W09: 688h · W10: 1,830h · W11: 2,006h · W12: 2,007h · W13: 2,013h · W14: 2,271h · W15: 2,390h

Traffic Classification

MCP Client (Tool Users): 86.8% · 2,212h
API Integration: 0.3% · 8h
Human Browser: 7.1% · 170h
Scraper (excluded): 0%
Node.js 65.3%
Python SDK 20.3%
Browser 8.5%
Claude/Anthropic 4.5%
Other 1.4%

Verified = MCP + Browser + API. Scrapers/Bots/Owner/mcp-verify excluded. Phase 74 forensic standard v2.5.

Inference Quality — Mock vs. Real

W02–W08: 100% Mock/Template · W09: 70% (v0.4.4 ✓) · W10–W15: 100% Real Inference

W02–W08: Mock mode (transparent disclosure). W09: Transition to real inference. W10+: Full Multi-Model Trinity (v0.4.4).

Transparency Notice — Mock Mode Period (Jan 29 – Feb 26, 2026)

During W02–W08, the MCP server operated in mock mode due to a deprecated Gemini model endpoint. As of v0.4.4 (Feb 27, 2026), all three Trinity agents return real AI inference with _overall_quality: "full". Multi-Model routing: X=Gemini 2.5 Flash, Z/CS=Groq Llama-3.3-70b. Consultation hours reflect real server traffic (verified from GCP logs). Agent response quality during the mock period was structural scaffolding only. v0.4.4 announcement → | Original disclosure →

Live Server

MCP Server is Online

Connect your AI assistant to validate ideas using the RefleXion Trinity methodology. Multi-model validation at your fingertips.

Quick Setup

Copy & paste to connect

// Add to .vscode/mcp.json in your workspace
// Or: Settings > GitHub Copilot > MCP Servers
{
  "servers": {
    "verifimindPeas": {
      "type": "http",
      "url": "https://verifimind.ysenseai.org/mcp/"
    }
  }
}
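For MCP clients that only support stdio transport (e.g. Claude Desktop), the same endpoint can typically be bridged with the mcp-remote npm package. A sketch, assuming your client reads the common mcpServers config shape:

```json
{
  "mcpServers": {
    "verifimindPeas": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://verifimind.ysenseai.org/mcp/"]
    }
  }
}
```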

Available Tools

10 tools (4 core + 6 template)

X Agent — Gemini 2.5 Flash (FREE)
consult_agent_x

Innovation & Strategy Analysis

Z Agent — Groq Llama-3.3-70b
consult_agent_z

Ethics & Safety Review

CS Agent — Groq Llama-3.3-70b
consult_agent_cs

Security & Feasibility Validation

Trinity — Multi-Model
run_full_trinity

Complete X → Z → CS Validation
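Under the hood, your client invokes these as standard MCP tools/call requests over JSON-RPC. A sketch of what a run_full_trinity call looks like on the wire — the "idea" argument name is illustrative only; check the tool's schema via tools/list for the actual parameters:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_full_trinity",
    "arguments": { "idea": "A DaaS platform for attributed wisdom datasets" }
  }
}
```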

Templates — 19 pre-built across 6 libraries
list_templates, get_template, ...

+6 template management tools

How It Works

1

Connect

Add MCP config to your AI client

2

Describe

Tell your AI about your idea

3

Validate

AI calls RefleXion Trinity agents

4

Receive

Get multi-perspective validation report

BYOK Support

v0.5.9 — Anthropic Live

Bring Your Own Keys — now supporting Anthropic Claude 4 family, Gemini 2.5 Flash, and Groq Llama-3.3-70b.

Per-tool-call api_key + llm_provider params. Auto-detects provider from key format.
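Auto-detection by key format can be sketched as a simple prefix check. This is an illustrative approximation, not the server's actual implementation — the prefixes below are the publicly known key formats for each provider, and the function name detect_provider is hypothetical:

```python
def detect_provider(api_key: str) -> str:
    """Guess the LLM provider from an API key's well-known prefix.

    Illustrative sketch only -- the server's real detection logic may differ.
    """
    if api_key.startswith("sk-ant-"):  # Anthropic keys
        return "anthropic"
    if api_key.startswith("gsk_"):     # Groq keys
        return "groq"
    if api_key.startswith("AIza"):     # Google AI Studio (Gemini) keys
        return "gemini"
    # Unknown format: the caller should pass llm_provider explicitly
    raise ValueError("Unrecognized key format; pass llm_provider explicitly")
```

Passing llm_provider explicitly always overrides detection, so ambiguous or rotated key formats never block a call.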

Self-Host Option

Run your own instance for full control and privacy.

Clone Repo

Troubleshooting

Connection issues? Check our setup guide and FAQ.

Core Concept

Crystal Balls Inside the Black Box

Instead of treating AI as an opaque "black box," we place multiple "crystal balls" (diverse AI models) inside to illuminate the path forward.

Crystal Balls Inside Black Box

Y: Innovator

Generates creative concepts and strategic insights

X: Analyst

Provides critical analysis and identifies weaknesses

Z: Guardian

Ensures ethical compliance and safety

CS: Validator

Validates claims against external evidence

v3.1: 4-Stage Protocol
Genesis v3.1

4-Stage Security Verification Protocol

Every finding must be proven AND disproven. No auto-fixes. Human oversight is always the final stage.

STAGE 1

Detection

Automated scanning identifies potential security findings across the codebase

STAGE 2 (MANDATORY)

Self-Examination

Every finding must be argued FOR and AGAINST before escalation

STAGE 3

Severity Rating

CRITICAL / HIGH / MEDIUM / LOW with confidence scoring and evidence chains

STAGE 4

Human Review

Human oversight is always the final stage. No auto-fixes ever.

Zero Code Changes Philosophy

Genesis v3.1 is a workflow enhancement — it activates what's already there. No modifications to the server foundation. Inspired by Claude Code Security principles.

Workflow Only · No Auto-Fixes · Human Final Say
The Methodology

Genesis Prompt Engineering Methodology

A systematic 5-step process for multi-model AI validation and orchestration

Genesis Methodology 5-Step Process
01

Initial Conceptualization

Human defines the problem, AI generates initial concepts

02

Critical Scrutiny

Multiple AI models validate and challenge each other

03

External Validation

Independent AI analysis confirms systematic approach

04

Synthesis

Human orchestrates the final synthesis

05

Iteration

Recursive refinement and continuous improvement

Architecture

AI Council: Multi-Model Orchestration

Synergizing diverse AI perspectives under human direction for objective, validated results

AI Council Architecture

Human-Centric Design

The human orchestrator sits at the center, directing all AI agents and making final decisions. This resolves the "Orchestrator Paradox" by providing persistent memory and strategic direction.

Model Heterogeneity

Leverages diverse foundational models (Gemini, Claude, Perplexity, etc.) to reduce bias and achieve more objective results through perspective diversity.

Structured Validation

Each agent has a specialized role (Innovator, Analyst, Guardian, Validator) that contributes to a comprehensive, multi-faceted validation process.

Visual Guide

The Genesis Methodology at a Glance

Orchestrating a council of AIs for validated, robust, and ethically aligned results.

The Genesis Methodology Infographic

The Genesis Methodology transforms ad-hoc multi-model usage into systematic validation.

The Journey

88 Days: From Vision to Reality

The complete development timeline from YSenseAI™ to VerifiMind PEAS

Complete Journey Timeline
PHASE 1: EARLY DEVELOPMENT

Aug 15 - Nov 10

  • YSenseAI™ journey begins
  • 16-version evolution
  • Intuitive multi-model practice
PHASE 2: BREAKTHROUGH

Sep 5 - Nov 10

  • "Crystal Balls Align" moment
  • VerifiMind PEAS v1.0.2 architecture
  • Methodology formalization
PHASE 3: PUBLICATION

Nov 15 - Nov 19

  • Defensive publications (Zenodo)
  • Kimi K2 independent recognition
  • Public launch
The Ecosystem

YSenseAI™ + VerifiMind PEAS

Two interconnected projects powered by the Genesis Methodology

YSenseAI and VerifiMind PEAS

YSenseAI™: The Dream

A Human Wisdom Library for ethical AI training. The vision of creating a DaaS platform for attributed, consented, and ethically-protected wisdom datasets.

  • 16-version evolution over 87 days
  • Built using the Genesis Methodology
  • Live prototype available
Visit ysenseai.org

VerifiMind PEAS: The Engine

The Genesis Methodology productized into a systematic validation framework. A production-ready codebase for multi-agent AI validation.

  • 17,282+ lines of production code
  • RefleXion Trinity (X-Z-CS agents)
  • MCP Server live at verifimind.ysenseai.org
View on GitHub
Powered by YSenseAI™ Engine

Open Source. Free to Use. Yours to Build With.

The Genesis Methodology is the engine behind YSenseAI™ — our vision for transparent AI attribution and human-AI collaboration.

Now available as open source for researchers, developers, and innovators to validate their own ideas.

Try Wisdom Canvas

Experience the Story-First UX for ethical AI training data collection

Launch Demo

Try MCP Server

Connect your AI and start validating ideas with the RefleXion Trinity

Get Started

Explore the Code

17,282+ lines of production-ready Python code with comprehensive documentation

View Repo

Read the White Paper

Comprehensive academic documentation with case studies and evidence

Read Paper

Case Studies

Real-world validation evidence — see how the Trinity catches flaws before implementation

Case Studies

Third-Party Validation

On November 16, 2025, Kimi K2 independently recognized and articulated the Genesis Methodology by analyzing only the public GitHub repository — providing external validation of the systematic approach.

Independently Verified
Latest Updates

What's New

Follow our development journey — from mock mode to v0.5.12 Polar Integration and beyond

v0.5.12 — Polar Integration

Latest

Pioneer Tier + Polar Payment Integration! PolarClient customer state API, PolarAdapter with 5-min TTL cache, webhook endpoint with Standard Webhooks HMAC verification. Legal pages v2.0 (Privacy Policy + T&C with Polar Merchant of Record). UUID Tracer for GCP log analytics bridge. 312 tests, 52.76% coverage. 2,389+ verified engagement hours with 1,876+ endpoints.

v0.5.10 — Trinity Verified

Trinity Pipeline VERIFIED + BYOK Anthropic! Two-tier Pilot/EA registration with invite codes. Token overflow fixed. Z Guardian veto code-enforced. Anthropic Claude 4 BYOK model refresh. 290 tests total.

Apr 5, 2026

v0.5.5 — Trinity Baseline

TrinitySynthesis schema fix with 3 regression tests. 208 tests total. Phase 47 Ground Truth baseline established.

Mar 13, 2026

Phase 47 — Ground Truth Correction

Transparency

COO AY's Phase 47 forensic audit identified duplicate session counting in earlier reports. Original correction: 4,000+ → 2,100+ engagement hours, 84.5% → 63.7% VCR. Phase 71 mcp-verify purge (Report 071) established the corrected baseline. Report 074 (Phase 74) now shows 2,389+ hours, 90.7% VCR, 1,876+ endpoints. All metrics reflect the forensically verified Ground Truth baseline — scrapers excluded, conservative rounding applied. We believe honest self-correction builds stronger credibility than inflated numbers ever could.

Mar 23, 2026 · Report 071 — Phase 71 mcp-verify purge

v0.5.4 — X Agent v4.3

Creator-centric bias fix — removed VerifiMind self-promotion from X Agent output. Added founder_summary plain-language layer and research_prompts (Perplexity/Grok bridge).

Mar 12, 2026

v0.5.3 — Token Ceiling Monitor

Token Ceiling Monitor for usage tracking. AY 404 retention fix resolved. Smithery server-card added for legacy compatibility.

Mar 10, 2026

v0.5.2 — Genesis v4.2 "Sentinel-Verified"

Forced citations in all agent outputs. MACP v2.2 "Identity" protocol integrated. L Blind Test achieved 11/11 perfect score.

Mar 9, 2026

v0.5.1 — Z-Protocol v1.1 + CS "Sentinel"

Z-Protocol upgraded to v1.1 with 21 frameworks. CS Agent v1.1 "Sentinel" with 6-stage pipeline. OWASP Agentic AI security standards integrated.

Mar 7, 2026

v0.5.0 — Foundation

Major

The architectural hardening release. SessionContext tracing, error handling v2, health endpoint v2. Smithery fully removed — self-hosted on GCP Cloud Run with zero external dependencies. 205 tests.

Mar 1, 2026 · Read announcement

Case Study: Validation-First Design

New

First real-world A/B test: Human Intuition vs. Multi-Model Trinity. The Trinity unanimously rejected a GCP deployment architecture — catching hidden costs, insufficient RAM, and over-engineering. Complete raw evidence chain published with all 3 agent reports.

Mar 2, 2026 · View case study

v0.4.5 — BYOK Live

Bring Your Own Key support live. Per-tool-call api_key and llm_provider parameters. Auto-detects provider from key format. Triple-validated (Manus AI 6/6, Claude Code 6/6, CI 175 tests).

Feb 28, 2026 · View PR #55

v0.4.4 — Multi-Model Trinity

All three AI agents (X Innovator, Z Guardian, CS Validator) now return real inference with _overall_quality: "full". Z Agent routed to Groq/Llama for reliable structured ethics analysis. Per-agent model selection enabled. MCP server version bump confirmed. VerifiMind PEAS favicon added (48x48 PNG, C-S-P validated).

Feb 27, 2026 · Read announcement

v0.4.3 — C-S-P Pipeline Fix

Applied GodelAI's Compression-State-Propagation methodology to fix Trinity pipeline. Robust JSON extraction, quality markers (_inference_quality), and state validation checkpoints between agent stages.

Feb 27, 2026 · 14 PRs merged in 3 hours

v0.4.2 — Mock Mode Resolved

Fixed deprecated Gemini model endpoint (gemini-2.0-flash → gemini-2.5-flash). Transparent mock mode disclosure added. Real AI inference restored after 28-day mock period.

v0.4.1 — Markdown-First Output

Markdown-first output format across all agents. Smithery URL removal completed. PDF output deprecated in favor of structured Markdown. GCP Cloud Run deployment hardened.

Feb 14, 2026
Community

Join the Conversation

Connect with us, share feedback, and help shape the future of multi-model AI validation

Follow on X

Updates, insights, and discussions about VerifiMind PEAS

@creator35lwb

GitHub Discussions

Ask questions, share ideas, and connect with the community

Join Discussion

Report Issues

Found a bug or have a feature request? Let us know!

Open Issue

Early Adopter Registration

Register for free priority access to v0.6.0-Beta — EA (3mo free) or PILOT (6mo free)

Register Now
Now Open

Join as an Early Adopter

Register for free priority access to VerifiMind-PEAS v0.6.0-Beta when it launches. No credit card required.

Pilot Badge

PILOT

Invite-only · 50 slots

6 months free

Early Adopter Badge

EARLY ADOPTER

Open registration · 100 slots

3 months free

Insider Badge

INSIDER

DFSC RM 188 tier

Beta + newsletter

Validator Badge

VALIDATOR

DFSC RM 500 tier

1:1 consultation

Pioneer Badge

PIONEER

$9/month · Coordination Tools

Full Premium Access

3–6 Months Free

Pilot (6mo) or Early Adopter (3mo) tier access to v0.6.0-Beta at no cost

🧪

Exclusive Badge

Earn your tier badge — displayed on your profile and contributions

💬

Shape the Product

Direct feedback channel to the development team

Z-Protocol v1.1 compliant · GDPR/PDPA · Opt out anytime · Privacy Policy · Terms & Conditions

DreamFactory Startup Contest 2026

Support the Journey

VerifiMind™ PEAS is participating in DFSC 2026 on Mystartr. Support the project and receive exclusive rewards.

RM 50

Supporter

Digital badge + White Paper + Public shoutout

Insider Badge
RM 188

Insider

+ Priority v0.6.0-Beta invitation + Journey newsletter

Validator Badge
RM 500

Validator

+ 1:1 methodology consultation (30 min) · 100 slots

Pioneer Badge
$9/mo

Pioneer

Full Coordination Tools access + Premium support · Subscribe

Campaign: Mar 16 – Apr 15, 2026 · Target: RM 10,800 · Fixed Funding

Get in Touch

Have questions about the Genesis Methodology? Want to collaborate? We'd love to hear from you.

Share Your Feedback

Help us improve! Share your thoughts, report bugs, or suggest new features.

Rate your experience (optional):