Under the hood

How Our AI Works

Corexi is built on a proprietary hybrid LLM pipeline, combining multiple foundation models fine-tuned with our UX-specific training layers. No single model owns the output. Every analysis is the result of cross-validated, multi-signal reasoning.

01

3-layer architecture

Three specialized layers work in sequence. Each layer adds signal depth, and the output of one feeds into the next.

01

Visual AI Layer

Corexi Vision Engine

Automated capture renders your product at multiple viewports (desktop and mobile) using headless browsers with bot-protection bypass. Each screenshot is analyzed by our proprietary multi-model vision pipeline, trained on UX-specific patterns.

  • Multi-viewport capture (desktop 1440px + mobile 390px)
  • Bot-protection aware rendering with fallback chains
  • Element-level bounding box detection for annotated findings
  • 9-category visual scoring with evidence extraction
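
The "fallback chains" mentioned above can be pictured as an ordered list of render strategies, each tried until one produces a screenshot. The sketch below is a hypothetical illustration in Python; the `RenderFn` signature, the error handling, and the strategy behavior are assumptions for the sketch, not Corexi's actual implementation.

```python
# Hypothetical sketch of a capture step with a fallback chain: each render
# strategy is tried in order until one returns a screenshot.
from typing import Callable

RenderFn = Callable[[str, int], bytes]  # (url, viewport width) -> PNG bytes

def capture_with_fallbacks(url: str, width: int, chain: list[RenderFn]) -> bytes:
    """Try each render strategy in order; raise only if all of them fail."""
    last_error: Exception | None = None
    for render in chain:
        try:
            return render(url, width)
        except Exception as err:  # e.g. bot-protection challenge, timeout
            last_error = err
    raise RuntimeError(f"all render strategies failed for {url}") from last_error
```

A chain might start with a fast plain headless render and fall back to progressively more robust (and slower) strategies.
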

02

Behavioral Analytics Layer

Signal Fusion

Connect GA4, Clarity, Hotjar, Mixpanel, or Amplitude. Corexi ingests sessions, bounce rates, rage clicks, dead clicks, and engagement metrics. These signals are cross-validated with visual findings to surface issues that only show up in real user behavior.

  • 6 analytics providers supported (GA4, Clarity, Hotjar, Mixpanel, Amplitude, Firebase)
  • Behavioral signal correlation with visual findings
  • Rage click and dead click hotspot mapping
  • Session-level engagement pattern analysis
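
Cross-validating behavioral and visual signals can be illustrated with a small sketch: rage-click coordinates are matched against element-level bounding boxes, so a visual finding backed by real frustration signals gets flagged. The data shapes and the three-click threshold here are illustrative assumptions, not Corexi's real schema.

```python
# Illustrative sketch: correlate rage-click points with visual findings
# via bounding boxes. Shapes and threshold are assumptions.

def clicks_in_box(clicks, box):
    """Count (x, y) click points inside a (left, top, right, bottom) box."""
    left, top, right, bottom = box
    return sum(1 for x, y in clicks if left <= x <= right and top <= y <= bottom)

def correlate(findings, rage_clicks, min_clicks=3):
    """Flag visual findings whose element also attracts rage clicks."""
    flagged = []
    for finding in findings:
        hits = clicks_in_box(rage_clicks, finding["box"])
        if hits >= min_clicks:
            flagged.append({**finding, "rage_clicks": hits})
    return flagged
```
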

03

PX Engine

Hybrid Reasoning

The PX Engine combines visual analysis with behavioral signals using hybrid reasoning. It weighs evidence from multiple sources, applies category-specific scoring models, and generates the final PX Score with prioritized, fix-ready findings.

  • Weighted multi-source scoring across 5 UX dimensions
  • Confidence scoring based on data coverage
  • Neurodiversity lens (ADHD, dyslexia, autism, sensory, color vision)
  • Stack-aware fix code generation for every finding
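
Weighted multi-source scoring with coverage-based confidence might look roughly like the sketch below: the composite is renormalized over the dimensions that have data, while missing dimensions lower confidence instead of dragging down the score. The dimension names, weights, and confidence rule are illustrative assumptions, not Corexi's actual models.

```python
# Hedged sketch of a weighted composite score with coverage-based
# confidence. Names, weights, and the confidence rule are assumptions.

WEIGHTS = {"usability": 0.3, "accessibility": 0.25, "clarity": 0.2,
           "performance": 0.15, "trust": 0.1}

def px_score(dimension_scores: dict[str, float]) -> tuple[float, float]:
    """Return (composite 0-100, confidence 0-1).

    The composite is renormalized over the dimensions present; confidence
    is the share of total weight actually backed by data.
    """
    present = {d: s for d, s in dimension_scores.items() if d in WEIGHTS}
    total_w = sum(WEIGHTS[d] for d in present)
    composite = sum(WEIGHTS[d] * s for d, s in present.items()) / total_w
    confidence = total_w
    return round(composite, 1), round(confidence, 2)
```

With only visual data available, a scan would report the same-scale score at reduced confidence rather than a different score.
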

02

Custom training layers

Our foundation models are enhanced with UX-specific training data.

50K+

Annotated UX patterns

Categorized by severity, component type, and industry

WCAG 2.2

Full standard corpus

Every success criterion mapped to evaluation rules

600+

E-commerce UX patterns

Checkout flows, product pages, and other conversion-critical UI

9

Specialized scoring models

One per UX category, each with distinct evaluation criteria

03

Analysis pipeline

From URL to report in under 60 seconds.

1

Capture

~3-8s

Headless browser renders at desktop + mobile viewports

2

Analyze

~8-15s

Multi-model vision pipeline scores 9 UX categories

3

Enrich

~1-3s

Behavioral analytics data merged (if connected)

4

Score

<1s

PX Engine computes weighted composite + neurodiversity lens

5

Generate

~2-5s

Fix code written for your stack + findings prioritized

6

Deliver

<1s

Report with annotated screenshots and actionable output
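
The six steps above amount to a sequential pipeline: each stage receives the accumulated state, transforms it, and passes it on, with per-stage timings recorded along the way. A minimal sketch with placeholder stages (the stage internals and state shape are assumptions for illustration):

```python
# Minimal sketch of a six-stage sequential pipeline with per-stage
# timings. Stage bodies are placeholders, not Corexi's internals.
import time

def run_pipeline(url, stages):
    """Thread a shared state dict through each stage in order."""
    state = {"url": url, "timings": {}}
    for name, stage in stages:
        start = time.perf_counter()
        state = stage(state)
        state["timings"][name] = time.perf_counter() - start
    return state

# Placeholder stages; real ones would capture, score, enrich, and so on.
stages = [(name, lambda s: s) for name in
          ["capture", "analyze", "enrich", "score", "generate", "deliver"]]
result = run_pipeline("https://example.com", stages)
```
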

04

Transparency and limitations

We believe in being honest about what Corexi can and cannot do.

What about dynamic content?

Corexi captures the initial render state. Lazy-loaded content, modals triggered by interaction, and infinite scroll areas may not be fully analyzed in a single scan.

How accurate is the AI?

Our multi-model pipeline achieves strong precision on structural issues (contrast, spacing, hierarchy). Subjective design quality is harder. We always show evidence so you can verify.

What about authenticated pages?

Currently, Corexi scans publicly accessible pages. Authenticated page scanning is on our roadmap.

How is my data handled?

Screenshots are processed in memory, scored, then stored encrypted. Analytics connections use read-only OAuth tokens. We never modify your analytics or codebase.

Want to see the numbers behind the scores?