Methodology & Explainability

AI that Can Explain Every Decision It Makes

In regulated evidence synthesis, the question is never just "what did the AI decide?" It is "why, with what confidence, can I verify it, and is it reproducible?" Puraite is built to answer all four.

Conformal Prediction for Calibrated Uncertainty

Not black-box confidence scores. Mathematically rigorous prediction sets with guaranteed coverage.

The Problem with Standard AI Confidence

Most AI systems produce a single confidence score (e.g., "87% likely to include"). This score has no statistical guarantee; it's not calibrated to actual error rates. In a regulatory context, acting on an uncalibrated confidence score is scientifically indefensible.

Without Conformal Prediction: "The AI thinks this study should be included with 87% confidence." What does 87% mean? How was it calibrated? What's the error rate? Unknown.

Conformal Prediction Sets

Conformal Prediction produces prediction sets with a provable coverage guarantee. At a 90% confidence level, Puraite guarantees that the true label (include/exclude) falls within the prediction set at least 90% of the time, a statistical claim you can cite in a protocol.

With Conformal Prediction: "At α = 0.10, this study's true label is in the set {Include} with ≥90% marginal coverage guarantee." Statistically rigorous, citable and defensible.
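To make the guarantee concrete, here is a minimal sketch of split conformal classification, the standard construction behind prediction sets of this kind. This is an illustration of the general technique, not Puraite's implementation; the scores and function names are invented for the example.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.10):
    """Quantile of calibration nonconformity scores that yields
    a (1 - alpha) marginal coverage guarantee."""
    n = len(cal_scores)
    # Finite-sample-corrected quantile level
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def prediction_set(label_probs, qhat):
    """Keep every label whose nonconformity score
    (1 - predicted probability) falls below the threshold."""
    return {label for label, p in label_probs.items() if 1 - p <= qhat}

# Illustrative calibration scores: 1 - model probability of the true label,
# computed on a held-out calibration set of already-labelled studies
cal_scores = np.array([0.02, 0.05, 0.11, 0.08, 0.30,
                       0.15, 0.04, 0.22, 0.07, 0.09])
qhat = conformal_threshold(cal_scores, alpha=0.10)

# A new study scored by the screening model
probs = {"Include": 0.91, "Exclude": 0.09}
print(prediction_set(probs, qhat))  # → {'Include'}
```

When the model is uncertain, both labels clear the threshold and the set becomes {Include, Exclude}, which is exactly the signal that routes a study to human review.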

Citation-Level Provenance

Every AI output is traceable to the exact passage in the source literature.

01

Passage-Level Citations

When Puraite extracts a PICO element, it links the extracted text to the exact sentence, table row, or figure caption in the source document. Not just the paper; the precise location within it.

02

Decision Rationale Log

Every inclusion/exclusion decision includes the PICO criteria against which it was evaluated, the matched evidence from the paper and the confidence level. Human-readable, exportable and timestamped.
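As a hedged illustration of what "human-readable, exportable and timestamped" can look like in practice, here is one possible shape for a rationale entry. The field names and values are hypothetical, not Puraite's actual export schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical rationale log entry; field names are illustrative only.
entry = {
    "study_id": "PMID:12345678",
    "decision": "include",
    "criteria": {
        "population": {"matched": True,
                       "evidence": "adults aged 40-65 with T2DM"},
        "intervention": {"matched": True,
                         "evidence": "metformin 1000 mg/day"},
    },
    "confidence_level": 0.90,
    "prediction_set": ["Include"],
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Serialisable, so it can be exported and archived as-is
print(json.dumps(entry, indent=2))
```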

03

Version-Locked Evidence

Every extraction is version-locked: model version, search date, database snapshot and protocol version are recorded. Given the same inputs and parameters, Puraite produces the same outputs. When new searches surface additional publications, results are updated transparently with full change tracking.
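The mechanics of version-locking can be sketched as follows: record every input that influences the output, then fingerprint the whole record deterministically, so identical inputs provably yield identical records. This is a generic sketch under assumed field names, not Puraite's schema.

```python
import hashlib
import json

def version_lock(extraction, *, model_version, search_date,
                 db_snapshot, protocol_version):
    """Attach a provenance record and a deterministic fingerprint
    to an extraction. Field names are illustrative."""
    record = {
        "extraction": extraction,
        "model_version": model_version,
        "search_date": search_date,
        "db_snapshot": db_snapshot,
        "protocol_version": protocol_version,
    }
    # Canonical JSON (sorted keys, fixed separators) so the same
    # inputs always serialise, and therefore hash, identically
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    record["fingerprint"] = hashlib.sha256(canonical.encode()).hexdigest()
    return record

a = version_lock({"population": "adults with T2DM"},
                 model_version="1.4.2", search_date="2026-03-01",
                 db_snapshot="medline-2026-02-28", protocol_version="v3")
b = version_lock({"population": "adults with T2DM"},
                 model_version="1.4.2", search_date="2026-03-01",
                 db_snapshot="medline-2026-02-28", protocol_version="v3")
assert a["fingerprint"] == b["fingerprint"]  # same inputs, same output
```

A changed database snapshot or model version changes the fingerprint, which is what makes silent drift detectable and updates auditable.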

04

Conflict Resolution Audit

Reviewers decide when and how to use AI assistance: for tiebreaks between human reviewers, as a second screener, or not at all. The appropriate level of AI involvement depends on your use case, from internal decision-making to external regulatory submission. All conflicts, reasoning and resolutions are logged.

Why Methodology Matters

The difference between a language model and a validated evidence synthesis tool.

Property | Generic LLM | Puraite
Confidence calibration | Uncalibrated score | Conformal Prediction sets
Citation traceability | Hallucination risk | Passage-level source links
Reproducibility | Non-deterministic | Version-locked, reproducible
Audit trail | None | Immutable, exportable
Protocol alignment | Ad hoc | PRISMA-aligned / Cochrane-compatible
Regulatory defensibility | Not defensible | Structured for submission

Methodological Standards

Built to the RAISE 2026 Guidelines

Puraite actively follows current methodological trends in AI-assisted evidence synthesis and adjusts its approach accordingly.

Puraite is designed and evaluated in accordance with the RAISE (Responsible use of AI in Evidence SynthEsis) framework, the leading international guidance for AI tools in systematic evidence synthesis (published March 2026 and submitted to Research Synthesis Methods). RAISE defines three core principles for AI tools in evidence synthesis:

Transparency / Traceability
Every AI decision must be auditable and traceable to its source.
Comprehensiveness
Recall must not be sacrificed for precision. No relevant study should be missed.
Accountability
Continuous auditing of algorithms and outputs post-deployment.

Puraite's conformal prediction architecture, citation-level provenance and human checkpoint design directly implement these principles. As the field of AI-assisted evidence synthesis evolves, Puraite monitors emerging guidance, incorporates new methodological standards and updates its approach to remain aligned with best practice.

Read the RAISE guidelines →

Want a deep-dive on the methodology?

Schedule a technical walkthrough with our CTO: conformal prediction, provenance architecture and your specific use case.

Book Technical Briefing