Research Brain Offer Source Validation Framework

Document Type: Framework
Status: Active Framework
Version: v1.0
Authority: Research Brain (Subordinate to MWMS HeadOffice)
Applies To: All external and internal sources used to support offer or opportunity research
Parent: Research Brain Architecture
Linked Systems:
Research Brain Canon
Research Brain — Offer Evidence Standards
Research Brain — Research Verdict Framework
Research Intelligence Database
Affiliate Brain
Finance Brain
MWMS AI Evidence Requirement Rule
Last Reviewed: 2026-03-26


Purpose

This framework defines how Research Brain evaluates whether a source of information is sufficiently reliable to support research conclusions.

The internet contains large volumes of:

recycled claims
SEO filler content
affiliate promotional material
unverified performance claims
copied articles
fabricated testimonials
AI-generated content presented as expertise

Research Brain must not treat all publicly available information as equally trustworthy.

This framework exists to:

improve factual reliability
reduce signal contamination
prevent weak sources from influencing decisions
ensure evidence integrity across MWMS


Scope

This framework applies to all information sources used during:

offer research
market research
competitor review
angle observation
funnel classification
niche evaluation
risk observation

including sources discovered through:

search engines
review sites
vendor material
industry blogs
forums
social media
affiliate communities
educational content
AI-assisted browsing

This framework applies regardless of whether research is performed by:

AI agents
human operators
hybrid processes


Core Principle

Availability of information does not equal reliability of information.

Visibility of information does not equal validity of information.

Popularity of information does not equal truth of information.

Research Brain must treat the internet as a mixed-quality evidence environment.

Source credibility must be evaluated before evidence weight is assigned.


Source Reliability Dimensions

Each source should be evaluated across the following dimensions:

Transparency
Independence
Replicability
Consistency
Incentive structure
Evidence traceability

No single dimension determines validity.

Reliability emerges from combined evaluation.
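The dimensions above can be recorded as a simple scorecard. A minimal sketch in Python, assuming an illustrative 0–2 scale per dimension and equal weighting — neither scale nor weighting is specified by this framework:

```python
from dataclasses import dataclass, fields

@dataclass
class SourceReliability:
    """Scores one source on each reliability dimension (0 = weak, 2 = strong)."""
    transparency: int = 0
    independence: int = 0
    replicability: int = 0
    consistency: int = 0
    incentive_structure: int = 0
    evidence_traceability: int = 0

    def combined_score(self) -> float:
        # No single dimension determines validity; reliability emerges
        # from the combined evaluation.
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

# Example: a vendor blog post scores moderately on several dimensions
# but zero on independence and incentive structure.
vendor_blog = SourceReliability(transparency=1, replicability=1,
                                consistency=1, evidence_traceability=1)
```

A combined score like `vendor_blog.combined_score()` supports the rule that no single dimension decides the outcome; a source strong on transparency alone still scores low overall.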


Source Classification

Research Brain classifies sources into structural categories.


Category A — Direct Observable Source

Information directly visible in primary assets.

Examples:

offer landing page
checkout page
advertorial page
pricing page
legal pages
terms pages
visible funnel steps

Strength:

high reliability for structural observations

Limitations:

does not confirm performance claims
does not confirm profitability


Category B — Independent Structural Observation

Information visible across multiple unrelated sources showing similar structural patterns.

Examples:

multiple competitor pages using similar angle
repeated funnel patterns in a niche
repeated pricing structures
repeated positioning themes

Strength:

useful for identifying structural norms

Limitations:

does not confirm effectiveness
does not confirm demand strength


Category C — Institutional or Educational Source

Information produced by organisations or individuals focused on structured education or analysis.

Examples:

industry research organisations
structured training material
academic publications
professional frameworks
methodology-focused content

Strength:

useful for conceptual understanding

Limitations:

may not reflect current market behaviour
may generalise across contexts


Category D — Commercial Content Source

Content created with commercial incentive to promote a product, service, or ecosystem.

Examples:

affiliate blog posts
vendor articles
promotional reviews
advertorial articles
comparison sites monetised via referral links

Strength:

useful for understanding positioning narratives

Limitations:

high bias probability
selective presentation of evidence
incentive-aligned claims


Category E — Social Signal Source

Informal commentary or opinion-based material.

Examples:

forum posts
social media commentary
community discussions
anecdotal experience sharing

Strength:

useful for identifying sentiment themes

Limitations:

low reliability
anecdotal bias
unverifiable context


Category F — Aggregated SEO Content

Content primarily created to capture search traffic rather than provide original analysis.

Examples:

SEO listicles
recycled blog posts
generic product roundups
templated review content
AI-generated filler content

Strength:

limited; may indicate which topics attract search traffic

Limitations:

low structural reliability
often derivative
often inaccurate
often lacks evidence depth
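The six categories can be expressed as an enumeration with baseline evidence weights. The numeric weights below are assumptions introduced for this sketch, not values defined by the framework; only their ordering reflects the text (direct observation strongest, aggregated SEO content weakest):

```python
from enum import Enum

class SourceCategory(Enum):
    """Structural source categories A-F from this framework."""
    A_DIRECT_OBSERVABLE = "direct observable source"
    B_INDEPENDENT_STRUCTURAL = "independent structural observation"
    C_INSTITUTIONAL = "institutional or educational source"
    D_COMMERCIAL = "commercial content source"
    E_SOCIAL_SIGNAL = "social signal source"
    F_AGGREGATED_SEO = "aggregated SEO content"

# Illustrative baseline weights (assumed values): the ordering, not the
# numbers, is what the framework prescribes.
BASELINE_WEIGHT = {
    SourceCategory.A_DIRECT_OBSERVABLE: 1.0,
    SourceCategory.B_INDEPENDENT_STRUCTURAL: 0.8,
    SourceCategory.C_INSTITUTIONAL: 0.6,
    SourceCategory.D_COMMERCIAL: 0.3,
    SourceCategory.E_SOCIAL_SIGNAL: 0.2,
    SourceCategory.F_AGGREGATED_SEO: 0.1,
}
```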


Source Incentive Awareness

Research Brain must consider the incentive structure behind the source.

Sources with direct financial incentive to promote an offer must not be treated as neutral observers.

Indicators of commercial incentive:

affiliate links
referral codes
promotional tone
call-to-action bias
exaggerated claims
selective comparison framing

Incentive presence does not invalidate the source.

It reduces evidence strength weighting.
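The indicator scan above can be sketched as a weight multiplier. The regex patterns and penalty scale are hypothetical examples, not part of the framework; note that the multiplier never reaches zero, because incentive presence reduces weighting rather than invalidating the source:

```python
import re

# Hypothetical indicator patterns; real detection would be richer.
INCENTIVE_PATTERNS = [
    r"\baffiliate\b",
    r"[?&](ref|aff|utm_campaign)=",     # referral-style URL parameters
    r"\b(buy now|click here to claim)\b",
]

def incentive_penalty(text: str, per_hit: float = 0.15, floor: float = 0.2) -> float:
    """Return an evidence-weight multiplier for a source's text.

    Each matched indicator reduces the multiplier, but it is floored
    above zero: incentive presence lowers weighting, it does not
    invalidate the source.
    """
    hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in INCENTIVE_PATTERNS)
    return max(floor, 1.0 - per_hit * hits)
```

For example, a neutral structural write-up keeps a multiplier of 1.0, while a page matching several indicators at once is weighted down accordingly.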


Cross-Source Consistency Check

Where possible, Research Brain should observe whether structural claims appear across multiple independent sources.

Consistency across independent sources may strengthen confidence in structural observations.

Consistency does not confirm performance outcomes.

Repeated claims may still trace back to a single inaccurate origin.

Consistency improves confidence in pattern visibility, not truth certainty.
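One crude proxy for independence of origin is counting distinct domains among the sources repeating a claim. A sketch, assuming domain distinctness is an acceptable (imperfect) stand-in for true independence — syndicated or copied content on separate domains would still evade it:

```python
from urllib.parse import urlparse

def independent_origins(urls: list[str]) -> int:
    """Count distinct domains among sources making the same structural claim.

    Repeated claims on one domain count once; this approximates, but does
    not prove, independence of origin.
    """
    return len({urlparse(u).netloc.removeprefix("www.") for u in urls})

# Three pages repeat a claim, but two share a domain: two origins, not three.
claim_urls = [
    "https://www.example-reviews.com/offer-a",
    "https://example-reviews.com/offer-a-update",
    "https://forum.example.net/thread/123",
]
```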


Traceability Requirement

Where possible, Research Brain should prefer sources that:

allow tracing back to original material
provide direct observation opportunities
allow verification of structural claims

Sources lacking traceability should receive lower weighting.


Promotional Claim Handling Rule

Promotional claims must not be treated as validated facts.

Examples:

high converting offer
proven winner
best performing funnel
top EPC opportunity
high ROI niche

Such statements must be treated as positioning language unless supported by stronger evidence classes.


Review Site Reliability Caution

Review sites frequently contain:

affiliate incentives
curated product ranking bias
recycled descriptions
non-verified scoring

Presence of star ratings does not indicate methodological reliability.

Review aggregation does not equal objective evaluation.


AI-Generated Content Risk

Increasing volumes of AI-generated content may present:

high fluency
low originality
repeated structure
recycled claims

Fluency must not be mistaken for credibility.

Consistency must not be mistaken for validation.

AI-generated content may amplify weak-source feedback loops, in which unverified claims are restated until they appear independently confirmed.


Source Weighting Guidance

When multiple source types exist:

prioritise direct observation
prioritise structural consistency
prioritise traceable evidence
prioritise independence of origin

and reduce reliance on:

promotional narratives
unsupported rankings
generic blog content
unverifiable claims
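The prioritisation order above can be sketched as a sort key. The record fields (`direct`, `consistent`, `traceable`, `independent`) are hypothetical names introduced for illustration:

```python
# Hypothetical source records; field names are assumptions for this sketch.
sources = [
    {"name": "affiliate roundup", "direct": False, "consistent": False,
     "traceable": False, "independent": False},
    {"name": "offer landing page", "direct": True, "consistent": True,
     "traceable": True, "independent": True},
    {"name": "industry report", "direct": False, "consistent": True,
     "traceable": True, "independent": True},
]

def priority_key(src: dict) -> tuple:
    """Order follows the framework: direct observation first, then
    structural consistency, traceability, and independence of origin."""
    return (src["direct"], src["consistent"], src["traceable"], src["independent"])

ranked = sorted(sources, key=priority_key, reverse=True)
```

Sorting on boolean tuples gives a lexicographic ordering, so a directly observable source outranks any indirect one regardless of the remaining fields.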


Minimum Source Expectation

Research tasks should aim to observe, where feasible:

primary offer material
at least one structural comparison point
at least one independent context signal

Low availability environments must declare evidence limitations.


Relationship to Evidence Standards

This framework operates alongside:

Research Brain — Offer Evidence Standards

Evidence Standards define evidence strength categories.

Source Validation Framework defines source credibility evaluation.

Both must align.


Relationship to Affiliate Brain

Affiliate Brain uses research outputs to prioritise testing attention.

Weak sources increase risk of testing low-quality opportunities.

Strong source discipline improves testing efficiency.


Relationship to Finance Brain

Finance Brain requires reliable structural interpretation before evaluating survivability constraints.

Weak research sources increase capital allocation risk.

Source discipline improves downstream economic modelling relevance.


Drift Protection

Research Brain must not assume:

high search ranking equals reliability
high content volume equals validity
repeated claims equal truth
polished writing equals expertise

Research Brain must remain aware that internet content ecosystems contain incentive distortions.


Architectural Intent

The Offer Source Validation Framework ensures that Research Brain:

operates on signal rather than noise
resists commercial bias contamination
produces more stable decision-support intelligence
maintains compatibility with MWMS governance standards


Final Rule

Research Brain may use imperfect sources.

Research Brain must acknowledge source limitations.

Research Brain must not elevate weak sources into strong conclusions.


Change Log Entry

Add this to Research Brain Change Log:

2026-03-26 — Added Offer Source Validation Framework v1.0

Change Type: Structural Extension
Authority: Research Brain
Scope Impact: Defines reliability evaluation for research information sources
Parent Architecture Impact: None
Decision Authority Impact: None
Backward Compatibility: Maintained

Summary
Added new framework:

Research Brain — Offer Source Validation Framework v1.0

Defines structured classification of research sources, reliability dimensions, incentive awareness handling, cross-source consistency checks, traceability expectations, and treatment of promotional claims.

Reason for Change
Internet research environments contain large volumes of biased, duplicated, or low-quality content that may degrade research reliability if not properly filtered.

Architectural Intent
Improve signal quality inside Research Brain outputs and reduce contamination from weak or incentive-driven information sources.

Strengthens compatibility with Affiliate Brain testing prioritisation and Finance Brain evaluation requirements.