Methodology

Data sources, data quality assessment, analysis approach, and query transparency. Every claim in this report is traceable to public data, and every limitation is stated explicitly.


Data Quality Assessment

Critical Context

The data underlying this report has significant gaps. All claims are made within these constraints. Improving data quality is the #1 priority for making stronger claims in future reports.

Current State

| Dimension | Status | Value | Impact on Report |
|---|---|---|---|
| Total MAUDE events | Good | 2.50M | Full history available |
| Device linkage | Solved | 99.3% linked | Virtually all events analyzable by brand |
| LLM extraction | Moderate | 39.3% (981K events) | Root cause and failure mode data improving |
| Embedding coverage | Good | 87-91% | Vector search works |
| PowerGlide linked events | Good | 224 events | Strong pattern detection |
| PowerGlide extracted events | Moderate | 132 events (58.9%) | Failure mode and root cause data |
| BD Power line total events | Large | ~5,100 events | Full portfolio analysis |

What Each Gap Means

Device linkage at 99.3%: Previously the critical blocker at 18.9%. Now virtually all events are linked to specific device brands. PowerGlide specifically went from 46 to 224 linked events (4.8x improvement) through ghost event backfill and improved device matching.

LLM extraction at 39.3%: The structured fields we rely on (recall_risk, root_cause, failure_mode, user_error_blamed) come from LLM extraction of event narratives. Coverage improved from ~30% to 39.3% overall. PowerGlide has 58.9% extraction coverage (132 of 224 events). Remaining gap means some failure patterns may be underrepresented.

Embedding coverage at 87-91%: Good enough for vector similarity search and clustering. Used to discover "ghost" PowerGlide events that were filed under generic names.

Data Quality Gates

What you need before making specific types of claims:


Data Sources

FDA MAUDE Database

Source: FDA Manufacturer and User Facility Device Experience

Coverage: Medical device adverse event reports from 1992-present

Our dataset: 2.50M events (data through February 2026)

Access: Public data via openFDA API and bulk downloads

URL: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/search.cfm

CMS Provider Utilization

Source: Centers for Medicare & Medicaid Services (public files)

Coverage: Medicare Part B procedures by provider

Dataset used: 727K procedures, 299K providers (2022 data)

Note: This data comes from publicly available CMS files. It is not stored in our ClickHouse database. We reference it for market sizing only.

HCPCS codes used:

  • 36568: PICC insertion (without imaging)
  • 36569: PICC insertion (with imaging)
  • 36572: CVC insertion (without imaging)
  • 36573: CVC insertion (with imaging)

INFO

Medicare represents ~30% of total procedures. Actual market volume is 3-4x larger.
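
The 3-4x multiplier follows directly from the ~30% share. A minimal sketch of the extrapolation (function name is ours; the 727K and 30% figures come from this section):

```python
def extrapolate_market(medicare_procedures: int, medicare_share: float = 0.30) -> int:
    """Estimate total market procedure volume from Medicare-only counts.

    Assumes Medicare represents `medicare_share` of all procedures, so the
    total is medicare_procedures / medicare_share (~3.3x at a 30% share).
    """
    return round(medicare_procedures / medicare_share)

# 727K Medicare Part B procedures at ~30% share -> ~2.4M total procedures
total = extrapolate_market(727_000, 0.30)
```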

FDA Enforcement Data

Sources: Recall database, warning letters, enforcement actions

Our dataset: 57K enforcement actions in ClickHouse


Analysis Pipeline

openFDA API → ClickHouse (analytics engine)

Process:

  • Daily date ranges to avoid API 26K pagination limit
  • Deduplication by MDR report key
  • Device and manufacturer linking via brand name matching
  • Geographic extraction from event narratives
  • LLM extraction for structured fields (recall risk, root cause, failure mode)
  • Vector embeddings for similarity search

Refresh: Weekly automated pipeline
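
The first two steps above (daily windowing and deduplication) can be sketched as follows; this is illustrative, not the pipeline's actual code, though `mdr_report_key` is a real openFDA field:

```python
from datetime import date, timedelta

# (1) Query the openFDA API one day at a time so no single window exceeds
#     the ~26K-result pagination cap.
def daily_ranges(start: date, end: date):
    """Yield (day, day) windows covering [start, end] inclusive."""
    d = start
    while d <= end:
        yield d, d
        d += timedelta(days=1)

# (2) Deduplicate by MDR report key, keeping the first event seen.
def dedupe_by_mdr_key(events):
    seen, unique = set(), []
    for ev in events:
        key = ev["mdr_report_key"]
        if key not in seen:
            seen.add(key)
            unique.append(ev)
    return unique
```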

LLM Extraction

Model: Gemini 2.0 Flash via Vertex AI Batch

Cost: ~$0.05 per 1M tokens (~$45 for 700K events)

Fields extracted:

| Field | Type | Purpose |
|---|---|---|
| recall_risk | high/medium/low | Recall prediction |
| root_cause | enum | Failure attribution |
| failure_mode | enum | Technical classification |
| user_error_blamed | boolean | Blame detection |
| user_error_justified | boolean | Blame validation |
| affects_other_units | boolean | Batch risk indicator |
| facility_name | text | Geographic intelligence |
| facility_state | text | Territory mapping |

Coverage: 39.3% of total events extracted (981K of 2.50M).
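
A minimal sketch of how an extracted record can be checked against the field table above (the validator and its rules are illustrative; the enum value sets for root_cause/failure_mode are not enumerated here, so only the typed fields are checked):

```python
RECALL_RISK = {"high", "medium", "low"}
BOOL_FIELDS = {"user_error_blamed", "user_error_justified", "affects_other_units"}
TEXT_FIELDS = {"facility_name", "facility_state"}

def validate_extraction(rec: dict) -> list:
    """Return a list of schema problems (empty list = record passes)."""
    problems = []
    if rec.get("recall_risk") not in RECALL_RISK:
        problems.append("recall_risk must be high/medium/low")
    for f in BOOL_FIELDS:
        if not isinstance(rec.get(f), bool):
            problems.append(f + " must be boolean")
    for f in TEXT_FIELDS:
        if not isinstance(rec.get(f), str):
            problems.append(f + " must be text")
    return problems
```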

Vector Embeddings

Model: Voyage-3 (1024-dimensional)

Use cases: Similar event clustering, failure pattern discovery, semantic search across narratives

Coverage: 87-91% of events embedded
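
The ghost-event discovery above is nearest-neighbor search over these embeddings. A toy sketch (3-dimensional vectors for illustration; real Voyage-3 embeddings are 1024-dimensional):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(query, candidates):
    """Return the id of the candidate embedding closest to the query."""
    return max(candidates, key=lambda cid: cosine(query, candidates[cid]))
```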


Key Queries

BD Power Line Events

```sql
SELECT d.brand_name, COUNT(*) as events,
       countIf(resulted_in_injury) as injuries,
       round(countIf(resulted_in_injury) * 100.0 / COUNT(*), 1) as injury_rate
FROM spincast.events e
JOIN spincast.devices d ON e.device_id = d.id
WHERE lower(d.brand_name) LIKE '%power%picc%'
   OR lower(d.brand_name) LIKE '%power%glide%'
   OR lower(d.brand_name) LIKE '%power%midline%'
GROUP BY d.brand_name
ORDER BY events DESC
```

PowerGlide Timeline

```sql
SELECT toYYYYMM(event_date) as month,
       COUNT(*) as events,
       countIf(resulted_in_injury) as injuries
FROM spincast.events e
JOIN spincast.devices d ON e.device_id = d.id
WHERE lower(d.brand_name) LIKE '%power%glide%'
  AND event_date >= '2024-11-01'
GROUP BY month
ORDER BY month
```

Failure Mode Distribution

```sql
SELECT failure_mode, COUNT(*) as n,
       round(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER (), 1) as pct
FROM spincast.events e
JOIN spincast.devices d ON e.device_id = d.id
WHERE llm_extracted = 1
  AND (lower(d.brand_name) LIKE '%power%glide%'
       OR lower(d.brand_name) LIKE '%power%picc%'
       OR lower(d.brand_name) LIKE '%power%midline%')
GROUP BY failure_mode
ORDER BY n DESC
```

Root Cause Distribution

```sql
SELECT root_cause, COUNT(*) as n,
       round(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER (), 1) as pct
FROM spincast.events e
JOIN spincast.devices d ON e.device_id = d.id
WHERE llm_extracted = 1
  AND (lower(d.brand_name) LIKE '%power%glide%'
       OR lower(d.brand_name) LIKE '%power%picc%'
       OR lower(d.brand_name) LIKE '%power%midline%')
GROUP BY root_cause
ORDER BY n DESC
```

Using the Spincast API

Spincast provides a ClickHouse endpoint for live data queries. This report is a snapshot; the API gives you real-time access.

Connection

```bash
# Basic query
curl -s "https://scanpath-clickhouse.fly.dev/?user=default&password=scanpath2025secure" \
  --data "SELECT count() FROM spincast.events"

# Query with format
curl -s "https://scanpath-clickhouse.fly.dev/?user=default&password=scanpath2025secure" \
  --data "SELECT brand_name, count() as n FROM spincast.events e
  JOIN spincast.devices d ON e.device_id = d.id
  WHERE lower(d.brand_name) LIKE '%power%glide%'
  GROUP BY brand_name FORMAT Pretty"
```

Example: Monitor a Device Category

```sql
-- Monthly event trend for any device
SELECT toYYYYMM(event_date) as month,
       COUNT(*) as events,
       countIf(resulted_in_injury) as injuries,
       round(countIf(resulted_in_injury) * 100.0 / COUNT(*), 1) as injury_rate
FROM spincast.events e
JOIN spincast.devices d ON e.device_id = d.id
WHERE lower(d.brand_name) LIKE '%your_device%'
  AND event_date >= today() - 365
GROUP BY month
ORDER BY month
```

Example: Find Emerging Signals

```sql
-- Devices with accelerating event velocity (last 6 months vs prior 6 months)
SELECT d.brand_name,
       countIf(event_date >= today() - 180) as recent_6mo,
       countIf(event_date >= today() - 365 AND event_date < today() - 180) as prior_6mo,
       round(countIf(event_date >= today() - 180) * 1.0 /
             nullIf(countIf(event_date >= today() - 365 AND event_date < today() - 180), 0), 2) as velocity_ratio
FROM spincast.events e
JOIN spincast.devices d ON e.device_id = d.id
WHERE event_date >= today() - 365
GROUP BY d.brand_name
HAVING recent_6mo >= 10
ORDER BY velocity_ratio DESC
LIMIT 20
```
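
The velocity_ratio computed above reduces to a simple division with a zero-denominator guard (the role of nullIf in the SQL); in plain Python:

```python
def velocity_ratio(recent_6mo: int, prior_6mo: int):
    """Recent 6-month event count divided by the prior 6-month count.

    Returns None when the prior period had no events, mirroring nullIf(..., 0)
    in the SQL query above. A ratio well above 1.0 flags acceleration.
    """
    if prior_6mo == 0:
        return None
    return round(recent_6mo / prior_6mo, 2)
```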

Data Improvement Roadmap

In priority order:

| Priority | Gap | Current | Target | Impact |
|---|---|---|---|---|
| 1 | Device linkage | 99.3% | 95% | DONE -- was 18.9%, now 99.3% |
| 1 | LLM extraction | 39.3% | 80%+ | Larger samples for root cause and failure mode analysis |
| 2 | CMS data integration | Not in DB | In ClickHouse | Enable volume normalization and account-level intelligence |
| 3 | Account-level data | Not available | CMS Open Payments + Utilization | Identify top PowerGlide accounts by triangulation |

How to Run Improvements

```bash
# Check current status
python -m clickhouse.pipeline.run_pipeline --status

# Run full pipeline (ingest + extract + embed)
python -m clickhouse.pipeline.run_pipeline --days 14

# Run extraction only (improve LLM coverage)
python -m clickhouse.pipeline.run_pipeline --extract-only --extract-limit 5000

# Run embeddings only
python -m clickhouse.pipeline.run_pipeline --embed-only --embed-limit 10000
```

PowerGlide 2025 Timeline

The Pattern (Updated with 4.8x More Data)

| Period | Events/Month | Injuries/Month | Pattern |
|---|---|---|---|
| H2 2024 | 1-5 | 0 | Low baseline |
| Jan-May 2025 | 20-26 | 3-7 | Sustained elevation |
| Jun-Sep 2025 | 9-17 | 2-3 | Moderate |
| Oct 2025 | 35 | 15 | Spike (includes clinical study) |
| Nov 2025 | 3 | 3 | Drop (possibly incomplete) |

Updated Assessment

With improved device linkage and ghost event recovery, the picture is nuanced:

  1. Oct 2025 (35 events) includes clinical study batch reports but is the largest single month on record
  2. Jan-May 2025 (20-26 events/month) is a sustained real-world elevation that cannot be explained by clinical trial batching
  3. The acceleration pattern could reflect growing PowerGlide adoption (more devices in field = more events at constant rate) or a genuine safety trend

Without procedure volume denominators, we cannot distinguish adoption growth from safety signal. CMS data integration (planned) would resolve this ambiguity.
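
What the planned CMS integration would enable is rate normalization: comparing events per procedure rather than raw counts. A minimal sketch (all figures invented for illustration):

```python
def events_per_100k(events: int, procedures: int) -> float:
    """Adverse events per 100,000 procedures."""
    return round(events * 100_000 / procedures, 1)

# The same raw count of 26 events/month implies very different safety
# pictures depending on the (currently unknown) procedure denominator:
# events_per_100k(26, 500_000) -> 5.2, vs events_per_100k(26, 50_000) -> 52.0
```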


Limitations

Reporting Bias

Facility reporting to MAUDE is voluntary except for deaths, which leads to underreporting: estimates suggest only 1-10% of actual events are ever reported. Facilities may also decline to report when the manufacturer convinces them an event was "user error."

Attribution Uncertainty

Manufacturer investigations are self-interested, not independent, and their "use-related" conclusions are not verified by FDA. Root cause fields therefore reflect the manufacturer's claim, not established fact.

Market Share Confound

This is the biggest limitation. Raw event counts are not comparable between manufacturers with different market share. BD's ~70% mini-midline share means many more devices in the field than Stiletto's <5%. We mitigate this by using rates (injury rate per event) and patterns (failure mode distribution) rather than raw counts.

Device Linkage Gap (Resolved)

Device linkage improved from 18.9% to 99.3%. PowerGlide went from 46 to 224 linked events through ghost event backfill and improved matching. This is no longer a limitation.

LLM Extraction Reliability

Structured fields (root cause, failure mode, user error) are extracted by an LLM from narrative text. These are interpretations, not verified facts. We cite sample sizes to make the extraction basis clear.

Injury Severity

Outcome detail varies by report quality. "Injury" is binary (yes/no), not scaled by severity. A minor bruise and a surgical retrieval both count as "injury."

Interpretation Guidelines

  1. Event counts are proxies -- Higher counts may indicate market success (more devices in field), not just safety issues
  2. Patterns matter more than absolutes -- Same failure mode across multiple facilities signals design issue, not user error
  3. User error claims need scrutiny -- Manufacturer investigations are self-interested; look for pattern evidence
  4. Absence of evidence is not evidence of absence -- Low event counts may reflect low market share, underreporting, or newness
  5. Temporal spikes require investigation -- Distinguish clinical trial batches from actual field failure increases
  6. Sample sizes determine confidence -- n=26 gives directional signal; n=500 gives statistical confidence
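
Guideline 6 can be made concrete with a confidence interval. A sketch using the Wilson score interval (our choice of method, not specified in the report): at the same observed proportion, n=26 yields an interval roughly four times wider than n=500.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval (low, high) for the proportion successes/n."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin
```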

Reproducibility

All queries can be run against the Spincast ClickHouse instance:

```bash
curl -s "https://scanpath-clickhouse.fly.dev/?user=default&password=scanpath2025secure" \
  --data "SELECT count() FROM spincast.events"
```

Database Statistics (February 2026)

| Table | Row Count | Coverage |
|---|---|---|
| events | 2,499,000+ | 1992-2026 |
| devices | 45,000+ | Brand names |
| manufacturers | 8,000+ | Company names |
| enforcement_actions | 57,000+ | Recalls, warnings |
| Device linkage | 2,482,000 | 99.3% of events |
| LLM extractions | 981,000 | 39.3% of events |
| Embeddings | 2,250,000 | 87-91% of events |

Transparency Commitment

All claims in this report are traceable to:

  1. FDA MAUDE database (public)
  2. CMS Medicare data (public)
  3. FDA 510(k) clearances (public)
  4. Clinical trial registry (ClinicalTrials.gov, public)

Independent verification: Any claim can be verified by searching MAUDE directly. All SQL queries are provided for reproducibility. Raw data access available via ClickHouse endpoint.

What this report does:

  • Uses rates and patterns, not raw counts
  • Includes sample sizes with every claim
  • States limitations prominently
  • Separates what the data shows from what it doesn't

What this report does NOT do:

  • Compare raw event counts between products with different market share
  • Use clinical trial spikes as field failure evidence
  • Claim Stiletto is "proven safer" (insufficient field data)
  • Hide data gaps or limitations

Last updated: February 2026