Troubleshooting & Best Practices
Follow these guidelines to get accurate extractions and ratings every time. Most upload problems trace back to two sources: image-based PDFs with no text layer, and documents uploaded with personally identifiable information (PII) still in place.
Before every upload — two quick checks
PDF Requirements
Text layer present (good): the PDF contains an actual text layer. Characters are stored as data, not as pixels.
How to check: open the PDF and try to click-drag to highlight a word. If text highlights, you are good.
Scanned image (problem): the PDF is a photograph or scan of a printed page. There is no text, only pixels, so the AI has nothing to read.
How to fix: run OCR (Optical Character Recognition) using Adobe Acrobat, Smallpdf, or ilovepdf.com before uploading.
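The highlight test can also be approximated in code. The sketch below is an assumption on my part, not a feature of the tool: pages that draw text reference a /Font resource in the PDF, while image-only scans usually contain only image objects. Fonts hidden inside compressed object streams can evade this scan, so treat a negative result as a prompt to open the file and check by hand.

```python
def likely_has_text_layer(path: str) -> bool:
    """Rough check for a text layer: pages that draw text reference a /Font
    resource, while scanned-image PDFs typically contain only image XObjects.
    Fonts inside compressed object streams can evade this scan, so False
    means "open the file and verify", not "definitely no text"."""
    with open(path, "rb") as f:
        data = f.read()
    return b"/Font" in data
```

If this returns False, run OCR and re-check before uploading.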
PII Replacement Reference
Use this as a checklist when preparing a document. Replace each type of PII with a neutral placeholder before uploading. The AI does not need any of this information to extract medical findings.
You still enter the real date of injury, date of birth, and occupation group manually into the case fields — the AI only needs the medical findings from the document itself.
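To illustrate the placeholder idea, here is a minimal, hypothetical scrubbing pass in Python. The patterns and placeholder strings are assumptions for illustration only; a real checklist pass must also cover names, addresses, and claim numbers, which simple regexes cannot reliably catch.

```python
import re

# Hypothetical patterns; extend this list to match your own PII checklist.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # Social Security numbers
    (re.compile(r"\(\d{3}\)\s*\d{3}-\d{4}"), "[PHONE]"),  # (555) 123-4567 style phones
]

def scrub(text: str) -> str:
    """Replace each PII match with a neutral placeholder."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Names and addresses do not follow a fixed pattern, so review the document manually even after an automated pass.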
Best Practices
Replace personally identifiable information with neutral placeholders before uploading any report. The AI only needs medical findings, not the patient's identity.
The rating engine only needs WPI values, impairment codes, dates of injury, and occupation group. None of that requires the patient's name or SSN.
The AI reads text directly from the PDF. If the document is a scanned image inside a PDF wrapper, there is no text to read and extraction will fail or produce incorrect results.
Even if OCR is applied, check the quality. Poor scan quality (low DPI, handwriting, rotated pages) can cause mis-reads in WPI values. Always review the extracted data before running the rating.
AI extraction is highly accurate but not infallible. Before running the rating, verify every extracted field matches what is in the report.
The math is deterministic — garbage in, garbage out. A correct extraction is what makes the final rating defensible.
Each case should correspond to one QME or AME report. Mixing multiple reports in one case can cause the AI to conflate findings from different evaluations.
One evaluating physician per case keeps the extraction clean and the rating traceable to a single document.
The occupation group drives the occupation adjustment step. Using the wrong group is one of the most common rating errors.
A difference of one occupation group can shift the final PD rating by 1–3 percentage points.
Common Issues & Fixes
Issue: the AI extracts nothing, or no WPI value
Likely Causes
- PDF is a scanned image with no OCR text layer
- PDF is password protected
- Report does not use standard WPI terminology — the physician may use descriptive language instead of a numeric WPI
Fix
Run OCR on the document first, remove any password protection, and check that the report explicitly states a whole person impairment percentage.
Issue: the extracted WPI value is wrong
Likely Causes
- Low-quality scan caused OCR to misread digits (e.g., 10% read as 1%)
- The PDF has multiple WPI tables and the AI picked the preliminary rather than the final value
Fix
Manually correct the extracted WPI in the data panel before rating. Compare against the specific page and paragraph in the report.
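One way to make that comparison faster, sketched under the assumption that you can run Python against the report's extracted text: list every percentage the text explicitly ties to WPI language, so the single value in the data panel can be checked against everything the report actually says.

```python
import re

WPI_RE = re.compile(
    r"(\d{1,2})\s*%\s*(?:WPI|whole\s+person\s+impairment)",
    re.IGNORECASE,
)

def wpi_candidates(report_text: str) -> list[int]:
    """Every percentage the text explicitly ties to WPI wording.
    If the panel shows 1% but this list shows 10, suspect an OCR digit drop
    or a preliminary-table value picked over the final one."""
    return [int(m.group(1)) for m in WPI_RE.finditer(report_text)]
```

A list with more than one value is exactly the multiple-tables situation described above: confirm which one the physician states as final.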
Issue: the occupation group is missing or ambiguous
Likely Causes
- The report describes the job by title without a PDRS group number
- The job title maps ambiguously to multiple occupation groups
Fix
Override the occupation group in the extraction panel. Look up the correct group in the PDRS Occupational Variant table.
Issue: the extracted date of injury is wrong
Likely Causes
- The report mentions multiple dates (filing date, exam date, injury date) and the AI selected the wrong one
- The date format in the document is ambiguous (e.g., 01/02/23)
Fix
Correct the date of injury in the extraction panel. This is critical — the wrong year can change which schedule applies (pre- vs. post-SB 863).
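The ambiguity in a date like 01/02/23 is easy to demonstrate: the same string parses to two different days depending on whether a month-first or day-first convention is assumed. US medical reports are normally month-first, but the string alone cannot settle it, which is why the extracted date must be verified against the report's narrative.

```python
from datetime import datetime

raw = "01/02/23"
month_first = datetime.strptime(raw, "%m/%d/%y")  # January 2, 2023 (US convention)
day_first = datetime.strptime(raw, "%d/%m/%y")    # February 1, 2023 (day-first convention)

# Same eight characters, two different candidate dates of injury.
```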
Issue: the final rating looks wrong
Likely Causes
- Apportionment was not captured: the AI defaulted to 100% industrial even though the physician apportioned part of the disability to non-industrial causes
- Pain add-on was included or excluded incorrectly
- Occupation group is wrong
Fix
Review the extracted apportionment and pain add-on values. Check the physician's apportionment language in the report and correct the extracted values before re-rating.