Clinical Review
Available to: Clinical Reviewer (RN), Medical Director (MD)
Module: UM
Minimum Permission: UM Review - Clinical
Overview
The Clinical Review pane provides AI-powered evaluation of incoming authorization requests against Medicare coverage policies. Lilee automatically matches extracted procedure and diagnosis codes to National Coverage Determinations (NCDs), Local Coverage Determinations (LCDs), and Articles, then presents a structured decision tree showing whether the clinical evidence in the document meets each coverage criterion. Your role is to verify the AI's findings, apply your clinical judgment, and record the determination.
Use this feature when you need to:
Evaluate medical necessity for a prior authorization or DME request against Medicare coverage criteria
Review AI-generated clinical findings with supporting evidence traced back to the source document
Record an approval, denial, or pend determination with policy references
Override an AI recommendation with a documented clinical rationale
Generate and send a nurse review letter summarizing your clinical findings
[Screenshot: The Clinical Review pane showing a decision tree with criteria questions, Yes/No/Pending answers, evidence citations, and an overall APPROVE recommendation with confidence score]
Before You Begin
Make sure the following are in place before using Clinical Review:
Your account has the UM Review - Clinical permission (Clinical Reviewer or Medical Director role).
The document has completed AI processing and is available in the Intake Dashboard.
Note: Clinical review runs automatically for prior authorization and DME documents during the AI processing pipeline. For other document types, you can use Ellie's LCD/NCD Check quick action to evaluate coverage manually.
How Clinical Review Works
The CMS Policy Hierarchy
When a document enters the clinical review pipeline, Lilee follows the Centers for Medicare & Medicaid Services (CMS) coverage hierarchy to find the applicable policy:
National Coverage Determinations (NCDs) are checked first. These are Medicare-wide policies that apply uniformly across the country and take precedence over local policies.
Local Coverage Determinations (LCDs) are checked next. These are regional policies set by Medicare Administrative Contractors (MACs) that cover specific geographic areas.
Articles are checked last. These provide billing, coding, and clinical guidance that supplements LCD and NCD policies.
This hierarchy ensures that the most authoritative coverage policy is always applied to each request.
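The precedence order above can be sketched in a few lines. This is an illustrative sketch only; the function name, policy fields, and matching rule are assumptions for the example, not the platform's actual API.

```python
# Illustrative sketch of the CMS coverage hierarchy described above.
# Names and data shapes are hypothetical, not Lilee's internal API.

def find_applicable_policy(procedure_codes, policies_by_type):
    """Return the first matching policy, honoring NCD > LCD > Article precedence."""
    for policy_type in ("NCD", "LCD", "Article"):  # most authoritative first
        for policy in policies_by_type.get(policy_type, []):
            # Assume a policy applies if it covers any extracted code
            if set(policy["codes"]) & set(procedure_codes):
                return policy
    return None  # no matching coverage policy found

policies = {
    "NCD": [],
    "LCD": [{"id": "L33733", "name": "Wheelchair Seating", "codes": ["E2601", "E2602"]}],
}
match = find_applicable_policy(["E2601"], policies)
print(match["id"])  # -> L33733 (no NCD matched, so the LCD applies)
```

Because the NCD list is empty here, the lookup falls through to the LCD tier, mirroring the behavior described in the Tip under Step 2.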
[Screenshot: The policy type toggle showing NCD, LCD, and Article tabs, with the LCD tab selected and a policy dropdown showing available matching policies]
What the AI Evaluates
Once the applicable policy is identified, the AI extracts the clinical criteria from the policy and evaluates the document's clinical evidence against each criterion. The process works as follows:
The system identifies the CPT and HCPCS codes extracted from the document.
It looks up applicable coverage policies using those codes.
It extracts the individual clinical criteria from the matched policy.
Each criterion is mapped to a clinical question (for example, "Does the patient have a documented diagnosis of chronic obstructive pulmonary disease?").
The AI searches the document for evidence that answers each question.
Each criterion is assigned an answer: Yes (supporting evidence found), No (contradicting evidence found or criterion clearly not met), or Pending (insufficient evidence to determine).
An overall recommendation is generated based on the criteria evaluation.
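The aggregation in the final step can be summarized with a small sketch. The answer values mirror the documented Yes/No/Pending states; the exact aggregation rule (any No yields DENY, any Pending yields PENDING, otherwise APPROVE) is an assumption for illustration.

```python
# Hedged sketch of how per-criterion answers could roll up into an
# overall recommendation. The aggregation rule is an assumption.

def overall_recommendation(criteria_answers):
    """criteria_answers: list of 'Yes' | 'No' | 'Pending', one per criterion."""
    if any(a == "No" for a in criteria_answers):
        return "DENY"      # at least one required criterion clearly not met
    if any(a == "Pending" for a in criteria_answers):
        return "PENDING"   # insufficient evidence on at least one criterion
    return "APPROVE"       # every criterion has supporting evidence

print(overall_recommendation(["Yes", "Yes", "Yes"]))      # APPROVE
print(overall_recommendation(["Yes", "Pending", "Yes"]))  # PENDING
print(overall_recommendation(["Yes", "No", "Pending"]))   # DENY
```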
The Decision Tree
The clinical review results are presented as a decision tree -- a structured list of clinical criteria, each presented as a question with the AI's assessment.
For each criterion, you see:
Clinical question
The specific medical necessity question derived from the policy (e.g., "Does the patient have a documented mobility limitation?")
Answer
The AI's determination: Yes (criterion met), No (criterion not met), or Pending (insufficient evidence). Each answer is displayed as an editable dropdown -- you can change the answer directly from the decision tree.
Supporting evidence
The specific text from the document that supports the answer, traced to the source
Criteria source
The policy section and reference where this criterion is defined
Page reference
The page number(s) in the source document where the supporting evidence was found
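The fields above can be pictured as one record per criterion. The field names below are assumptions for the sketch, not the platform's actual schema.

```python
# Illustrative structure of one decision-tree criterion, mirroring the
# fields described above. Field names are hypothetical.

from dataclasses import dataclass

@dataclass
class Criterion:
    question: str   # medical-necessity question derived from the policy
    answer: str     # "Yes" | "No" | "Pending" (editable by the reviewer)
    evidence: str   # document text supporting the answer
    source: str     # policy section where this criterion is defined
    pages: list     # page number(s) where the evidence was found

node = Criterion(
    question="Does the patient have a documented mobility limitation?",
    answer="Yes",
    evidence="Patient unable to ambulate more than 10 feet without assistance.",
    source="L33733, Coverage Indications",
    pages=[3],
)
print(node.answer)  # Yes
```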
[Screenshot: A single decision tree node expanded to show the clinical question, a "Yes" answer with green indicator, the supporting evidence text quoted from the document, the policy section reference, and a page number link]
The Recommendation
Based on the decision tree evaluation, the system generates an overall recommendation:
APPROVE -- The clinical evidence meets the coverage criteria.
DENY -- The clinical evidence does not meet one or more required criteria. The specific unmet criteria are identified.
PENDING -- There is insufficient clinical evidence to make a determination. The system identifies which criteria need additional documentation.
The recommendation card includes:
A completion percentage showing how many criteria are met (YES answers) out of the total.
Status pills summarizing key dimensions: Medical Necessity (met or not met), Documentation (complete or incomplete), and Additional Info Required (shown when criteria are still pending).
An AI-generated reasoning section explaining the basis for the recommendation, referencing the specific criteria that were or were not met.
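The completion percentage on the recommendation card is simply the share of criteria answered Yes. A minimal sketch, assuming that definition:

```python
# Sketch of the completion metric described above: Yes answers / total.

def completion_percent(answers):
    met = sum(1 for a in answers if a == "Yes")
    return round(100 * met / len(answers))

# 7 of 9 criteria met -> matches the "78% complete" example shown below
answers = ["Yes"] * 7 + ["Pending", "No"]
print(f"{completion_percent(answers)}% complete")  # -> 78% complete
```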
[Screenshot: The recommendation card showing an APPROVE badge, "78% complete" indicator, green status pills for Medical Necessity and Documentation, and a reasoning paragraph below]
Step-by-Step: Performing a Clinical Review
Step 1: Open the Clinical Review Pane
Select a document from the Intake Dashboard that has completed AI processing. The document should be classified as prior authorization, DME, or concurrent review.
Click the Clinical Review toggle in the toolbar to open the Clinical Review pane.
The pane loads with the AI's clinical review results. If clinical review has not yet been performed for this document, the system initiates it automatically.
[Screenshot: The toolbar showing the Clinical Review toggle button highlighted, with the pane opening alongside the Document Viewer]
Step 2: Select the Policy Type and Policy
Use the policy type toggle at the top of the pane to switch between NCD, LCD, and Article views. The system defaults to the most relevant policy type based on the CMS hierarchy.
If multiple policies apply to the extracted procedure codes, use the policy selector dropdown to choose the specific policy you want to review against. The dropdown shows the policy name and ID (for example, "L33733 - Wheelchair Seating").
The decision tree updates to reflect the criteria from the selected policy.
Tip: Start with the NCD view. National Coverage Determinations take precedence over local policies. If no NCD applies to the procedure codes in this document, move to the LCD tab to see the local policy from the applicable Medicare Administrative Contractor.
Step 3: Review the Decision Tree
Work through each criterion in the decision tree from top to bottom.
For each criterion marked Yes, verify that the supporting evidence accurately reflects the clinical documentation. Click the page reference to jump to the relevant section in the Document Viewer and compare the AI's cited evidence against the original text.
For each criterion marked No or Pending, review the clinical documentation carefully to determine whether additional evidence exists that the AI may have missed or interpreted differently.
If you identify an error in the AI's evaluation of a specific criterion, use the answer dropdown on that criterion to change it to Yes, No, or Pending. The change is saved immediately, and the overall recommendation recalculates automatically based on the updated criteria.
[Screenshot: The decision tree with five criteria showing a mix of Yes, No, and Pending answers, with one criterion expanded to show the edit controls for updating the answer and adding reviewer notes]
Step 4: Cross-Reference Extracted Clinical Data
Open the Detected Workflow pane alongside the Clinical Review pane to verify the extracted data that drove the clinical review:
Procedure codes (CPT/HCPCS) -- Verify these match the services being requested. Use the CPT code picker to search and correct codes if needed.
Diagnosis codes (ICD-10) -- Verify these accurately represent the patient's conditions. Use the ICD-10 code picker to search and correct codes.
Patient information -- Confirm the member ID, date of birth, and other identifying information.
Provider information -- Verify the requesting and servicing provider details.
The platform validates extracted member IDs against your EHR system and flags mismatches automatically. Provider names are validated against your provider directory.
[Screenshot: The Detected Workflow pane showing extracted procedure codes with CPT lookup and diagnosis codes with ICD-10 lookup, displayed alongside the Clinical Review pane]
Important: Accurate procedure and diagnosis codes are critical for clinical review. If the AI extracted incorrect codes, correct them in the Detected Workflow pane and then regenerate the clinical review (Step 6) for updated results based on the corrected codes.
Step 5: Make Your Determination
Based on your review of the decision tree, supporting evidence, and clinical documentation, record your determination.
To accept the AI recommendation:
If the AI recommendation aligns with your clinical assessment, proceed to the next workflow step. For approvals, open the Authorization Review pane to prepare the authorization submission. The approval determination is recorded with the policy reference and supporting criteria.
To deny the request:
Confirm that one or more criteria are not met based on your clinical review.
Document the clinical rationale, referencing the exact policy criteria and the clinical evidence (or lack thereof) in the documentation.
Record the denial determination. The specific unmet criteria from the decision tree are attached to the determination record.
To pend the request for additional information:
Identify which criteria cannot be evaluated due to insufficient documentation.
Document what additional clinical information is needed from the requesting provider.
Record the pend determination. The system identifies the outstanding criteria for the outreach request.
To override the AI recommendation:
Click the Override button on the recommendation section.
Select your determination (Approve, Deny, or Pend).
Enter a documented reason for the override. This field is required and becomes part of the permanent audit trail.
Confirm the override.
[Screenshot: The override dialog showing the determination selection dropdown (Approve, Deny, Pend) and the required reason text field with a sample clinical rationale entered]
Compliance Checkpoint: All adverse determinations must reference the specific clinical criteria that were not met. Do not use generic language such as "does not meet criteria." Instead, cite the exact policy requirement and explain what clinical evidence was missing or insufficient. This is required under NCQA Utilization Management standards and CMS regulations.
Step 6: Regenerate Clinical Review (If Needed)
If you corrected extracted data in the Detected Workflow pane, or if new clinical documentation has arrived for an existing case, you can re-run the clinical review:
Click the Regenerate button in the Clinical Review pane toolbar.
The system re-analyzes the document against the selected policy using the current extracted data.
The decision tree refreshes with updated results.
[Screenshot: The Regenerate button in the Clinical Review pane toolbar]
Step 7: Generate and Send a Nurse Review Letter
After completing your clinical review, you can generate a nurse review letter that summarizes your findings for communication and documentation purposes.
Open the Nurse Letter pane from the toolbar.
The system auto-generates a clinical review letter based on the document analysis and your clinical review results. The letter type matches your determination: approval, denial, or pending.
Review and edit the letter content as needed using the built-in editor.
To distribute the letter:
Click Copy to copy the letter text to your clipboard.
Click Regenerate to create a new version of the letter.
Click Send Email to open the email dialog. You can customize:
To and CC addresses
Subject line (auto-populated with patient name, member ID, and document type)
Attachment -- Optionally attach the clinical review summary as a PDF
[Screenshot: The Nurse Letter pane showing an auto-generated denial letter with the email dialog open, displaying To, CC, subject, and the clinical review PDF attachment option enabled]
You can also send the clinical review summary independently by clicking Send to EHR in the Clinical Review pane header. This generates a PDF of the clinical review findings and sends it as an attachment to the authorization record in AcuityNXT.
Note: The Send to EHR button for the clinical review summary is available only after the authorization has been submitted to the EHR first (see the Authorization Review guide). The system needs an existing authorization ID to attach the summary to.
Audit Trail and Compliance
Every action taken within the Clinical Review pane is logged in the system audit trail:
Clinical review initiation -- When the AI review was first performed, including the policy matched and criteria evaluated.
Criterion updates -- Any manual changes to individual criterion assessments, with the reviewer's name and timestamp.
Determination -- The final determination (approve, deny, pend), the reviewer who made it, the timestamp, and the clinical rationale.
Overrides -- If the AI recommendation was overridden, the original recommendation, the override determination, and the documented reason.
Regenerations -- Each time the clinical review was regenerated, with the triggering event.
Nurse letters -- When letters were generated, edited, and sent, including the recipient and delivery method.
This audit trail is preserved as part of the permanent case record and is accessible for compliance reviews, NCQA audits, and internal quality assurance.
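The events listed above share a common shape: what happened, who did it, when, and the event-specific details. A minimal sketch of such a record, with field names that are assumptions rather than the platform's actual schema:

```python
# Illustrative shape of an audit-trail entry. Field names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    event_type: str  # e.g. "criterion_update", "determination", "override"
    reviewer: str    # credentialed user the action is attributed to
    details: dict    # event-specific payload (policy, criterion, reason, ...)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

event = AuditEvent(
    event_type="override",
    reviewer="jdoe.rn",
    details={"original": "DENY", "override": "PEND", "reason": "Awaiting PT notes"},
)
print(event.event_type, event.reviewer)  # override jdoe.rn
```

Note that the timestamp and reviewer attribution are captured on every event, which is why reviewing under your own credentials (as the Compliance Checkpoint below stresses) matters.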
Compliance Checkpoint: CMS and NCQA require that clinical review determinations be traceable to the specific criteria applied, the clinical evidence considered, and the qualified reviewer who made the determination. Lilee captures all of this information automatically. Ensure you are logged in with your own credentials when performing clinical reviews so the audit trail correctly attributes the determination to you.
Warning: The AI clinical review is a decision-support tool, not an autonomous decision-maker. A qualified clinical reviewer must evaluate and confirm every recommendation before it is acted upon. AI recommendations do not constitute clinical determinations on their own.
Tips and Best Practices
Efficiency: Use the page references in the decision tree to jump directly to the relevant section of the source document, rather than scrolling through the entire document to find supporting evidence.
Accuracy: Always cross-reference the AI decision tree against the original document. The AI provides a strong starting point, but clinical judgment remains essential, especially for complex or ambiguous cases.
Documentation quality: When overriding an AI recommendation or recording an adverse determination, be specific in your clinical rationale. Reference the policy by name, cite the unmet criterion, and describe what evidence was considered.
Workflow: If a case requires Medical Director review, use the Notes pane to add a clinical summary of your findings and the reason for escalation before routing the case.
Multiple policies: Check all three policy types (NCD, LCD, Article) for a complete picture. An LCD may contain criteria not found in the NCD, and Articles often provide specific billing and coding requirements that affect authorization processing.
Troubleshooting
The Clinical Review pane shows no results
Likely cause: The document may still be processing, or the document type may not support clinical review.
Resolution: Verify the document status is "In Review" (not "processing"). Clinical review is available for prior authorization, DME, and concurrent review documents.
No matching policy was found
Likely cause: The extracted CPT/HCPCS codes may not have a corresponding NCD or LCD.
Resolution: Verify the procedure codes in the Detected Workflow pane are correct. Use Ellie's LCD/NCD Check quick action to search for applicable policies manually.
Most criteria show as "Pending"
Likely cause: The source document may not contain sufficient clinical information to evaluate the criteria.
Resolution: This is expected when clinical documentation is limited. Consider pending the case and requesting additional information from the requesting provider.
The AI recommendation does not match my clinical assessment
Likely cause: The AI evaluation is based on the text it could extract from the document, which may be incomplete.
Resolution: Override the recommendation with your documented clinical rationale. This is a normal and expected part of the clinical review workflow.
I corrected procedure codes but the clinical review still shows old results
Likely cause: The clinical review must be regenerated after data changes.
Resolution: Click Regenerate in the Clinical Review pane to re-run the analysis with the corrected codes.
The nurse letter contains inaccurate information
Likely cause: The letter is generated from the extracted data and clinical review results.
Resolution: Correct the source data in the Detected Workflow pane, regenerate the clinical review if needed, then regenerate the letter.
Related Features
Intake Dashboard -- Verify and correct extracted document data in the Detected Workflow pane before initiating clinical review.
Authorization Review -- After clinical review, proceed to the Authorization Review pane to prepare and submit the authorization to your EHR system.
Ellie AI Assistant -- Use Ellie's Prior Auth Review, DME Review, LCD/NCD Check, and Validate Codes quick actions for additional AI-powered clinical analysis beyond the automated review.
Last updated: February 24, 2026 | Version: 1.0
