Report Name
Quality EVE Performance by Member
About This Report
This report shows information about Evidence Validation Engine (EVE) performance at a member level. It segments match rate performance across the primary abstractor and the overreader, allowing managers to monitor EVE efficacy for each member.
Service Type
Full Service, Platform, Client Clinical Only
Intended User
Manager / Technical User
Best For
Monitoring EVE performance by member on a weekly basis.
What Information Can I Filter?
| Filter | 
| Projects | 
| Measures | 
| Gap | 
| Response Date | 
What do the Data Items Mean?
| Data Item | Description | 
| Total Members | Count of all members returned by the query | 
| Total Measures | Count of all measures returned by the query | 
| Total Open Gaps | Count of all open gaps returned by the query | 
| Total EVE Match | Count of gaps with an EVE match in the returned results | 
| Total EVE Partial Match | Count of gaps with an EVE partial match in the returned results | 
| Total EVE No Match | Count of gaps with an EVE no match in the returned results | 
| Average EVE Match | % of EVE matches out of all EVE results (match, partial match, and no match) in the returned results | 
| Average EVE Partial Match | % of EVE partial matches out of all EVE results (match, partial match, and no match) in the returned results | 
| Average EVE No Match | % of EVE no matches out of all EVE results (match, partial match, and no match) in the returned results | 
| NLP Overall Accuracy Rate | % of matches and partial matches out of all EVE results | 
| Total Chases Reviewed and Submitted by OR | Count of chases that are past the Overread stage | 
| Total Gaps Reviewed | Count of EVE results reviewed by an overreader | 
| Total Gaps Not Reviewed | Count of EVE results not yet reviewed by an overreader | 
| Avg Gaps Reviewed | % of EVE results reviewed by an overreader out of all EVE results | 
| Total Gaps Overreader Agrees | Count of reviewed gaps where overreader acceptance = Agree | 
| Total Gaps Overreader Disagrees | Count of reviewed gaps where overreader acceptance = Disagree | 
| NLP Overreader Accuracy Rate | % of reviewed gaps with overreader acceptance = Agree out of all EVE results reviewed by an overreader | 
| Table 1 | Shows a detailed view of how EVE Overread performs by member. | 
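To make the rate definitions above concrete, the following is a minimal sketch of how these metrics could be derived from gap-level results. The record fields (`eve_result`, `overread`) and values are illustrative assumptions, not the report's actual schema.

```python
def eve_metrics(gaps):
    """Compute the report's summary rates from a list of gap records.

    Each record is assumed to be a dict with:
      - "eve_result": "match", "partial_match", or "no_match"
      - "overread":   "agree", "disagree", or None (not yet reviewed)
    """
    total = len(gaps)
    match = sum(1 for g in gaps if g["eve_result"] == "match")
    partial = sum(1 for g in gaps if g["eve_result"] == "partial_match")
    no_match = sum(1 for g in gaps if g["eve_result"] == "no_match")
    reviewed = [g for g in gaps if g["overread"] is not None]
    agree = sum(1 for g in reviewed if g["overread"] == "agree")

    # Percentage helper; guards against division by zero.
    def pct(n, d):
        return round(100.0 * n / d, 1) if d else 0.0

    return {
        "avg_eve_match": pct(match, total),
        "avg_eve_partial_match": pct(partial, total),
        "avg_eve_no_match": pct(no_match, total),
        # Overall accuracy counts both full and partial matches as correct.
        "nlp_overall_accuracy": pct(match + partial, total),
        "avg_gaps_reviewed": pct(len(reviewed), total),
        # Overreader accuracy is measured only against reviewed gaps.
        "nlp_overreader_accuracy": pct(agree, len(reviewed)),
    }


# Example: four gaps, three of which have been overread.
gaps = [
    {"eve_result": "match", "overread": "agree"},
    {"eve_result": "partial_match", "overread": "agree"},
    {"eve_result": "no_match", "overread": "disagree"},
    {"eve_result": "match", "overread": None},
]
metrics = eve_metrics(gaps)
# Overall accuracy: (2 matches + 1 partial) / 4 gaps = 75.0%
# Overreader accuracy: 2 agrees / 3 reviewed = 66.7%
```

Note that the two accuracy rates use different denominators: NLP Overall Accuracy Rate divides by all EVE results, while NLP Overreader Accuracy Rate divides only by results an overreader has reviewed.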
