How do I visualize virtual ward capacity vs inpatient bed demand?
Virtual wards are only effective when they absorb patients who would otherwise occupy inpatient beds, not when they create additional monitoring for patients who would have been discharged. The key metric is "avoided admissions" — patients diverted from inpatient beds to virtual monitoring who would have been admitted based on their clinical profile.
What this looks like in Vizier
Stylised dashboard visualisation. Data values obscured. Upload your own data to see real numbers.
Why This Happens
The NHS England Virtual Ward framework, introduced in 2022, set a target of 40–50 virtual ward beds per 100,000 population — a nationally funded programme with ICB commissioning responsibility. "Avoided admission" must be defined clinically, not operationally. A true avoided admission is a patient who presents at A&E or acute assessment with clinical parameters — NEWS2 score of 3–4, oxygen saturation 94–95% on air, controlled heart failure — that would historically have led to inpatient admission, but who is diverted to virtual monitoring with twice-daily clinical contact and remote physiological monitoring. A patient who would have been discharged regardless of virtual ward existence does not constitute an avoided admission, even if they are enrolled on the virtual ward and monitored remotely.
The visualisation challenge is that two metrics operate on different scales and must be displayed simultaneously to convey the operational relationship. Virtual ward census (number of patients currently monitored) is meaningful only in the context of inpatient occupancy — if inpatient occupancy is 96.1% with 42 virtual ward patients, the system is under pressure. If occupancy is 88% with 42 virtual ward patients, the virtual ward may be creating demand rather than absorbing it. Dual-axis charts showing virtual census (gold line) against inpatient occupancy bars (muted) over a 30-day window reveal whether the two are inversely correlated — the expected pattern if virtual ward is performing its intended function of reducing inpatient demand.
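The dual-axis pattern described above can be sketched with matplotlib. The data below is entirely synthetic and the variable names (`occupancy_pct`, `vw_census`) are illustrative assumptions, not a Vizier API — the point is simply bars on one y-axis, a line on a second:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(1, 31)  # 30-day window

# Synthetic series: occupancy oscillates around 92%, census moves inversely
occupancy_pct = 92 + 3 * np.sin(days / 5) + rng.normal(0, 1, 30)
vw_census = 40 - 0.5 * (occupancy_pct - 92) + rng.normal(0, 2, 30)

fig, ax_occ = plt.subplots(figsize=(10, 4))
ax_occ.bar(days, occupancy_pct, color="lightgrey", label="Inpatient occupancy %")
ax_occ.set_ylabel("Inpatient occupancy (%)")
ax_occ.set_ylim(80, 100)
ax_occ.set_xlabel("Day")

ax_vw = ax_occ.twinx()  # second y-axis for the virtual ward census line
ax_vw.plot(days, vw_census, color="goldenrod", marker="o",
           label="Virtual ward census")
ax_vw.set_ylabel("Virtual ward census (patients)")

fig.tight_layout()
fig.savefig("vw_dual_axis.png")
```

If the two series are inversely correlated over the window, the virtual ward is plausibly absorbing inpatient demand rather than adding to it.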
What the Data Usually Hides
Virtual ward utilisation data looks compelling in isolation. A dashboard showing 42 patients monitored remotely appears to demonstrate significant capacity. But without comparison to admission rates for equivalent-acuity patients before and after virtual ward introduction, it is impossible to determine whether the virtual ward is substituting for inpatient admission or supplementing a discharge process that would have achieved the same outcome without remote monitoring. The NHS@Home programme evaluation data reveals that some early virtual ward implementations enrolled predominantly patients who would have been discharged the following day anyway — achieving high census numbers with minimal impact on inpatient bed demand.
The data that would prove impact — inpatient admission rates for the target conditions (heart failure, COPD exacerbation, community-acquired pneumonia, cellulitis) over 6 months before and 6 months after virtual ward introduction, stratified by acuity level — requires joining A&E attendance data, clinical coding data, and virtual ward enrolment records at the patient level. This analytical task takes 2–3 days with the right data access, but is almost never done as a routine operational report. Most ICB commissioning teams are evaluating virtual ward impact using census data alone, which cannot prove or disprove clinical effectiveness.
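The patient-level join described above can be sketched in pandas. The tables and column names here are illustrative assumptions (real extracts would come from ECDS attendance data, coded activity, and the virtual ward enrolment system, each with local schemas):

```python
import pandas as pd

# Illustrative extracts — real data would be A&E attendances, clinical
# coding, and virtual ward enrolments, joined on a patient identifier.
ae = pd.DataFrame({
    "nhs_number": ["A1", "A2", "A3", "A4"],
    "attend_date": pd.to_datetime(["2024-01-05", "2024-01-06",
                                   "2024-01-06", "2024-01-07"]),
    "admitted": [True, False, True, False],
})
coding = pd.DataFrame({
    "nhs_number": ["A1", "A2", "A3", "A4"],
    "primary_dx": ["heart_failure", "copd", "pneumonia", "cellulitis"],
})
vw = pd.DataFrame({
    "nhs_number": ["A2", "A4"],
    "enrol_date": pd.to_datetime(["2024-01-06", "2024-01-07"]),
})

cohort = (
    ae.merge(coding, on="nhs_number", how="left")
      .merge(vw, on="nhs_number", how="left", indicator="vw_match")
)
cohort["on_virtual_ward"] = cohort["vw_match"] == "both"

# Candidate avoided admissions: attended A&E, not admitted, enrolled on VW.
# (Clinical confirmation still requires the prospective enrolment flag.)
avoided = cohort[~cohort["admitted"] & cohort["on_virtual_ward"]]
print(avoided[["nhs_number", "primary_dx"]])
```

Stratifying this cohort by acuity (e.g. NEWS2 band) and comparing pre/post periods is the 2–3 day analytical task the section refers to.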
How to Fix It
Define the avoided admission cohort prospectively using clinical criteria embedded in the virtual ward enrolment process. At the point of virtual ward referral, the clinician records whether the patient met inpatient admission criteria (NEWS2 threshold, diagnosis-specific criteria, clinical judgement) and what the alternative pathway would have been without virtual ward availability. This prospective flagging, done at the point of enrolment, creates the "would have been admitted" evidence base that retrospective data analysis cannot reliably reconstruct.
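A minimal sketch of the prospective flagging, assuming hypothetical field names — the essential design point is that the counterfactual ("what would have happened without the virtual ward") is recorded by the clinician at referral, not reconstructed later:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VirtualWardReferral:
    """Referral record with a prospective 'would have been admitted' flag."""
    nhs_number: str
    referral_date: date
    diagnosis: str                  # e.g. "heart_failure", "copd"
    news2_score: int
    met_admission_criteria: bool    # clinician's judgement at referral
    alternative_pathway: str        # "inpatient_admission", "discharge_home", ...

    def is_avoided_admission(self) -> bool:
        # Only counts as an avoided admission if the referring clinician
        # recorded that the patient would otherwise have been admitted.
        return (self.met_admission_criteria
                and self.alternative_pathway == "inpatient_admission")

diverted = VirtualWardReferral("A1", date(2024, 1, 5), "heart_failure",
                               4, True, "inpatient_admission")
enrolled_anyway = VirtualWardReferral("A2", date(2024, 1, 6), "copd",
                                      2, False, "discharge_home")
print(diverted.is_avoided_admission(), enrolled_anyway.is_avoided_admission())
```

The second record illustrates the failure mode from the previous section: a patient monitored remotely who does not count as an avoided admission.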
Implement a cohort comparison methodology that compares admission rates for the target conditions in the 6 months before virtual ward introduction against the 6 months after, controlling for seasonal variation by pairing the same calendar months. This is the minimum evidence standard for ICB funding decisions and NHSE virtual ward evaluation requirements. Then link virtual ward census data daily to A&E-to-admission conversion rates for the target conditions — if the virtual ward is working, conversion for heart failure, COPD, and pneumonia should decline as virtual ward enrolment increases. This correlation, tracked on the dual-axis dashboard, is the operational proof of concept that drives continued ICB investment and NHSE NHS@Home programme funding eligibility.
People who asked this also asked...
- How do I predict bed blockers 48 hours in advance?
- Why did our 4-hour A&E standard drop below 78% last week?
- How can my Trust secure the £2M NHS performance bonus for A&E flow?
- How do I achieve a 5% improvement in 18-week elective wait times by March 2026?
- How can our Trust reduce agency staff spend by 30%?
Your Data. Your Answer.
This is what the data typically shows.
Want to see what your data says?
Ask Your Vizier →