In early 2025, we enrolled 312 patients across three primary care practices in the Greater Boston area into a structured remote patient monitoring program. The enrollment criteria were narrow on purpose: an active diagnosis of hypertension, type 2 diabetes, or COPD, plus at least one unplanned emergency department visit in the prior 18 months. These were the patients most likely to benefit and easiest to measure.

Fourteen months later, we pulled the data. Emergency department visits among enrolled patients fell 34% compared to the 18 months before enrollment. Hospital admissions dropped 28%. Average time-to-intervention after a vital-sign threshold was crossed fell from 6.2 hours to under 40 minutes.

Those are the headline numbers. What they obscure is everything that had to work correctly for them to happen.

What We Actually Measured

Every enrolled patient received a cellular-enabled blood pressure cuff, a pulse oximeter, and a glucometer (for diabetic patients). Devices transmitted readings automatically — no app, no Bluetooth pairing, no patient effort beyond using the device. Readings arrived in the clinical dashboard in real time.
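For concreteness, each transmitted reading reduces to a small record like the sketch below. The field names are our illustration, not the actual device payload schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class DeviceReading:
    patient_id: str
    device_type: str       # "bp_cuff", "pulse_ox", or "glucometer"
    metric: str            # e.g. "systolic_bp", "spo2", "glucose"
    value: float
    unit: str              # "mmHg", "%", "mg/dL"
    captured_at: datetime  # device timestamp, sent over cellular with no patient action
```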

We defined alert thresholds for each patient individually, based on their baseline values and existing medication protocols. A hypertensive patient with a systolic consistently in the 138-145 mmHg range gets a different alert threshold than one whose baseline is 118. One-size-fits-all alert rules are how clinical teams end up drowning in false positives and ignoring the dashboard entirely.
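Here is a minimal sketch of how a per-patient threshold can be derived from baseline readings. The margin and floor values are illustrative placeholders; our actual protocol also factored in medication regimens.

```python
from statistics import mean

def systolic_alert_threshold(baseline_readings: list[float],
                             margin_mmhg: float = 15.0,
                             floor_mmhg: float = 140.0) -> float:
    """Personalized systolic alert threshold: baseline mean plus a margin,
    never below a clinical floor. Margin and floor are illustrative, not
    the pilot's protocol."""
    return max(mean(baseline_readings) + margin_mmhg, floor_mmhg)
```

With these placeholder values, the patient whose baseline runs 138-145 alerts near 157 mmHg, while the 118-baseline patient alerts at the 140 floor.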

Care coordinators reviewed the dashboard twice daily for enrolled patients. Alerts triggered a call within 30 minutes. Not a portal message. Not a callback request. A phone call.
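The response rule can be made concrete with a sketch like the following. The 30-minute window mirrors our protocol, but the structure is illustrative, not our production system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

CALL_SLA = timedelta(minutes=30)  # per protocol: a live phone call within 30 minutes

@dataclass
class Alert:
    patient_id: str
    metric: str           # e.g. "systolic_bp", "spo2"
    value: float
    triggered_at: datetime

    @property
    def call_due_by(self) -> datetime:
        # Every alert carries a hard deadline for a phone call,
        # not a portal message or callback request.
        return self.triggered_at + CALL_SLA
```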

The Surprise in the Data

We expected the hypertension cohort to show the strongest results. It did — 41% reduction in ER visits for that group. What we did not expect was the COPD cohort performing nearly as well at 38%.

The reason, when we reviewed the intervention logs, was straightforward: pulse oximetry trends gave us 48 to 72 hours of warning before most COPD exacerbations. Oxygen saturation doesn't drop off a cliff — it slides. Catching it at 93% rather than 87% is the difference between a steroid taper and a hospital admission.
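A simple way to catch that slide, sketched below, is to fit a trend line over a rolling window of readings rather than waiting for an absolute floor. The slope threshold here is an illustrative assumption, not our clinical protocol.

```python
import numpy as np

def spo2_downtrend_alert(timestamps_h: np.ndarray,
                         spo2: np.ndarray,
                         slope_threshold: float = -0.05,
                         min_points: int = 6) -> bool:
    """Flag a sustained downward SpO2 drift over a rolling window.

    Fits a least-squares line to recent readings; a slope below
    slope_threshold (percentage points per hour) flags a slide long
    before any absolute floor is crossed. Threshold is illustrative.
    """
    if len(spo2) < min_points:
        return False
    slope = np.polyfit(timestamps_h, spo2, 1)[0]
    return slope < slope_threshold
```

On readings drifting from 96% to 93% over 48 hours, the fitted slope is roughly -0.06 points per hour, tripping the alert long before saturation approaches the high 80s.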

The diabetic cohort showed a more modest 21% reduction in ER visits. Glucose spikes are harder to predict and act on remotely than blood pressure or oxygen trends. That's a signal worth investigating further.

What Didn't Work

Sixty-one patients dropped out before the six-month mark. Common reasons: device fatigue, concerns about who could see their readings, and one persistent belief that the devices were "sending information to the government." We take privacy concerns seriously. Our patient-facing materials now include explicit, plain-language explanations of data storage, access controls, and patient rights under HIPAA.

We also had three practices where alert thresholds weren't reviewed or adjusted after the first 60 days. Those practices showed weaker outcomes. Personalized thresholds degrade if you don't maintain them — patient health status changes, medications change, baselines shift. RPM is not a set-and-forget intervention.
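A lightweight staleness check, like the sketch below, is enough to keep that drift from happening silently. The 60-day cadence echoes the number above; the trigger conditions are our assumption, not a validated protocol.

```python
from datetime import datetime, timedelta

REVIEW_INTERVAL = timedelta(days=60)  # illustrative cadence, echoing the 60-day mark

def threshold_is_stale(last_reviewed: datetime,
                       meds_changed_since_review: bool,
                       now: datetime | None = None) -> bool:
    """Flag a personalized threshold for re-review when its calibration has
    aged out or the medication regimen has changed underneath it."""
    now = now or datetime.now()
    return meds_changed_since_review or (now - last_reviewed) > REVIEW_INTERVAL
```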

Care coordinator capacity was the other limiting factor. Practices with fewer than 0.5 FTE dedicated to RPM response had slower intervention times and more alerts going unacknowledged. The technology doesn't replace human follow-through. It just makes that follow-through possible at scale.

Extrapolating Carefully

The practices in this pilot had above-average EHR adoption rates, stable care coordinator staffing, and engaged physician champions. That's not representative of every primary care environment. A 34% reduction in ER visits is an outcome, not a guarantee.

What the data does support is a structural claim: continuous monitoring of chronically ill patients, with fast human response when readings deviate, reduces unplanned acute utilization. That's not a surprising finding. But having numbers attached to it — numbers from a real patient population, in a real clinical workflow, over 14 months — makes it actionable in conversations with payers, hospital administrators, and skeptical colleagues.

We're continuing to follow the enrolled cohort. The 12-month post-program data will be available in Q4 2026. The question we're watching: how much of the effect persists after patients are discharged from the RPM program?

If you're evaluating RPM for your own patient population and want to look at our full methodology, including the threshold protocols and coordinator workflows, we're open to sharing it. Reach out through the contact form.