When CMS released the NOFO for the Rural Health Transformation Program last year, there was one category that stood out: “consumer-facing digital health”.
It’s a welcome, albeit unexpected, move in the right direction. However, I’m not sure that eight weeks was enough time for states to thoughtfully address how they’ll effectively deploy symptom checkers, AI chatbots, prevention apps, and digital navigation tools to improve chronic disease management while overcoming digital literacy barriers and ensuring data flows back to clinicians.
Last week I expressed that doubt by predicting that, lacking that insight, most states’ RHTP consumer health technology initiatives could easily fail — if they lead with technology procurement instead of implementation science.
This week, here’s what some states already know: what it actually takes to get consumer-facing health tech right.
After 15 years of telehealth implementation, I know firsthand that even when you control most of the digital experience, getting great engagement and results is hard.
Here’s what’s different about consumer health technology: you’re putting tools directly in patients’ hands. If patients don’t use the devices, or can’t figure them out, or don’t see the point — the whole program fails. No matter how good your clinical pathways are.
Diagnosis Before Prescription
As in good medicine, you don’t write a prescription before examining the patient and arriving at a diagnosis.
Successful programs spend months in “examination mode” — talking to clinicians AND patients about realistic needs and actual barriers before shopping for technology solutions.
This isn’t a survey sent to providers. It’s not a committee meeting where administrators decide what rural communities need. It’s systematic listening and observation: studying chronic care management in rural clinics, fielding conversations with patients about what actually prevents them from managing their chronic conditions, understanding where the real problems are.
In one project the team talked to rural pregnant moms about prenatal care challenges BEFORE deciding remote monitoring was the answer — and what should be monitored. They learned that transportation wasn’t the only barrier — many moms didn’t understand when to be concerned about symptoms, and phone triage nurses couldn’t assess severity without seeing the patient. Remote blood pressure monitoring became part of the solution, but only after understanding the actual problem.
The question isn’t “what consumer health technology should we buy?”
The question is “what do our patients need and what do our clinicians need to deliver it?”
Technology comes later.
Design With Patients AND Clinicians Through a Proof of Concept Mindset
Modern Healthcare reported what successful health systems learned about wearables: clinical engagement at the start determines everything.
But successful consumer health programs must go further: patient engagement at the start is just as critical.
The best way to do this? Conduct a small, controlled proof of concept. Not a pilot program rolling out to hundreds of patients. A proof of concept that validates clearly defined assumptions and hypotheses with data, using one or two clinicians and a handful to a dozen patients.
Here’s what this looks like in practice:
A rural health system planning high-risk pregnancy remote monitoring starts with two OB nurses who take BP cuffs and scales home for a week. They use them daily on themselves. They figure out where the instructions are confusing, where the devices are finicky, what data actually transmits to the platform and how accurate it is.
Then those same nurses work with a dozen expecting moms with hypertension. These moms test the devices at home. They, too, give feedback on instructions, usability, where things get confusing. A digital navigator from the community tests the enrollment process.
What they discovered: patients were confused about WHEN to take readings. Morning? Evening? Before or after eating? The vendor manual assumed patients would know. Patients didn’t. Adding simple “take your reading when you wake up, before breakfast” guidance changed everything. Enrollment and training materials were improved as a result.
This is what “design with patients and clinicians” actually means. Not asking them what they think about a vendor demo. Putting real devices in their hands and learning what actually works and what doesn’t.
The digital equity questions surface here too. Can your 68-year-old diabetic patient with vision problems actually read the glucometer screen? Can a patient with limited English proficiency understand the app instructions? Does the device work in a home without reliable Wi-Fi?
Small scale lets you discover these problems when they’re easy to fix — before you’ve distributed devices to thousands of patients. And the proof-of-concept validation metrics become your performance management system as you scale.
Scaling With a Learning Mindset
Once proof-of-concept succeeds, systematic expansion begins.
But here’s the critical distinction: learning mindset vs. expansion mindset.
The more common expansion mindset says: “We proved it works, let’s roll it out to everybody the same way, quickly, so that we can soon reap the benefits.”
A learning mindset says: “We learned what works HERE. Let’s now adapt and refine as we expand to different contexts.”
Each new cohort of patients teaches you something. Looking at it from the states’ perspective, every new organization implementing consumer digital health is different.
A successful rollout is not automatically replicable elsewhere.
Different rural contexts — frontier counties with no cellular coverage vs. rural-adjacent areas with better infrastructure. Different patient populations — aging farmers vs. young immigrant families. Different connectivity challenges. Different languages spoken in the community.
The core clinical program and patient engagement principles scale, but only with a mindset that allows flexibility for local adaptation.
One high-risk pregnancy program worked beautifully in a more affluent rural region. When it expanded to another region with a large Hispanic farmworker population, the program needed language support AND a different device distribution model — reaching patients through community health workers rather than clinic visits.
Patient support infrastructure scales with the program: digital navigators, multilingual support, loaner device programs, troubleshooting capacity.
The states that succeed will keep learning through Years 2-4. The ones that fail will treat an initial successful rollout with early adopters as the final blueprint and wonder why implementation breaks down as they scale.



Verify Before You Scale — Test Sustainability in Proof of Concept
I’ve written before about the three-lens verification approach: Strategic, Financial, Clinical.
For consumer health technology, this verification happens before and during the proof of concept, since the integration of consumer digital health is so new.
Strategic: Does this initiative actually advance your rural health transformation goals? Or is it technology for technology’s sake? Remember: CMS is looking for outcomes — not deployments.
Financial: What’s the total cost of ownership — devices, platform fees, cellular data, staff time, technical support? What’s the reimbursement model? Does RPM billing actually work with real patient behavior? How can we justify the investment?
Clinical: Do clinicians find the data useful and insightful enough to change the way they manage their patients’ care? Are patients using devices consistently enough to generate meaningful data?
As our experience with RPM programs has shown, patient experience IS financial sustainability. A program with a 40% patient dropout rate can’t support itself on RPM reimbursement. You need sustained engagement.
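To make that arithmetic concrete, here is a minimal back-of-envelope sketch (in Python). Every number in it is a hypothetical placeholder, not a fee schedule and not real program data; the point is the structure of the math: reimbursement follows only the patients who keep transmitting readings, while device and staffing costs follow everyone you enrolled.

    # Back-of-envelope RPM sustainability check.
    # All figures are hypothetical placeholders for illustration only.
    enrolled_patients = 100
    dropout_rate = 0.40            # patients who stop transmitting readings
    revenue_per_adherent = 100     # assumed avg. monthly RPM revenue per billable patient
    device_cost_per_patient = 15   # assumed device + cellular data, amortized per month
    staff_cost_per_patient = 45    # assumed nurse/navigator time per enrolled patient, per month

    # Revenue accrues only for patients who transmit enough readings to bill;
    # costs accrue for every patient enrolled and equipped.
    billable_patients = enrolled_patients * (1 - dropout_rate)
    monthly_revenue = billable_patients * revenue_per_adherent
    monthly_cost = enrolled_patients * (device_cost_per_patient + staff_cost_per_patient)

    print(f"Revenue ${monthly_revenue:,.0f} vs. cost ${monthly_cost:,.0f}"
          f" -> margin ${monthly_revenue - monthly_cost:,.0f}")
    # At 40% dropout: $6,000 vs. $6,000 -- break-even at best.
    # At 80%+ sustained engagement, the same assumptions yield roughly $2,000/month of margin.

Swap in your own contracted rates and staffing model; if the margin only turns positive above an engagement level you haven’t demonstrated in proof of concept, that is the verification failing early, which is exactly the point.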
One program added text messaging during proof-of-concept. “Why this matters to YOUR health” themed messages saw patient engagement jump by 50% — making the RPM reimbursement model actually viable.
Consumer digital health programs need to be designed with sustainability in mind from Day 1 — financial sustainability, strategic sustainability, and clinical sustainability.
If verification fails in proof-of-concept — pivot or stop. Better to learn this with a dozen patients than after distributing devices to thousands.
Learning important lessons will, I hope, matter to CMS too. And the money can then be reallocated to more valuable initiatives that produce actual outcomes.
What CMS Will Want to See in Years 3-5
Activity reports or deployment counts won’t cut it.
“We distributed 1,000 devices to patients across 15 counties” doesn’t demonstrate value.
CMS will want outcome data such as “68% of diabetic patients using continuous glucose monitors achieved A1c reduction of 1 point or more. 82% of patients reported feeling more engaged in managing their diabetes. ED visits decreased 23% compared to matched control groups.”
The patient voice matters here too: patient satisfaction scores, net promoter scores, patient activation measures, patient-reported outcomes. Do patients feel more confident managing their health? Do they understand their condition better? Do they feel supported?
The performance evaluation infrastructure that captures both clinical outcomes AND patient experience needs to be built from the beginning. States that did proof of concept with systematic evaluation have this data. States that just distributed devices and hoped for the best don’t.
The Reality Check
This is not easy, simple, straightforward work.
As with any innovation, it will take time. But states need to do more than hold rural health organizations accountable — they need to provide technical assistance to implement these digital innovations using proven implementation science.
States need to be patient with the first cohort, too. Gathering good data, validating the concept, will take time. CMS needs to understand this reality as well. As the adage goes, sometimes you need to go slow to go fast.
But it’s the difference between programs where patients actually use the devices, clinicians find the data useful, and both sides see value — and programs where devices end up in drawers in patients’ homes while states report distribution statistics to CMS.
Consumer health technology is harder than telehealth because you’re putting tools in patients’ hands and asking them to engage independently. Technology is only 20% of success. The other 80% is the implementation science of clinical program design, patient engagement strategies, workflow integration, training, support infrastructure, and sustainability planning.
If your state received RHTP funding to launch consumer-facing digital health initiatives, your first RFP should reflect this mindset.
Because when CMS asks in Years 3-5 for “measurable benefits in patient and provider access to health data AND improved access to, quality of, and cost of care” — only the programs that got implementation science right will have answers.
Are you involved in designing your state’s RHTP RFPs for consumer-facing health tech? Invite us for a conversation to share our experience-based guidance.
Do you have connections in your state to those working on this? Then share this article with them. This knowledge is common sense but not commonly known.








To receive articles like these in your Inbox every week, you can subscribe to Christian’s Telehealth Tuesday Newsletter.
Christian Milaster and his team optimize Telehealth Services for health systems and physician practices. Christian is the Founder and President of Ingenium Digital Health Advisors where he and his expert consortium partner with healthcare leaders to enable the delivery of extraordinary care.
Contact Christian by phone or text at 657-464-3648, via email, or video chat.




