From step counter to clinical signal: what actually changed?
Ten years ago, most wearables were glorified pedometers. They counted steps, showed a cute fireworks animation, and maybe sent you a weekly “you walked 5 km” email. Nobody confused them with hospital-grade equipment.
Today’s wearables look very different. A single device can:
- Track heart rate variability and resting heart rate trends.
- Estimate blood oxygen saturation and respiratory rate while you sleep.
- Flag possible irregular rhythms that “might be atrial fibrillation.”
- Estimate blood pressure or blood glucose trends using non-traditional sensors and AI.
- Stream continuous data into apps used by clinicians, coaches, or insurers.
The more a device influences real medical decisions, the less it looks like a toy—and the more it looks like a regulated product. That’s the heart of the current tension between wearable companies and agencies like the FDA, which has published a specific guidance, General Wellness: Policy for Low Risk Devices, explaining when it will, and will not, exercise enforcement discretion over “wellness” gadgets.
Where the FDA draws the line on wearables
The FDA doesn’t regulate devices based on how cool the marketing video looks. It regulates based on intended use and risk.
General wellness vs medical device
Under FDA guidance, a general wellness product is typically:
- Intended only for overall health, fitness, or lifestyle (for example, “move more,” “sleep better”).
- Low risk to users if it’s wrong (you might walk 500 fewer steps, but you won’t miss a stroke).
The moment you claim to diagnose, treat, or mitigate a disease—or your feature is realistically used that way—you’re entering medical-device territory. The same heart-rate graph can be fine in a gym app and regulated if it’s marketed as “detecting arrhythmias and preventing strokes.”
Legal and regulatory experts have been warning about this shift for years, breaking down how wellness claims differ from medical ones and how easily companies can drift across the line without noticing. If you’re building a product, resources such as Stanford’s analysis of the FDA pathway for wearable medical devices, the FDA’s own General Wellness Guidance portal, and engineering-focused deep dives on standards and guidance documents for wearables are worth bookmarking.
Class I, II, III – not all devices are equal
Once a wearable crosses into “medical device” territory, it doesn’t automatically become an ultra-high-risk, implant-level product. Devices are classified:
- Class I – low risk (e.g., elastic bandages, manual stethoscopes).
- Class II – moderate risk, often needing special controls (many diagnostic tools and software).
- Class III – high risk, usually supporting or sustaining life (implantable devices, etc.).
Most health wearables land in Class II if they’re cleared: they’re not implants, but the consequences of bad data can still be serious—especially when readings drive medication changes or urgent care decisions.
This illustrative chart shows a typical pattern: in the early 2010s, most wearable features lived in a “wellness” bucket. As devices added ECG, SpO₂, blood pressure estimation and arrhythmia notifications, more features moved into regulated territory, bringing them under the same quality and safety expectations as traditional medical devices.
Why wearables are suddenly “under FDA fire”
If you follow the headlines, it can feel like regulators woke up one morning and decided to declare war on wearables. In reality, the tension has been building for years, and recent enforcement actions simply made it impossible to ignore.
Consider just a few high-profile examples discussed by legal and policy commentators:
- Warning letters to fitness wearables whose “insights” on blood pressure or glucose looked more like unapproved diagnostics than casual wellness advice.
- Public warnings about smartwatches and rings that claim to measure blood glucose non-invasively, despite a lack of FDA clearance and potential for dangerous mis-treatment if the readings are wrong.
- Heightened scrutiny of niche wearable brands, including the sports-performance space, as reported in regulatory analyses and news reports on FDA scrutiny of WHOOP and other specialty devices.
Law firms and regulatory specialists—from Arnold & Porter’s analysis of FDA warning letters to Nixon Law Group’s coverage of crackdowns on WHOOP and Dexcom—all point to the same message: digital health and wearables are now firmly in the regulator’s line of sight.
Add in thought pieces like “FDA’s clash with consumer tech” in the wearable space and practical explainers such as Loeb & Loeb’s guide to navigating FDA and FTC regulation, and a picture emerges: the “play dumb and call it wellness” strategy is no longer safe.
This illustrative bar chart shows the kinds of features that tend to trigger regulatory scrutiny: non-invasive blood pressure estimates, blood glucose-style metrics, and diagnostic-sounding arrhythmia notifications sit at the top, while simple step counts and generic “move more” reminders remain low priority.
Three big shifts driving the crackdown
1. Claims got bolder
In the early days, marketing copy talked about “feeling better” and “staying active.” Now, product pages brag about catching atrial fibrillation, predicting overtraining, or estimating blood pressure with “clinical-grade accuracy.” Even if engineers internally think “this is experimental,” regulators judge the product by what consumers see.
2. Data started influencing care
At first, your step count just lived in your phone. Today, wearable data flows into telemedicine dashboards, coaching platforms, electronic health records, and even insurer programs. FDA researchers have studied how smartwatch notifications influence care, as in their post-market evaluation of cardiovascular notifications. Once data guides medication changes or triage decisions, regulators care deeply about accuracy and bias.
3. The “general wellness” excuse was overused
The FDA’s general wellness policy was designed as a pragmatic compromise: it allowed truly low-risk gadgets to innovate without forcing every step counter through a clinical trial. But some companies stretched the idea too far—marketing quasi-diagnostics under a “wellness” label. Legal analyses like “What counts as a general wellness device—and what doesn’t” now read like gentle warnings that the loophole era is ending.
This explainer video (from an independent tech-policy channel) walks through why some wearable features require FDA clearance, how risk classification works, and why accuracy and labeling matter once your product starts influencing real medical decisions.
What this crackdown means if you wear these devices
As a consumer, it’s tempting to see “FDA scrutiny” and assume that wearables have become dangerous overnight. The reality is subtler: regulators are trying to catch up to technology that got ahead of the rules.
Here’s how that plays out for you:
- Safer, more honest claims. Expect fewer vague promises about “clinic-grade” insights and more specific language about what a feature can and cannot do.
- Clearer labeling. Properly regulated features must disclose limitations, intended use, and where not to rely on the data (for example, “not intended to replace a medical diagnosis”).
- Possible feature removals. Some devices may disable or rebrand risky features while they seek clearance, especially around blood pressure or glucose-like metrics.
- More trust in approved features. When a wearable touts FDA clearance for a specific use, that feature has gone through a formal review process—often involving clinical testing, quality-system audits, and cybersecurity checks.
The key mindset shift is this: use unregulated wellness features for trends and gentle nudges, but treat diagnosis-sounding claims with caution unless you know they’ve been cleared as medical devices.
What this crackdown means if you build wearables
If you’re a founder or engineer, the new regulatory climate can feel scary—and expensive. But it also weeds out low-quality competitors and rewards teams who design with safety in mind from day one.
At a minimum, you’ll need to:
- Map your features to intended uses. For each metric (heart rate, SpO₂, blood pressure estimate, etc.), write down how marketing, UI copy, and partnerships might influence how people use it (see the sketch after this list).
- Decide whether you’re a wellness tool or a medical device. Sitting “in between” is exactly what triggers warning letters.
- Study the right pathway. Moving from a low-risk wellness product to a regulated device brings new obligations: design controls, clinical evidence, and post-market surveillance. Deep dives such as engineering-level standards articles and FDA’s own digital-health research pages can help you pick the right route.
- Invest in legal and regulatory advice early. Articles from firms like Loeb & Loeb, Arnold & Porter, and Nixon Law Group make it clear how costly it is to ignore this step.
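To make the first item concrete, here is a minimal sketch of an internal claims registry in Python. Everything in it (the FeatureClaim fields, the keyword-based triage, the bucket names) is hypothetical and is not any official FDA test; it only illustrates the discipline of writing down, per metric, what the user-facing wording says and whether someone could plausibly act on it medically.

```python
from dataclasses import dataclass
from enum import Enum

class RegulatoryBucket(Enum):
    GENERAL_WELLNESS = "general wellness"   # low-risk lifestyle claims only
    MEDICAL_DEVICE = "medical device"       # diagnostic or treatment claims: likely needs clearance

@dataclass
class FeatureClaim:
    metric: str                 # e.g. "resting_heart_rate"
    ui_copy: str                # exact wording users see in the app
    marketing_claim: str        # wording on the product page
    influences_treatment: bool  # could a reasonable user act on this medically?

def triage(claim: FeatureClaim) -> RegulatoryBucket:
    """Rough first-pass triage (not legal advice): anything that sounds
    diagnostic or could drive treatment goes to the regulated bucket."""
    diagnostic_words = ("detect", "diagnose", "atrial fibrillation", "blood pressure", "glucose")
    text = f"{claim.ui_copy} {claim.marketing_claim}".lower()
    if claim.influences_treatment or any(word in text for word in diagnostic_words):
        return RegulatoryBucket.MEDICAL_DEVICE
    return RegulatoryBucket.GENERAL_WELLNESS

# The same wearable, two very different claims.
steps = FeatureClaim("step_count", "You walked 8,200 steps today", "Move more every day", False)
afib = FeatureClaim("ecg_rhythm", "Possible atrial fibrillation detected", "Catches AFib early", True)
print(triage(steps).value)  # -> general wellness
print(triage(afib).value)   # -> medical device
```

In practice, the output of a triage like this is the start of a conversation with regulatory counsel, not a final answer.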
It’s no longer enough to say “we’re just a fitness brand.” If your product walks and talks like a medical device, regulators will treat it like one—whether you’re ready or not.
Global ripple effects beyond the FDA
While this article focuses on the FDA, similar patterns are emerging globally. The EU’s MDR, the UK’s MHRA framework, and other regional regimes are all grappling with where to place AI-heavy wearables that live somewhere between lifestyle gadget and clinical monitor.
For multinational products, “regulatory by design” becomes critical:
- Centralize evidence and risk assessments so they can support multiple jurisdictions.
- Localize labeling and claims to match each region’s expectations (a toy example follows this list).
- Align cybersecurity and data-protection practices with global norms, not just local minimums.
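As a toy illustration of the first two bullets, a single claims matrix can keep the evidence dossier in one place and hang region-specific statuses and labels off it. The feature name, statuses, labels, and file path below are placeholders, not statements about what any regulator actually requires.

```python
# Hypothetical claims matrix: evidence lives in one place, labeling is localized per region.
CLAIMS_MATRIX = {
    "spo2_overnight": {
        "evidence_dossier": "docs/spo2_validation_v3.pdf",  # shared across all regions
        "regions": {
            "US": {"status": "general wellness", "label": "For fitness and wellness purposes only."},
            "EU": {"status": "MDR submission in progress", "label": "Not for diagnostic use."},
            "UK": {"status": "under MHRA review", "label": "Investigational; not a medical device."},
        },
    },
}

def label_for(feature: str, region: str) -> str:
    """Return the region-specific disclaimer the app should show for a feature."""
    return CLAIMS_MATRIX[feature]["regions"][region]["label"]

print(label_for("spo2_overnight", "EU"))  # -> Not for diagnostic use.
```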
Guides on international standards and playlists like medical device regulation overviews are helpful starting points if you’re planning to ship outside a single market.
Future-proofing your next wearable product
The most resilient teams aren’t those that avoid regulation; they’re the ones that treat it as a design constraint—just like battery life or Bluetooth range.
Design principles for the new era
- Start with a clear regulatory thesis. Decide early whether you’re building a wellness coach or a medical assistant, and design everything—from sensors to UX copy—to match.
- Separate “fun” metrics from serious ones. Not every data stream needs clinical validation, but don’t hide high-risk estimates behind playful UI.
- Embrace transparency. Tell users how your algorithms work at a high level, where they’ve been validated, and what they should never be used for.
- Build feedback loops. Plan how you’ll capture complaints, false alarms, and “near misses” from the field and feed them back into safety improvements (see the sketch below).
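Here is a minimal sketch of the last two principles, with hypothetical metric names and a deliberately simplified escalation rule: reports touching regulated metrics get flagged for safety review, while wellness metrics are merely logged. A real quality system would route these into formal complaint handling and CAPA processes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative split between playful and regulated metrics; the sets are hypothetical.
WELLNESS_METRICS = {"step_count", "readiness_score", "sleep_score"}
REGULATED_METRICS = {"ecg_rhythm", "spo2", "bp_estimate"}  # need validated pipelines and clear labeling

@dataclass
class FieldReport:
    """One complaint, false alarm, or near miss captured from the field."""
    metric: str
    description: str
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

REPORT_LOG: list[FieldReport] = []

def record_report(metric: str, description: str) -> None:
    """Log every field report; escalate anything touching a regulated metric."""
    report = FieldReport(metric, description)
    REPORT_LOG.append(report)
    if metric in REGULATED_METRICS:
        # A real quality system would open a complaint-handling / CAPA ticket here.
        print(f"ESCALATE for safety review: {metric}: {description}")

record_report("sleep_score", "Score looked odd after a long-haul flight")
record_report("ecg_rhythm", "User reported a missed irregular-rhythm notification")
```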
The companies that thrive in this environment will be those whose hardware, software, legal, and clinical teams sit at the same table instead of shipping first and hoping regulators don’t notice.
Key takeaways
If you remember nothing else from this article, remember these four points:
- Wearables crossed from toy to tool the moment they started influencing medical decisions, not just counting steps.
- The FDA’s “general wellness” policy was never a blank check; it’s a narrow path for low-risk gadgets, not quasi-diagnostic tools.
- Recent warning letters and policy statements are a signal, not a surprise: regulators now expect consumer tech players to meet the same safety bar as traditional device makers when they make medical-grade claims.
- For users, this should mean clearer labels and safer features; for builders, it means designing with regulation in mind from the very first prototype.
In other words, the era of “move fast and break things” for wearables is over. The next chapter belongs to teams that move thoughtfully—and still ship.
Frequently asked questions
Does this crackdown mean my fitness tracker is now a medical device?
Not necessarily. Most mainstream trackers still fall under “general wellness” when they focus on activity, sleep, and simple trends. They may only be treated as medical devices for specific features—like ECG-based rhythm analysis or approved arrhythmia notifications—rather than the entire product. The key is what the company claims a feature can do, and how realistically it could influence diagnosis or treatment.
Are heart-rate and SpO₂ features automatically regulated?
It depends on how those metrics are positioned. If heart-rate or SpO₂ is framed as general fitness information (“see how your heart rate changes during exercise”), it may be treated as wellness. If it’s explicitly used to detect disease, guide treatment, or warn of dangerous events, the same sensors and algorithms can fall under medical-device rules and require clearance for that use.
Why does blood pressure estimation attract so much scrutiny?
Blood pressure is a core vital sign used to diagnose and manage conditions like hypertension and heart failure. If a wearable offers daily blood pressure “insights” that people might rely on to adjust medication or skip clinical checks, regulators view bad readings as high-risk. That’s why blood pressure estimation features, even when marketed as “insights,” often attract scrutiny and may require rigorous validation before launch.
I’m building a companion app, not hardware. Do I need FDA clearance?
Maybe not—but be careful. If your app stays in the realm of behavior change (“walk more,” “sleep by 11 p.m.”) and avoids disease-specific promises, it may stay in general-wellness territory. Once you start interpreting data to diagnose, treat, or manage specific conditions, you may cross into software-as-a-medical-device territory, even if you never build hardware. When in doubt, talk to a regulatory specialist early.
How can I tell whether a wearable feature is actually FDA-cleared?
Look for explicit language such as “FDA-cleared for detection of atrial fibrillation” or references to specific regulatory submissions in the product’s labeling and support pages. Many companies also highlight their clearance in press releases. If a feature sounds like a diagnosis but the marketing talks only about “wellness” and “general information,” treat it cautiously and don’t change medications based on it without talking to a clinician.
Will this regulatory pressure kill wearable innovation?
It may slow reckless experimentation, but it doesn’t have to stop innovation. In practice, regulation tends to push teams toward more rigorous data science, clearer labeling, better cybersecurity, and tighter quality control. That can feel slower at first, but it also builds trust with clinicians and patients—and opens doors to reimbursement, clinical partnerships, and long-term adoption.