The Health Data Grab: When Convenience Comes at the Cost of Privacy
And what you can do to protect yourself
There has been a lot of AI news related to the U.S. government over the past few weeks. While I'm still doing a deep dive on the AI Action Plan, the new executive order on AI, other developments, and all of their implications (cue the AuDHD: I thought of Dennis and "the implications," IYKYK), there's one piece of news I wanted to focus on immediately.
On July 30, 2025, the Trump Administration announced that it will be launching a new private health care tracking system with the help of many of the big tech companies, promising to let patients "easily and seamlessly" access all of their health records and monitor their wellness (Seitz, 2025). More than 60 companies have agreed to share patient data in the system, including major tech companies like Google, Amazon, and Apple; health care giants like UnitedHealth Group and CVS Health; and EHR and health IT vendors like Epic and Oracle Health. The initiative will focus on diabetes and weight management, conversational artificial intelligence that helps patients, and digital tools such as QR codes and apps that register patients for check-ins or track medications (Seitz, 2025). The system would be maintained by the federal government through the Centers for Medicare and Medicaid Services.
On the surface, this sounds convenient. Who wouldn't want seamless access to their health records? But dig deeper, and there are some serious concerns worth examining.
The HIPAA Protection Gap
This system is spearheaded by an administration that has already freely shared highly personal data about Americans in ways that have tested legal bounds. Lawrence Gostin, a Georgetown University law professor who specializes in public health, put it bluntly: "There are enormous ethical and legal concerns. Patients across America should be very worried that their medical records are going to be used in ways that harm them and their families" (Seitz, 2025).
Here's the fundamental issue: many of these tech companies aren't covered by HIPAA, the federal law that protects your personal health information from being shared by certain entities without your consent (Lee, 2025). Once your health data moves outside traditional healthcare providers into these tech company ecosystems, you lose many of the privacy protections you currently have. While there are some health apps genuinely trying to improve health outcomes, let's be realistic about the business model here. Most of these tech companies are looking to capitalize on human data. Jeffrey Chester at the Center for Digital Democracy called the scheme "an open door for the further use and monetization of sensitive and personal health information" (Seitz, 2025).
Consider the implications for reproductive health choices. Digital privacy expert Andrew Crawford pointed to this as a particularly concerning example of medical data that many would be uncomfortable sharing across so many companies, especially given concerns about location data potentially showing if people traveled out of state to access abortion care (Lee & Bennett, 2025). In a post-Roe world, this isn't just about privacy—it's about potential legal consequences.
The RFK Jr. Factor
It's worth examining who's driving this initiative. Health and Human Services Secretary Robert F. Kennedy Jr. has previously stated he wants to use data from Americans' medical records to study autism and vaccine safety, and has filled the agency with staffers who have a history of working at or running health technology startups and businesses. The potential conflicts of interest here are significant.
The federal government has done little to regulate health apps or telehealth programs, and there are outstanding questions about what protections will be in place to ensure that the data shared with tech companies not covered by HIPAA will remain private (U.S. Department of Health and Human Services, 2024).
The AI Complexity Layer
With the growing use of AI among government agencies and big tech companies, we're adding another layer of complexity to this privacy concern. The initiative specifically mentions "conversational artificial intelligence that helps patients," which raises additional questions about algorithmic bias in healthcare, documented AI discrimination, under-regulation of AI systems, and the "black box" problem: our inability to see how AI systems make their decisions.
Research shows that biases in medical artificial intelligence arise and compound throughout the AI lifecycle. These biases can have significant clinical consequences, especially in applications involving clinical decision-making. Left unaddressed, biased medical AI can lead to substandard clinical decisions and the perpetuation of longstanding healthcare disparities (Hasanzadeh et al., 2025).
We've already seen real-world examples of this. AI algorithms have used health costs as a proxy for health needs and falsely concluded that Black patients are healthier than equally sick white patients, simply because less money was spent on them (Murdoch, 2021). As a result, these algorithms gave higher priority to white patients when treating life-threatening conditions like diabetes and kidney disease, even though Black patients had higher severity indexes.
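To make that mechanism concrete, here is a minimal Python sketch. It is not the algorithm from the study Murdoch describes, and every number in it is invented; it simply shows how training a risk score on past spending, rather than on health need, quietly deprioritizes a group that historically had less money spent on its care:

```python
# Illustrative sketch only: how using past healthcare *spending* as a proxy
# label for health *need* can deprioritize a group. All data is simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two hypothetical groups, A and B, with identical distributions of true illness severity.
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
severity = rng.normal(50, 10, n)       # true health need, same for both groups

# Historical spending: group B receives ~30% less care at the same severity level.
spend_factor = np.where(group == 1, 0.7, 1.0)
spending = severity * spend_factor + rng.normal(0, 5, n)

# A "risk score" trained to predict spending effectively learns spending itself;
# the top 10% by score are enrolled in an extra care-management program.
risk_score = spending
enrolled = risk_score >= np.quantile(risk_score, 0.9)

for g, name in [(0, "Group A"), (1, "Group B")]:
    mask = group == g
    print(f"{name}: mean severity {severity[mask].mean():5.1f}, "
          f"enrolled {enrolled[mask].mean():.1%}")
# Both groups are equally sick, but the group with lower historical spending
# is enrolled far less often: the bias comes from the proxy label, not the math.
```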
Many algorithms used in clinical settings are severely under-regulated in the U.S. Algorithmic decision-making tools used in clinical, administrative, and public health settings (such as those that predict risk of mortality, likelihood of readmission, or in-home care needs) aren't required to be reviewed by the FDA or any other regulatory body (Accuray, 2024).
The Privacy-AI Risk Multiplier
The combination of weakened privacy protections and potentially biased AI creates compounding risks that go beyond typical privacy concerns. The ability to deidentify or anonymize patient health data may be compromised by new algorithms that have successfully reidentified such data. This increases the risk to patient data under private custodianship.
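To see why "de-identified" doesn't mean "anonymous," here is a deliberately tiny, hypothetical sketch of a linkage attack. The records, names, and roster below are all invented; the point is that a few quasi-identifiers, like ZIP code, birth date, and sex, are often enough to join a stripped health record back to a named person:

```python
# Hypothetical linkage attack: "de-identified" health records still carry
# quasi-identifiers that can be joined against a public roster. Data is invented.
deidentified_health = [
    {"zip": "02139", "dob": "1961-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1984-03-12", "sex": "M", "diagnosis": "diabetes"},
]
public_roster = [  # e.g., a voter file or marketing list
    {"name": "Jane Doe", "zip": "02139", "dob": "1961-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "dob": "1984-03-12", "sex": "M"},
]

def quasi_id(record):
    """ZIP + birth date + sex: a combination that is unique for many people."""
    return (record["zip"], record["dob"], record["sex"])

roster_index = {quasi_id(person): person["name"] for person in public_roster}

for record in deidentified_health:
    name = roster_index.get(quasi_id(record), "<not matched>")
    print(f"{name}: {record['diagnosis']}")
```

The more datasets your quasi-identifiers appear in, the easier this join becomes, which is exactly the worry when dozens of companies hold overlapping slices of your data.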
When AI systems trained on biased data make healthcare decisions, those decisions can perpetuate and amplify existing healthcare disparities. If the training data is misrepresentative of population variability, AI is prone to reinforcing bias, which can lead to harmful outcomes, misdiagnoses, and lack of generalization across different patient populations (Murdoch, 2021).
Think about it this way: an AI system trained primarily on data from one demographic group gets used to make treatment recommendations for patients from underrepresented groups. The system might consistently underestimate symptom severity or recommend inappropriate treatments, creating systematic healthcare inequities that become harder to detect because they're embedded in seemingly "objective" algorithmic decisions. When this happens within a system where your health data is shared across dozens of companies with varying levels of oversight, both privacy violations and discriminatory healthcare outcomes become significantly more likely.
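If you want to see how that plays out mechanically, here is a hedged sketch with made-up numbers: a single model fit on data that is 95% one group can look accurate overall while consistently underestimating severity for the group it barely saw.

```python
# Hedged sketch of representation bias: a model fit mostly on one group
# systematically underestimates severity for an underrepresented group
# whose biomarker-to-severity relationship differs. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_a, n_b = 9_500, 500                      # group B is only 5% of the training data

biomarker_a = rng.uniform(0, 10, n_a)
biomarker_b = rng.uniform(0, 10, n_b)
severity_a = 5.0 * biomarker_a + rng.normal(0, 2, n_a)
severity_b = 7.0 * biomarker_b + 10 + rng.normal(0, 2, n_b)  # same reading, sicker patient

x = np.concatenate([biomarker_a, biomarker_b])
y = np.concatenate([severity_a, severity_b])

# One-size-fits-all linear fit (ordinary least squares).
slope, intercept = np.polyfit(x, y, 1)
predicted_b = slope * biomarker_b + intercept

print(f"Group B: true mean severity {severity_b.mean():5.1f}, "
      f"predicted {predicted_b.mean():5.1f}")
# The "objective" prediction is consistently too low for group B,
# because the data the model learned from barely contains them.
```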
What You Can Do
Rather than accepting this as inevitable, there are concrete steps you can take to protect yourself:
Immediate Digital Hygiene:
Audit your health apps and delete any you don't absolutely need
Review privacy settings on wearables like Apple Watch or Fitbit and limit unnecessary data collection
Contact your healthcare providers to understand their data sharing policies and opt out where possible
Proactive Data Management:
Exercise your right to obtain copies of your medical records from all healthcare providers
Store them securely on your own devices rather than relying solely on cloud-based systems
Be cautious about scanning QR codes for medical check-ins when this system launches
Stay Informed and Engaged:
Read privacy policies for any new health apps or digital tools before using them
Contact your representatives to express concerns about this health data initiative
Support organizations like the Electronic Frontier Foundation and Center for Digital Democracy that advocate for digital privacy rights
Financial and Identity Protection:
Consider identity monitoring services given the re-identification risks
Understand how health data breaches might affect your insurance coverage
The Trump administration is promising convenience and efficiency, but the trade-offs in privacy and the potential for algorithmic discrimination are significant. This isn't about being anti-technology; it's about ensuring that technological advancement doesn't come at the expense of fundamental privacy rights and equitable healthcare.
As we've seen with other tech initiatives, once these systems are in place, rolling them back becomes exponentially more difficult. The time to raise concerns and demand stronger protections is now, before this becomes another case of "move fast and break things"—except what's being broken is our privacy and potentially our health outcomes.
The question isn't whether technology and AI can improve healthcare (it can and should). The question is whether we're willing to accept the current proposal's terms, or if we're going to demand better safeguards for something as sensitive as our health data.
This post was written by me, with editing support from AI tools, because even writers appreciate a sidekick.
References
Accuray. (2024, April 4). Overcoming AI bias: Understanding, identifying and mitigating algorithmic bias in healthcare. Accuray Blog. https://www.accuray.com/blog/overcoming-ai-bias-understanding-identifying-and-mitigating-algorithmic-bias-in-healthcare/
Hasanzadeh, F., Josephson, C. B., Waters, G., Adedinsewo, D., Azizi, Z., & White, J. A. (2025). Bias recognition and mitigation strategies in artificial intelligence healthcare applications. npj Digital Medicine, 8, 154. https://doi.org/10.1038/s41746-025-01503-7
Lee, M. (2025, July 31). Trump to launch private health tracking system with tech firms. Yahoo News. https://www.yahoo.com/news/articles/trump-launch-private-health-tracking-224903890.html
Lee, M., & Bennett, K. (2025, July 31). Trump to launch private health tracking system with tech firms. TIME. https://time.com/7306647/trump-health-data-medical-records/
Murdoch, B. (2021, September 15). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22, 122. https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00687-3
Seitz, M. (2025, July 31). Trump administration is launching a new private health tracking system with Big Tech's help. PBS NewsHour. https://www.pbs.org/newshour/politics/white-house-launching-health-tracking-system-with-big-techs-help
U.S. Department of Health and Human Services. (2024, June 26). Use of online tracking technologies by HIPAA covered entities and business associates. HHS.gov. https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/hipaa-online-tracking/index.html
Today I came across some important historical context that reinforces these concerns and that I thought was worth noting. During the HIV/AIDS epidemic in the '80s and '90s, there were significant concerns about HIV registry confidentiality breaches. Those concerns became a major driving force behind HIPAA being passed in 1996. The fact that privacy advocates and patients were worried about relatively simple registry systems, with far less sophisticated technology and fewer companies involved than what's being proposed now, really puts things in perspective.
Even without widespread breaches occurring, the mere possibility of health data misuse was enough to reshape our entire approach to health privacy. Now we're looking at a system that involves 60+ companies with varying levels of oversight and sophisticated AI capabilities.
If the threat alone was concerning enough to create HIPAA, I think we should be even more cautious about this exponentially more complex data sharing arrangement.