  • Understanding the Debate on AI in Electronic Health Records

    Background

    Artificial intelligence (AI) refers to the use of computer algorithms to process data and make decisions or predictions. In healthcare, AI is increasingly being integrated with Electronic Health Records (EHRs), digital systems that store and manage patient health information, such as medical history and diagnoses. By 2021, almost 80% of office-based physicians and virtually all non-federal acute care hospitals had implemented an EHR system. As part of this widespread adoption, various AI applications in EHRs are beginning to emerge. So far, the main functions of AI in EHRs include managing datasets of patient health information, identifying patterns in health data, and using these patterns to predict health outcomes and recommend pathways for treatment.

    Arguments in Favor of AI in EHRs

    The use of AI in EHRs presents opportunities to improve healthcare by increasing efficiency and supplying administrative support. Supporters of AI integration argue that it can significantly improve diagnostic accuracy. AI-integrated EHR systems can analyze vast amounts of patient data, flagging potential issues that might otherwise be overlooked by human clinicians. Machine learning algorithms can identify patterns across multiple cases and recommend diagnoses or treatments based on evidence from similar cases. Proponents contend that by reducing human error and providing real-time insights, AI could support doctors in making faster, more accurate decisions, leading to better patient outcomes.
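
    To make the pattern-recognition claim concrete, the sketch below shows, in miniature, the kind of model such systems build on: a logistic regression classifier trained on synthetic, EHR-style tabular data that flags high-risk records for clinician review. The data, feature names, and review threshold are illustrative assumptions, not any vendor's actual system.

        # Minimal sketch of ML-based risk flagging on EHR-style data.
        # All data and feature names are synthetic and illustrative.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1000
        X = np.column_stack([
            rng.normal(55, 12, n),   # age (years)
            rng.normal(130, 18, n),  # systolic blood pressure (mmHg)
            rng.normal(105, 25, n),  # fasting glucose (mg/dL)
            rng.normal(28, 5, n),    # body mass index
        ])
        # Synthetic outcome: risk rises with blood pressure and glucose
        logits = 0.04 * (X[:, 1] - 130) + 0.03 * (X[:, 2] - 105) - 0.5
        y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # Flag records whose predicted risk exceeds a review threshold
        risk = model.predict_proba(X_test)[:, 1]
        flagged = (risk > 0.7).sum()
        print(f"{flagged} of {len(X_test)} records flagged for clinician review")

    In a real deployment, flagged records would surface as alerts inside the clinician's EHR workflow to support, rather than replace, their judgment.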

    Proponents of AI in EHRs also argue that AI has the potential to significantly reduce healthcare inequities by providing better access and more personalized care for underserved populations. AI-powered tools can identify at-risk patients early by analyzing complex data, including demographic and behavioral factors, and help prioritize interventions for those who need them most. Additionally, AI can bridge communication gaps for patients facing language barriers or low health literacy, ensuring they receive clear and relevant information about their health. Supporters also suggest that AI’s ability to reduce human biases in clinical decision-making, such as disparities in pain assessment or treatment recommendations, could lead to fairer, more equitable healthcare outcomes for all.

    From the workforce perspective, supporters argue that AI integration in EHRs can significantly reduce physician burnout by streamlining the documentation process. With physicians spending increasing amounts of time on EHR tasks, AI-driven tools like voice-to-text transcription, automated note generation, and automated data entry can cut down the time devoted to administrative duties. For instance, one 2023 study reported that AI integration in health records led to a 72% reduction in documentation time, equating to approximately 3.3 hours saved per week per clinician. This allows doctors to spend more time on direct patient care and less on paperwork, which supporters contend will improve job satisfaction and reduce stress.

    Arguments Against AI in EHRs

    While some argue that AI in EHRs will lead to more accurate and equitable healthcare, others raise concerns regarding data bias, privacy, and transparency. Critics of AI integration argue that current legal frameworks lack adequate safeguards for individuals’ health data, leaving sensitive information vulnerable to breaches. For example, data collected by AI tools may be hacked or harvested without consent for marketing purposes. Additionally, genetic testing companies that operate without sufficient legal oversight may sell customer data to pharmaceutical and biotechnology companies.

    Moreover, some critics question whether AI integration in EHRs aligns with standards for informed consent. Informed consent is a key ethical principle that ensures patients are fully informed about and in control of decisions regarding their healthcare, including their diagnoses, treatment options, and any risks involved. Ethical responsibility dictates that consent be specific, voluntary, and clear. The rise of AI in healthcare has heightened concerns about whether patients are truly aware of how their data is used and of the potential for errors in AI-driven treatments. Under the principle of autonomy, patients have the right to be informed about their treatment process, the privacy of their data, and the risks of AI-related procedures, such as programming errors. Critics therefore argue that patients must be better informed about how AI is integrated into health records systems before they can truly provide informed consent.

    Another significant ethical concern in the use of AI and machine learning (ML) in healthcare is algorithmic bias, which can manifest in racial, gender, and socioeconomic disparities due to flaws in algorithm design. Such biases may lead to misdiagnosis or delayed treatment for underrepresented groups and exacerbate inequities in access to care. To address this, advocates push for the prioritization of diverse training data that reflects demographic factors. They hold that regular evaluations are necessary to ensure that AI models remain fair over time, upholding the principles of justice and equity.
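
    The "regular evaluations" advocates call for often begin with simple group-level audits. The sketch below, using made-up numbers, computes one common check: the gap in positive-prediction rates between demographic groups, sometimes called the demographic parity difference.

        # Minimal fairness-audit sketch with illustrative, made-up data:
        # compare the rate of positive model predictions across groups.
        import numpy as np

        # Hypothetical model outputs: 1 = recommended for treatment
        preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1])
        # Hypothetical demographic group labels for the same patients
        group = np.array(["A", "A", "A", "A", "A", "A",
                          "B", "B", "B", "B", "B", "B"])

        rates = {g: preds[group == g].mean() for g in np.unique(group)}
        gap = max(rates.values()) - min(rates.values())
        print("positive-prediction rate by group:", rates)
        print(f"demographic parity difference: {gap:.2f}")
        # A large, persistent gap would prompt a review of training data
        # and model design; a fuller audit would also compare error rates.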

    Future Outlook

    Building on the potential of AI in healthcare, H.R. 238, introduced on January 7, 2025, proposes that AI systems be authorized to prescribe medications if they are approved by the Food and Drug Administration (FDA) and if the state where they operate permits their use for prescribing. This bill represents a significant step in integrating AI into clinical practices, going beyond data management to reshape how medications are prescribed and managed. The arguments for and against H.R. 238 mirror the debate around AI integration in EHRs; while proponents of the bill argue that AI could enhance patient safety, reduce errors, and alleviate clinician burnout, critics highlight concerns regarding the loss of human judgment, data privacy, and the potential for AI to reinforce biases in healthcare. As AI continues to play a central role in healthcare, bills like H.R. 238 spark important discussions about AI’s ethical, practical, and legal implications in clinical decision-making.

    Summary

    In conclusion, the integration of AI into EHRs has forced medical stakeholders to balance a need for improvements in accuracy and efficiency with a concern for medical ethics and patient privacy. On one hand, AI can support more accurate diagnoses, enhance patient care, and help reduce the burnout faced by healthcare providers. Additionally, AI may contribute to reducing healthcare inequities by providing better access and more personalized care, especially for underserved populations. However, the implementation of AI also raises concerns regarding data privacy, algorithmic bias, and informed consent, suggesting a need for more careful implementation and oversight. As AI’s presence in healthcare settings continues to expand, addressing these concerns will be key to ensuring it benefits patients and healthcare providers alike.

  • Understanding the Connected MOM Act: Federal Intervention in State Maternal Health Medicaid Coverage

    Introduction to Medicaid and Maternal Health Coverage

    Medicaid is a healthcare program designed to cover specific medical costs for individuals with lower incomes and limited resources. While the federal government sets baseline regulations and retains oversight authority over Medicaid programs, states maintain primary responsibility for program administration, which leads to variation in Medicaid coverage across the nation. Many state Medicaid programs offer insurance coverage for pregnant individuals through mechanisms such as presumptive eligibility. Presumptive eligibility allows certain vulnerable populations to receive coverage before their application for Medicaid is fully processed. For example, Iowa’s presumptive Medicaid coverage extends Medicaid benefits to all pregnant applicants while their eligibility is being determined, regardless of the final outcome.

    Maternal health remains a critical concern in the United States, where indicators such as preterm births and maternal mortality have continued to rise despite targeted policy interventions. A key factor in improving maternal health outcomes is access to high-quality prenatal care, yet adequate access to prenatal care is declining. A significant reason that many people cannot access adequate prenatal care is a lack of insurance coverage or sporadic insurance coverage during their pregnancy. Research emphasizes that increasing insurance coverage for pregnant people can improve access to prenatal care, which can improve maternal health outcomes.

    While federal regulations mandate certain Medicaid services, including maternal healthcare, the specifics of maternal health coverage are left largely to the discretion of individual states. For instance, Iowa’s presumptive eligibility for pregnant people continues until the applicant receives a determination of full Medicaid eligibility. In contrast, Minnesota’s hospital-based presumptive coverage for pregnant people only lasts for a month. 

    S.141 and the Scope of Federal Intervention

    Introduced on January 16, 2025, S. 141, the Connected MOM Act, aims to identify and address barriers to Medicaid coverage of health monitoring devices in an effort to improve maternal health outcomes. Given that health monitoring devices can expand access to prenatal care by allowing physicians to remotely monitor health metrics, the bill aims to explore the challenges pregnant people face in obtaining these devices. The bill proposes investigating state-level obstacles to coverage of remote physiologic devices, which include:

    • Blood pressure cuffs (used to monitor blood pressure)
    • Glucometers (used to assess blood glucose levels)
    • Pulse oximeters (used to measure blood oxygen saturation)
    • Thermometers (used to track body temperature)

    These devices enable at-home monitoring of key health metrics, facilitating earlier intervention for dangerous pregnancy-related conditions. According to legal experts, such investigative efforts generate data that can inform and support future policy development. S. 141, which has received bipartisan support, is currently under review by the Senate Finance Committee.
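
    As a concrete illustration of how remote readings could enable earlier intervention, the sketch below triages home blood-pressure readings against commonly cited hypertension cutoffs. The thresholds and alerting logic are simplified assumptions for illustration, not clinical guidance.

        # Illustrative sketch: triage remote blood-pressure readings.
        # Thresholds are simplified assumptions, not clinical guidance.
        def triage_bp(systolic: int, diastolic: int) -> str:
            if systolic >= 160 or diastolic >= 110:
                return "urgent: contact care team immediately"
            if systolic >= 140 or diastolic >= 90:
                return "elevated: clinician follow-up recommended"
            return "normal: continue routine monitoring"

        # Hypothetical readings transmitted from a home blood pressure cuff
        for reading in [(118, 76), (144, 92), (165, 112)]:
            print(reading, "->", triage_bp(*reading))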

    Perspectives on S. 141 and Federal Medicaid Interventions

    Investigative legislation like the Connected MOM Act allocates funding for evidence-gathering to guide future policy decisions. In this case, the bill aims to collect information on how states manage Medicaid coverage for remote physiologic devices that are critical during pregnancy, with the long-term goal of shaping federal Medicaid policies. While supporters of the Connected MOM Act argue that it will provide necessary insights to catalyze Medicaid expansion for pregnant people, others point to Medicaid’s rules and regulations, which make it difficult for the federal government to intervene broadly in state Medicaid programs. Given the structural limits on federal influence over state-run Medicaid programs, broad national reforms are often considered too costly or unlikely to yield systemic change. This dynamic was evident in the fate of H.R. 3305, the Black Maternal Health Momnibus Act, which failed to advance beyond the committee stage. Supporters of the Connected MOM Act argue that its incremental, investigative approach will help justify future reforms without being perceived as broad federal overreach.

    Conclusion

    Each state administers its own Medicaid program, resulting in variations in coverage for certain medical devices, including remote health monitoring devices. Given the importance of these devices in expanding access to prenatal care, S. 141 seeks to investigate the best course of action for improving their coverage across the nation. As it moves through committee, S. 141 may offer insight into how policymakers can strategically navigate limits on federal power over state health programs.

  • Perspectives on the California Privacy Rights Act: America’s Strictest Data Privacy Law

    Background and Key Provisions

    The California Privacy Rights Act (CPRA), also known as Proposition 24, is a recently enacted law aimed at strengthening corporate regulations on data collection and processing in California. It acts as an addendum to the California Consumer Privacy Act (CCPA), a voter-initiated measure designed to enhance oversight of corporate data practices. The CPRA seeks to increase public trust in corporations and improve transparency regarding targeted advertising and cookie usage. Cookies are small files containing user information that websites create and store on users’ devices to tailor their website experience. The CPRA aims to align California’s data privacy practices with the General Data Protection Regulation (GDPR), a European Union data privacy law regarded as the most comprehensive in the world. 
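
    For readers unfamiliar with the mechanics, the sketch below uses Python’s standard library to show, in simplified form, how a site builds the cookie header it asks a browser to store and how it reads that cookie back on a later request. The names and values are illustrative.

        # Simplified sketch of cookie creation and parsing using
        # Python's standard library; names and values are illustrative.
        from http.cookies import SimpleCookie

        # Server side: build a Set-Cookie header to send to the browser
        cookie = SimpleCookie()
        cookie["visitor_id"] = "abc123"          # hypothetical identifier
        cookie["visitor_id"]["max-age"] = 86400  # persist for one day
        print(cookie.output())  # Set-Cookie: visitor_id=abc123; Max-Age=86400

        # On a later request: parse the Cookie header the browser sends back
        incoming = SimpleCookie("visitor_id=abc123")
        print(incoming["visitor_id"].value)  # abc123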

    The CPRA was placed on the November 2020 general election ballot as a voter initiative. It passed with the support of 56.2% of voters, but did not go into effect until January 1, 2023. The law builds on the preexisting CCPA’s protections for user data through the following key provisions:

    • Establishes the California Privacy Protection Agency (CPPA), a government agency responsible for investigating violations, imposing fines, and educating the public on digital privacy rights.
    • Clarifies CCPA definitions of personal data, creating specific categories for financial, biometric, and health data. Adds a new category of sensitive personal information, which will be regulated more heavily than personal information. 
    • Implements privacy protections for minors. Under the CPRA, companies must request permission before selling or sharing minors’ data, and can be fined for the intentional or unintentional misuse of minors’ data. Minors ages 13 to 15 must explicitly opt into data sharing, while minors ages 16 and 17 can opt out of data sharing. 
    • Expands consumer rights by prohibiting companies from charging fees or refusing services to users who opt out of data sharing. Building on the CCPA’s universal right to opt out of data sharing, the CPRA gives consumers a right to correct or limit the use of the data they share. Consumers can also sue companies that violate the CPRA, even if their personal data was not involved in a security breach. 
    • Modifies the CCPA’s definition of a covered business to exclude most small businesses and include any business that generates significant income from the sale of user data. 

    Perspectives on CPRA Data Collection Regulations

    One of the most contentious aspects of the CPRA is the regulation of personal data collection. Supporters contend that increased regulation will enhance consumer trust by preventing corporations from over-collecting and misusing personal data. Many California voters worry that businesses are gathering and selling personal information without consumers’ knowledge. Whether or not these fears are justified, they have driven strong public support for stricter data processing guidelines under both the CCPA and CPRA. Additionally, supporters of the CPRA argue that its impact on corporate data operations will be minimal, given that studies suggest less than 1% of Californians take advantage of opt-out options for data sharing.

    Opponents argue that restricting data collection could lead to inaccuracies if a large number of consumers choose to opt out. Without access to a broad dataset, companies may face higher costs to clean and verify the data they collect. Currently, many businesses rely on cookies and tracking technologies to analyze consumer behavior. If these methods become less effective, companies may need to invest in alternative, more expensive market research techniques or expand their workforce to ensure data accuracy.

    The opt-out mechanism has been a focal point of debate. Supporters view it as a balanced compromise, allowing Californians to protect their personal information without significantly disrupting corporate data operations. However, some argue that an opt-in model—requiring companies to obtain explicit consent before collecting data—would provide stronger privacy protections. Critics believe that many consumers simply accept default data collection policies because opting out can be confusing or time-consuming, ultimately limiting the effectiveness of the CPRA’s protections.

    Financial Considerations

    Beyond concerns about data collection, the financial impact of the CPRA has also been widely debated. While the CPRA exempts small businesses from its regulations, larger businesses had already invested heavily in CCPA compliance and were reluctant to incur additional costs to meet new, potentially stricter regulations under the CPRA. Additionally, compliance with the new rules was estimated to cost California businesses approximately $55 billion due to the need for updated data practices, on top of state spending to create a new regulatory agency. Critics argued that these funds could have been allocated more effectively, while supporters viewed the investment as essential for ensuring corporate accountability.

    Future Prospects for California’s Privacy Policy

    Since the CPRA is an addendum to the CCPA, California data privacy law remains open to further modifications. Future updates will likely center on three key areas: greater alignment with European Union standards, increased consumer education, and clearer guidelines on business-vendor responsibility.

    The General Data Protection Regulation (GDPR), the European Union’s comprehensive data privacy law, already shares similarities with the CPRA, particularly in restricting data collection and processing. However, a major distinction is that the GDPR applies to all companies operating within its jurisdiction, regardless of revenue. Additionally, the GDPR requires companies to obtain explicit opt-in consent for data collection, while the CPRA relies on an opt-out system. Some supporters of the CPRA believe it does not go far enough, and may consider advocating for GDPR-style opt-in requirements in the future. 

    Others argue that many individuals are unaware of how their data is collected, processed, and sold, no matter how many regulations the state implements. This lack of knowledge can lead to passive compliance rather than informed consent under laws like the CPRA. In the future, advocacy organizations may push for California privacy law to include stronger provisions for community education programs on data collection and privacy options.  

    Another area for potential reform is business-vendor responsibility. Currently, both website operators and third-party vendors are responsible for complying with CPRA regulations, which some argue leads to redundancy and confusion. If accountability is not clearly assigned, businesses may assume that the other party is handling compliance, increasing the risk of regulatory lapses. Clarifying these responsibilities might be a target for legislators or voters who are concerned about streamlining the enforcement of privacy law. 

    Conclusion

    With laws like the CCPA and the CPRA, California maintains the strongest data privacy protections in the nation. Some view these strict regulations as necessary safeguards against the misuse of consumer data that align the state with global privacy norms. Others see laws like the CPRA as excessive impositions on business resources. Still others argue that California law does not go far enough, advocating for a universal opt-in standard rather than an opt-out standard for data sharing. As debates around the CPRA continue, California is likely to provide a model for other state and federal data privacy regulations across the U.S.

  • Pros and Cons of S.B. 3732: The Artificial Intelligence Environmental Impacts Act

    Introduction

    The rise in the prevalence of artificial intelligence (AI) has had significant impacts on the environment. These include the electricity required to power the technology, the release of hundreds of tons of carbon emissions, and the depletion of freshwater resources for data center cooling. For example, AI data centers in the U.S. use about 7,100 liters of water per megawatt-hour of energy they consume.

    Demand for energy to power AI is rising. One study predicts that AI data centers will grow from about 3% of U.S. energy usage in 2023 to about 8% by 2030. However, AI also has the potential to benefit the environment. AI is a powerful tool in promoting energy transitions, with a 1% increase in AI development corresponding to a 0.0025% increase in energy transition, a 0.0018% decrease in ecological footprint, and a 0.0013% decrease in carbon emissions. Still, the scientific community and the general public lack knowledge about the true environmental implications of AI. Senate Bill 3732, or the Artificial Intelligence Environmental Impacts Act of 2024, aims to fill this knowledge gap.
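
    To put the quoted figures in perspective, here is a back-of-envelope sketch that combines them. The total U.S. electricity consumption number is a rough assumption added for illustration; the other constants come from the figures cited above.

        # Back-of-envelope sketch combining the figures quoted above.
        # Total U.S. consumption is a rough assumption for illustration.
        US_TOTAL_TWH = 4000      # assumed annual U.S. electricity use (TWh)
        LITERS_PER_MWH = 7100    # data center water use per MWh (cited above)

        for year, share in [(2023, 0.03), (2030, 0.08)]:
            twh = US_TOTAL_TWH * share           # implied data center usage
            liters = twh * 1e6 * LITERS_PER_MWH  # 1 TWh = 1e6 MWh
            print(f"{year}: ~{twh:.0f} TWh, "
                  f"~{liters / 1e9:.0f} billion liters of water")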

    The Bill

    The Artificial Intelligence Environmental Impacts Act was introduced in February 2024 by Senator Ed Markey (D-MA). A House companion bill, H.R. 7197, was introduced simultaneously by Representative Anna Eshoo (D-CA). The bill has four main clauses that instruct the Environmental Protection Agency (EPA), the National Institute of Standards and Technology, the Secretary of Energy, and the Office of Science and Technology Policy to:

    1. Initiate a study on the environmental impacts of AI
    2. Convene a consortium of experts and stakeholders to create recommendations on how to address the environmental impacts of AI
    3. Create a system for the voluntary reporting of the environmental impacts of AI
    4. Report to Congress the findings of the consortium, describe the system of voluntary reporting, and make recommendations for legislative and administrative action

    This bill seeks to fill the gaps in existing research by commissioning comprehensive studies of both the negative and potential positive environmental impacts of artificial intelligence. It will also employ experts to guide lawmakers in creating effective future regulation of the AI industry. 

    Arguments in Favor

    Filling Gaps in Knowledge

    A key reason Data & Society, an NYC-based independent research institute, endorsed the bill was to fill existing gaps in research. They highlight the limited understanding of both the depth and scale of the impacts of AI on the environment as key areas that require more research. They also highlight the role of this proposed research initiative in determining how to limit the environmental impacts of AI. Tamara Kneese, a researcher for the organization, highlights that there is a lack of research that seeks to understand “the full spectrum of AI’s impacts,” which this bill would directly address. 

    Increasing Transparency in the Industry

    Representative Don Beyer (D-VA), a co-sponsor of the House companion legislation, argues that this bill would put the United States ahead in AI transparency work. Currently, the industry is not forthright about its environmental impact. For example, OpenAI has released no information about the process used to create and train ChatGPT’s newest model, which makes it impossible to estimate its environmental impact. The bill’s voluntary reporting system encourages companies to disclose this information, allowing emissions to be tracked and increasing transparency in the industry.

    Reducing Environmental Harm

    Another supporter of the bill, Greenpeace, views it as a way to protect against the environmental harm of new technology and address issues of environmental injustice. Erik Kojola, Greenpeace USA’s senior research specialist, says the bill is “a first step in holding companies accountable and shedding light on a new technology and opaque industry.” Others, such as the Piedmont Environmental Council, view it as a step toward the implementation of well-informed regulation of AI. The bill’s fourth provision outlines that recommendations be made to Congress for regulating the industry, based on expert opinion and the research that the bill commissions.

    Arguments Against

    Lacks Enforcement Mechanisms, Delayed Approach

    Critics argue that the bill relies too heavily on industry compliance by primarily using voluntary emissions reporting. In essence, nothing in the bill forces companies to actually report their emissions. There is also the argument that calling for more research only serves to delay concrete action on climate change. The bill itself does little to stop pollution or the depletion of freshwater resources, instead deferring any action or regulation until detailed research can be conducted and further recommendations can be made.

    Ignores AI’s Potential to Help the Environment

    Other critics argue that AI is constantly becoming more efficient and that government intervention may hinder that progress. According to the World Economic Forum, AI can both optimize its own energy consumption and help facilitate energy transitions. Opponents of S.B. 3732 hold that research should focus on improving efficiency within the industry as opposed to tracking its output to inform regulations.

    Top-down Approach Sidelines Industry Leaders and Efforts

    Some opponents also critique the bill’s heavy emphasis on research and information gathering. Critics argue that S.B. 3732 does little to create accountability within the industry and does not integrate existing measures to increase efficiency. They point to examples showing that AI itself is being used to create informed climate change policy by analyzing climate impacts on poor communities and generating solutions. Critics argue that the bill largely ignores these efforts, as well as input from industry leaders who say federal funds should be spent optimizing AI rather than regulating it.

    Updates and Future Outlook

    While S.B. 3732 and its House companion bill were referred to several subcommittees for review, neither made it to the floor for a vote before the end of the 118th Congress; the bill will therefore need to be reintroduced to move forward. Should it become law, the feasibility of its implementation is uncertain given major funding cuts to key stakeholders such as the EPA under the current administration. Without adequate government funding to conduct the research the bill outlines, the efficacy of that research is likely to be weakened.

    In addition, President Trump signed an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence” in January 2025, which calls for departments and agencies to revise or rescind all policies and other actions taken under the Biden administration that are inconsistent with “enhancing America’s leadership in AI.” Beyond taking an anti-regulation stance on AI, this executive order is a first step toward a rapid proliferation of AI data centers fueled by energy from natural gas and coal. Given this climate, S.B. 3732 and similar bills face an uncertain future in the current Congress.

    Conclusion

    S.B. 3732 responds to the knowledge gap on AI’s environmental impacts by commissioning studies and encouraging reporting of AI-related energy benefits and drawbacks. Supporters view the bill as a crucial intervention to fill these information gaps, increase transparency, and address environmental harms through policy recommendations. Some opponents critique it as a stalling tactic for addressing climate change, while others contend the bill simply looks in the wrong place, focusing on industry compliance and existing impacts instead of encouraging innovation in the sector.