Tag: AI

  • Understanding the Debate on AI in Electronic Health Records

    Background

Artificial Intelligence (AI) refers to the use of computer algorithms to process data, identify patterns, and make or support decisions. In healthcare, AI is being increasingly integrated with Electronic Health Records (EHRs)—digital systems that store and manage patient health information, such as medical history and diagnoses. By 2021, almost 80% of office-based physicians and virtually all non-federal acute care hospitals had implemented an EHR system. As part of this widespread adoption, various AI applications in EHRs are beginning to emerge. So far, the main functions of AI in EHRs include managing datasets of patient health information, identifying patterns in health data, and using these patterns to predict health outcomes and recommend treatment pathways.

    Arguments in Favor of AI in EHRs

The use of AI in EHRs presents opportunities to improve healthcare by increasing efficiency and supplying administrative support. Supporters of AI integration argue that it can significantly improve diagnostic accuracy. AI-integrated EHR systems can analyze vast amounts of patient data, flagging potential issues that might otherwise be overlooked by human clinicians. Machine learning algorithms can identify patterns across multiple cases and recommend diagnoses or treatments based on evidence from similar cases. Proponents contend that by reducing human error and providing real-time insights, AI could support doctors in making faster, more accurate decisions, leading to better patient outcomes.

Proponents of AI in EHRs also argue that AI has the potential to significantly reduce healthcare inequities by providing better access and more personalized care for underserved populations. AI-powered tools can identify at-risk patients early by analyzing complex data, including demographic and behavioral factors, and help prioritize interventions for those who need them most. Additionally, AI can bridge communication gaps for patients facing language barriers or low health literacy, ensuring they receive clear and relevant information about their health. Supporters also suggest that AI’s ability to reduce human biases in clinical decision-making, such as disparities in pain assessment or treatment recommendations, could lead to fairer, more equitable healthcare outcomes for all.

From the workforce perspective, supporters argue that AI integration in EHRs can significantly reduce physician burnout by streamlining the documentation process. With the increasing time spent on EHR tasks, AI-driven tools like voice-to-text transcription, automated note generation, and data entry can cut down the time physicians devote to administrative duties. For instance, one 2023 study reported that AI integration in health records led to a 72% reduction in documentation time, equating to approximately 3.3 hours saved per week per clinician. This allows doctors to spend more time on direct patient care and less on paperwork, which supporters contend will improve job satisfaction and reduce stress.
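
    Taking those figures at face value allows a quick back-of-envelope check (the baseline below is derived here, not reported by the study): if saving 3.3 hours per week corresponds to a 72% reduction, the baseline documentation time T satisfies

    \[
    0.72\,T \approx 3.3\ \text{hours/week} \quad\Longrightarrow\quad T \approx \frac{3.3}{0.72} \approx 4.6\ \text{hours/week},
    \]

    leaving roughly 1.3 hours of documentation per week with AI assistance.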

    Arguments Against AI in EHRs

While some argue that AI in EHRs will lead to more accurate and equitable healthcare, others raise concerns regarding data bias, privacy, and transparency. Critics of AI integration argue that current legal frameworks lack adequate safeguards for individuals’ health data, leaving sensitive information vulnerable to breaches. For example, data collected by AI tools may be hacked or gathered without consent for marketing purposes. Additionally, certain genetic testing companies that operate without sufficient legal oversight may sell customer data to pharmaceutical and biotechnology companies.

Moreover, some critics share concerns about whether AI integration in EHRs aligns with standards for informed consent. Informed consent is a key ethical principle that ensures patients are fully informed about and in control of decisions regarding their healthcare. It includes elements such as the patient’s ability to understand and make decisions about their diagnoses, treatment options, and any risks involved; ethical responsibility dictates that consent be specific, voluntary, and clear. The rise of AI in healthcare applications has heightened concerns about whether patients are fully aware of how their data is used, the risks of procedures, and the potential for errors in AI-driven treatments. Under the principle of autonomy, patients have the right to be informed about their treatment process, the privacy of their data, and the potential risks of AI-related procedures, such as programming errors. Critics say that patients must be better informed about how AI is integrated into health records systems in order to truly provide informed consent.

Another significant ethical concern in the use of AI and machine learning (ML) in healthcare is algorithmic bias, which can manifest in racial, gender, and socioeconomic disparities due to flaws in algorithm design. Such biases may lead to misdiagnosis or delayed treatments for underrepresented groups and exacerbate inequities in access to care. To address this, advocates push for the prioritization of diverse training data that reflects demographic factors. They hold that regular evaluations are necessary to ensure that AI models remain fair over time, upholding the principles of justice and equity.

    Future Outlook

    Building on the potential of AI in healthcare, H.R. 238, introduced on January 7, 2025, proposes that AI systems be authorized to prescribe medications if they are approved by the Food and Drug Administration (FDA) and if the state where they operate permits their use for prescribing. This bill represents a significant step in integrating AI into clinical practices, going beyond data management to reshape how medications are prescribed and managed. The arguments for and against H.R. 238 mirror the debate around AI integration in EHRs; while proponents of the bill argue that AI could enhance patient safety, reduce errors, and alleviate clinician burnout, critics highlight concerns regarding the loss of human judgment, data privacy, and the potential for AI to reinforce biases in healthcare. As AI continues to play a central role in healthcare, bills like H.R. 238 spark important discussions about AI’s ethical, practical, and legal implications in clinical decision-making.

    Summary

The integration of AI into EHRs has forced medical stakeholders to balance the need for improvements in accuracy and efficiency with concerns about medical ethics and patient privacy. On one hand, AI can support more accurate diagnoses, enhance patient care, and help reduce the burnout faced by healthcare providers. Additionally, AI may contribute to reducing healthcare inequities by providing better access and more personalized care, especially for underserved populations. However, the implementation of AI also raises concerns regarding data privacy, algorithmic bias, and informed consent, suggesting a need for careful implementation and oversight. As AI’s presence in healthcare settings continues to expand, addressing these concerns will be key to ensuring it benefits patients and healthcare providers alike.

  • Pros and Cons of the Patent Eligibility Restoration Act of 2023

    Background Information

Artificial intelligence (AI) is transforming the patent landscape, driving an influx of patent applications that mirrors a rise in modern-day innovation. However, U.S. law’s treatment of what counts as a patentable invention lags behind. The Patent Eligibility Restoration Act of 2023 (PERA) aims to address this by reversing court rulings that have narrowed the scope of patent eligibility in emerging fields like AI. Ultimately, PERA stands at the intersection of technology, law, and political ideology, shaping the government’s role in protecting intellectual property (IP).

Supreme Court decisions in Mayo v. Prometheus and Alice v. CLS Bank are widely recognized as turning points in patent law. The cases, which restricted patent eligibility for abstract ideas and natural laws, marked the first narrowing of patent eligibility since the 1950s. PERA would “eliminate all judicial exceptions” to patent law in an attempt to remedy the confusion caused by the Mayo and Alice rulings. The bill was introduced in the Senate by Senators Thom Tillis (R-NC) and Chris Coons (D-DE) in 2023. Its House companion was introduced by Representatives Scott Peters (D-CA) and Kevin Kiley (R-CA) in 2024. While it received bipartisan support and a hearing in the Senate Intellectual Property Subcommittee, PERA ultimately died in committee at the end of the 118th Congress.

    PERA presents three key advantages: 

1. Economic and Innovation Benefits: Boosts innovation and economic growth.
2. International Competitiveness: Secures U.S. innovation against global competitors.
3. Expansion of AI and Other Emerging Technologies: Clarifies AI patent eligibility to strengthen U.S. leadership on the global stage.

In terms of economic and innovation benefits, the United States Patent and Trademark Office advocates for PERA as a catalyst for innovation. It specifically states that small to medium-sized firms “need clear intellectual property laws that incentivize innovation…[as it’s] critical for job creation, economic prosperity,” among other downstream impacts. Furthermore, the American Intellectual Property Law Association (AIPLA) argues that PERA enacts clearer policies that will generate efficient product development and innovation, improving both industry standards and marginal utility for the consumer. Wilson Sonsini, a law firm that has published a legal analysis of the bill, finds that it would in fact reverse the stagnation of innovation. In a written testimony submitted to the Senate Subcommittee on Intellectual Property, law professor Adam Mossoff argued that PERA is essential for restoring American dominance in global innovation and patent sectors.

    PERA not only aims to improve U.S. innovation and investment, but also clarifies AI patentability to bolster America’s edge on the global stage. According to Republican Representative Kevin Kiley, the U.S. must expand patentability to compete with China, emphasizing PERA as a key to gaining a competitive edge through clearer patent laws. In an interview with Representative Kiley, the Center for Strategic and International Studies (CSIS) found that China’s approach to intellectual property poses a significant threat to American innovation and prosperity, strengthening the case for PERA. Senator Coons, a PERA co-sponsor, believes that the bill is necessary to help the U.S. catch up to Europe and China in the realm of AI patent law. 

Other supporters argue that PERA’s expansion of patentability will open the door to advancement in domestic AI technology. A multinational law firm argues that expanding patent eligibility to AI models and business methods is crucial for the development of the U.S. technology industry. By broadening patentability, PERA can reduce the backlog of unsuccessful patent applications, sparing inventors from having to revalidate their claims. To reinforce this, the global law firm McDermott Will & Emery contends that PERA reduces ambiguity in patent eligibility by defining AI-related patents and human involvement in AI inventions.

    However, while PERA offers significant benefits for innovation, global competitiveness, and emerging technologies, it also raises concerns about potential drawbacks, including the risk of overly broad patents and unintended legal complexities. 

    PERA presents three key disadvantages:

    1. Overbroad Patentability: Risks limiting access to life-saving technologies.
    2. Hurting Small Inventors: Creates an ambiguous legal landscape that only large corporations can afford to navigate.
    3. Ethical and Global Concerns: Conflicts with global patent norms, risking international relations. 

The NYU Journal of Intellectual Property and Entertainment Law highlights concerns that broadening patent eligibility could negatively impact the life sciences sector by creating barriers between consumers and newly patented technologies. It argues that PERA undermines the balance between rewarding innovation and preserving public access to the products people depend on. Another critique, from the Center for Innovation Promotion, finds that PERA disrupts established legal standards, creating uncertainty in the patent system; its broad eligibility standards could stifle innovation by exacerbating that uncertainty instead of encouraging progress.

Other critics worry that PERA could negatively impact small businesses. U.S. Inventor, an inventor’s rights advocacy group, critiques the bill for creating a complex legal landscape that only large corporations can afford to navigate. It argues that PERA’s lack of definitions for most of its crucial terms will only create more confusion, stating, “Investment into anything that risks falling into PERA’s undefined ineligibility exclusions will be hobbled.”

    PERA also raises ethical concerns, particularly in its treatment of genetic material, which may conflict with international patent standards. According to the NYU Journal of Intellectual Property and Entertainment Law, these discrepancies could lead to tensions between U.S. patent law and global practices, disrupting international collaborations and agreements. The BIOSECURE Report emphasizes PERA’s potential for significant harm to global patent standardization, as countries may struggle to reconcile U.S. policies with their own systems. These challenges could strain international relations, as nations may view PERA’s approach as a threat to their sovereignty and global patent harmony.

    The Status Quo and Future of PERA

PERA was proposed at a time of heightened awareness and discussion of IP policy. With regard to national security concerns, a House Foreign Affairs Committee report documents Chinese IP theft against U.S. companies, emphasizing China’s competitive threat in innovation. Similarly, Reuters reports on Tesla’s IP theft case, showcasing ongoing challenges in protecting American technology. These challenges in protecting American innovation set the stage for potential policy shifts under a Trump presidency. According to IP Watchdog, changes in IP law could influence public trust and perceptions of America’s stance on innovation and patent protection. However, as the IP law firm Wolf Greenfield notes, broader geopolitical implications, especially regarding competition with China in biotech and AI patents, may not fully align with Trump’s campaign vision. Additionally, Senate Judiciary reports highlight how bipartisan concerns over innovation could shape the future prospects of bills like PERA, with legislative gridlock potentially influencing amendments throughout the current presidential term and beyond. This gridlock could ultimately slow the passage of patent-related legislation.

    Conclusion

    While PERA aims to expand patent eligibility and boost economic growth, critics are wary of overbroad patents, harm to small inventors and businesses, and geopolitical conflicts. Striking a balance between innovation, equity, and competition remains essential to ensuring a patent system that fosters progress without preventing accessibility.

  • Pros and Cons of S.B. 3732: The Artificial Intelligence Environmental Impacts Act

    Introduction

The rise in the prevalence of artificial intelligence (AI) has had significant impacts on the environment. These include the electricity required to power the technology, the release of hundreds of tons of carbon emissions, and the depletion of freshwater resources for data center cooling. For example, AI data centers in the U.S. use about 7,100 liters of water per megawatt-hour of energy they consume.
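
    To illustrate the scale of that rate, consider a hypothetical data center drawing a steady 20 MW (the facility size is an illustrative assumption, not a figure from the sources above):

    \[
    20\ \text{MW} \times 24\ \text{h/day} = 480\ \text{MWh/day}, \qquad 480\ \text{MWh/day} \times 7{,}100\ \text{L/MWh} \approx 3.4\ \text{million L/day}.
    \]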

Demand for energy to power AI is rising. One study predicts that AI data centers will grow from about 3% of U.S. energy usage in 2023 to about 8% in 2030. However, AI also has the potential to benefit the environment. AI is a powerful tool in promoting energy transitions, with a 1% increase in AI development corresponding to a 0.0025% increase in energy transition, a 0.0018% decrease in ecological footprint, and a 0.0013% decrease in carbon emissions. Still, the scientific community and general public lack knowledge about the true environmental implications of AI. Senate Bill 3732, or the Artificial Intelligence Environmental Impacts Act of 2024, aims to fill this knowledge gap.

    The Bill

The Artificial Intelligence Environmental Impacts Act was introduced in February 2024 by Senator Ed Markey (D-MA). A House companion bill, H.R. 7197, was introduced simultaneously by Representative Anna Eshoo (D-CA). The bill has four main clauses that instruct the Environmental Protection Agency (EPA), the National Institute of Standards and Technology (NIST), the Secretary of Energy, and the Office of Science and Technology Policy to:

    1. Initiate a study on the environmental impacts of AI
    2. Convene a consortium of intellectuals and stakeholders to create recommendations on how to address the environmental impacts of AI
    3. Create a system for the voluntary reporting of the environmental impacts of AI
4. Report to Congress the findings of the consortium, describe the system of voluntary reporting, and make recommendations for legislative and administrative action

    This bill seeks to fill the gaps in existing research by commissioning comprehensive studies of both the negative and potential positive environmental impacts of artificial intelligence. It will also employ experts to guide lawmakers in creating effective future regulation of the AI industry. 

    Arguments in Favor

    Filling Gaps in Knowledge

    A key reason Data & Society, an NYC-based independent research institute, endorsed the bill was to fill existing gaps in research. They highlight the limited understanding of both the depth and scale of the impacts of AI on the environment as key areas that require more research. They also highlight the role of this proposed research initiative in determining how to limit the environmental impacts of AI. Tamara Kneese, a researcher for the organization, highlights that there is a lack of research that seeks to understand “the full spectrum of AI’s impacts,” which this bill would directly address. 

    Increasing Transparency in the Industry

One of the arguments made by a co-sponsor of the legislation in the House of Representatives, Representative Beyer (D-VA), highlights how this bill would put the United States ahead in AI transparency work. Currently, the industry is not forthright about its environmental impact. For example, OpenAI has released no information about the process used to create and train ChatGPT’s newest model, which makes it impossible to estimate its environmental impact. The voluntary reporting system the bill creates encourages that information to be disclosed, allowing for tracking of emissions and increased transparency in the industry.

    Reducing Environmental Harm

    Another supporter of the bill, Greenpeace, views the bill as a way to protect against the environmental harm of new technology and address issues of environmental injustice. Erik Kojola, Greenpeace USA’s senior research specialist, says that this bill is “a first step in holding companies accountable and shedding light on a new technology and opaque industry”. Others, such as the Piedmont Environmental Council, view it as a step towards the implementation of well-informed regulation of AI. The bill’s fourth provision outlines that recommendations be made to Congress for the implementation of regulations of the industry, based on expert opinion and the research that the bill commissions. 

    Arguments Against

    Lacks Enforcement Mechanisms, Delayed Approach

Critics argue that the bill relies too heavily on industry compliance by primarily using voluntary emissions reporting. In essence, nothing in the bill’s text can force companies to actually report their emissions. There is also the argument that calling for more research only serves to delay concrete action on climate change. The bill itself does little to stop pollution and the depletion of freshwater resources, and instead delays any action or regulation until detailed research can be conducted and further recommendations can be made.

    Ignores AI’s Potential to Help the Environment

    Other critics argue that AI is constantly becoming more efficient and government intervention may hinder that. According to the World Economic Forum, AI is able to both optimize its own energy consumption as well as contribute to facilitating energy transitions. Opponents of S.B. 3732 hold that research should focus on improving efficiency within the industry as opposed to tracking its output to inform regulations. 

    Top-down Approach Sidelines Industry Leaders and Efforts

Some opponents also critique the bill’s heavy emphasis on research and information gathering. Critics argue that S.B. 3732 does little to create accountability within the industry and does not integrate existing measures to increase efficiency. They point to examples showing that AI itself is being used to create informed climate change policy by analyzing climate impacts on poor communities and generating solutions. Critics argue that the bill largely ignores these efforts, as well as input from industry leaders who say federal funds should be spent optimizing AI rather than regulating it.

    Updates and Future Outlook

    While S.B. 3732 and its House companion bill were referred to several subcommittees for review, neither made it to the floor for a vote before the end of the 118th Congress and thus will need to be re-introduced in order to be passed in the future. Should the bill be passed into law, the feasibility of its implementation is uncertain given major funding cuts to key stakeholders such as the EPA under the current administration. Without proper government funding to conduct the research that the bill outlines, the efficacy of this research is likely to be weakened. 

In addition, President Trump signed an executive order in January 2025, “Removing Barriers to American Leadership in Artificial Intelligence,” which calls for departments and agencies to revise or rescind all policies and other actions taken under the Biden administration that are inconsistent with “enhancing America’s leadership in AI.” Beyond taking an anti-regulation stance on AI, this executive order is the first step in a rapid proliferation of AI data centers that are to be fueled with energy from natural gas and coal. Given this climate, S.B. 3732 and similar bills face an uncertain future in the current Congress.

    Conclusion

    S.B. 3732 responds to the knowledge gap on AI’s environmental impacts by commissioning studies and encouraging reporting of AI-related energy benefits and drawbacks. Supporters of the bill view it as a crucial intervention to fill said information gaps, increase transparency, and address environmental harms through policy recommendations. Some opponents of the bill critique it as a stalling tactic for addressing climate change, while others contend the bill simply looks in the wrong place, focusing on AI industry compliance and existing impacts instead of encouraging innovation in the sector.

  • Pros and Cons of California SB-1047: The AI Regulation Debate

    Background

With the recent emergence of ChatGPT, artificial intelligence (AI) has transformed from an obscure mechanism into a widely used tool in day-to-day life. Around 77% of devices integrate some form of AI in voice assistants, smart speakers, chatbots, or customized recommendations. Still, while at least half of Americans are aware of AI’s presence in their daily lives, many are unable to pinpoint how exactly it is used. For some, the rapid growth of AI has created skepticism and concern. Between 2021 and 2023, the proportion of Americans who expressed concern about AI increased from 37% to 52%. By 2023, only 10% of Americans were more excited than concerned about AI applications in their day-to-day lives. Today, legislators at the federal and state levels are grappling with the benefits and drawbacks of regulating AI use and development.

    California’s SB-1047: An Introduction

    One of the key players in AI development is the state of California, which houses 35 of the 50 most prominent AI companies in the world. Two cities in California, San Francisco and San Jose, account for 25% of all AI patents, conference papers, and companies worldwide. California has responded to the growing debate on AI use through legislative and governmental channels. In 2023, Governor Gavin Newsom signed an executive order establishing initiatives to study the benefits and drawbacks of the AI industry, train government employees on AI skills, and work with legislators to adapt policies for responsible AI development. 

One such policy that gained attention is SB-1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill passed both chambers of the state legislature but was vetoed by Governor Newsom in September 2024. Introduced by State Senator Scott Wiener of San Francisco, SB-1047 aimed to establish safeguards in the development of large-scale AI models. Specifically, the bill applied to cutting-edge AI models that use a high level of computing power or cost more than $100 million to train. Its key provisions included:

    • Cybersecurity protections: Requires developers to take reasonable cybersecurity precautions to prevent unauthorized access to or unintended use of the AI model
    • Pre-release assessment: Requires developers to thoroughly test their AI model for potential critical harm before publicly releasing it. Establishes an annual third-party audit for all developers
    • “Kill switch”: Requires developers to create a capacity to “promptly enact a full shutdown” of the AI program in the case it risks damage to critical infrastructure
    • Safety protocol: Requires developers to create a written safety and security protocol, assign a senior professional to implement it, publish a redacted version, and send an unredacted version to the U.S. Attorney General upon request
    • Whistleblower protections: Prohibits developers from retaliating against employees who report violations of safety protocol internally or to government officials
    • CalCompute: Establishes a publicly-owned and -operated cloud computing infrastructure to “expand access to computational resources” for researchers and startups

    Pros of SB-1047

One of the main arguments in favor of SB-1047 was that the bill encouraged responsible innovation. Proponents of the bill emphasized that it aligned with federal policy in targeting large-scale systems with considerable computing power, which pose the highest risk of harm due to their cutting-edge nature. They argued that the bill’s holistic approach to regulation, including preventative standards like independent audits and response protocols like the “kill switch” provision, makes it difficult for developers to simply check a box stating they do not condone illegal use of their AI model.

    Proponents also applauded the bill’s protections for whistleblowers at companies that develop advanced AI models. Given the lack of laws on AI development, general whistleblower protections that safeguard the reporting of illegal acts leave a gap of vulnerability for AI workers whose products are largely unregulated. Supporters say SB-1047 would have filled this gap by allowing employees to report potentially dangerous AI models directly to government officials without retaliation. In September 2024, over 100 current and former employees of major AI companies – many of which publicly advocated against the bill – sent a letter to Governor Newsom in support of the legislation’s protections. 

    Other supporters were enthusiastic about the bill’s establishment of CalCompute, a cloud computing infrastructure completely owned and operated by the public sector. Advocacy group Economic Security California praised CalCompute as a necessary intervention to disrupt the dominance of a “handful of corporate actors” in the AI sector. Other advocates emphasized that CalCompute would complement, rather than replace, corporations in providing supercomputing infrastructure. They argued that the initiative would expand access to AI innovation and encourage AI development for public good. 

Another key argument in favor of SB-1047 was that the bill would have created a necessary blueprint for AI regulation, inspiring other states and even the federal government to implement similar protections. By signing the bill into law, proponents argue, California would have become the “first jurisdiction with a comprehensive framework for governing advanced AI systems”. Countries around the world, including Brazil, Chile, and Canada, are looking at bills like SB-1047 to find ways to regulate AI innovation as its applications continue to expand.

    Cons of SB-1047

    SB-1047 received criticism from multiple angles. While some labeled the bill an unnecessary roadblock to innovation, others argued for even stronger regulations.

    On one hand, the bill’s large scope was criticized for focusing too heavily on theoretical dangers of AI, hindering innovation that might lead to beneficial advancements. Opponents contended that some of the language in the bill introduced hypothetical scenarios, such as the creation and use of weapons of mass destruction by AI, with no regard to their low plausibility. Major companies like Google, Meta, and OpenAI voiced opposition to the bill, warning that the heavy regulations would stifle productivity and push engineers to leave the state. 

Others criticized the bill for its potential impacts on academia and smaller startups. Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute, argued that the regulations would put a damper on academic and public-sector AI research. Li also stated that the bill would “shackle open source development” by reducing the amount of publicly available code for new entrepreneurs to build on – a fear that was echoed by Representative Nancy Pelosi (D-CA).

    On the other hand, some believe the bill did not go far enough in regulating cutting-edge AI. These critics pointed to provisions that exempt developers from liability if certain protocols are followed, which raised questions for them about the bill’s ability to hold developers accountable. They also criticized amendments that reduced or completely eliminated certain enforcement mechanisms such as criminal liability for perjury, stating such changes catered to the interests of large tech corporations. Critics argued that the bill’s vague definitions of “unreasonable risk” and “critical harm” leave ample room for developers to evade accountability. 

    Given the bill’s sweeping language in key areas, critics worried that it could either overregulate, or fail to regulate, AI effectively.

    Recent Developments

On February 27, 2025, SB-1047 sponsor Scott Wiener introduced a new piece of legislation on AI safety. The new bill, SB-53, was created with a similar intention of safeguarding AI development, but focuses specifically on the whistleblower protection and CalCompute provisions of the original bill.

While California continues to grapple with state-level regulations, the federal government has also taken steps to address AI. The Federal Communications Commission is using the Telephone Consumer Protection Act of 1991 to restrict AI-generated human voices. The Federal Trade Commission has warned against AI misuse, including discrimination, false claims, and using AI without understanding its risks. In 2024, the Office of Management and Budget issued AI guidelines for all federal agencies. Later that year, the White House formed an AI Council and the AI and Technology Talent Task Force. Although no federal legislation has been passed, these actions show a growing focus on AI regulation.

    Conclusion 

California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act aimed to regulate AI development through novel safeguards. While it was applauded by some as a necessary response to an ever-evolving technology, others believed its broad regulations would have stifled innovation and entrepreneurship. As AI’s use and applications continue to evolve, new policy solutions are likely to emerge at both the state and federal levels.

  • Understanding the AI in Healthcare Debate

    Background

    What is Artificial Intelligence?

Artificial intelligence, more commonly referred to as AI, encompasses many technologies that enable computers to simulate human intelligence and problem-solving abilities. AI includes machine learning, which allows computers to imitate human learning, and deep learning, a subset of machine learning that simulates the decision-making processes of the human brain. Together, these algorithms power most of the AI in our daily lives, such as ChatGPT, self-driving vehicles, GPS navigation, and more.

    Introduction

Due to the rapid and successful development of AI technology, its use is growing across many sectors, including healthcare. According to a recent Morgan Stanley report, 94 percent of surveyed healthcare companies use AI in some capacity. In addition, a MarketsandMarkets study valued the global AI healthcare market at $20.9 billion for 2024 and predicted the value to surpass $148 billion by 2029. This high projected value can be attributed to AI’s increasing use across hospitals, medical research, and medical companies. Hospitals currently use AI to predict disease risk in patients, summarize symptoms for potential diagnoses, power chatbots, and streamline patient check-ins.
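
    Those two market estimates imply rapid compounding. As a back-of-envelope check (derived here, not reported by MarketsandMarkets), growth from $20.9 billion in 2024 to $148 billion in 2029 corresponds to a compound annual growth rate of roughly 48%:

    \[
    \text{CAGR} = \left(\frac{148}{20.9}\right)^{1/5} - 1 \approx 0.48.
    \]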

The increased use of AI in healthcare and other sectors has prompted policymakers to recommend global standards for AI implementation. UNESCO published the first global standards for AI ethics in November 2021, and the Biden-Harris Administration announced an executive order in October 2023 on safe AI use and development. Following these recommendations, the Department of Health and Human Services published a regulation titled the HTI-1 Final Rule, which includes requirements, standards, and certifications for AI use in healthcare settings. The FDA also expanded its oversight of medical devices that incorporate AI, approving 692 AI-enabled devices in 2023. While the current applications of AI in the health industry seem promising, the debate over the extent of its use remains a contentious topic for patients and providers.

    Arguments in Favor of AI In Healthcare

Those in favor of AI in healthcare cite its usefulness in diagnosing patients and streamlining patient interactions with the healthcare system. They point to evidence showing that AI is valuable for identifying patterns in complex health data to profile diseases. In a study evaluating the diagnostic accuracy of AI in primary care for over 100,000 patients, researchers found an overall 84.2 percent agreement rate between the physician’s diagnosis and the AI’s.

In addition, proponents argue that AI will reduce the work burden on physicians and administrators. According to a survey by the American Medical Association, two-thirds of the more than 1,000 physicians surveyed identified advantages to using AI, such as reductions in documentation time. Moreover, a study published in Health Informatics found that using AI to generate draft replies to patient messages reduced burnout and burden scores for physicians. Supporters claim that AI can improve the patient experience as well, reducing waiting times for appointments and assisting in appointment scheduling.

    Proponents also argue that using AI could significantly combat mounting medical and health insurance costs. According to a 2024 poll, around half of surveyed U.S. adults said they struggled to afford healthcare, and one in four said they put off necessary care due to the cost. Supporters hold that AI may be a solution, citing one study that found that AI’s efficiency in diagnosis and treatment lowered healthcare costs compared to traditional methods. Moreover, researchers estimate that the expansion of AI in healthcare could lead to savings of up to $360 billion in domestic healthcare spending. For example, AI could be used to save $150 billion annually by automating about 45 percent of administrative tasks and $200 billion in insurance payouts by detecting fraud. 

    Arguments Against AI in Healthcare

Opponents caution against scaling up AI’s role in healthcare because of the risks associated with algorithmic bias and data privacy. Algorithmic bias, or discriminatory practices taken up by AI from unrepresentative data, is a well-known flaw that critics say is too risky to integrate into already-inequitable healthcare settings. For example, when trained with existing healthcare data such as medical records, AI algorithms tended to incorrectly evaluate health needs and disease risks in Black patients compared to White patients. One study argues that this bias in AI medical applications will worsen existing health inequities by underestimating care needs in populations of color. For instance, the study found that an AI system designed to predict breast cancer risk may incorrectly classify Black patients as “low risk”. Since clinical trial data in the U.S. still severely underrepresents people of color, critics argue that algorithmic bias will remain a dangerous feature of healthcare AI systems in the future.

Those against AI use in healthcare also cite concerns with data privacy and consumer trust. They highlight that as AI use expands, more corporations, clinics, and public bodies will have access to medical records. One review explained that recent partnerships between healthcare settings and private AI corporations have resulted in concerns about the control and use of patient data. Moreover, opponents argue that the general public is significantly less likely to trust private tech companies with their health data than physicians, which may lead to distrust of healthcare settings that partner with tech companies to integrate AI. Another issue critics emphasize is the risk of data breaches. Even when patient data is anonymized, new algorithms are capable of re-identifying patients. If data security is left to private AI companies that may not have experience protecting such large quantities of patient data against sophisticated attacks, opponents claim the risk of large-scale data leaks may increase.

    Conclusion

    The rise of AI in healthcare has prompted debates on diverse topics ranging from healthcare costs to work burden to data privacy. Proponents highlight AI’s potential to enhance diagnostic accuracy, reduce administrative burdens on healthcare professionals, and lower costs. Conversely, opponents express concerns about algorithmic bias exacerbating health disparities and data breaches leaking patient information. As the debate continues, the future of AI in healthcare will hinge on addressing these diverse perspectives and ensuring that the technology is developed responsibly.