Tag: data privacy

  • Perspectives on the California Privacy Rights Act: America’s Strictest Data Privacy Law


    Background and Key Provisions

    The California Privacy Rights Act (CPRA), also known as Proposition 24, is a recently enacted law aimed at strengthening corporate regulations on data collection and processing in California. It acts as an addendum to the California Consumer Privacy Act (CCPA), a voter-initiated measure designed to enhance oversight of corporate data practices. The CPRA seeks to increase public trust in corporations and improve transparency regarding targeted advertising and cookie usage. Cookies are small files containing user information that websites create and store on users’ devices to tailor their website experience. The CPRA aims to align California’s data privacy practices with the General Data Protection Regulation (GDPR), a European Union data privacy law regarded as the most comprehensive in the world. 
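    The mechanics behind this are simple: a server sends a `Set-Cookie` header, and the browser stores the value and returns it on later requests to the same site. As an illustrative sketch (the cookie name and value here are hypothetical, not drawn from the law), Python's standard `http.cookies` module can show the shape of such a header:

```python
from http.cookies import SimpleCookie

# Hypothetical cookie a site might set to remember a display preference
cookie = SimpleCookie()
cookie["user_pref"] = "dark_mode"
cookie["user_pref"]["max-age"] = 3600  # expire after one hour

# The header a server would send; the browser stores it and echoes
# "Cookie: user_pref=dark_mode" back on subsequent requests.
header = cookie.output()
print(header)
```

    Regulations like the CPRA and GDPR govern when a site may set such identifiers and what it may do with the data they link together.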

    The CPRA was placed on the ballot as a voter initiative for the November 2020 general election. It passed with the support of 56.2% of voters, but did not go into effect until January 1, 2023. The law builds on the preexisting CCPA’s protections for user data through the following key provisions:

    • Establishes the California Privacy Protection Agency (CPPA), a government agency responsible for investigating violations, imposing fines, and educating the public on digital privacy rights.
    • Clarifies the CCPA’s definitions of personal data, creating specific categories for financial, biometric, and health data. Adds a new category of sensitive personal information, which is regulated more heavily than ordinary personal information.
    • Implements privacy protections for minors. Under the CPRA, companies must request permission before buying or selling minors’ data, and can be fined for the intentional or unintentional misuse of minors’ data. Minors ages 13 to 16 must explicitly opt into data sharing, while minors ages 16 through 18 can opt out of data sharing.
    • Expands consumer rights by prohibiting companies from charging fees or refusing services to users who opt out of data sharing. Building on the CCPA’s universal right to opt out of data sharing, the CPRA gives consumers a right to correct or limit the use of the data they share. Consumers can also sue companies that violate the CPRA, even if their personal data was not involved in a security breach. 
    • Modifies the CCPA’s definition of a covered business to exclude most small businesses and include any business that generates significant income from the sale of user data. 

    Perspectives on CPRA Data Collection Regulations

    One of the most contentious aspects of the CPRA is the regulation of personal data collection. Supporters contend that increased regulation will enhance consumer trust by preventing corporations from over-collecting and misusing personal data. Many California voters worry that businesses are gathering and selling personal information without consumers’ knowledge. Whether or not these fears are justified, they have driven strong public support for stricter data processing guidelines under both the CCPA and CPRA. Additionally, supporters of the CPRA argue that its impact on corporate data will be minimal, given that studies suggest less than 1% of Californians take advantage of opt-out options for data sharing.

    Opponents argue that restricting data collection could lead to inaccuracies if a large number of consumers choose to opt out. Without access to a broad dataset, companies may face higher costs to clean and verify the data they collect. Currently, many businesses rely on cookies and tracking technologies to analyze consumer behavior. If these methods become less effective, companies may need to invest in alternative, more expensive market research techniques or expand their workforce to ensure data accuracy.

    The opt-out mechanism has been a focal point of debate. Supporters view it as a balanced compromise, allowing Californians to protect their personal information without significantly disrupting corporate data operations. However, some argue that an opt-in model—requiring companies to obtain explicit consent before collecting data—would provide stronger privacy protections. Critics believe that many consumers simply accept default data collection policies because opting out can be confusing or time-consuming, ultimately limiting the effectiveness of the CPRA’s protections.

    Financial Considerations

    Beyond concerns about data collection, the financial impact of the CPRA has also been widely debated. While the CPRA exempts small businesses from its regulations, larger businesses had already invested heavily in CCPA compliance and were reluctant to incur additional costs to meet new, potentially stricter regulations under the CPRA. Additionally, implementing the CPRA was estimated to cost approximately $55 billion, largely in compliance expenses for businesses updating their data practices, along with state spending to establish a new regulatory agency. Critics argued that these funds could have been allocated more effectively, while supporters viewed the investment as essential for ensuring corporate accountability.

    Future Prospects for California’s Privacy Policy

    Since the CPRA is an addendum to the CCPA, California data privacy law remains open to further modifications. Future updates will likely center on three key areas: greater alignment with European Union standards, increased consumer education, and clearer guidelines on business-vendor responsibility.

    The General Data Protection Regulation (GDPR), the European Union’s comprehensive data privacy law, already shares similarities with the CPRA, particularly in restricting data collection and processing. However, a major distinction is that the GDPR applies to all companies operating within its jurisdiction, regardless of revenue. Additionally, the GDPR requires companies to obtain explicit opt-in consent for data collection, while the CPRA relies on an opt-out system. Some supporters of the CPRA believe it does not go far enough, and may consider advocating for GDPR-style opt-in requirements in the future. 

    Others argue that many individuals are unaware of how their data is collected, processed, and sold, no matter how many regulations the state implements. This lack of knowledge can lead to passive compliance rather than informed consent under laws like the CPRA. In the future, advocacy organizations may push for California privacy law to include stronger provisions for community education programs on data collection and privacy options.  

    Another area for potential reform is business-vendor responsibility. Currently, both website operators and third-party vendors are responsible for complying with CPRA regulations, which some argue leads to redundancy and confusion. If accountability is not clearly assigned, businesses may assume that the other party is handling compliance, increasing the risk of regulatory lapses. Clarifying these responsibilities might be a target for legislators or voters who are concerned about streamlining the enforcement of privacy law. 

    Conclusion

    With laws like the CCPA and the CPRA, California maintains the strongest data privacy protections in the nation. Some view these strict regulations as necessary safeguards against the misuse of consumer data that align the state with global privacy norms. Others see laws like the CPRA as excessive impositions on business resources. Still others argue that California law does not go far enough, advocating for a universal opt-in standard rather than an opt-out standard for data sharing. As debates around the CPRA continue, California is likely to provide a model for other state and federal data privacy regulations across the U.S.

  • Understanding Data Privacy Protections: ADPPA and APRA


    Data privacy is an ongoing concern for Americans. A national study from 2014 found that over 90% of respondents believed they had lost control of how their personal data is used by companies, and that 80% were concerned about government surveillance of online communications. Nearly a decade later, the vast majority of Americans remained concerned and confused about how companies and the government use their personal data. Tech companies like Google, Meta, and Microsoft often collect data about users’ activities, preferences, and interactions on social media platforms and websites. This data can include users’ demographic information, browsing history, location, device information, and social interactions. While a majority of data tracking happens within apps, companies can also employ hard-to-detect tracking techniques to follow individuals across a variety of apps, websites, and devices.  This can make it difficult for users to evade data tracking even when they decline data collection permissions. 

    Background: ADPPA and APRA

    To address longstanding concerns about data privacy, lawmakers proposed the American Data Privacy and Protection Act (ADPPA) in 2022. The ADPPA aimed to “limit the collection, processing, and transfer of personal data” of consumers while also “generally prohibit[ing] companies from transferring individuals’ personal data without their affirmative express consent.” Committee Chair Frank Pallone (D-NJ) and Ranking Member Cathy McMorris Rodgers (R-WA) sponsored the bill. The ADPPA passed out of the House Committee on Energy and Commerce with almost unanimous support, but was not brought up for a floor vote before the close of the 117th Congress. Two years later, lawmakers introduced the American Privacy Rights Act (APRA), a similar data privacy bill with more robust mechanisms for data control and privacy. To understand the APRA, it’s crucial to examine the various arguments for and against its predecessor, the ADPPA.

    Arguments in Favor of the ADPPA

    The push for the ADPPA reflected a need to create uniform privacy standards on a federal level. Many businesses and industry groups supported the ADPPA because it would have standardized data privacy policies across the United States through a preemption clause that overrides similar state laws. Supporters argued that this would eliminate the challenge and high cost of complying with a patchwork of data privacy laws across 20 states.

    Other proponents applauded the ADPPA’s efforts to maintain civil rights for marginalized users. In a letter to House Speaker Pelosi (D-CA), 48 civil rights, privacy, and consumer organizations highlighted the bill’s provisions to require technology companies to test their algorithms for bias and increase online protections for users under 17 years old. They argued that these provisions, along with limitations on data collection without user consent, would “provide long overdue and much needed protections for individuals and communities.”

    Criticisms of the ADPPA

    Although the ADPPA garnered strong bipartisan support in committee, it ultimately failed to pass. Some experts argued that the bill contained loopholes that companies could exploit to avoid compliance, including inadequate provisions addressing data brokers, a limited private right of action, and the potential for gaps in enforcement. The ADPPA’s private right of action clause, which allows individuals and groups to take civil action against tech companies in federal court for violating the ADPPA’s provisions, drew significant debate. While the original bill was rewritten to permit a private right of action beginning two years after the passage of the bill rather than four years, some lawmakers still feared that this two-year delay left a gap in enforcement. As for the data broker question, the ADPPA would have implemented more robust hurdles to the sale of user data, but did not ban the brokerage of sensitive data outright. Given the growing influence of the data brokerage industry, some argued that the ADPPA overlooked a critical component of the data privacy ecosystem by omitting strong regulations on data brokerage.

    The greatest criticism of the ADPPA concerned its preemption clause. While the ADPPA would have supplemented existing data privacy legislation in states like Virginia and Colorado, other states worried that its implementation would overrule stronger privacy laws at the state level. California lawmakers feared that the ADPPA’s preemption clause would nullify the stricter provisions in the California Consumer Privacy Act, weakening their state’s pre-existing privacy protections. While the ADPPA contained carve-outs for some parts of strict state-level laws and was rewritten to allow the California Privacy Protection Agency to enforce ADPPA compliance, these provisions did not satisfy lawmakers who worried that privacy protections for their constituents would still be rolled back. Ultimately, many cite opposition from California legislators as a major reason why the ADPPA failed to pass.

    The APRA: A New Framework

    In 2024, Senate Commerce Committee Chair Maria Cantwell (D-WA) and House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) introduced the American Privacy Rights Act (APRA) as a new federal framework for data privacy protection. Senator Cantwell had been an outspoken critic of the ADPPA’s private right of action. Both the ADPPA and the APRA contain similar provisions to establish a centralized procedure for users to opt out of data sharing and to require corporations to collect no more data than is necessary to meet specific needs. Additionally, both bills include a state law preemption clause, with differing exceptions for the state laws they override. The APRA’s preemption clause has received similar criticisms to the ADPPA’s, as California legislators fight for a “floor” for privacy rights rather than a “ceiling.”

    In contrast to the ADPPA, the APRA includes a much broader private right of action, allowing individuals to sue companies for violations immediately. This differs from the ADPPA’s two-year delay on private suits, which was intended to give businesses time to comply. The APRA also expands the ADPPA’s definition of covered organizations to include agencies that process the sensitive data of over 700,000 connected devices. Additionally, the APRA includes more specific provisions to protect data for users under the age of 17. The APRA is currently undergoing a similar process to its predecessor, and was most recently referred to the House Committee on Energy and Commerce. It will likely take months for decisions to be made regarding the bill’s passage out of committee, but the bill has garnered significant bipartisan support and shows promise in the current Congress.

    Conclusion

    In today’s digital age, more Americans than ever are concerned about the data they share and how it’s used. With evolving social media algorithms and corporate data collection strategies, bills like the ADPPA and the APRA provide potential routes to stronger protections for user privacy. The debate surrounding both bills centers on balancing the need for a uniform federal standard with the preservation of stronger state laws, and reconciling strict consumer protection with the likelihood of corporate compliance. As lawmakers consider these factors, data privacy bills like the APRA are likely to make progress in coming months.

  • Understanding the AI in Healthcare Debate


    Background

    What is Artificial Intelligence?

    Artificial intelligence, more commonly referred to as AI, encompasses many technologies that enable computers to simulate human intelligence and problem-solving abilities. AI includes machine learning, which allows computers to imitate human learning, and deep learning, a subset of machine learning that simulates the decision-making processes of the human brain. Together, these algorithms power much of the AI in our daily lives, such as ChatGPT, self-driving vehicles, and GPS navigation.

    Introduction

    Due to the rapid and successful development of AI technology, its use is growing across many sectors including healthcare. According to a recent Morgan Stanley report, 94 percent of surveyed healthcare companies use AI in some capacity. In addition, a MarketsandMarkets study valued the global AI healthcare market at $20.9 billion for 2024 and predicted the value to surpass $148 billion by 2029. The high projected value of AI can be attributed to the increasing use of AI across hospitals, medical research, and medical companies. Hospitals currently use AI to predict disease risk in patients, summarize symptoms for potential diagnoses, power chatbots, and streamline patient check-ins. 

    The increased use of AI in healthcare and other sectors has prompted policymakers to recommend global standards for AI implementation. UNESCO published the first global standards for AI ethics in November 2021, and the Biden-Harris Administration announced an executive order in October 2023 on safe AI use and development. Following these recommendations, the Department of Health and Human Services published a regulation titled the HTI-1 Final Rule, which includes requirements, standards, and certifications for AI use in healthcare settings. The FDA has also expanded its review of medical devices that incorporate AI, having approved 692 AI-enabled devices as of 2023. While the current applications of AI in the health industry seem promising, the debate over the extent of its use remains a contentious topic for patients and providers.

    Arguments in Favor of AI In Healthcare

    Those in favor of AI in healthcare cite its usefulness in diagnosing patients and streamlining patient interactions with the healthcare system. They point to evidence showing that AI is valuable for identifying patterns in complex health data to profile diseases. In a study evaluating the diagnostic accuracy of AI in primary care for over 100,000 patients, researchers found an overall 84.2 percent agreement rate between physician and AI diagnoses.

    In addition, proponents argue that AI will reduce the work burden on physicians and administrators. According to a survey by the American Medical Association, two-thirds of the more than 1,000 physicians surveyed identified advantages to using AI, such as reductions in documentation time. Moreover, a study published in Health Informatics found that using AI to generate draft replies to patient messages reduced burnout and burden scores for physicians. Supporters claim that AI can improve the patient experience as well, reducing waiting times for appointments and assisting in appointment scheduling.

    Proponents also argue that using AI could significantly combat mounting medical and health insurance costs. According to a 2024 poll, around half of surveyed U.S. adults said they struggled to afford healthcare, and one in four said they put off necessary care due to the cost. Supporters hold that AI may be a solution, citing one study that found that AI’s efficiency in diagnosis and treatment lowered healthcare costs compared to traditional methods. Moreover, researchers estimate that the expansion of AI in healthcare could lead to savings of up to $360 billion in domestic healthcare spending. For example, AI could be used to save $150 billion annually by automating about 45 percent of administrative tasks and $200 billion in insurance payouts by detecting fraud. 

    Arguments Against AI in Healthcare

    Opponents caution against scaling up AI’s role in healthcare because of the risks associated with algorithmic bias and data privacy. Algorithmic bias, the systematic errors an AI system learns from unrepresentative training data, is a well-known flaw that critics say makes AI too risky to integrate into already-inequitable healthcare settings. For example, when trained on existing healthcare data such as medical records, AI algorithms have tended to incorrectly evaluate health needs and disease risks in Black patients compared to White patients. One study argues that this bias in AI medical applications will worsen existing health inequities by underestimating care needs in populations of color. For instance, the study found that an AI system designed to predict breast cancer risk may incorrectly assign Black patients as “low risk.” Since clinical trial data in the U.S. still severely underrepresents people of color, critics argue that algorithmic bias will remain a dangerous feature of healthcare AI systems in the future.
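    A minimal sketch (with hypothetical numbers, not taken from any cited study) of one way such bias can arise: if a model ranks patients by past healthcare spending as a proxy for health need, a group with historically less access to care appears “healthier” at the same level of illness, and so gets deprioritized.

```python
# Hypothetical patients: identical illness severity, but the group "B"
# patient has had less access to care and therefore lower past spending.
patients = [
    {"group": "A", "illness_severity": 0.8, "past_spending": 8000},
    {"group": "B", "illness_severity": 0.8, "past_spending": 4000},
]

def proxy_risk_score(patient):
    # The flawed design choice: past spending stands in for health need.
    return patient["past_spending"]

# Ranking by the proxy places the group "B" patient last despite equal
# illness, so a program targeting "high-risk" patients would overlook them.
ranked = sorted(patients, key=proxy_risk_score, reverse=True)
print([p["group"] for p in ranked])  # → ['A', 'B']
```

    The bias here lives in the choice of training label, not in any single line of code, which is why critics argue it is hard to detect after deployment.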

    Those against AI use in healthcare also cite concerns with data privacy and consumer trust. They highlight that as AI use expands, more corporations, clinics, and public bodies will have access to medical records. One review explained that recent partnerships between healthcare settings and private AI corporations have resulted in concerns about the control and use of patient data. Moreover, opponents argue that the general public is significantly less likely to trust private tech companies with their health data than physicians, which may lead to distrust of healthcare settings that partner with tech companies to integrate AI. Another issue critics emphasize is the risk of data breaches. Even when patient data is anonymized, new algorithms are capable of re-identifying patients. If data security is left to private AI companies that may not have experience protecting such large quantities of patient data against sophisticated attacks, opponents claim the risk of large-scale data leaks may increase.
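    The re-identification risk can be illustrated with a toy linkage attack (all records below are invented): stripping names does not protect patients if quasi-identifiers such as ZIP code, birth year, and sex can be matched against a second dataset that does contain names.

```python
# Invented example records; no real data.
anonymized_health_records = [
    {"zip": "94110", "birth_year": 1985, "sex": "F", "diagnosis": "asthma"},
    {"zip": "10001", "birth_year": 1970, "sex": "M", "diagnosis": "diabetes"},
]
public_records = [  # e.g., a voter roll or social profile with names attached
    {"name": "Jane Doe", "zip": "94110", "birth_year": 1985, "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(anon_records, named_records):
    """Link 'anonymized' records to names via shared quasi-identifiers."""
    matches = []
    for anon in anon_records:
        for named in named_records:
            if all(anon[k] == named[k] for k in QUASI_IDENTIFIERS):
                matches.append((named["name"], anon["diagnosis"]))
    return matches

print(reidentify(anonymized_health_records, public_records))
```

    When the quasi-identifier combination is rare enough to be unique, the match recovers both the identity and the sensitive diagnosis, which is why critics treat anonymization alone as insufficient protection.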

    Conclusion

    The rise of AI in healthcare has prompted debates on diverse topics ranging from healthcare costs to work burden to data privacy. Proponents highlight AI’s potential to enhance diagnostic accuracy, reduce administrative burdens on healthcare professionals, and lower costs. Conversely, opponents express concerns about algorithmic bias exacerbating health disparities and data breaches leaking patient information. As the debate continues, the future of AI in healthcare will hinge on addressing these diverse perspectives and ensuring that the technology is developed responsibly.