Tag: privacy

  • Perspectives on the California Privacy Rights Act: America’s Strictest Data Privacy Law

    Background and Key Provisions

    The California Privacy Rights Act (CPRA), also known as Proposition 24, is a recently enacted law aimed at strengthening regulations on corporate data collection and processing in California. It acts as an addendum to the California Consumer Privacy Act (CCPA), a 2018 state law designed to enhance oversight of corporate data practices. The CPRA seeks to increase public trust in corporations and improve transparency regarding targeted advertising and cookie usage. Cookies are small files containing user information that websites create and store on users’ devices to tailor the browsing experience. The CPRA aims to align California’s data privacy practices with the General Data Protection Regulation (GDPR), a European Union data privacy law widely regarded as the most comprehensive in the world.
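    As a rough illustration of the mechanism being regulated, the sketch below (browser-side TypeScript with invented names, not drawn from the CPRA itself) shows how a site might store and later read a small personalization cookie:

    ```typescript
    // Hypothetical sketch: storing and reading a personalization cookie of
    // the kind the CPRA regulates. Runs in a browser; all names are invented.

    // Store a small piece of user information on the visitor's device.
    function setPreferenceCookie(name: string, value: string, days: number): void {
      const expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000).toUTCString();
      document.cookie = `${name}=${encodeURIComponent(value)}; expires=${expires}; path=/`;
    }

    // Read the value back on a later visit to tailor the page.
    function getCookie(name: string): string | null {
      const match = document.cookie
        .split("; ")
        .find((pair) => pair.startsWith(`${name}=`));
      return match ? decodeURIComponent(match.split("=")[1]) : null;
    }

    setPreferenceCookie("preferred_language", "en-US", 365);
    console.log(getCookie("preferred_language")); // "en-US"
    ```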

    The CPRA was introduced as a ballot initiative for the November 2020 general election. It passed with the support of 56.2% of voters but did not go into effect until January 1, 2023. The law builds on the preexisting CCPA’s protections for user data through the following key provisions:

    • Establishes the California Privacy Protection Agency (CPPA), a government agency responsible for investigating violations, imposing fines, and educating the public on digital privacy rights.
    • Clarifies CCPA definitions of personal data, creating specific categories for financial, biometric, and health data. Adds a new category of sensitive personal information, which is regulated more heavily than ordinary personal information. 
    • Implements privacy protections for minors. Under the CPRA, companies must obtain permission before selling or sharing the data of consumers under 16, and can be fined for the intentional or unintentional misuse of minors’ data. Minors ages 13 to 15 must explicitly opt into data sharing themselves, while children under 13 require a parent’s opt-in consent. 
    • Expands consumer rights by prohibiting companies from charging fees or refusing services to users who opt out of data sharing. Building on the CCPA’s universal right to opt out of data sharing, the CPRA gives consumers rights to correct their data and to limit the use of sensitive data they share. Consumers can also sue companies over a wider range of security breaches, including breaches that expose an email address together with a password or security question. 
    • Modifies the CCPA’s definition of a covered business to exclude more small businesses (raising the threshold from 50,000 to 100,000 consumers or households) and to include any business that derives at least half of its annual revenue from selling or sharing user data. 

    Perspectives on CPRA Data Collection Regulations

    One of the most contentious aspects of the CPRA is the regulation of personal data collection. Supporters contend that increased regulation will enhance consumer trust by preventing corporations from over-collecting and misusing personal data. Many California voters worry that businesses are gathering and selling personal information without consumers’ knowledge. Whether or not these fears are justified, they have driven strong public support for stricter data processing guidelines under both the CCPA and CPRA. Additionally, supporters of the CPRA argue that its impact on corporate data will be minimal, given that studies suggest less than 1% of Californians take advantage of opt-out options for data sharing.

    Opponents argue that restricting data collection could lead to inaccuracies if a large number of consumers choose to opt out. Without access to a broad dataset, companies may face higher costs to clean and verify the data they collect. Currently, many businesses rely on cookies and tracking technologies to analyze consumer behavior. If these methods become less effective, companies may need to invest in alternative, more expensive market research techniques or expand their workforce to ensure data accuracy.

    The opt-out mechanism has been a focal point of debate. Supporters view it as a balanced compromise, allowing Californians to protect their personal information without significantly disrupting corporate data operations. However, some argue that an opt-in model—requiring companies to obtain explicit consent before collecting data—would provide stronger privacy protections. Critics believe that many consumers simply accept default data collection policies because opting out can be confusing or time-consuming, ultimately limiting the effectiveness of the CPRA’s protections.
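    In practice, California regulators have directed businesses to honor browser-level opt-out preference signals such as the Global Privacy Control (GPC). The sketch below is a minimal, hypothetical illustration of how a site might check for such a signal; the navigator.globalPrivacyControl property is an emerging standard that not every browser implements, and the function names are invented:

    ```typescript
    // Hypothetical sketch: checking for the Global Privacy Control (GPC)
    // opt-out signal before loading trackers. The property may be undefined
    // in browsers that do not implement the proposal, so we cast defensively.
    function userHasOptedOut(): boolean {
      const gpc = (navigator as any).globalPrivacyControl;
      return gpc === true; // treat the signal as a "do not sell or share" request
    }

    if (userHasOptedOut()) {
      // Skip loading third-party advertising or analytics scripts.
      console.log("Opt-out signal detected: suppress data sharing.");
    } else {
      // Under an opt-in model, this branch would instead wait for
      // explicit consent before any collection began.
      console.log("No opt-out signal: default data collection applies.");
    }
    ```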

    Financial Considerations

    Beyond concerns about data collection, the financial impact of the CPRA has also been widely debated. While the CPRA exempts small businesses from its regulations, larger businesses had already invested heavily in CCPA compliance, with initial compliance costs estimated at roughly $55 billion statewide, and were reluctant to incur additional costs to meet new, potentially stricter regulations under the CPRA. Additionally, implementing the CPRA was estimated to cost the State of California roughly $10 million per year to operate the new regulatory agency and update state data practices. Critics argued that these funds could have been allocated more effectively, while supporters viewed the investment as essential for ensuring corporate accountability.

    Future Prospects for California’s Privacy Policy

    Since the CPRA is an addendum to the CCPA, California data privacy law remains open to further modifications. Future updates will likely center on three key areas: greater alignment with European Union standards, increased consumer education, and clearer guidelines on business-vendor responsibility.

    The General Data Protection Regulation (GDPR), the European Union’s comprehensive data privacy law, already shares similarities with the CPRA, particularly in restricting data collection and processing. However, a major distinction is that the GDPR applies to all companies operating within its jurisdiction, regardless of revenue. Additionally, the GDPR requires companies to obtain explicit opt-in consent for data collection, while the CPRA relies on an opt-out system. Some supporters of the CPRA believe it does not go far enough and may advocate for GDPR-style opt-in requirements in the future. 

    Others argue that many individuals are unaware of how their data is collected, processed, and sold, no matter how many regulations the state implements. This lack of knowledge can lead to passive compliance rather than informed consent under laws like the CPRA. In the future, advocacy organizations may push for California privacy law to include stronger provisions for community education programs on data collection and privacy options.  

    Another area for potential reform is business-vendor responsibility. Currently, both website operators and third-party vendors are responsible for complying with CPRA regulations, which some argue leads to redundancy and confusion. If accountability is not clearly assigned, businesses may assume that the other party is handling compliance, increasing the risk of regulatory lapses. Clarifying these responsibilities might be a target for legislators or voters who are concerned about streamlining the enforcement of privacy law. 

    Conclusion

    With laws like the CCPA and the CPRA, California maintains the strongest data privacy protections in the nation. Some view these strict regulations as necessary safeguards against the misuse of consumer data that align the state with global privacy norms. Others see laws like the CPRA as excessive impositions on business resources. Still others argue that California law does not go far enough, advocating for a universal opt-in standard rather than an opt-out standard for data sharing. As debates around the CPRA continue, California is likely to provide a model for other state and federal data privacy regulations across the U.S.

  • Pros and Cons of California SB-1047: The AI Regulation Debate

    Background

    With the recent emergence of ChatGPT, artificial intelligence (AI) has transformed from an obscure mechanism into a widely used tool in day-to-day life. Around 77% of devices integrate some form of AI in voice assistants, smart speakers, chatbots, or customized recommendations. Still, while at least half of Americans are aware of AI’s presence in their daily lives, many are unable to pinpoint exactly how it is used. For some, the rapid growth of AI has created skepticism and concern. Between 2021 and 2023, the proportion of Americans who expressed concern about AI increased from 37% to 52%. By 2023, only 10% of Americans were more excited than concerned about AI applications in their day-to-day lives. Today, legislators at the federal and state levels are grappling with the benefits and drawbacks of regulating AI use and development. 

    California’s SB-1047: An Introduction

    One of the key players in AI development is the state of California, which houses 35 of the 50 most prominent AI companies in the world. Two cities in California, San Francisco and San Jose, account for 25% of all AI patents, conference papers, and companies worldwide. California has responded to the growing debate on AI use through legislative and governmental channels. In 2023, Governor Gavin Newsom signed an executive order establishing initiatives to study the benefits and drawbacks of the AI industry, train government employees on AI skills, and work with legislators to adapt policies for responsible AI development. 

    One such policy that gained attention is SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Introduced by state senator Scott Wiener of San Francisco, SB-1047 aimed to establish safeguards in the development of large-scale AI models. The bill passed both chambers of the state legislature but was vetoed by Governor Newsom in September 2024. Specifically, the bill applied to cutting-edge AI models that use a high level of computing power or cost more than $100 million to train. Its key provisions included:

    • Cybersecurity protections: Requires developers to take reasonable cybersecurity precautions to prevent unauthorized access to or unintended use of the AI model
    • Pre-release assessment: Requires developers to thoroughly test their AI model for potential critical harm before publicly releasing it. Establishes an annual third-party audit for all developers
    • “Kill switch”: Requires developers to create the capacity to “promptly enact a full shutdown” of the AI program in case it risks damage to critical infrastructure (a minimal sketch of this idea appears after this list)
    • Safety protocol: Requires developers to create a written safety and security protocol, assign a senior professional to implement it, publish a redacted version, and send an unredacted version to the U.S. Attorney General upon request
    • Whistleblower protections: Prohibits developers from retaliating against employees who report violations of safety protocol internally or to government officials
    • CalCompute: Establishes a publicly-owned and -operated cloud computing infrastructure to “expand access to computational resources” for researchers and startups
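
    To make the “kill switch” requirement concrete, here is a minimal, entirely hypothetical sketch of how a developer might wire a full-shutdown control around a model-serving service. The bill prescribed no particular implementation, and every name below is invented for illustration:

    ```typescript
    // Hypothetical sketch of a "full shutdown" capability for a model service.
    // SB-1047 required the capability to shut down promptly; it did not
    // prescribe an implementation. All names here are invented.

    class ModelService {
      private shutdownRequested = false;

      // Called by an authorized operator to promptly enact a full shutdown.
      requestFullShutdown(reason: string): void {
        console.log(`Full shutdown requested: ${reason}`);
        this.shutdownRequested = true;
      }

      // Every request checks the shutdown flag before reaching the model.
      handleRequest(prompt: string): string {
        if (this.shutdownRequested) {
          throw new Error("Service is shut down; no inference is performed.");
        }
        return `model output for: ${prompt}`; // placeholder for real inference
      }
    }

    const service = new ModelService();
    console.log(service.handleRequest("hello"));
    service.requestFullShutdown("risk to critical infrastructure detected");
    // Any subsequent handleRequest call now fails fast instead of running the model.
    ```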

    Pros of SB-1047

    One of the main arguments in favor of SB-1047 was that the bill encouraged responsible innovation. Proponents emphasized that it aligned with federal policy in targeting large-scale systems with considerable computing power, which pose the highest risk of harm due to their cutting-edge nature. They argued that the bill’s holistic approach to regulation, including preventative standards like independent audits and response protocols like the “kill switch” provision, made it difficult for developers to simply check a box stating they do not condone illegal use of their AI model. 

    Proponents also applauded the bill’s protections for whistleblowers at companies that develop advanced AI models. Because general whistleblower protections cover only the reporting of illegal acts, and AI development remains largely unregulated, AI workers currently face a gap in protection. Supporters say SB-1047 would have filled this gap by allowing employees to report potentially dangerous AI models directly to government officials without fear of retaliation. In September 2024, over 100 current and former employees of major AI companies, many of which publicly advocated against the bill, sent a letter to Governor Newsom in support of the legislation’s protections. 

    Other supporters were enthusiastic about the bill’s establishment of CalCompute, a cloud computing infrastructure completely owned and operated by the public sector. Advocacy group Economic Security California praised CalCompute as a necessary intervention to disrupt the dominance of a “handful of corporate actors” in the AI sector. Other advocates emphasized that CalCompute would complement, rather than replace, corporations in providing supercomputing infrastructure. They argued that the initiative would expand access to AI innovation and encourage AI development for public good. 

    Another key argument in favor of SB-1047 was that the bill would have created a necessary blueprint for AI regulation, inspiring other states and even the federal government to implement similar protections. By signing the bill into law, proponents argued, California would have become the “first jurisdiction with a comprehensive framework for governing advanced AI systems.” Countries around the world, including Brazil, Chile, and Canada, are looking at bills like SB-1047 to find ways to regulate AI innovation as its applications continue to expand. 

    Cons of SB-1047

    SB-1047 received criticism from multiple angles. While some labeled the bill an unnecessary roadblock to innovation, others argued for even stronger regulations.

    On one hand, the bill’s broad scope was criticized for focusing too heavily on theoretical dangers of AI, hindering innovation that might lead to beneficial advancements. Opponents contended that some of the bill’s language invoked hypothetical scenarios, such as the creation and use of weapons of mass destruction by AI, without regard for how implausible they are. Major companies like Google, Meta, and OpenAI voiced opposition to the bill, warning that the heavy regulations would stifle productivity and push engineers to leave the state. 

    Others criticized the bill for its potential impacts on academia and smaller startups. Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute, argued that the regulations would put a damper on academic and public-sector AI research. Li also stated that the bill would “shackle open source development” by reducing the amount of publicly available code for new entrepreneurs to build on, a fear echoed by U.S. Representative Nancy Pelosi (D-CA).

    On the other hand, some believed the bill did not go far enough in regulating cutting-edge AI. These critics pointed to provisions that exempt developers from liability if certain protocols are followed, which raised questions about the bill’s ability to hold developers accountable. They also criticized amendments that reduced or completely eliminated certain enforcement mechanisms, such as criminal liability for perjury, stating that such changes catered to the interests of large tech corporations. Critics argued that the bill’s vague definitions of “unreasonable risk” and “critical harm” left ample room for developers to evade accountability. 

    Given the bill’s sweeping language in key areas, critics worried that it could either overregulate, or fail to regulate, AI effectively.

    Recent Developments

    On February 27, 2025, SB-1047 sponsor Scott Wiener introduced a new piece of legislation on AI safety. The new bill, SB-53, was created with a similar intention of safeguarding AI development, but focuses specifically on the whistleblower-protection and CalCompute provisions of the original bill.  

    While California continues to grapple with state-level regulations, the federal government has also taken steps to address AI. The Federal Communications Commission has used the 1991 Telephone Consumer Protection Act to restrict robocalls that use AI-generated human voices. The Federal Trade Commission has warned against AI misuse, including discrimination, false claims, and deploying AI without understanding its risks. In 2024, the Office of Management and Budget issued AI guidelines for all federal agencies. Later that year, the White House formed an AI Council and the AI and Technology Talent Task Force. Although no comprehensive federal legislation has been passed, these actions show a growing focus on AI regulation.

    Conclusion 

    California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act aimed to regulate AI development through novel safeguards. While some applauded it as a necessary response to an ever-evolving technology, others believed its broad regulations would have stifled innovation and entrepreneurship. As AI’s uses and applications continue to evolve, new policy solutions are likely to emerge at both the state and federal levels.