Tag: innovation

  • Pros and Cons of the Patent Eligibility Restoration Act of 2023

    Background Information

    Artificial intelligence (AI) is transforming the patent landscape, creating an influx of patent applications that mirrors a rise in modern-day innovation. However, U.S. law on which inventions are patentable lags behind. The Patent Eligibility Restoration Act of 2023 (PERA) aims to address this by reversing court rulings that have narrowed the scope of patent eligibility in emerging fields like AI. Ultimately, PERA stands at the intersection of technology, law, and political ideology, shaping the government’s role in protecting intellectual property (IP).

    Supreme Court decisions in Mayo v. Prometheus and Alice v. CLS are widely recognized as turning points in patent law. The cases, which restricted patent eligibility for abstract ideas and natural laws, marked the first narrowing of patent eligibility since the 1950s. PERA would “eliminate all judicial exceptions” to patent law in an attempt to remedy the confusion caused by the Mayo and Alice rulings. The bill was introduced in the Senate by Senators Thom Tillis (R-NC) and Chris Coons (D-DE) in 2023. Its House companion was introduced by Representatives Scott Peters (D-CA) and Kevin Kiley (R-CA) in 2024. While it received bipartisan support and a hearing in the Senate Intellectual Property Subcommittee, PERA ultimately died in committee at the end of the 118th Congress.

    PERA presents three key advantages: 

    1. Economic and Innovation Benefits: Boosts innovation and economic growth.
    2. International Competitiveness: Secures U.S. innovation against global competitors.
    3. Expansion of AI and Other Emerging Technologies: Clarifies AI patent eligibility to strengthen U.S. leadership on the global stage.

    In terms of economic and innovation benefits, the United States Patent and Trademark Office advocates for PERA as a catalyst for innovation. It specifically states that small to medium-sized firms “need clear intellectual property laws that incentivize innovation…[as it’s] critical for job creation, economic prosperity,” among other downstream effects. Furthermore, the American Intellectual Property Law Association (AIPLA) argues that PERA enacts clearer policies that will generate efficient product development and innovation, improving both industry standards and marginal utility for the consumer. The law firm Wilson Sonsini, in a nonpartisan legal analysis, finds that the bill would in fact reverse the stagnation of innovation. In a written testimony submitted to the Senate Subcommittee on Intellectual Property, law professor Adam Mossoff argued that PERA is essential for restoring American dominance in global innovation and patent sectors.

    PERA not only aims to improve U.S. innovation and investment, but also clarifies AI patentability to bolster America’s edge on the global stage. According to Republican Representative Kevin Kiley, the U.S. must expand patentability to compete with China, emphasizing PERA as a key to gaining a competitive edge through clearer patent laws. In an interview with Representative Kiley, the Center for Strategic and International Studies (CSIS) found that China’s approach to intellectual property poses a significant threat to American innovation and prosperity, strengthening the case for PERA. Senator Coons, a PERA co-sponsor, believes that the bill is necessary to help the U.S. catch up to Europe and China in the realm of AI patent law. 

    Other supporters argue that PERA’s expansion of patentability will open the door to advancement in domestic AI technology. A multinational law firm argues that expanding patent eligibility to AI models and business methods is crucial for the development of the U.S. technology industry. By broadening patentability, PERA can reduce the backlog of unsuccessful patents, sparing inventors from having to revalidate their claims. To reinforce this, the global law firm McDermott Will & Emery contends that PERA reduces ambiguity in patent eligibility by defining AI-related patents and human involvement in AI inventions.

    However, while PERA offers significant benefits for innovation, global competitiveness, and emerging technologies, it also raises concerns about potential drawbacks, including the risk of overly broad patents and unintended legal complexities. 

    PERA presents three key disadvantages:

    1. Overbroad Patentability: Risks limiting access to life-saving technologies.
    2. Hurting Small Inventors: Creates an ambiguous legal landscape that only large corporations can afford to navigate.
    3. Ethical and Global Concerns: Conflicts with global patent norms, risking international relations. 

    The NYU Journal of Intellectual Property and Entertainment Law highlights concerns that broadening patent eligibility could negatively impact the life sciences sector by creating barriers between consumers and newly-patented technologies. It argues that PERA undermines the balance between the rewards gained from innovation and the public’s access to products it depends on. Another critique from the Center for Innovation Promotion finds that PERA disrupts established legal standards, creating uncertainty in the patent system; its broad eligibility criteria could deepen that disruption rather than encourage progress.

    Other critics worry that PERA could negatively impact small businesses. U.S. Inventor, an inventor’s rights advocacy group, critiques the bill for creating a complex legal landscape that only large corporations can afford to navigate. It argues that PERA’s failure to define most of its crucial terms will only create more confusion, stating, “Investment into anything that risks falling into PERA’s undefined ineligibility exclusions will be hobbled.”

    PERA also raises ethical concerns, particularly in its treatment of genetic material, which may conflict with international patent standards. According to the NYU Journal of Intellectual Property and Entertainment Law, these discrepancies could lead to tensions between U.S. patent law and global practices, disrupting international collaborations and agreements. The BIOSECURE Report emphasizes PERA’s potential for significant harm to global patent standardization, as countries may struggle to reconcile U.S. policies with their own systems. These challenges could strain international relations, as nations may view PERA’s approach as a threat to their sovereignty and global patent harmony.

    The Status Quo and Future of PERA

    PERA was proposed at a time of heightened awareness and discussion of IP policy. With regard to national security, the House Foreign Affairs Committee report documents Chinese IP theft targeting U.S. companies, emphasizing China’s competitive threat in innovation. Similarly, Reuters reports on Tesla’s IP theft case, showcasing ongoing challenges in protecting American technology. These challenges set the stage for potential policy shifts under a Trump presidency. According to IP Watchdog, changes in IP law could influence public trust and perceptions of America’s stance on innovation and patent protection. However, as the IP law firm Wolf Greenfield notes, broader geopolitical implications, especially regarding competition with China in biotech and AI patents, may not fully align with Trump’s campaign vision. Additionally, Senate Judiciary reports highlight how bipartisan concerns over innovation could shape the future prospects of bills like PERA, with legislative gridlock potentially influencing amendments throughout the current presidential term and beyond. This gridlock could ultimately slow the passage of patent-related legislation.

    Conclusion

    While PERA aims to expand patent eligibility and boost economic growth, critics are wary of overbroad patents, harm to small inventors and businesses, and geopolitical conflicts. Striking a balance between innovation, equity, and competition remains essential to ensuring a patent system that fosters progress without restricting access.

  • Pros and Cons of California SB-1047: The AI Regulation Debate

    Background

    With the recent emergence of ChatGPT, artificial intelligence (AI) has transformed from an obscure mechanism to a widely-used tool in day-to-day life. Around 77% of devices integrate some form of AI in voice assistants, smart speakers, chatbots, or customized recommendations. Still, while at least half of Americans are aware of AI’s presence in their daily lives, many are unable to pinpoint how exactly it is used. For some, the rapid growth of AI has created skepticism and concern. Between 2021 and 2023, the proportion of Americans who expressed concern about AI increased from 37% to 52%. By 2023, only 10% of Americans were more excited than concerned about AI applications in their day-to-day lives. Today, legislators at the federal and state level are grappling with the benefits and drawbacks of regulating AI use and development. 

    California’s SB-1047: An Introduction

    One of the key players in AI development is the state of California, which houses 35 of the 50 most prominent AI companies in the world. Two cities in California, San Francisco and San Jose, account for 25% of all AI patents, conference papers, and companies worldwide. California has responded to the growing debate on AI use through legislative and governmental channels. In 2023, Governor Gavin Newsom signed an executive order establishing initiatives to study the benefits and drawbacks of the AI industry, train government employees on AI skills, and work with legislators to adapt policies for responsible AI development. 

    One such policy that gained attention is SB-1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill passed both chambers of the state legislature, but was vetoed by Governor Newsom in September 2024. Introduced by State Senator Scott Wiener of San Francisco, SB-1047 aimed to establish safeguards in the development of large-scale AI models. Specifically, the bill applied to cutting-edge AI models that use a high level of computing power or cost more than $100 million to train. Its key provisions included:

    • Cybersecurity protections: Requires developers to take reasonable cybersecurity precautions to prevent unauthorized access to or unintended use of the AI model
    • Pre-release assessment: Requires developers to thoroughly test their AI model for potential critical harm before publicly releasing it. Establishes an annual third-party audit for all developers
    • “Kill switch”: Requires developers to create a capacity to “promptly enact a full shutdown” of the AI program in the case it risks damage to critical infrastructure
    • Safety protocol: Requires developers to create a written safety and security protocol, assign a senior professional to implement it, publish a redacted version, and send an unredacted version to the U.S. Attorney General upon request
    • Whistleblower protections: Prohibits developers from retaliating against employees who report violations of safety protocol internally or to government officials
    • CalCompute: Establishes a publicly-owned and -operated cloud computing infrastructure to “expand access to computational resources” for researchers and startups

    Pros of SB-1047

    One of the main arguments in favor of SB-1047 was that the bill encouraged responsible innovation. Proponents of the bill emphasized that it aligned with federal policy in targeting large-scale systems with considerable computing power, which pose the highest risk of harm due to their cutting-edge nature. They argued that the bill’s holistic approach to regulation, including preventative standards like independent audits and response protocols like the “kill switch” provision, makes it difficult for developers to simply check a box stating they do not condone illegal use of their AI model.

    Proponents also applauded the bill’s protections for whistleblowers at companies that develop advanced AI models. Given the lack of laws on AI development, general whistleblower protections that safeguard the reporting of illegal acts leave a gap of vulnerability for AI workers whose products are largely unregulated. Supporters say SB-1047 would have filled this gap by allowing employees to report potentially dangerous AI models directly to government officials without retaliation. In September 2024, over 100 current and former employees of major AI companies – many of which publicly advocated against the bill – sent a letter to Governor Newsom in support of the legislation’s protections. 

    Other supporters were enthusiastic about the bill’s establishment of CalCompute, a cloud computing infrastructure completely owned and operated by the public sector. Advocacy group Economic Security California praised CalCompute as a necessary intervention to disrupt the dominance of a “handful of corporate actors” in the AI sector. Other advocates emphasized that CalCompute would complement, rather than replace, corporations in providing supercomputing infrastructure. They argued that the initiative would expand access to AI innovation and encourage AI development for public good. 

    Another key argument in favor of SB-1047 was that the bill would have created a necessary blueprint for AI regulation, inspiring other states and even the federal government to implement similar protections. By signing the bill into law, proponents argued, California would have become the “first jurisdiction with a comprehensive framework for governing advanced AI systems.” Countries around the world, including Brazil, Chile, and Canada, are looking at bills like SB-1047 to find ways to regulate AI innovation as its applications continue to expand.

    Cons of SB-1047

    SB-1047 received criticism from multiple angles. While some labeled the bill an unnecessary roadblock to innovation, others argued for even stronger regulations.

    On one hand, the bill’s large scope was criticized for focusing too heavily on theoretical dangers of AI, hindering innovation that might lead to beneficial advancements. Opponents contended that some of the language in the bill introduced hypothetical scenarios, such as the creation and use of weapons of mass destruction by AI, with no regard to their low plausibility. Major companies like Google, Meta, and OpenAI voiced opposition to the bill, warning that the heavy regulations would stifle productivity and push engineers to leave the state. 

    Others criticized the bill for its potential impacts on academia and smaller startups. Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute, argued that the regulations would put a damper on academic and public-sector AI research. Li also stated that the bill would “shackle open source development” by reducing the amount of publicly available code for new entrepreneurs to build off of – a fear echoed by U.S. Representative Nancy Pelosi (D-CA).

    On the other hand, some believed the bill did not go far enough in regulating cutting-edge AI. These critics pointed to provisions that exempted developers from liability if certain protocols were followed, raising questions about the bill’s ability to hold developers accountable. They also criticized amendments that reduced or eliminated certain enforcement mechanisms, such as criminal liability for perjury, stating that such changes catered to the interests of large tech corporations. Critics argued that the bill’s vague definitions of “unreasonable risk” and “critical harm” left ample room for developers to evade accountability.

    Given the bill’s sweeping language in key areas, critics worried that it could either overregulate, or fail to regulate, AI effectively.

    Recent Developments

    On February 27, 2025, SB-1047 sponsor Scott Wiener introduced a new piece of legislation on AI safety. The new bill, SB-53, was created with a similar intention of safeguarding AI development, but focuses specifically on the whistleblower protection and CalCompute provisions of the original bill.

    While California continues to grapple with state-level regulations, the federal government has also taken steps to address AI. The Federal Communications Commission is using the 1991 Telephone Consumer Protection Act to restrict AI-generated human voices in robocalls. The Federal Trade Commission has warned against AI misuse, including discrimination, false claims, and using AI without understanding its risks. In 2024, the Office of Management and Budget issued AI guidelines for all federal agencies. Later that year, the White House formed an AI Council and the AI and Technology Talent Task Force. Although no federal legislation has been passed, these actions show a growing focus on AI regulation.

    Conclusion 

    California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act aimed to regulate AI development through novel safeguards. While it was applauded by some as a necessary response to an ever-evolving technology, others believed its wide regulations would have stifled innovation and entrepreneurship. As AI’s use and applications continue to evolve, new policy solutions are likely to emerge at both a state and federal level in the future.