  • Pros and Cons of S.B. 3732: The Artificial Intelligence Environmental Impacts Act

    Introduction

    The rise of artificial intelligence (AI) has had significant impacts on the environment, including the electricity required to power the technology, the release of hundreds of tons of carbon emissions, and the depletion of freshwater resources for data center cooling. For example, AI data centers in the U.S. use about 7,100 liters of water per megawatt-hour of energy they consume.

    Demand for energy to power AI is rising. One study predicts that AI data centers will grow from about 3% of U.S. energy usage in 2023 to about 8% in 2030. However, AI also has the potential to benefit the environment. AI is a powerful tool in promoting energy transitions, with a 1% increase in AI development corresponding to a 0.0025% increase in energy transition, a 0.0018% decrease in ecological footprint, and a 0.0013% decrease in carbon emissions. Still, the scientific community and general public lack knowledge about the true environmental implications of AI. Senate Bill 3732, or the Artificial Intelligence Environmental Impacts Act of 2024, aims to fill this knowledge gap.

    The Bill

    The Artificial Intelligence Environmental Impacts Act was introduced in February 2024 by Senator Ed Markey (D-MA). A House companion bill, H.R. 7197, was introduced simultaneously by Representative Anna Eshoo (D-CA). The bill has four main clauses that instruct the Environmental Protection Agency (EPA), the National Institute of Standards and Technology (NIST), the Secretary of Energy, and the Office of Science and Technology Policy to:

    1. Initiate a study on the environmental impacts of AI
    2. Convene a consortium of experts and stakeholders to create recommendations on how to address the environmental impacts of AI
    3. Create a system for the voluntary reporting of the environmental impacts of AI
    4. Report to Congress the findings of the consortium, describe the system of voluntary reporting, and make recommendations for legislative and administrative action

    This bill seeks to fill the gaps in existing research by commissioning comprehensive studies of both the negative and potential positive environmental impacts of artificial intelligence. It will also employ experts to guide lawmakers in creating effective future regulation of the AI industry. 

    Arguments in Favor

    Filling Gaps in Knowledge

    A key reason Data & Society, an NYC-based independent research institute, endorsed the bill was its potential to fill existing gaps in research. The organization points to the limited understanding of both the depth and scale of AI’s environmental impacts as key areas requiring further study, and notes the role the proposed research initiative would play in determining how to limit those impacts. Tamara Kneese, a researcher for the organization, observes that there is a lack of research seeking to understand “the full spectrum of AI’s impacts,” a gap this bill would directly address.

    Increasing Transparency in the Industry

    Representative Don Beyer (D-VA), a co-sponsor of the House companion bill, argues that the legislation would put the United States ahead in AI transparency work. Currently, the industry is not forthright about its environmental impact. For example, OpenAI has released no information about the process used to create and train ChatGPT’s newest model, which makes it impossible to estimate its environmental impact. The voluntary reporting system the bill establishes would encourage companies to disclose this information, allowing for tracking of emissions and increased transparency in the industry.

    Reducing Environmental Harm

    Another supporter of the bill, Greenpeace, views the bill as a way to protect against the environmental harm of new technology and address issues of environmental injustice. Erik Kojola, Greenpeace USA’s senior research specialist, says that this bill is “a first step in holding companies accountable and shedding light on a new technology and opaque industry.” Others, such as the Piedmont Environmental Council, view it as a step toward well-informed regulation of AI. The bill’s fourth provision directs that recommendations for regulating the industry be made to Congress, based on expert opinion and the research the bill commissions.

    Arguments Against

    Lacks Enforcement Mechanisms, Delayed Approach

    Critics argue that the bill relies too heavily on industry compliance by primarily using voluntary emissions reporting. In essence, nothing in the bill’s provisions forces companies to actually report their emissions. There is also the argument that calling for more research only serves to delay concrete action on climate change. The bill itself does little to stop pollution and the depletion of freshwater resources, instead deferring any action or regulation until detailed research can be conducted and further recommendations can be made.

    Ignores AI’s Potential to Help the Environment

    Other critics argue that AI is constantly becoming more efficient and that government intervention may hinder that progress. According to the World Economic Forum, AI is able to both optimize its own energy consumption and help facilitate energy transitions. Opponents of S.B. 3732 hold that research should focus on improving efficiency within the industry as opposed to tracking its output to inform regulations.

    Top-down Approach Sidelines Industry Leaders and Efforts

    Some opponents also critique the bill’s heavy reliance on research and information gathering. Critics argue that S.B. 3732 does little to create accountability within the industry and does not integrate existing measures to increase efficiency. They point to examples showing that AI itself is being used to create informed climate change policy by analyzing climate impacts on poor communities and generating solutions. Critics argue that the bill largely ignores these efforts, as well as input from industry leaders who say federal funds should be spent optimizing AI rather than regulating it.

    Updates and Future Outlook

    While S.B. 3732 and its House companion bill were referred to several subcommittees for review, neither made it to the floor for a vote before the end of the 118th Congress, so the legislation will need to be reintroduced to be considered in the future. Should the bill be passed into law, the feasibility of its implementation is uncertain given major funding cuts to key stakeholders such as the EPA under the current administration. Without adequate government funding to conduct the research that the bill outlines, the bill’s efficacy is likely to be weakened.

    In addition, President Trump signed an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence” in January 2025, which calls for departments and agencies to revise or rescind all policies and other actions taken under the Biden administration that are inconsistent with “enhancing America’s leadership in AI.” Beyond taking an anti-regulation stance on AI, this executive order is the first step in a rapid proliferation of AI data centers that are to be fueled with energy from natural gas and coal. Given this climate, S.B. 3732 and similar bills face an uncertain future in the current Congress.

    Conclusion

    S.B. 3732 responds to the knowledge gap on AI’s environmental impacts by commissioning studies and encouraging reporting of AI-related energy benefits and drawbacks. Supporters of the bill view it as a crucial intervention to fill said information gaps, increase transparency, and address environmental harms through policy recommendations. Some opponents of the bill critique it as a stalling tactic for addressing climate change, while others contend the bill simply looks in the wrong place, focusing on AI industry compliance and existing impacts instead of encouraging innovation in the sector.

  • Pros and Cons of California SB-1047: The AI Regulation Debate

    Background

    With the recent emergence of ChatGPT, artificial intelligence (AI) has transformed from an obscure technology to a widely used tool in day-to-day life. Around 77% of devices integrate some form of AI in voice assistants, smart speakers, chatbots, or customized recommendations. Still, while at least half of Americans are aware of AI’s presence in their daily lives, many are unable to pinpoint how exactly it is used. For some, the rapid growth of AI has created skepticism and concern. Between 2021 and 2023, the proportion of Americans who expressed concern about AI increased from 37% to 52%. By 2023, only 10% of Americans were more excited than concerned about AI applications in their day-to-day lives. Today, legislators at the federal and state level are grappling with the benefits and drawbacks of regulating AI use and development.

    California’s SB-1047: An Introduction

    One of the key players in AI development is the state of California, which houses 35 of the 50 most prominent AI companies in the world. Two cities in California, San Francisco and San Jose, account for 25% of all AI patents, conference papers, and companies worldwide. California has responded to the growing debate on AI use through legislative and governmental channels. In 2023, Governor Gavin Newsom signed an executive order establishing initiatives to study the benefits and drawbacks of the AI industry, train government employees on AI skills, and work with legislators to adapt policies for responsible AI development. 

    One such policy that gained attention is SB-1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill passed both chambers of the state legislature, but was vetoed by Governor Newsom in September 2024. Introduced by state senator Scott Wiener of San Francisco, SB-1047 aimed to establish safeguards in the development of large-scale AI models. Specifically, the bill applied to cutting-edge AI models that use a high level of computing power or cost more than $100 million to train. Its key provisions included:

    • Cybersecurity protections: Requires developers to take reasonable cybersecurity precautions to prevent unauthorized access to or unintended use of the AI model
    • Pre-release assessment: Requires developers to thoroughly test their AI model for potential critical harm before publicly releasing it. Establishes an annual third-party audit for all developers
    • “Kill switch”: Requires developers to create a capacity to “promptly enact a full shutdown” of the AI program in the case it risks damage to critical infrastructure
    • Safety protocol: Requires developers to create a written safety and security protocol, assign a senior professional to implement it, publish a redacted version, and send an unredacted version to the U.S. Attorney General upon request
    • Whistleblower protections: Prohibits developers from retaliating against employees who report violations of safety protocol internally or to government officials
    • CalCompute: Establishes a publicly-owned and -operated cloud computing infrastructure to “expand access to computational resources” for researchers and startups

    Pros of SB-1047

    One of the main arguments in favor of SB-1047 was that the bill encouraged responsible innovation. Proponents of the bill emphasized that it aligned with federal policy in targeting large-scale systems with considerable computing power, which pose the highest risk of harm due to their cutting-edge nature. They argued that the bill’s holistic approach to regulation, including preventative standards like independent audits and response protocols like the “kill switch” provision, made it difficult for developers to simply check a box stating they did not condone illegal use of their AI models.

    Proponents also applauded the bill’s protections for whistleblowers at companies that develop advanced AI models. Given the lack of laws on AI development, general whistleblower protections that safeguard the reporting of illegal acts leave a gap of vulnerability for AI workers whose products are largely unregulated. Supporters say SB-1047 would have filled this gap by allowing employees to report potentially dangerous AI models directly to government officials without retaliation. In September 2024, over 100 current and former employees of major AI companies – many of which publicly advocated against the bill – sent a letter to Governor Newsom in support of the legislation’s protections. 

    Other supporters were enthusiastic about the bill’s establishment of CalCompute, a cloud computing infrastructure completely owned and operated by the public sector. Advocacy group Economic Security California praised CalCompute as a necessary intervention to disrupt the dominance of a “handful of corporate actors” in the AI sector. Other advocates emphasized that CalCompute would complement, rather than replace, corporations in providing supercomputing infrastructure. They argued that the initiative would expand access to AI innovation and encourage AI development for public good. 

    Another key argument in favor of SB-1047 was that the bill would have created a necessary blueprint for AI regulation, inspiring other states and even the federal government to implement similar protections. By signing the bill into law, proponents argued, California would have become the “first jurisdiction with a comprehensive framework for governing advanced AI systems.” Countries around the world, including Brazil, Chile, and Canada, are looking at bills like SB-1047 to find ways to regulate AI innovation as its applications continue to expand.

    Cons of SB-1047

    SB-1047 received criticism from multiple angles. While some labeled the bill an unnecessary roadblock to innovation, others argued for even stronger regulations.

    On one hand, the bill was criticized for its large scope and its heavy focus on theoretical dangers of AI, which opponents said would hinder innovation that might lead to beneficial advancements. Opponents contended that some of the language in the bill introduced hypothetical scenarios, such as the creation and use of weapons of mass destruction by AI, with no regard to their low plausibility. Major companies like Google, Meta, and OpenAI voiced opposition to the bill, warning that the heavy regulations would stifle productivity and push engineers to leave the state.

    Others criticized the bill for its potential impacts on academia and smaller startups. Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute, argued that the regulations would put a damper on academic and public-sector AI research. Li also stated that the bill would “shackle open source development” by reducing the amount of publicly available code for new entrepreneurs to build off of, a fear echoed by U.S. Representative Nancy Pelosi (D-CA).

    On the other hand, some believed the bill did not go far enough in regulating cutting-edge AI. These critics pointed to provisions that exempted developers from liability if certain protocols were followed, which raised questions for them about the bill’s ability to hold developers accountable. They also criticized amendments that reduced or completely eliminated certain enforcement mechanisms, such as criminal liability for perjury, stating that such changes catered to the interests of large tech corporations. Critics argued that the bill’s vague definitions of “unreasonable risk” and “critical harm” left ample room for developers to evade accountability.

    Given the bill’s sweeping language in key areas, critics worried that it could either overregulate AI or fail to regulate it effectively.

    Recent Developments

    On February 27, 2025, SB-1047 sponsor Scott Wiener introduced a new piece of legislation on AI safety. The new bill, SB-53, was created with a similar intention of safeguarding AI development, but focuses specifically on the whistleblower protection and CalCompute provisions of the original bill.

    While California continues to grapple with state-level regulations, the federal government has also taken steps to address AI. The Federal Communications Commission is using the Telephone Consumer Protection Act of 1991 to restrict AI-generated human voices. The Federal Trade Commission has warned against AI misuse, including discrimination, false claims, and using AI without understanding its risks. In 2024, the Office of Management and Budget issued AI guidelines for all federal agencies. Later that year, the White House formed an AI Council and the AI and Technology Talent Task Force. Although no federal legislation has been passed, these actions show a growing focus on AI regulation.

    Conclusion 

    California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act aimed to regulate AI development through novel safeguards. While it was applauded by some as a necessary response to an ever-evolving technology, others believed its broad regulations would have stifled innovation and entrepreneurship. As AI’s use and applications continue to evolve, new policy solutions are likely to emerge at both the state and federal levels.