Category: Technology

  • Understanding the FTC’s Data Breach Response Plan


    A data breach is defined as an incident involving the loss of control, compromise, or unauthorized disclosure of or access to Personally Identifiable Information (PII). PII includes Social Security Numbers, financial accounts, license and passport numbers, credit card numbers, and personal address information. Losing control over PII increases the risk of identity theft and fraudulent activity. A data breach differs from a cyberattack, which is any malicious attempt to breach the security of a computer system, network, or device with the intention of stealing data, disrupting operations, causing damage, and/or gaining unauthorized access to sensitive information. A data breach is a specific outcome of a cyberattack in which PII is exposed, accessed, or distributed by unauthorized entities.

    Data breaches are occurring with increasing frequency, especially against corporations. In today’s digital age, companies collect and store consumer information to gain an analytical business advantage, which increases the chance of data breaches. Corporations like Target and Yahoo suffered breaches that put millions of consumers at risk and cost millions of dollars. A data breach response plan is highly recommended for businesses because they have a responsibility to protect consumer data and a professional reputation to uphold. Preparedness is crucial to mitigate breach consequences like financial loss, reputation damage, legal action, operational downtime, and further data loss or compromise. The Federal Trade Commission (FTC) developed a data breach response plan to assist businesses. Its online document outlines recommended procedures from beginning to end.

    The Pros of the FTC’s Data Breach Response Plan: Responsibility, Consumer Transparency, Prevention, Communication, and Minimizing Damages

    The FTC published an online data breach response plan to aid businesses and explain key principles in navigating a breach. First, the FTC’s plan encourages businesses to be responsible in their reaction and communication. It discusses the importance of planning resources ahead of time and having them on hand to prepare for a potential breach. Additionally, the plan gives companies steps to respond promptly and efficiently, resolve the breach, and update consumers. Having a thorough response plan, including adequate communication, protects business reputations. By demonstrating competency and responsibility, the FTC posits, businesses can regain consumer trust and uphold a professional reputation. The FTC’s plan advocates for consumer transparency through communication. The FTC urges businesses to notify involved parties quickly in addition to adhering to local laws. The FTC offers advice on notification and on how to lawfully communicate with the public (who is allowed to know, what information can be released, etc.).

    Second, the FTC response plan discusses prevention tactics. The plan advises companies to look closely at their network and services for vulnerabilities and work with experts to seal them. Businesses should consider things like access privileges, network segmentation, and encryption measures.
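
    As a small illustration of the kind of encryption measure the plan has in mind, the sketch below encrypts a stored PII record with a symmetric key using Python’s cryptography package. This is a minimal sketch under assumed conditions, not code prescribed by the FTC; the record fields and key handling are placeholders for illustration only.

      # Minimal sketch of encrypting PII at rest (assumes the `cryptography` package is installed).
      from cryptography.fernet import Fernet

      # In practice the key would live in a secrets manager or HSM, never in source code.
      key = Fernet.generate_key()
      cipher = Fernet(key)

      # Hypothetical PII record; the field names are illustrative only.
      record = b"name=Jane Doe;ssn=000-00-0000;card=0000-0000-0000-0000"

      token = cipher.encrypt(record)    # ciphertext that is safe to store on disk
      restored = cipher.decrypt(token)  # plaintext recovered only with the key

      assert restored == record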

    The final benefits of having a data breach response plan are decreased costs and less business downtime. Having a regularly trained incident response team leads to significant cost savings, and preparedness minimizes the financial burden. According to an IBM study, organizations with an incident response team and a regularly tested plan saved an average of $2.66 million per breach compared to organizations with neither. Additionally, a data breach response plan minimizes business downtime, the period when a system is unavailable for use; breaches are a common cause. Companies can minimize downtime and recover quickly if they are prepared to tackle the breach.

    The Cons of the FTC’s Data Breach Response Plan: High Expectations, Overstepping Enforcement Measures, and Catering to a Specific Business Type

    First, some businesses may view the response plan as burdensome and be reluctant to engage with FTC guidelines. The FTC wants companies to invest time and resources into preparedness.

    Similarly, there are notification conflicts. The FTC does not legally mandate companies to release breach notifications, so there are different approaches and potential underreporting. Law enforcement and businesses have different objectives when a data breach occurs; companies want to minimize damage and losses while law enforcement wants to convict the parties responsible. Private companies are less inclined to report a breach because they fear reputational damage—they often prefer to mitigate and cover up before the public notices. However, critics and consumers want to be notified immediately and would prefer an enforced mandate.

    A final concern is determining who the FTC plan is best for. The FTC guide is useful for small-to-medium businesses because large companies likely already have sophisticated plans established. The FTC plan could be an area of conflict for large businesses as they try to fulfill FTC recommendations and their regional guidelines, which could create confusion and gaps. It is also important to note that the FTC plan does not replace a legitimate, individualized business plan. Smaller organizations in particular can take the advice, but they still have to put it into action. The FTC guide does not cover the potentially burdensome costs and time of creating an official plan; it is merely guidance.

    The FTC’s plan (and data breach response plans in general) will continue to develop. Potential reforms include:

    • Higher emphasis on prevention. Companies should have advanced threat detection and monitoring capabilities, like watching network traffic. For protection, companies should practice minimizing data storage.
    • More focus on incident readiness. Businesses should conduct simulations to test the effectiveness of their response plan, train employees, and identify weaknesses.
  • Pros and Cons of the Cybersecurity Information Sharing Act of 2015


    Background

    In February 2015, Anthem became the first major healthcare provider to experience a cyber attack when attackers stole 80 million records from Amerigroup and Blue Cross Blue Shield health plan users. In June 2015, hackers stole the personal information of over 20 million people from the Office of Personnel Management (OPM), which was the largest cyberattack on the U.S. government at the time. A month later, the hacking group Impact Team stole the user database of the adultery website Ashley Madison to blackmail its parent company Avid Life Media. The hacking group released the private information of its 37 million users as well as the website’s database of corporate emails. The sheer number of cyber incidents in 2015 brought cybersecurity to the forefront of domestic policy, leading to the Cybersecurity Information Sharing Act of 2015. This law changed the way the private and public sectors tackle cyber threats by prioritizing the sharing of cybersecurity information, and affected the federal government, private software companies, and the consumers who use their products. 

    Summary

    Congress passed the Cybersecurity Information Sharing Act into law as Title I of the Cybersecurity Act of 2015. The act establishes the Cybersecurity and Infrastructure Security Agency (CISA) as the central hub for the sharing of “defensive measures” and “cyber threat indicators” between the private and public sectors for a “cybersecurity purpose.” The act also defines key terms:

    • Cyber threat indicators: information necessary to identify “listed threats…[and] information on the ‘actual or potential harm caused by an incident, including a description of the information exfiltrated as a result of a particular cybersecurity threat’”
    • Defensive measures: something that “detects, prevents, or mitigates a known or suspected cybersecurity threat or security vulnerability.”
    • Cybersecurity purpose: the purpose of protecting an information system or information from a cybersecurity threat/security vulnerability 

    In the past, companies seldom shared valuable cybersecurity information due to concerns about violating numerous regulations. This law altered that situation. It provides a series of protections to encourage companies to voluntarily share information, including federal antitrust exemptions, immunity from federal/state disclosure laws (like open government and freedom of information laws), and a non-waiver of applicable protections for sharing materials. Additionally, under the law, the shared material is treated as commercial, proprietary, and financial information. Moreover, this act grants an ex parte communications waiver, which means that CISA sharing of cyber threat indicators and defensive measures with the federal government is not legally considered communication with a decision-making official, and therefore not bound to the same rules. 

    CISA, housed within the Department of Homeland Security (DHS), centralizes the sharing of this information. The main method is the Automated Indicator Sharing (AIS) Initiative (a sketch of the kind of indicator shared through AIS follows the list below). The DHS also specifies categories of information that AIS participants cannot share because they are not cybersecurity threat details, such as:

    • Protected health information (medical records, lab reports, etc.)
    • Education history
    • Human resource information (hiring decisions, performance, etc.)
    • Financial information (credit reports, bank statements, etc.)
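
    To make the idea of a shareable indicator concrete, the sketch below builds a minimal STIX-2-style cyber threat indicator, the data format AIS exchanges, as a plain Python dictionary. The identifier, hash value, and field values are hypothetical, and real submissions travel through DHS’s TAXII infrastructure rather than a local script.

      # Hypothetical sketch of a STIX-2-style cyber threat indicator (all values are placeholders).
      import json
      from datetime import datetime, timezone

      now = datetime.now(timezone.utc).isoformat()
      indicator = {
          "type": "indicator",
          "spec_version": "2.1",
          "id": "indicator--00000000-0000-0000-0000-000000000000",  # placeholder UUID
          "created": now,
          "valid_from": now,
          "name": "Malware sample observed in a phishing campaign",
          "pattern_type": "stix",
          # The pattern matches a file by a (hypothetical) SHA-256 hash.
          "pattern": "[file:hashes.'SHA-256' = 'aaaaaaaa...']",
      }

      # Serialized form a participant might transmit; personal data must be stripped before sharing.
      print(json.dumps(indicator, indent=2))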

    Arguments in Favor of and in Opposition to the Strategy

    Supporters claim that the policy is beneficial because it reduces liability for companies. Companies can freely share cyber threat indicators, such as malware samples, without worrying about being held liable for antitrust or disclosure law violations. This increases access to cyber threat information and defensive measures.

    Moreover, proponents of the bill argue that the act would provide greater cybersecurity. The increased information sharing would help companies to improve their cybersecurity, which leads to more secure products and greater consumer trust. Also, it reduces the cost of improving cybersecurity. It means that companies can still maximize their profit while not having to sacrifice the security of their products. 

    However, there are also several reservations about the law. Opponents of the bill claim that cyber threat indicators are relatively ineffective and that there is not much evidence that sharing cyber threat indicators would enhance Internet security. In fact, in the years that CISA has been active, some of the indicators were unusable or inaccurate. This is partially due to a lack of expertise and staff at CISA, and the struggle the organization has faced in effectively providing guidance and training to AIS participants. Additionally, others believe that the act is insufficient. As opposed to simply focusing on information sharing, they believe that the U.S. needs a stronger and more expansive cyber strategy to combat the numerous cyber threats it faces.

    Furthermore, many worry that the government’s data collection will expand beyond cyber threat indicators and defensive measures. They point to what they see as dangerously broad language that would allow the government to take much more information than needed to deter cyber threats. Others see it as a way for the government to circumvent search warrants and directly obtain personal information. However, proponents believe that these privacy concerns carry little weight: because companies must remove personally identifying data from the shared information, supporters believe there is no feasible opportunity for the collection of unnecessary information.

    Conclusion

    The Cybersecurity Information Sharing Act has been law for almost a decade and thus far has seen both successes and drawbacks. For example, by 2018, more than 5.4 million unclassified indicators had been shared with governmental and non-governmental entities, and AIS had more than 219 non-federal participants. On the other hand, as stated above, companies have discovered that some of the cyber threat indicators were unusable. AIS participants sometimes found that the shared indicators lacked background information that was vital to using them to deter potential cyber threats. In other cases, the indicators were simply inaccurate. Despite these mixed results, there is no doubt that this act has shaped, and will continue to shape, the way the public and private sectors respond to cyber threats.

  • Understanding the Government Face Surveillance Ban Debate


    Background

    In modern society, face recognition technology (FRT) is a part of many people’s daily lives. One common application of FRT that many are familiar with is using Face ID to unlock a smartphone. FRT captures unique facial measurements for each person, making it an important tool for confirming someone’s identity. In 1996, FRT gained attention from the U.S. government due to its effectiveness in verifying identity compared to fingerprints. That year, the U.S. Department of Defense invested in the FERET project, developing a large-scale face database. Since then, FRT has found its way into various government applications, with law enforcement being a primary user. This technology, driven by artificial intelligence (AI), enables police officers to compare facial images from patrols (or collected photos and videos) with existing public records such as mugshots, jail booking records, and driver’s licenses, as well as commercial databases developed by technology companies.
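
    As a rough sketch of the matching step described above, and not any agency’s actual system, the example below compares two hypothetical face “embeddings” (the numeric measurement vectors an FRT model extracts from images) using cosine similarity against an assumed decision threshold.

      # Illustrative sketch: comparing hypothetical face embeddings with cosine similarity.
      import numpy as np

      def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
          """Return the cosine of the angle between two embedding vectors."""
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      # Hypothetical 128-dimensional embeddings; real systems derive these from face images.
      rng = np.random.default_rng(0)
      probe = rng.normal(size=128)                              # face captured in the field
      gallery_entry = probe + rng.normal(scale=0.1, size=128)   # stored record of the same person

      THRESHOLD = 0.8  # assumed threshold; real systems tune this and add human review
      score = cosine_similarity(probe, gallery_entry)
      print(f"similarity={score:.3f}", "match" if score >= THRESHOLD else "no match")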

    According to the U.S. Government Accountability Office’s report, over 40 federal agencies employed FRT, with plans for further expansion into areas beyond law enforcement. For example, Customs and Border Protection uses FRT to cross-reference images of departing passengers at boarding gates, and of passengers awaiting entry, with their visa or passport application photos to assist with verifying identities for national security purposes. The Transportation Security Administration also employs FRT at domestic airports.

    Despite the growing potential uses of FRT, concerns regarding its inaccuracy, potential for abuse, and privacy violations led to movements aimed at banning government use of this technology. At the municipal level, following San Francisco’s lead, 16 other cities adopted bans on FRT for municipal uses between 2019 and 2021. In 2021, Virginia and Vermont banned FRT for law enforcement uses. Virginia later lifted the measure in 2022, citing public safety concerns.

    There remains a general lack of regulation and oversight at the federal level. In response to increasing public concern, major FRT developers including Amazon and Microsoft declared they would not sell their products to police until Congress passed federal legislation to regulate the technology. Currently, Congress is considering the Facial Recognition and Biometric Technology Moratorium Act of 2023. If passed, this bill would prohibit the use of facial recognition and biometric technologies by federal entities, and only Congress could lift the ban. It would also halt federal funding for developing FRT surveillance systems.

    The major topics in this field include the cost of inaccuracy on innocent individuals, suppression of speech, and online privacy concerns.

    The Cost of Inaccuracy

    Supporters of movements banning FRT argue that the risks of FRT’s inaccuracy outweigh its benefits when used by the government. False arrest cases, such as those of Ms. Woodruff, Mr. Williams, and five others, show how FRT’s inaccuracies disproportionately affect marginalized individuals. NIST’s 2019 report highlighted racial bias, showing that false positive matches for African-American and Asian faces were more common than for Caucasian faces, with African-American women particularly impacted. Research also found that commercial FRT programs showed higher rates of falsely identifying darker-skinned women compared to white male faces, as well as misidentifying non-cisgender individuals.

    In response, opponents of the ban contend that proper human oversight would resolve the inaccuracy issues over time. They argue against outright banning FRT, as it can be an effective tool for maintaining public safety. Law enforcement, in particular, could use FRT for various purposes including facilitating investigative leads, identifying victims, examining forensic evidence, and verifying individuals being released from prison. Law enforcement’s use of FRT has demonstrated successful outcomes, such as effectively identifying an armed robber, a rapist, and a mass shooter. According to Pew Research Center’s 2021 survey, a plurality of U.S. adults (46%) believe police use of FRT would be a great idea for society, while 27% disagree and 27% are unsure.

    Speech Suppression 

    Supporters of ban movements also argue that abusive uses of FRT by law enforcement for surveillance raise civil liberties issues. For instance, police could potentially use FRT to target activists, especially at police reform protests, causing chilling effects that threaten free speech. The fear of having their faces captured at protests by the police would deter people from expressing their political voices. The chilling effects could intensify with the increase in police use of FRT, especially with body cameras. Legal scholars have referred to FRT as “the most uniquely dangerous surveillance mechanism ever invented.”

    On the other hand, opponents of ban movements believe that banning FRT is not necessary to address potential abusive uses of the technology. They advocate for finding a compromise where law enforcement can use FRT with checks and balances. For example, Utah chose to limit law enforcement’s use of FRT with stringent approval requirements instead of banning the technology. Likewise, Massachusetts implemented a requirement for police to obtain a court order before comparing images with face photos and names in the databases of the Registry of Motor Vehicles, FBI, and state police. Also, with increasing crime rates and staffing shortages in law enforcement, cities and states have recently started to partially repeal their FRT bans.

    Online Privacy

    Proponents of the ban argue that developing the technology is in itself a serious invasion of online privacy. They believe that FRT should not be in the hands of a government whose duty is to protect the people. The process of training FRT AI models requires a massive number of face photos, many of which are available online. Building this technology has led FRT developers to scrape billions of people’s faces online without their knowledge or permission. Commentators argue that this practice is a loss of privacy for the affected individuals, as they lose control of sensitive information that could be used against them.

    However, some argue that banning privacy-invasive technologies may not be an effective solution. That is because the banning legislation “may quickly become irrelevant with the advent of a newer technology not covered by the law.” (See e.g., law review article at 396). To discourage FRT developers from secretly using people’s faces, passing strong biometrics privacy laws—like the Illinois Biometric Information Privacy Act (BIPA)—could be a more practical solution. Such laws would make it illegal to capture and use face prints and other biometrics without the subjects’ permission. Furthermore, copyright law—though seemingly unrelated—could help slow down unauthorized uses of face photos while waiting for effective FRT legislation to be passed.

    Conclusion 

    Both sides of the debate appear to agree on the need to regulate the government’s use of FRT. However, the disagreement lies in whether to ban or to restrict. Proponents of the bans worry that compromises would decrease the likelihood that a ban will ever be passed. Meanwhile, critics of the ban argue that the “ban or nothing” approach is responsible for the absence of the regulation needed to govern government use of FRT.

  • Pros and Cons of Biden’s National Cybersecurity Strategy


    Background

    Cyber attacks during the Biden-Harris administration pushed cybersecurity to the forefront of domestic policy. In 2021, Colonial Pipeline, a large oil pipeline that transports almost half of all fuel used on the East Coast, suffered a ransomware incident at the hands of the hacking group DarkSide. After stealing 100 gigabytes of data and threatening to release it, DarkSide extorted 75 bitcoins (valued at roughly $4.4 million at the time) from Colonial Pipeline. As recently as 2023, the China-sponsored hacking group Volt Typhoon has secretly targeted U.S. critical infrastructure sectors. Strong cybersecurity has become vital, and Biden’s National Cybersecurity Strategy reflects the administration’s attempt to combat increased cyber threats.

    Summary of the Strategy

    Biden’s National Cybersecurity Strategy consists of five pillars:

    1. Defend critical infrastructure
    2. Disrupt and dismantle threat actors
    3. Shape market forces to drive security and resilience
    4. Invest in a resilient future
    5. Forge international partnerships to pursue shared goals. 

    Pillar One is focused on defending U.S. critical infrastructure by increasing the number of cybersecurity regulations in critical sectors, enhancing the sharing of threat intelligence and other cybersecurity information between the public and private sectors, and modernizing federal networks. Pillar Two reflects the administration’s goal to disrupt cyber adversaries’ capabilities and address the numerous ransomware threats the U.S. has faced. Pillar Three, one of the key goals behind the strategy, aims to shift liability for software vulnerabilities to companies by holding them responsible for security flaws and breaches of their consumers’ data. Pillar Three also calls for the possible implementation of a federal cyber insurance backstop, in order to stabilize the market in the case of a cyber incident.

    Pillar Four plans to grow and strengthen the U.S. cybersecurity workforce by expanding the number of opportunities and apprenticeships available to prospective workers. It also focuses on investing in research and development in cybersecurity and on protecting the cloud-based technologies that companies are becoming increasingly reliant on. Pillar Five intends to strengthen partnerships with U.S. allies to deter cyber threats as well as secure global supply chains. 

    Arguments in Favor of and in Opposition to the Strategy

    Biden’s National Cybersecurity Strategy has sparked a discussion between those in favor of and in opposition to the strategy, and about what effective cybersecurity and cyber defense should look like. Proponents of the strategy claim that it increases company responsibility, which is necessary. The unregulated cyber market that has existed thus far has led to the development of numerous products that are not sufficiently prepared for cyber attacks. Because the strategy provides that software companies can be held liable, it will push them to put more effort into protecting their data. This will, in turn, reduce the risk of cybersecurity incidents. Opponents of the strategy, however, point to aspects of the policy like the possible cyber insurance backstop, which they deem too complex to provide. A cyber insurance backstop would mean that if a cyber insurance company were not able to cover a major cyber incident, the government would provide funds. The problem they see with this is that it is difficult to price cyber risk, and increased funding means higher taxes.

    Those who agree with the strategy also approve of its focus on the main actors behind cyber attacks and espionage. The strategy calls out authoritarian states that use cyberattacks against the U.S., like Russia, Iran, North Korea, and China. The general concern around China’s cyber espionage and use of cyber weapons has made this point especially popular. However, some dissenters believe that the strategy’s general focus on cyber defense is insufficient and that offense is the best defense. They argue that the U.S. should also focus on increasing its use of offensive cyber operations to reduce adversaries’ abilities. They think that the U.S. should publicize its cyber capabilities and willingness to use them to discourage state actors from attacking.

    Moreover, proponents of the strategy praise its prioritization of collaboration between the government, the private sector, and other countries. The strategy recognizes that the government cannot unilaterally solve this problem; it needs support from the private sector and other countries. The strategy also encourages collaboration with U.S. allies to promote cybersecurity cooperation in those regions. However, opponents of the strategy believe that many parts of it will not be feasible to implement. Though cybersecurity is a relatively nonpartisan issue, some policy sections will be tricky to push through, such as shifting liability to software vendors. Such a regulation could only be accomplished through congressional legislation, and it is difficult to say whether that will happen. It does not help that “software is still not a tangible product under the Uniform Commercial Code (UCC) in the US,” which makes it difficult to assign liability.

    Conclusion

    After being released only a few months ago, Biden’s National Cybersecurity Strategy has already started to shape the administration’s response to cyber threats. Recently, the administration submitted a request to increase the budget of the Cybersecurity and Infrastructure Security Agency (CISA) to $3.1 billion (by 22%) to implement this strategy, among other initiatives. The Transportation Security Administration (TSA) has issued a new cybersecurity amendment to the security programs of certain airports/aircraft operators in an effort to improve their cybersecurity resilience. The strategy will undoubtedly influence the way the United States tackles cyber threats for years to come.

  • Pros and Cons of the Fair Credit Reporting Act


    The Fair Credit Reporting Act (FCRA) is a federal law that regulates the collection, distribution, and use of consumer credit information. The FCRA defines the rights and obligations related to consumer reporting data, aiming to create a dependable and precise exchange of information that benefits lenders, landlords, and employers. It achieves this by setting up privacy and transparency standards for consumer credit information. Businesses that follow the FCRA are largely immune from privacy and defamation lawsuits. The Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB) are the agencies responsible for enforcing the FCRA, and noncompliance can lead to legal consequences.

    In 1899, businessmen established the first major credit reporting agency (CRA), Retail Credit Co. The bureau expanded and sold reports to insurers and employers, which created controversy when businesses denied services based on credit information. By the 1960s, credit agencies operated independently, and there was significant corruption, including incomplete and fabricated information placed on consumer credit reports to meet internal quotas. The CRAs also provided personal and inaccurate information to unauthorized entities.

    Passed in 1970, the FCRA was primarily designed to target three major CRAs: Experian, Equifax, and TransUnion. These agencies have data on more than 200 million Americans, with one study finding that 40 million consumers had errors on their reports. Credit information is valuable and used extensively in credit card applications, loans, employment, child support, and government licensing. Protecting and accurately maintaining consumer credit information helps ensure that consumers are not wrongfully denied services.

    The Pros of FCRA: Accessibility, Dispute Resolutions, and Usability Disclosures

    The FCRA grants consumers numerous rights regarding their credit information with an emphasis on transparency and accuracy. The goal is to facilitate a reliable and functioning credit reporting system. The main elements of the legislation include:

    1. Consumers have the right to access their credit report at any time and review it to verify its accuracy. With this accessibility, consumers can monitor their payments, detect fraudulent activity, and gain financial awareness. 
    2. Consumers can dispute and resolve inaccuracies. Consumers can request that the responsible CRA investigate and resolve the issue, and CRAs must delete or correct inaccurate, incomplete, or unverifiable information within 30 days. Because consumers can initiate this troubleshooting, they have more control over their information. Furthermore, the consumer may seek damages from violators. If a CRA, report user, or data provider breaches the FCRA, it could lead to a lawsuit in court.
    3. Consumer credit information is private and protected. Sharing of credit reports is permissible under limited circumstances such as employment screening. Even then, written consent is mandatory to allow third-party viewership. Consumer notification and sign off limit fraudulent and suspicious activity.
    4. Consumers are entitled to adverse action notifications. Entities must notify consumers when their credit reports influence negative decisions. For example, if a credit report causes an employment denial, consumers have the right to know the reasoning. This notification helps consumers to better understand the issue and take appropriate action to address it. This also ensures that companies do not discriminate and must provide a legitimate reason. 

    The Cons of FCRA: Limited Accountability, Inadequate Customer Solutions, and Burdening Small Businesses 

    Despite the benefits of the FCRA, concerns have been raised. First, data furnishers have limited accountability. CRAs are supervised, but furnishers have less government oversight. Regulating furnishers would address the root of the problem (the data suppliers) by ensuring that accurate data is provided in the first place.

    Second, some believe the consumer solutions are inadequate. The process to correct inaccuracies can be time consuming, and identifying mistakes can be difficult without sufficient knowledge and resources. Another concern is the lack of enforcement by the FTC. In the past 40 years, the FTC has taken 87 enforcement actions; for comparison, consumers initiated 4,531 lawsuits in 2018 alone. Because the primary targets of the FCRA and the FTC are credit agencies, furnisher organizations are not well supervised. As a result, there is a lack of action, and consumers’ ability to achieve personal justice is limited.

    A final concern with the FCRA is that it potentially burdens small businesses. Requirements like adverse action notices or investigations can be time consuming and costly for small businesses. Additionally, because the FCRA is lengthy and extensive, small businesses can commit unintentional violations, and they do not have the same resources as large corporations to ensure compliance and work through claims.

    The FCRA will continue to develop and reform proposals include:

    • Easier processes for corrections and consumer appeals
    • Reducing the amount of time adverse credit information can remain on the consumer credit report
  • Understanding the TikTok Personal Device Ban Debate


    The TikTok ban debate revolves around the popular video-sharing application and its Chinese owner, ByteDance. While TikTok has gained popularity among U.S. users, the government has banned the application on federal devices, as well as on government-issued devices in some states, and a nationwide TikTok ban for non-government users is potentially imminent.

    What is TikTok?

    TikTok is a social media platform where users can create and share short videos featuring lip-syncing, acting, or comedy sketches. TikTok has become one of the most downloaded apps worldwide with a significant user base among U.S. teenagers, particularly girls. Its unique algorithm suggests personalized content on the “For You” page tailoring the user experience to individual preferences and subcultures.

    U.S. federal and state governments targeted the application because it is owned by the Chinese corporation ByteDance. This ownership raises concerns about the Chinese government potentially accessing U.S. users’ data stored within its borders. Responding to these concerns, TikTok stated that it has long stored U.S. users’ data in its data center in Virginia, with backup storage in Singapore.

    Overview of the attempts to ban TikTok in the U.S.

    At the federal level, the campaign to ban the application gained momentum in 2020 when the Trump Administration issued an executive order prohibiting American app stores from listing TikTok. The implication of the order was a nationwide ban of the application on personal devices. Simultaneously, the Trump Administration ordered ByteDance to divest its U.S. operations and user data to American-owned entities. However, the enforcement of the 2020 Order came to an end after a court decision, which found a lack of statutory authority for the President to enforce the order.

    The Biden Administration, in addition to proceeding with the execution of the Divestment Order from the previous administration, issued another order in 2022 to remove TikTok from the executive agencies’ IT systems and devices.

    At the state level, efforts to ban TikTok continued. More than half of the U.S. states banned TikTok on government-issued devices. In May 2023, Montana became the first state to pass legislation prohibiting app stores from making TikTok available to users residing in the state. Despite facing lawsuits, Montana has set a precedent for other states, including Texas, and for the federal government to enact similar legislation. Consequently, a nationwide ban of TikTok on personal devices appears increasingly imminent.

    Key topics in the TikTok bans debate

    1. National Security v. First Amendment

    Proponents of the TikTok bans argue that TikTok’s data collection practices could expose classified government information and public officials’ personal data to the Chinese government, leaving individuals vulnerable to “blackmail or espionage.” They are concerned that the Chinese government could identify and exploit U.S. government employees. Further, those in favor of TikTok bans argue that the Chinese government could use TikTok’s user data to shape public opinion by moderating U.S. politics-related content on the platform, as in its actions to block content related to the Hong Kong protests.

    However, critics say that there is insufficient concrete evidence regarding threats to national security. Therefore, passing legislation banning TikTok is unconstitutional, similar to how a federal district court struck down the Trump Administration’s WeChat ban. They point out that the First Amendment protects freedom of speech and expression, which applies to both platforms of speech, like TikTok, and their users.

    Like other U.S. platforms of speech, TikTok, as a “separately incorporated organization within the U.S.,” is protected by the First Amendment from being a specific target for restrictions. Banning the platform without showing greater governmental interests would likely fail the intermediate scrutiny test, making the restriction unconstitutional.

    For users, critics contend the personal device ban would violate the First Amendment as it limits individuals’ right to receive information and ideas from abroad. Furthermore, the ban would hinder users’ freedom to explore professional opportunities, as highlighted by content creators in their lawsuit against the 2020 Order. Addressing the Montana legislation, which imposes a TikTok ban on personal devices, the American Civil Liberties Union (ACLU) criticized the ban as having “trampled on the free speech of hundreds of thousands of Montanans who use the app to express themselves, gather information, and run their small business.”

    2. Privacy of users in the U.S.

    Additionally, the TikTok ban on personal devices concerns users’ privacy in the U.S. While TikTok collects a significant amount of user data—comparable to other social media platforms—the key difference is that TikTok’s data may travel to China. China has cyber laws (such as China’s Cybersecurity Law of 2017) that provide the Chinese government with access to data held by businesses within the country. Fueling these concerns, investigations suggest that some U.S. user data may be stored within China’s geographic limits. As a result, government agencies and several schools and universities in the U.S. have banned the application to protect the privacy of their employees and students.

    Nonetheless, critics of TikTok bans argue there may be better alternatives to a ban for protecting the privacy of U.S. users. Legal scholars have contested outright bans of privacy-invasive technologies, since such legislation “may quickly become irrelevant with the advent of a newer technology not covered by the law.” (See e.g., this law review article at 396). 

    Conclusion

    Moving forward, critics of the ban suggest implementing comprehensive data privacy legislation that applies to not only TikTok but all social media platforms. The legislation should regulate data brokering practices, limit cross-border data transfers, and protect data of U.S. users when exported overseas.

  • Pros and Cons of the American Innovation and Choice Online Act


    Introduction

    The American Innovation and Choice Online Act (AICOA) is a bill proposed in the United States Congress to promote competition in the digital marketplace by addressing antitrust concerns related to major technology companies. The bill aims to prevent monopolies and trusts by prohibiting dominant platforms from abusing their gatekeeping power, favoring their own products, and disadvantaging competitors. AICOA proposes that the Federal Trade Commission (FTC) and the Department of Justice (DOJ) enforce new antitrust and anticompetitive regulations. The regulations include:

    • Prohibiting dominant platforms from preventing other businesses’ products or services from interoperating with the platform
    • Prohibiting a dominant platform from requiring a business to buy the platform’s goods or services
    • Prohibiting a platform from giving its own products preferential placement on its platform
    • Prohibiting misuse of a business’ data to compete against that business

    Specific qualifications are required for the AICOA to apply. The legislation only applies to the largest online platforms, which must be a “website, online or mobile application, operating system, digital assistant, or online service.” Qualifying platforms must:

    • Enable users to generate or interact with content, facilitate e-commerce, or enable user searches that display large volumes of information.
    • Have at least 50 million U.S. monthly users or 100,000 U.S.-based monthly active businesses. 
    • Have a market capitalization or total U.S. net sales exceeding $550 billion, and serve as a “critical trading partner” for business users. The critical trading partner label signifies that the affected platforms can restrict or impede the ability of dependent companies to reach their customers.

    Background

    If enacted into law, the AICOA would affect most U.S. citizens. Companies such as Apple, Alphabet, Amazon, and Meta would be impacted. Almost half of Americans utilize an Apple iPhone, while almost 70% of adults use Meta (formerly known as Facebook). Further, almost half of the United States population uses Amazon Prime, and 70% of Americans, or 268 million individuals, shop online. The legislation aims to address issues raised in a 370-page report by the House Judiciary Committee in 2022 highlighting numerous anticompetitive practices in the digital marketplace. In 2022, the U.S. Chamber of Commerce released a national poll conducted by AXIS Research that indicated most Americans opposed the bill. The study reported that 79% of Republicans, 72% of Independents, and 59% of Democrats oppose or strongly oppose the AICOA.

    Opposition to AICOA

    Opponents of the bill state that it grants the FTC and DOJ too much regulatory power, and many free-market advocacy groups, such as Americans for Tax Reform and the Consumer Choice Center, publicly espoused this belief in a letter. They argue that the AICOA would enable the government to pick economic winners and losers. Neil Bradley, Executive Vice President of the U.S. Chamber of Commerce, echoed this stance, stating that the bill allows the FTC to decide what is lawful and to whom the law applies, and gives it the ability to define what constitutes data, among many other powers.

    The proposed regulations, according to opponents, would harm consumers more than benefit them. Joshua Wright, a Professor of Law at Antonin Scalia Law School, emphasizes the importance of competition and differentiation in the marketplace. He states that ranking functions are not only desirable but necessary and beneficial to consumers. Experts such as Geoffrey Manne, president and founder of the International Center for Law and Economics (ICLE), and Sam Bowman, Director of Competition Policy at the ICLE, seconded these statements. In an article, they labeled good digital platforms as “middlemen” that protect consumers by sorting through information to ensure that advertised products and businesses are not untrustworthy actors. Data corroborates these ideas, as users have been shown to prefer platforms that discriminate, like Amazon, over other, more chaotic platforms, such as eBay.

    Opponents also point out unintended consequences of the bill, including limitations on U.S. businesses’ ability to compete globally and concerns about data privacy. Sean Heather, Senior Vice President of International Regulatory Affairs & Antitrust at the U.S. Chamber of Commerce, argues that American companies would be limited from competing internationally under the new regulations. The disadvantage, he claims, stems from the fact that American-owned companies would be subject to regulations while foreign entities in a similar position would not. He states that the bill would impose economic damages for conduct that most would regard as healthy competition, limit covered platforms’ ability to innovate, and force platforms to share data with rivals. Gary Shapiro, President and CEO of the Consumer Technology Association, echoed this belief and claimed that the bill could “enable foreign rivals and cyber-criminals to access U.S. consumer data.” A provision in the AICOA makes it unlawful for covered platforms to impose limits on how other businesses access user data from consumers; however, these limits have been cited as helping platforms protect consumers from fraud, abuse, and harassment.

    Support for AICOA

    Proponents of the American Innovation and Choice Online Act argue that the strict language of the legislation will ensure that the government stays within its boundaries. Three digital policy scholars claim that any uncertainty in the language utilized in the bill, such as “materially harmed” and “competition,” will be resolved with the development of case law over time. They assert that the process for diminishing uncertainty with other laws is the same, so this bill will not grant any special powers. Further, the Congressional Research Service published a report that stated the “defenses appropriately place the burden to defend potentially anticompetitive conduct on platform operators, who have more information about their products than regulators.” Only State Attorneys General and federal agencies would have the power to file a suit, so they claim frivolous lawsuits would be avoided. 

    The same scholars voice the opinion that current antitrust laws are weak. They argue that competition enforcement against large online platforms does not currently promote competition and needs to be strengthened. Experts such as Columbia University law professor Tim Wu reiterate the need for competition, writing that limited competition deters innovation and new business. A report from the Center for American Progress (CAP) states that the proposed legislation would spark innovation and increase consumer choice by opening the market for new competitors to be seen by consumers. Partners at Winston & Strawn LLP concur, claiming that the AICOA will ensure that third-party sellers’ products will be seen. Organizations such as Small Business Rising, representing over 150,000 small businesses, agreed that the legislation is a “win for small businesses” that would ultimately promote innovation, new business, and third-party sellers.

    Supporters also argue that the legislation would enhance online consumer choice. Currently, “small and medium companies have no practical option but to go through these big tech companies to reach their consumers;” however, this bill would prohibit the anticompetitive practices that force consumers to pay more for fewer choices. More than 60 companies publicly embraced this belief in a letter, stating that the legislation would unleash the creative power of thousands of smaller companies that would have never seen the light of day. 

  • Introduction to the CHIPS and Science Act


    The CHIPS and Science Act of 2022 invests $52.7 billion to strengthen U.S. leadership in semiconductor research, domestic manufacturing, and workforce development. The U.S. currently produces 10% of semiconductors globally, with the Asia-Pacific region producing 75%. Semiconductors form the foundation of every advanced technology, from artificial intelligence to quantum computing. Many view semiconductor production as a national security issue, similar to ensuring a domestic food supply. Operationalizing the CHIPS Act would require onshoring of American manufacturing of semiconductors. Onshoring is the practice of transferring a business operation that was moved overseas back to the country from which it was originally relocated.

    What is the CHIPS Act?

    The CHIPS Act aims to encourage onshoring of American manufacturing in three main ways:

    • U.S. Leadership: The long-term goal of this legislation is to advance U.S. leadership in wireless technologies and their supply chains. U.S. national security priorities include the future of its economic competitiveness globally as well as domestic public investment in research and development.
    • Economic Growth: The CHIPS Act invests $10 billion in regional innovation, specifically in technology hubs that enable coordination among state and local governments, higher education, labor unions, and businesses toward the shared goal of technological advancement.
    • STEM Development: Science, technology, engineering, and mathematics (STEM) education is essential for workforce development and meeting the demand for high-skilled jobs.

    According to the Semiconductor Industry Association, “the share of modern semiconductor manufacturing capacity located in the U.S. has eroded from 37% in 1990 to 12% today.” Additionally, Secretary of Commerce Gina Raimondo said the U.S. will need to triple the number of college graduates in semiconductor-related fields, including engineering. She discussed partnering firms in the semiconductor industry with high schools and community colleges to train over 100,000 new technicians over the next decade. The CHIPS Act strengthens research and development by establishing new regional innovation and technology hubs and by investing in all levels of science, technology, engineering, and math education. This investment drives opportunity and equity by diversifying the research institutions, students, and researchers it serves, including Historically Black Colleges and Universities (HBCUs) and other historically underserved institutions. Students have the opportunity to play a crucial role in the future of U.S. leadership in technology.

    Arguments Against the CHIPS Act

    Opposition to the CHIPS Act is based on the idea that the U.S. government should not interfere in the free-market system. Federal interference may cause unintended disruption and unfair financial advantages for certain companies, while others suffer as a consequence. In addition, some believe the policy is limited in scope, in that it provides too little support for firms that design chips but do not manufacture them. Furthermore, opposition voices in the government object that creating a ‘blank check’ for the profitable microchip industry without guardrails could create a corporate free-for-all that attracts a wide range of businesses interested in obtaining funding. For corporations, the bid for funding is low-risk and high-reward, and anything that touches a semiconductor could be eligible. Other opposition voices claim that federal spending generally “fuel[s] inflation that is hurting the poor and middle class.” Opposition voices within the federal government want the funds redirected elsewhere, since the microchip industry is already profitable; in 2021, the global market size was $527.88 billion.

  • AI Technology Usage in Weapons Systems


    “Whoever becomes the leader in [artificial intelligence] will become the ruler of the world” – President Vladimir Putin, 2017

    The ever-changing dynamic of international security is adapting quickly to new developments in technology. The nuclear bomb, space travel, and chemical weapons drastically changed the state of warfare globally. Now, artificial intelligence is transforming the way we use and conceptualize weaponry and global militaries; just one of the products of the digital age in the 21st century. 

    Presently, global leaders are automating weapons systems to both advance antiquated weapon technology and thwart new security threats in the age of globalization and digitalization. The quote above by Russian president Vladimir Putin illustrates the cyberspace-race nature of automated weapons systems and the importance of keeping pace with the future of automated technology. Current leaders in automated weaponry include China, the United States, Russia, and Israel.

    Key terms and Definitions

    Artificial intelligence, commonly known as AI, refers to the way computers process large volumes of information, recognize patterns in that data, and make predictions based on the information given. Though the word “intelligence” carries many social connotations, it is important to recognize that artificial intelligence is simply a way of synthesizing large amounts of data. Much like traditional forms of data synthesis, the data sets limit the prediction output, leaving room for faulty or poorly informed predictions.

    Automation refers to the process of making systems function automatically, or without human intervention. Automation ranges from no autonomy to partial autonomy to full autonomy.

    Artificial intelligence plays a key role in automation as the predictive nature of artificial intelligence allows machines to interpret information as they function, leading to more efficient autonomy and less reliance on human intervention. 
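
    As a toy illustration of how a limited data set constrains prediction quality, and not a model of any real weapons system, the sketch below fits a straight line to data sampled from a narrow range and shows the prediction degrading outside that range.

      # Toy example: a model fit on a narrow data range predicts poorly outside that range.
      import numpy as np

      # "Training" data: y = x**2 sampled only on the narrow interval [0, 1].
      x_train = np.linspace(0.0, 1.0, 20)
      y_train = x_train ** 2

      # Fit a deliberately simple model (a straight line) to that narrow slice of data.
      slope, intercept = np.polyfit(x_train, y_train, 1)

      def predict(x: float) -> float:
          return slope * x + intercept

      # Inside the training range the prediction is close; far outside it is badly wrong.
      for x in (0.5, 3.0):
          print(f"x={x}: predicted={predict(x):.2f}, actual={x ** 2:.2f}")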

    Implementations of an AI-automated Military

    AI development is contentious among U.S. decision makers, with questions around the ethical and moral implications of AI and of fully autonomous weapons. This debate has significantly stunted growth in the AI defense sector, causing political analysts to caution against slow development.

    The hesitation to implement AI-automation in the military has some merits. There is significant skepticism about the reliability of this technology. Gaps in data sets threaten output dependability. Additionally, blind trust in AI undermines the importance of human rationality, which generally prevails in wartime decision-making due to war’s complexities. For example, in 1983, a Soviet officer, when presented with a warning that the U.S. had launched a nuclear attack, decided not to move forward with retaliation. The warning was a computer malfunction, and the Soviet officer ultimately saved the world from nuclear disaster.

    However, AI-powered weapons can significantly change the state of combat. Having semi- or fully autonomous weapons decreases armed casualties and the need for a large standing army. Furthermore, global actors like China and Russia are placing significant emphasis on AI weapons proliferation, threatening U.S. security.

    The government has been slow to utilize more sophisticated AI-driven weaponry. This led to the 2021 conclusion by the National Security Commission on Artificial Intelligence that the U.S. is ill-prepared for future AI combat. To address this, the administration appointed Margaret Palmieri as the new deputy chief digital and artificial intelligence officer to spearhead the movement toward AI defense systems. Additionally, the administration created the National Artificial Intelligence Research Resource Task Force, which focuses on increasing access to this technology to promote innovation and incentivize engineers, researchers, and data scientists to join the defense sector. However, this comes with limitations. The United States will need to ensure access to more defense data, especially data held by private companies. Additionally, attracting data talent is an obstacle, as STEM talent flocks to start-ups and private companies due to the promise of money and unregulated research.

    In 2020, the United States defense sector set aside $4 billion for AI research and automated technology. This number is trivial compared to overall defense spending, which was $100 billion in 2020 for general weapons research and development. However, it is important to keep in mind that the cost of AI technologies is decreasing rapidly as hardware becomes more affordable in the private sphere.

    Weapons Automation Among Global Actors

    France is developing an ethics committee to oversee the development of novel automated weapons systems. This committee will work with the Ministry of Defense to ensure the systems implemented are reliable and safe. Similarly, Germany is taking a multilateral approach to AI integration. The government is dedicated to seeing AI technology used ethically and sustainably in private and military sectors. 

    Israel currently leads the Western world in technological development and has a symbiotic relationship with the United States in terms of weapons development, research, and funding. The most notable achievement in Israeli defense is the Iron Dome missile defense system. This automated system immediately detects and shoots down adversary artillery to prevent threats to civilians. This system operates with little human oversight, meaning there is no chain of command for initiating the defense response. 

    China holds much of the world’s attention on automated weapons systems. The majority of China’s AI systems are used in the surveillance sector to monitor movement and social activity through biometric data. In terms of automated defense systems, China is currently developing new forms of weaponry as opposed to automating existing technology. Most notable is the implementation of swarming technology, or the use of many small, fully autonomous drones to surround enemies. 

    Russia, with Chinese aid, is currently developing AI weaponry to bolster its security ambitions. This technology, however, is largely absent from the current conflict in Ukraine, where forces are using traditional war tactics. Instead, a large portion of Russian antagonism has consisted of deepfake media and misinformation campaigns. For instance, during the lead-up to the 2016 U.S. presidential election, Russia made use of troll farms on Facebook to sow discord in key swing districts. Furthermore, Russia used similar tactics to foster pro-Russian sentiment in eastern parts of Ukraine to bolster rebel forces in Donbas. While misinformation is certainly a war tactic, these actions stray from typical AI-powered weaponry.


  • Intro to Data Privacy


    Data privacy is concerned with the way personal data is collected, analyzed, and used. This is not to be confused with data security, which is how collected data is protected from external attacks. Within the United States, internet usage among adults increased from 52% in 2000 to 93% in 2021. With more people using the internet, more personal data can be retrieved online, so improving data privacy is increasingly vital. In the definition provided by the California Consumer Privacy Act, private information includes any material “that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” Some examples of private information include, but are not limited to:

    • Health records
    • Email
    • Social Security Number
    • Postal address
    • Driver’s license number
    • Passport number
    • Alias

    Federal Policy

    When it comes to understanding the policies in place to protect private data, only 3% of Americans know how current regulations and laws work. Furthermore, only 9% of Americans say they always read company privacy policies to understand how private data is used. This lack of knowledge about data privacy regulations is in part due to the way policies are set up. There is no federal regulation that includes language for multiple types of private data. Instead, multiple policies each cover a specific type of private data. 


    The Future of Data Privacy Policy

    Data privacy policies are currently being revisited. Legislation at both the international and state level has heightened the repercussions for companies found guilty of personal data misuse. In May of 2018, the European Union’s General Data Protection Regulation (GDPR) took effect, the most progressive and punishing data privacy policy to date, with strict fines and broad terms. This regulation punishes any enterprise that collects or uses the data of EU residents in violation of the GDPR, even if the company is not located in the EU. The GDPR governs data privacy under seven basic principles: lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability.

    In the United States, state-level legislation is leading the way toward a federal law pertaining to data privacy. California passed the California Consumer Privacy Act (CCPA) in 2018, the first comprehensive state data privacy law. With the CCPA in place, all residents of California have the right to know how their personal information is collected and used, the right to delete personal information, the right to opt out of their information being sold, and the right to non-discrimination should they exercise their rights listed under the CCPA. In 2020, the California Privacy Rights Act (CPRA) was passed. The CPRA builds on the CCPA, adding the right to rectification, the right to restriction, and updated special protections surrounding sensitive personal data, like Social Security numbers. In addition to these new rights given to consumers, the CPRA established the California Privacy Protection Agency (CPPA), the first agency in the United States dedicated to data privacy.

    Virginia is the second state to have passed a data privacy law. The Virginia Consumer Data Protection Act (CDPA) was signed into law in March 2021. The CDPA is very similar to the CCPA and CPRA. However, there are two key differences between the California legislation and the CDPA. First, enforcement of the CDPA in Virginia comes from the attorney general, not a dedicated enforcement agency like the CPPA in California. Second, the CDPA does not include a revenue threshold for imposing obligations on companies. Instead, companies fall outside the CDPA as long as they do not control or process the personal data of at least 100,000 consumers during a calendar year, or control or process the personal data of at least 25,000 consumers while deriving at least 50% of their gross revenue from the sale of personal data.
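
    To make the CDPA applicability thresholds described above easier to follow, here is a small sketch of that logic in Python. It is only a reading aid under the stated assumptions, not legal advice, and the function name and inputs are invented for illustration.

      # Illustrative reading of the CDPA applicability thresholds described above (not legal advice).
      def cdpa_applies(consumers_processed: int, revenue_share_from_data_sales: float) -> bool:
          """Return True if a company would fall under the CDPA per the thresholds above.

          consumers_processed: Virginia consumers whose data is controlled or processed in a year.
          revenue_share_from_data_sales: fraction of gross revenue from selling personal data.
          """
          meets_volume_threshold = consumers_processed >= 100_000
          meets_sales_threshold = (consumers_processed >= 25_000
                                   and revenue_share_from_data_sales >= 0.50)
          return meets_volume_threshold or meets_sales_threshold

      # Example: 30,000 consumers with 60% of revenue from data sales -> covered.
      print(cdpa_applies(30_000, 0.60))   # True
      print(cdpa_applies(80_000, 0.10))   # False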

    Other states are working toward passing data privacy laws while using the GDPR and the California legislation as examples to emulate and build upon. Despite a shared view that consumer data deserves protection, the Democratic and Republican parties want to regulate it in different ways. Democrats focus on protecting the consumer, believing that data collectors should be held accountable for the misuse or mishandling of consumer data. Alternatively, Republicans fear that consumers could abuse their protections at the expense of industry and push for less strict punishments for companies that collect data.

    As these differing viewpoints are discussed and navigated in policy making processes, states will look to establish their own laws. This trend can be tracked here, where you can view your own state’s progress in passing data privacy legislation.