Author: Abigail Gaetz

  • Understanding PADFAA: The Pros and Cons of the Federal Government’s Approach to American Data Privacy

    A new law addressing the national security risks of the data brokerage industry took effect on June 23, 2024, following bipartisan approval earlier that year. The bill, known as the Protecting Americans’ Data from Foreign Adversaries Act, or PADFAA, seeks to limit the transfer of sensitive American data to firms owned or controlled by Russia, China, North Korea, and Iran.

    PADFAA is the first piece of federal legislation to take aim at the data brokerage ecosystem, an industry that buys, sells, and analyzes online data for third-party usage. The bill also reflects growing bipartisan efforts to address data privacy at large. While advocates celebrate the bill’s narrow, targeted approach to specific challenges in the data brokerage industry, critics claim that it either goes too far or does not go far enough. 

    What is the data brokerage industry? 

    The data brokerage industry encompasses the wide network of buyers, sellers, and contractors who collect, license, and share data from public and private sources online. The data broker industry generates $200 billion in annual revenue in the United States alone. 

    Data brokers collect information from both public and private sources. Companies may sell the data they collect on their platforms to data brokers, including user details, purchase history, and cookie information. Alternatively, data brokers can scrape the internet for publicly available information found on public sites and social media platforms. 

    Data collected online is then analyzed and aggregated to create packages of information tied to certain groups, such as their purchase habits, interests, health history, ideology, and identity. Even if a data broker doesn’t specifically collect personal identifiers such as names and phone numbers, aggregated data can still be used to identify individual users. 

    The demand for packaged data is wide-reaching across sectors. Packaged data can be used for advertising, fraud detection, risk assessment, and populating people-search sites: websites where users can search for an individual’s personal details given only their name. Potential buyers include banks, credit agencies, insurance firms, internet service providers, loan companies, advertisers, and law enforcement agencies.  A Duke University study shows that data buyers can acquire access with varying levels of vetting, suggesting that nefarious actors can often buy access to data for dangerous purposes.

    Arguments in Favor of PADFAA

    Proponents of PADFAA praise the bill for its comprehensive approach to data privacy. Months before PADFAA came into effect, President Biden issued an executive order to address similar concerns about adversarial countries acquiring sensitive American data. PADFAA not only enshrines the executive order’s provisions into law but also expands its scope from government-affiliated Americans to all Americans. Additionally, the bill applies to all data transactions, both big and small, which supporters argue will better protect the average American citizen. 

    Supporters also argue that PADFAA creates an even national standard for data privacy, replacing a patchwork of state laws and narrower federal laws. Through its comprehensive definition of “sensitive data,” PADFAA also creates a legal precedent that can serve as the basis for future legislation on data brokerage. Under the bill, sensitive data includes geolocation data, passport information, Social Security and driver’s license numbers, bank details, biometric and genetic information, private communications, demographic details such as age, gender, and race, and online activity. Proponents argue the breadth of this definition will make it difficult for data brokers to exploit loopholes in existing data privacy laws. 

    Arguments Against PADFAA

    Critics argue that the law’s focus on third-party data brokers, who collect and analyze data for sale, leaves much of the industry unregulated. PADFAA’s definition of a “data broker” does not include first-party data collectors, allowing apps, social media platforms, and healthcare services to sell American data directly to companies owned or controlled by Russia, China, North Korea, or Iran.

    Additionally, the law does not prevent American data from reaching the four listed countries through intermediaries. Data privacy advocates stress that under PADFAA, a company based outside of Russia, China, North Korea, or Iran can acquire American data and then sell it on to any of the four countries. 

    Opponents also claim that PADFAA will overburden the Federal Trade Commission (FTC). The FTC has long specialized in consumer privacy and data protection. However, critics argue that the FTC does not have the capacity to enforce what is effectively foreign policy. In particular, the FTC lacks the security clearances necessary for obtaining critical intelligence about adversarial attempts to acquire American data. Critics also argue that the FTC’s privacy division is underfunded and overstretched, leaving it ill-equipped for the task. 

    Finally, some argue that PADFAA is an unnecessary addition to an already complex legal landscape governing the data broker industry. Federal measures like the Fair Credit Reporting Act (FCRA) and state laws such as those in Vermont and California take steps to protect consumer data from harmful use, and critics of PADFAA argue that those existing measures provide adequate protection. 

    Conclusion

    PADFAA marks the first step in regulating the data broker industry and protecting against its harmful effects. However, the law does not fully address the scale of the issues raised by privacy advocates, such as indiscriminate data collection, predatory and dangerous uses of individual information, and the lack of transparency in the industry. Nonetheless, the bipartisan support for a policy measure of this kind makes the path for future legislation clearer.

  • Understanding The Debate on Facial Recognition Technology in Policing: Pros, Cons, and Privacy Concerns

    Introduction

    A facial image is like a fingerprint: a unique piece of human data that can identify an individual or connect them to a crime. Law enforcement uses facial recognition to identify suspects, monitor large crowds, and ensure public safety.

    Facial recognition software is used by local, state, and federal law enforcement, but its adoption is uneven. Some cities, like San Francisco and Boston, have banned its use for law enforcement, while others have embraced it. Despite this, the technology has been instrumental in solving cold cases, tracking suspects, and finding missing persons, and is considered a game changer by some in law enforcement.

    Facial recognition software can be integrated with existing police databases, including mugshots and driver’s license records. Private companies like Clearview AI and Amazon’s Rekognition also provide law enforcement with databases containing information gathered from the internet. 

    Here’s how police use facial recognition technology (a simplified sketch of the matching step follows this list):

    1. Law enforcement collects a snapshot of a suspect drawn from traditional investigative methods, such as surveillance footage or independent intelligence. 
    2. They then input the image into the facial recognition software database to search for potential matches. 
    3. The system returns a series of similar facial images ranked by the software’s algorithm, along with personal information such as name, address, phone number, and social media presence. 
    4. Law enforcement analyzes the results and gathers information about a potential suspect, which is later confirmed through additional police work.
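
    To make the matching step above concrete, here is a minimal sketch in Python using the open-source face_recognition library: it compares a probe image against a small gallery and ranks candidates by similarity, mirroring steps 2 and 3. The file names and gallery are hypothetical assumptions, and real systems search far larger databases with proprietary algorithms.

```python
# Simplified sketch of the matching step (steps 2 and 3 above), using the
# open-source face_recognition library. File names are hypothetical; real
# systems search far larger databases with proprietary algorithms.
import face_recognition

# Hypothetical probe image, e.g. a still pulled from surveillance footage.
probe_image = face_recognition.load_image_file("probe.jpg")
probe_encodings = face_recognition.face_encodings(probe_image)
if not probe_encodings:
    raise ValueError("No face found in the probe image")
probe_encoding = probe_encodings[0]

# Hypothetical gallery of enrolled images (e.g. mugshots or license photos).
gallery_files = ["person_a.jpg", "person_b.jpg", "person_c.jpg"]
gallery = []
for path in gallery_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        gallery.append((path, encodings[0]))

# Compute a distance between the probe and each gallery face (smaller = more
# similar), then rank candidates from most to least similar, as in step 3.
distances = face_recognition.face_distance([enc for _, enc in gallery], probe_encoding)
ranked = sorted(zip([path for path, _ in gallery], distances), key=lambda pair: pair[1])
for path, distance in ranked:
    print(f"{path}: distance={distance:.3f}")
```

    As step 4 notes, a ranked list like this is only a set of investigative leads; any candidate still has to be confirmed through traditional police work.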

    The exact number of law enforcement agencies using facial recognition software is difficult to know.

    Much like other investigative techniques, law enforcement tends to keep the practice of facial recognition identification out of the public eye to protect ongoing investigations. Furthermore, facial recognition technology is not used as evidence in court proceedings, meaning that it is hard to track the frequency of use of this technology in criminal prosecutions. 

    However, studies conducted on facial recognition and law enforcement give a broad understanding of the scope and scale of this debate. A 2017 study by Georgetown Law’s Center on Privacy & Technology estimates that 3,947 out of roughly 15,388 state and local law enforcement agencies in 2013, or one in four, “can run face recognition searches of their own databases, run those searches on another agency’s face recognition system, or have the option to access such a system.” Furthermore, a 2021 study from the Government Accountability Office (GAO) found that 42 federal agencies used facial recognition technology that they either owned or were provided by another agency or company. 

    Supporters of this technology celebrate the use of facial recognition to solve crimes and find suspects faster than ever before. 

    “The obvious effect of [this] technology is that ‘wow’ factor,” said Terrance Liu, vice president of research at Clearview AI, a leading provider of facial recognition software to law enforcement. “You put any photo in there, as long as it’s not a very low-quality photo, and it will find matches ranked from most likely to ones that are similar in a short second.” 

    Before facial recognition technology, identifying suspects caught on surveillance cameras was difficult, especially without substantial leads. Law enforcement argues that this technology can help investigators develop and pursue leads at faster rates. 

    How accurate are the results produced by this software? 

    Due to advances in processing power and data availability in recent years, facial recognition technology is more accurate than it was ten years ago, according to a study conducted by the National Institute of Standards and Technology (NIST). 

    However, research conducted by Joy Buolamwini at the MIT Media Lab demonstrates that while some facial recognition software boasts more than 90% accuracy, this number can be misleading. When broken down into demographic categories, the technology is 11.8%–19.2% less accurate when matching faces of color. Critics argue that this reliability gap endangers people of color, making them more likely to be misidentified by the technology. After the study’s initial release, follow-up research noted that IBM and Microsoft were able to reduce the accuracy differentials across specific demographics, indicating that with more care and attention when crafting these technologies, adverse effects like these can be prevented. 

    Image clarity plays a large role in determining the accuracy of a match. A 2024 study from NIST found that matching errors are “in large part attributable to long-run aging, facial injury, and poor image quality.” Furthermore, when the technology is tested against a real-world venue, such as a sports stadium, NIST found that the accuracy ranged between 36% and 87%, depending on the camera placement. However, as low-cost, high-resolution cameras become more widely available, researchers suggest the technology will improve. 

    Law enforcement emphasizes that because facial recognition cannot be used as probable cause, investigators must use traditional investigative measures before making an arrest, safeguarding against misuse of the technology. However, a study conducted by Georgetown Law’s Center on Privacy & Technology says that despite this assurance, there is evidence that facial recognition technology has been used as the primary basis for arrest. Publicized misidentification cases, such as that of Porcha Woodruff, a black woman who was eight months pregnant when she was wrongfully arrested for carjacking and armed robbery based on a false match from the police’s facial recognition software, reinforce concerns that police can treat a facial recognition match as grounds for arrest and that the technology is less reliable for faces of color. 

    Opponents argue that law enforcement is not doing enough to ensure that these systems are accurate and reliable. This is in part due to how law enforcement uses facial recognition software, and for what purpose. Facial recognition software allows users to adjust the confidence threshold, or minimum similarity score, for the images it returns. Matches with high confidence scores are more reliable, but raising the threshold produces fewer returns. In some cases, law enforcement might use a lower confidence threshold to generate as many leads as possible. 

    For example, the ACLU of Northern California captured media attention with an investigative study that found that Amazon’s Rekognition software falsely matched 28 members of Congress with mugshots, 40% of whom were congresspeople of color. The study used Rekognition’s default confidence threshold of 80%, which is considered relatively low, to generate the matches. However, in response to the study, Amazon advised users that a 95% confidence threshold is recommended for identifying human matches. 
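
    To illustrate the trade-off described above, the sketch below calls Amazon Rekognition’s CompareFaces API (via the boto3 SDK) at two different similarity thresholds. The image files, AWS region, and threshold values are illustrative assumptions, and the snippet presumes AWS credentials are already configured.

```python
# Illustrative sketch only: comparing two face images with Amazon Rekognition's
# CompareFaces API at two different similarity thresholds. The file names and
# region are placeholders, and AWS credentials are assumed to be configured.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("probe.jpg", "rb") as f:
    source_bytes = f.read()
with open("candidate.jpg", "rb") as f:
    target_bytes = f.read()

# Compare at a lower (default-style) threshold and a stricter one.
for threshold in (80.0, 95.0):
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,
    )
    matches = response.get("FaceMatches", [])
    print(f"threshold={threshold}: {len(matches)} match(es)")
    for match in matches:
        print(f"  similarity={match['Similarity']:.1f}%")
```

    Lowering the threshold returns more candidate matches, meaning more leads but also more false positives; raising it returns fewer, more reliable matches, which is the trade-off at the center of the ACLU study and Amazon’s response.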

    Both proponents and critics advocate for comprehensive training to help law enforcement navigate the software and protect against misuse. A GAO study of seven federal agencies using facial recognition technology found that all seven agencies had staff using the software without prior training. After the study, the DHS and DOJ concurred with the recommendations and took steps to rectify this issue. 

    Civil liberties advocates and policy groups argue that without regulation and transparency in this area, it is hard to ensure that the systems are used correctly and in good faith. 

    Privacy Concerns and Freedom of Assembly 

    Social media companies have raised concerns about how the information hosted on their platforms is used in facial recognition software. Facial recognition software is powered by algorithms that scrape the internet for facial images and personal information, which are then used to generate returns for the software. Facebook, YouTube, and Twitter are among the largest companies to speak out against the practice, sending cease-and-desist letters to facial recognition vendors. However, legal precedent established in a 2019 ruling, hiQ Labs v. LinkedIn, allows third parties to harvest publicly available information on the internet on the grounds that it is publicly accessible. 

    Facial recognition technology can be used to monitor large crowds and events, and such monitoring is commonplace in airports, sports venues, and casinos. Furthermore, law enforcement is known to have used facial recognition software to find individuals present at the January 6th insurrection and at nationwide Black Lives Matter protests. Law enforcement argues that surveilling large crowds with this technology can help protect the public from unlawful actors and help catch suspects quickly. However, privacy and civil liberties activists worry about the impact of surveillance on the freedom of assembly. 

    Regulatory Landscape and Conclusions 

    In 2023, Senator Ed Markey (D) reintroduced a bill to place a moratorium on the use of facial recognition technology by local, state, and federal entities, including law enforcement, which has yet to progress through Congress. However, states like Maine and California have enacted laws that address some of the challenges presented by the technology, along with a patchwork of other local laws across the country. 

    Critics continue to argue that a lack of transparency and accountability among law enforcement drives uncertainty in this area. The ACLU is currently suing the FBI, DEA, ICE, and Customs and Border Protection to turn over all records regarding their use of facial recognition technology. However, supporters argue that the benefits outweigh the concerns and that the technology remains a useful tool for law enforcement.

  • AI Technology Usage in Weapons Systems

    “Whoever becomes the leader in [artificial intelligence] will become the ruler of the world.” – Russian President Vladimir Putin, 2017

    International security is adapting quickly to new developments in technology. The nuclear bomb, space travel, and chemical weapons drastically changed the state of warfare globally. Now, artificial intelligence is transforming the way we use and conceptualize weaponry and global militaries, just one product of the digital age in the 21st century. 

    Presently, global leaders are automating weapons systems both to advance antiquated weapon technology and to thwart new security threats in the age of globalization and digitalization. The quote above from Russian President Vladimir Putin illustrates the arms-race dynamic surrounding automated weapons systems and the perceived importance of keeping pace with automated technology. Current leaders in automated weaponry include China, the United States, Russia, and Israel. 

    Key Terms and Definitions

    Artificial intelligence, commonly known as AI, refers to the way computers process large volumes of information, recognize patterns in that data, and make predictions based on the information given. Though the word “intelligence” carries many social connotations, it is important to recognize that artificial intelligence is simply a way of synthesizing large amounts of data. Much like traditional forms of data synthesis, the underlying data sets limit the prediction output, leaving room for faulty or poorly informed predictions. 
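
    As a toy illustration of how a data set limits prediction output, the sketch below trains a simple scikit-learn classifier on a deliberately narrow set of examples and then asks it about an input far outside that range; the model still answers with near-certain confidence, even though the prediction is poorly informed. All numbers are invented for illustration.

```python
# Toy illustration (invented numbers): a model trained on a narrow data set
# still produces a confident prediction far outside that data, even though
# the prediction is poorly informed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data covers only inputs between roughly 0 and 2.
X_train = np.array([[0.1], [0.5], [0.9], [1.2], [1.6], [2.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(X_train, y_train)

# A query far outside the training range: the model has never seen anything
# like it, yet it still returns an answer with near-certain probability.
query = np.array([[50.0]])
print("prediction:", model.predict(query)[0])
print("confidence:", round(model.predict_proba(query)[0].max(), 3))
```

    The same dynamic underlies concerns, discussed below, about automated military systems encountering situations their data never covered.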

    Automation refers to the process of making systems function automatically, or without human intervention. Automation ranges from no autonomy to partial autonomy to full autonomy. 

    Artificial intelligence plays a key role in automation as the predictive nature of artificial intelligence allows machines to interpret information as they function, leading to more efficient autonomy and less reliance on human intervention. 

    Implementations of an AI-automated Military

    AI development is contentious among U.S. decision makers, with questions around the ethical and moral implications of AI and of fully autonomous weapons. These concerns have significantly stunted growth in the AI defense sector, leading some political analysts to caution against falling behind through slow development. 

    The hesitation to implement AI automation in the military has some merits. There is significant skepticism about the reliability of this technology: gaps in data sets threaten output dependability. Additionally, blind trust in AI undermines the importance of human judgment, which generally prevails in wartime decision-making because of war’s complexities. For example, in 1983, a Soviet officer, when presented with a warning that the U.S. had launched a nuclear attack, decided not to move forward with retaliation. The warning turned out to be a false alarm caused by a computer malfunction, and the officer’s decision averted nuclear disaster. 

    However, AI-powered weapons can significantly change the state of combat. Semi- or fully autonomous weapons could decrease armed-forces casualties and reduce the need for a large standing army. Furthermore, global actors like China and Russia are placing significant emphasis on AI weapons proliferation, threatening U.S. security. 

    The government has been slow to adopt more sophisticated AI-driven weaponry, which led the National Security Commission on Artificial Intelligence to conclude in 2021 that the U.S. is ill-prepared for future AI-enabled combat. To address this, the Biden administration appointed Margaret Palmieri as deputy chief digital and artificial intelligence officer to spearhead the movement toward AI defense systems. Additionally, the administration created the National Artificial Intelligence Research Resource Task Force, which focuses on increasing access to this technology to promote innovation and incentivize engineers, researchers, and data scientists to join the defense sector. However, this comes with limitations. The United States will need to ensure access to more defense data, especially data held by private companies. Additionally, attracting data talent is an obstacle, as STEM talent flocks to start-ups and private companies due to the promise of money and unregulated research. 

    In 2020, the United States defense sector set aside $4 billion for AI research and automated technology. This figure is small compared to the roughly $100 billion the defense sector spent in 2020 on general weapons research and development. However, it is important to keep in mind that the cost of AI technologies is decreasing rapidly as hardware becomes more affordable in the private sphere.

    Weapons Automation Among Global Actors

    France is developing an ethics committee to oversee the development of novel automated weapons systems. This committee will work with the Ministry of Defense to ensure the systems implemented are reliable and safe. Similarly, Germany is taking a multilateral approach to AI integration. The government is dedicated to seeing AI technology used ethically and sustainably in private and military sectors. 

    Israel currently leads the Western world in technological development and has a symbiotic relationship with the United States in terms of weapons development, research, and funding. The most notable achievement in Israeli defense is the Iron Dome missile defense system. This automated system detects and shoots down incoming rockets and artillery to prevent threats to civilians. The system operates with little human oversight, meaning there is no chain of command required to initiate the defense response. 

    China commands much of the world’s attention on automated weapons systems. The majority of China’s AI systems are used in the surveillance sector to monitor movement and social activity through biometric data. In terms of automated defense systems, China is currently developing new forms of weaponry rather than automating existing technology. Most notable is the implementation of swarming technology, or the use of many small, fully autonomous drones to surround enemies. 

    Russia, with Chinese aid, is currently developing AI weaponry to bolster its security ambitions. This technology, however, is largely absent from the current conflict in Ukraine, where forces are using traditional war tactics. Instead, a large portion of Russian aggression has consisted of deepfake media and misinformation campaigns. For instance, during the lead-up to the 2016 U.S. presidential election, Russia made use of troll farms on Facebook to sow discord in key swing districts. Furthermore, Russia used similar tactics to foster pro-Russian sentiment in eastern parts of Ukraine to bolster rebel forces in Donbas. While misinformation is certainly a war tactic, these actions stray from typical AI-powered weaponry. 


  • Abigail Gaetz, George Washington University

    Abigail is a rising senior at George Washington University majoring in International Relations. Throughout her college career, she has maintained a dual focus on technology policy and global studies, concentrating on digital culture, misinformation, and polarization. As a member of the Global Bachelor’s Program at GW, Abigail has spent most of her college career abroad, living and studying in Northern Ireland, Morocco, and Spain. These experiences have enhanced her approach to academic inquiry and fostered a deep appreciation for the international community. Abigail will channel this insight into her senior thesis on the hidden economy of disinformation, which attempts to uncover where, why, and how content producers disseminate false information at home and abroad. Once a student fellow on the Technology Policy team, Abigail is excited to return to ACE as a Research Associate, leading a team of dedicated students to produce meaningful and insightful work. 

    LinkedIn