Category: Technology

  • The TikTok Ban: Overview and New Developments

    An Overview of TikTok Security Concerns

    In 2016, the Chinese company ByteDance launched Douyin, a social media app focused on short-form content in China. Following the app’s success, ByteDance expanded the app overseas, launching TikTok to the international market in late 2017. TikTok was a major success internationally, amassing two billion downloads and about 800 million active users within a few years.

    Despite its massive popularity, TikTok has come under scrutiny for alleged cybersecurity threats. In 2019, the U.S. government sued the company for violating child online privacy laws, leading TikTok to settle for a $5.7 million fine. After reports of alleged censorship suggested that TikTok was working in tandem with Chinese government interests, the Pentagon banned the app on all military devices. In a joint letter to the Director of National Intelligence later that year, Senators Chuck Schumer (D-NY) and Tom Cotton (R-AR) argued that weak Chinese cybersecurity laws might compel ByteDance “to support and cooperate with intelligence work controlled by the Chinese Communist Party” by handing over data to the Chinese government upon its request. More recently, however, security experts have argued that China has been collecting personal data on Americans for nearly a decade, suggesting that TikTok is not necessarily a novel or unique threat.

    A Brief History of the TikTok Ban

    After TikTok gained widespread attention in 2019 for alleged security concerns, President Trump signed two consecutive executive orders that restricted American business transactions with ByteDance and required ByteDance to divest its control of TikTok’s U.S. operations. While these orders prompted consideration of a potential sale to Microsoft, the second order was blocked by a federal judge, and the Trump administration eventually loosened its deadlines after losing the 2020 election.

    The momentum toward a national TikTok ban slowed for the first few years of the Biden Administration, until a 2023 hearing with TikTok’s CEO drew public attention for lawmakers’ heated interrogations. The following year, a bipartisan “sell-or-ban” bill, which proposed a national TikTok ban if the app was not sold to an American buyer within 270 days, was introduced and gained traction. On April 24, 2024, President Biden signed the “sell-or-ban” bill into law. In response, TikTok sued the U.S. Justice Department, arguing that the ban violated the constitutional right to free speech. The Department of Justice countered that the law was justified by pressing national security threats.

    Recent Developments

    On Friday, January 17th, 2025 – two days before the ban was to go into effect – the Supreme Court ruled against TikTok, holding that the “sell-or-ban” law was constitutional because it did not target TikTok for the content of the platform’s speech and was based on sufficient evidence of national security threats. Additionally, the Supreme Court reasoned that the law was not an outright ban because TikTok still had the opportunity to operate if the platform was sold to different management.

    On Saturday, January 18th, the app went offline for about twelve hours before returning with a message thanking then President-elect Trump for his efforts to restore the app. Before TikTok resumed service on Sunday the 19th, Trump had promised to issue an executive order delaying enforcement of the “sell-or-ban” law after his inauguration the following day. 

    Now, Trump has issued a 75-day delay in enforcing the TikTok ban, ordering that ByteDance either sell the app within that time or reach a deal in which 50% ownership of the app would be given to the United States.

  • Understanding the Debate on Fair Access to Mental Augmentation in Neurotechnology

    Neurotechnology is an area of technology that specifically applies to the monitoring, regulation, or enhancement of brain activity. As neurotechnologies advance, the once far-fetched idea that humans might leverage technology to augment their nervous system has become closer to fact than fiction. Mental augmentation encompasses any means by which people enhance their mental functions beyond what is necessary to maintain health. Although potentially useful, the application of mental augmentation technologies today presents challenges and controversy.

    Applications of Mental Augmentation: Medical vs. Recreational

    Neurotechnology devices serve either medical or recreational purposes. Medically, these devices treat mental health disorders, learning disabilities, and neurological conditions by stimulating the brain. Recreationally, they enhance learning and cognition or improve efficiency.

    Transcranial magnetic stimulation (TMS) and deep brain stimulation (DBS) are both FDA-regulated medical treatments. Although their primary purpose is treatment, they have also been shown to improve brain functions. TMS, primarily used to treat depression, improves cognitive functions such as episodic and working memory, and motor learning, with treatment costs ranging from $6,000 to $12,000. DBS, used to treat Parkinson’s disease, enhances learning and long-term memory, with an average procedure cost of $39,152. Candidates for DBS must have severe symptoms that cannot be managed with medication.

    Transcranial direct current stimulation (tDCS) is currently unregulated by the FDA and is considered a non-medical device. tDCS devices can be sold for “wellness” purposes and recreational use. tDCS increases neuronal plasticity, encouraging the formation of connections that reinforce learning. Research suggests that tDCS enhances cognitive and behavioral performance, and potentially improves language acquisition, math ability, and memory. tDCS devices are available online and cost anywhere from about $40 to around $500.

    Transcranial Direct Current Stimulation

    Given its easy accessibility for mental enhancement and recreational use, tDCS plays a central role in the debate on fair access to neurotechnologies. It is difficult to determine how many people use tDCS for mental enhancement rather than medical treatment, since the line between medical and recreational use is blurred. Users like Phil Doughan seek mental improvement exclusively, while others, like Kathie Kane-Willis, use the device to fix issues like brain fog, a medical symptom that is often subjective and difficult to measure. 

    tDCS is not widely used; however, one study suggests that brain stimulation could one day become mainstream, similar to the way people use caffeine to increase alertness. A study done by Pew Research found that nearly half of Americans (47%) say they would be at least somewhat excited about mental augmentation techniques that allow people to process information more quickly and accurately. 

    tDCS is not FDA approved, which means companies currently have the freedom to bring tDCS devices to the market with claims of treating medical conditions and enhancing brain function. Advertisements can be misleading as tDCS studies have not yet conclusively shown that the technique provides real benefits. One neuroscientist notes that the public adoption of tDCS is happening at a faster pace than related research. International organizations suggest that this under-researched and unregulated use of neurotechnologies entails unprecedented risks for human rights. 

    The NeuroRight to Fair Access to Mental Augmentation

    Amidst ethical concerns about neurotechnology, Dr. Rafael Yuste founded The Neurorights Foundation to advocate for human rights directives and ethical guidance on neurotechnological innovation. Concerned with the possible exacerbation of inequality between people who can and cannot afford neurotechnologies, professors have proposed a framework entitled The NeuroRight to Fair Access to Mental Augmentation. The framework states, “There should be established guidelines at both international and national levels regulating the use of mental enhancement neurotechnologies. These guidelines should be based on the principle of justice and guarantee equality of access.”

    Brazil and Chile have both enacted legislation to ensure equitable access to neurotechnology. Brazil’s Article 13-E of bill No. 522/2022 states that “The State shall take measures to ensure equitable access to advances in neurotechnology”. Similarly, Chile passed the “Neuroprotection Bill”, which establishes that “The State will guarantee the promotion and equitable access to advances in neurotechnology and neuroscience”. Chile’s “Neuroprotection Bill” faces criticism for its vague scope, limitations, and obligations, highlighting the need for more nuanced discussion of the issue. 

    The Central Debate

    Proponents of Fair Access to Mental Augmentation are concerned that cognitive enhancements may primarily benefit the wealthy due to high pricing, which may widen social, cultural, and economic divides. They suggest that the enhanced mental abilities afforded to those with the purchasing power to buy mental augmentation devices will further exacerbate wealth gaps. Moreover, proponents argue that the social polarization caused by augmentation technology would have knock-on consequences for a range of human rights, raising questions about how far a “neurotech divide” could set back equality and inclusion. They warn that augmentation devices require especially careful consideration when used in classrooms, workplaces, and other competitive environments where wealth differences are amplified. And even in the context of medical treatment, the benefits of neurotechnologies will not be financially accessible to everyone. Proponents of the NeuroRight to Fair Access also point out that if augmentation becomes a widespread practice, enhanced abilities may become a standard. This raises the challenge of respecting people’s will to not use neurotechnologies – an issue touched on in the NeuroRights framework. 

    Critics of Fair Access to Mental Augmentation question the feasibility of implementing the policy framework. If equal access to neuroenhancement is established in a way that makes states responsible for guaranteeing access to all, it would impose a fiscal burden on already-underfunded health systems that cannot provide access to more basic human needs. Critics also argue that the NeuroRights framework must be adapted to various economic, cultural, and social contexts before its implementation, since augmentation technology is not equally perceived nor equally accessible across the globe. For example, mental augmentation may go against religious precepts and morals where the modification of human nature is not viewed favorably. Moreover, enshrining access to mental augmentation as an international human right risks penalizing developing nations with less access to such technologies, which may widen gaps between wealthy and historically-exploited countries.

    Conclusion and Future Prospects

    Neurotechnologies have recently entered the market, increasing the availability of products for mental augmentation purposes. The global neurotech market is growing at a compound annual rate of 12% and is expected to reach $21 billion by 2026. This rapid pace of innovation suggests that regulation may already be lagging behind. To promote responsible and ethical use, it is crucial to engage in proactive and thoughtful discussions on how to regulate neurotechnologies effectively.

  • Pros and Cons of the Kids Online Safety Act

    The Kids Online Safety Act (KOSA) responds to the escalating youth mental health crisis and its ties to harmful online content and addictive social media algorithms. Given the rising rates of depression, anxiety, and loneliness among children in the U.S., KOSA aims to establish a legal framework that promotes a healthier online ecosystem for youth. 

    KOSA: An Introduction

    For clarity, it is essential to differentiate between two important technology policies: the Kids Online Safety Act (KOSA) and the Children’s Online Privacy Protection Act (COPPA), which are often considered together due to their concurrent passage in the Senate. This brief will focus specifically on KOSA. COPPA primarily addresses privacy through provisions like prohibiting the collection of personal data from children under 13 without parental consent, while KOSA takes a different approach to online safety that emphasizes the mental health impacts of social media use.

    In a time of increasing legislative partisanship, KOSA is a strongly bipartisan proposal with 72 co-sponsors and the support of President Biden. However, despite passing 91-3 in the Senate, KOSA now faces resistance from House GOP members over potential censorship issues, making it unlikely that the bill will advance in its current form. This opposition emphasizes the challenge of balancing online safety regulations and First Amendment rights.

    KOSA: The Youth Mental Health Crisis

    Research suggests that youth today struggle with mental illness at higher rates than past generations. From 2007 to 2018, rates of suicide among people ages 10 to 24 increased by nearly 60%. According to the CDC, three in five teen girls report feeling “persistently sad or hopeless,” and 30% have seriously contemplated suicide. One study shows that increased social media use is linked to higher levels of anxiety, depression, and self-doubt. Moreover, a study by the NIH found that the risk of depression rose by 13% for each one-hour increase in social media use among adolescents.

    Modern-day social media algorithms are intentionally addictive, promoting compulsive use. Congress defines compulsive use as any behavior driven by external stimuli that causes individuals to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression. Furthermore, algorithms often recommend inappropriate or harmful content. For example, as teens click on advertisements for weight loss products, their feeds can become dominated by such content, negatively impacting body image. One study reported that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.

    KOSA’s Major Provisions

    KOSA emerged in response to the research linking youth mental health struggles to social media use. The bill aims to protect children’s mental and emotional safety online by:

    • Requiring platforms to modify their content moderation policies in order to reduce major harms such as mental illness onset, sexual exploitation, and the sale of illicit drugs to minors on their platforms.
    • Giving minors the tools to restrict the collection and visibility of private information. 
    • Allowing minors to opt out of addictive algorithmic recommendations. 
    • Requiring platforms to default to the strongest safety settings for child users. 
    • Holding online platforms accountable through annual independent audits. 

    Arguments in Favor of KOSA

    Proponents of the Kids Online Safety Act (KOSA) argue that the bill is crucial for protecting youth from mental health challenges. Pediatricians across the nation have spoken out about the urgent need to create a healthier digital environment for children and adolescents. Additionally, the U.S. Surgeon General released an advisory stating social media carries a “profound risk of harm” to youth. The Surgeon General also called on Congress to require warning labels on social media platforms about their effects on young people’s lives, similar to those on cigarette packaging. This emphasis on mental health is a central aspect of the arguments in favor of KOSA.

    Autonomy is a key argument in the discussion surrounding KOSA as well. Addictive algorithms keep kids endlessly scrolling and seeking dopamine hits from follower engagement. There is currently no mechanism for opting out of recommendation algorithms. KOSA aims to change this by requiring social media platforms to provide parental controls and options to disable features like algorithmic recommendations. Supporters argue that this provision would provide parents and kids with more autonomy over their online experiences, allowing them to gain greater control over their usage times.

    Supporters also argue that KOSA is an important step toward holding companies accountable for the harm they cause. They emphasize that Big Tech companies have designed social media to maximize profit by exploiting children’s developmental vulnerabilities for commercial gain. After years of operating with few regulations, tech firms are receiving increasing scrutiny for alleged harms to young users, exposed by whistleblower revelations. Frances Haugen, a former Facebook employee, disclosed tens of thousands of pages of internal Facebook documents to Congress and the Securities and Exchange Commission showing that Facebook was aware of its platforms’ negative impact on teens’ mental health. Microsoft, Snap, and X have all endorsed KOSA, likely in recognition that profits can still be made while taking reasonable steps to protect children.

    Arguments Against KOSA

    Critics of KOSA express concerns about potential censorship of online content. On one hand, many conservatives argue that empowering the Federal Trade Commission (FTC) with censorship authority could lead to the suppression of legitimate speech. For example, if a Republican leads the FTC, content discussing LGBTQ+ lives, reproductive health, and climate change might be deemed harmful to youth, while a Democratic leader could censor discussions on automatic weapons, shootings, and religious viewpoints on the LGBTQ+ community. Critics fear KOSA could be enforced for political purposes.

    On the other hand, many liberal legislators are concerned with KOSA’s Duty of Care provision, which requires companies to “prevent and mitigate” harms to children such as bullying, violence, and negative mental health impacts. They worry that this provision could lead to the censorship of LGBTQ+ content and reproductive health content as companies over-filter content to avoid legal repercussions. In September 2022, the Heritage Foundation seemingly endorsed KOSA in an editorial, praising the Duty of Care provision’s potential to limit what they claim is Big Tech’s influence on children’s gender identities. The Heritage Foundation later expressed intentions to use similar bills to limit transgender content online, fueling concerns about KOSA’s potential dangers for the LGBTQ+ community.  

    The Duty of Care provision has been revised since its initial introduction. Originally, KOSA specified various mental health conditions that companies needed to mitigate, but after revisions, the emphasis shifted to preventing the promotion of inherently dangerous behaviors. Despite these changes, critics maintain that the provision remains too broad and vague, potentially leading to censorship of crucial information. Although KOSA includes a limiting principle stating that nothing in the Duty of Care will prevent a minor from deliberately searching for a specific type of content, companies may still censor content to avoid compliance issues.

    Another significant concern is that KOSA could be counterproductive, potentially increasing the risks of online harm to children. Critics argue that by restricting access to lawful speech on topics such as addiction and bullying, KOSA may hinder minors from finding supportive online communities and feeling comfortable in their identities. 

    Conclusion

    In conclusion, KOSA aims to address the rising youth mental health crisis by holding platforms accountable for their negative mental health impacts and enhancing parents’ and young users’ autonomy over their online experiences. While supporters believe the bill will create a safer digital environment, concerns about potential censorship and the implications of KOSA’s Duty of Care provision underscore the complexities of balancing safety with free speech. This balance presents a continuing challenge as lawmakers debate the future of the bill.

  • Pros and Cons of New York’s Regulation of Financial Institutions and Chatbots

    Chatbots, powered by artificial intelligence (AI), have become increasingly prevalent in the banking industry. The implementation of chatbots in banking increases customer satisfaction and loyalty by offering instant, round-the-clock support that can handle a large volume of customer inquiries promptly. Around 37% of the U.S. population engaged with a bank’s chatbot in 2022, and this number is expected to increase in the future. Goldman Sachs launched AI technology called ChatGS AI to enhance its customer support procedures and boost efficiency within its customer service systems. This move is part of a trend where many institutions are turning to chatbots as a more cost-effective solution compared to human customer service.

    Background  

    The evolving technological sophistication of chatbots in banking sparks questions about cybersecurity policies, such as the New York Department of Financial Services Cybersecurity Regulation (Regulation 500). The NYDFS Cybersecurity Regulation establishes cybersecurity requirements for all Covered Entities (financial institutions and financial services companies). It requires Covered Entities to develop and implement an effective cybersecurity program, assess their cybersecurity risk, and develop a plan to proactively address those risks.

    What are the Key Components of the NYDFS Cybersecurity Regulation (23 NYCRR 500)?

    The main goal of the regulation is to ensure comprehensive cybersecurity measures in financial institutions. This includes information security, access controls, business continuity planning, systems and network security, risk assessments, cybersecurity policies and procedures, third-party security, data retention policies, data security controls, detection of cybersecurity events, restoration of operations after an event, and third-party risk assessments. The regulation emphasizes the importance of aligning with industry best practices and ISO 27001 standards, while also requiring the use of qualified cybersecurity personnel, ongoing training and education, notification of cybersecurity events, and the implementation of multi-factor authentication.

    Benefits of Third-Party Regulations under 23 NYCRR 500

    Many financial institutions use third-party vendors to store customer data and provide AI technology, creating another gateway for attackers to infiltrate institutions’ networks through backdoors. This means third-party risk protections are crucial. By enforcing minimum regulations on vendors, financial institutions aim to enhance the security and integrity of sensitive data. High-profile incidents like the 2013 Target hack and the SolarWinds breach demonstrate the impact of such vulnerabilities.

    Moreover, third-party risk protections promote transparency and accountability in the vendor relationship. By clearly defining minimum security requirements and establishing a vendor risk assessment framework, financial institutions set expectations and hold vendors accountable for meeting those requirements. This fosters a culture of shared responsibility and ensures that vendors prioritize cybersecurity measures and continuously enhance their security posture.

    Furthermore, third-party risk protections support incident response preparedness. In the event of a cybersecurity incident, financial institutions need to restore normal operations promptly. By including third-party obligations in their incident response plans, organizations can outline the roles and responsibilities of vendors in responding to and recovering from cyber threats. This coordinated approach enhances incident response effectiveness and minimizes potential disruptions caused by third-party vulnerabilities.

    Challenges of Third-Party Regulations under 23 NYCRR 500

    Third-party regulations can present certain burdens and challenges. Financial institutions face an increased compliance burden as they are responsible for ensuring their third-party vendors adhere to the stringent cybersecurity requirements. Financial institutions must allocate additional resources, time, and effort to effectively monitor and assess vendor compliance, requiring ongoing assessments and validations of their security practices and controls. Additionally, the regulations may impose limitations on vendor selection, potentially reducing the pool of available vendors and limiting competition and innovation in the market.

    Utilizing third-party vendors in the development of chatbot technology has resulted in enhanced customer service, increased operational efficiency, and cost savings for financial institutions. By automating basic inquiries and transactions, chatbots handle routine tasks. Reports show that, compared to human-agent customer service models, chatbots deliver $8 billion per annum in cost savings, approximately $0.70 saved per customer interaction. Wells Fargo has utilized third-party vendors in the launch of Fargo, a new chatbot virtual assistant that uses Alphabet’s Google Cloud platform to process customers’ input and provide tailored responses. Additionally, U.S. Bank has introduced its Smart Assistant, exemplifying a new, growing reliance on chatbots in banking. However, this reliance also introduces complexities and costs related to implementing and managing third-party risk programs. These complexities involve hiring specialized personnel, investing in cybersecurity tools and technologies, and conducting regular assessments. These expenses can add up and impact the overall operational costs of financial institutions.
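
    As a rough sanity check on those figures, the back-of-the-envelope sketch below simply divides the reported annual savings by the reported savings per interaction; the $8 billion and $0.70 values come from the report cited above, and the implied interaction volume is only an inference from them.

    ```python
    # Back-of-the-envelope check of the reported chatbot savings figures.
    annual_savings_usd = 8_000_000_000   # reported industry-wide savings per year
    savings_per_interaction_usd = 0.70   # reported savings per customer interaction

    implied_interactions = annual_savings_usd / savings_per_interaction_usd
    print(f"Implied chatbot interactions per year: {implied_interactions:,.0f}")
    # -> roughly 11.4 billion interactions per year across the industry
    ```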

    Recent Advancements

    Several advancements in chatbots in banking can help financial institutions comply with the regulations set by the New York Department of Financial Services (NYDFS). Here are some notable advancements:

    • Enhanced Security Measures: Implementing robust data encryption protocols and secure data storage, and adhering to industry-standard cybersecurity practices, ensures compliance with NYDFS regulations and safeguards customer data (a minimal encryption sketch follows this list).
    • Natural Language Processing (NLP) and Machine Learning (ML) Improvements: Continual training of chatbot algorithms on real customer interactions and incorporating feedback loops enhances their understanding of customer queries, improving response accuracy and aligning with the NYDFS’s focus on reliable customer information.
    • Contextual Understanding and Personalization: Leveraging customer data and contextual information enables chatbots to provide personalized recommendations and tailored banking services, enhancing the customer experience and meeting the NYDFS’s emphasis on meeting customer needs.
    • Continuous Monitoring and Compliance Auditing: Regularly reviewing chatbot conversations and conducting compliance audits helps identify and rectify any compliance issues proactively, ensuring compliance with NYDFS regulations and accurate information provision.
    • Regular Updates and Compliance Training: Staying updated with NYDFS regulations, incorporating regulatory changes into chatbot processes, conducting compliance training, and maintaining up-to-date documentation demonstrates commitment to compliance and customer data protection.
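
    As an illustration of the first item above, here is a minimal sketch of encrypting chatbot transcript data at rest using the open-source cryptography library. The field names and inline key generation are hypothetical; an institution subject to 23 NYCRR 500 would manage keys in a hardware security module or managed key store and document the surrounding controls.

    ```python
    # Minimal sketch: symmetric encryption of chatbot transcript data at rest.
    # Assumes the third-party "cryptography" package is installed (pip install cryptography).
    from cryptography.fernet import Fernet

    # In production the key would come from a managed key store or HSM,
    # not be generated inline; this is only an illustration.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    transcript = b'{"customer_id": "12345", "message": "What is my account balance?"}'
    encrypted = cipher.encrypt(transcript)   # store the ciphertext, never the plaintext
    decrypted = cipher.decrypt(encrypted)    # decrypt only for authorized services

    assert decrypted == transcript
    ```
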
  • American Privacy Rights Act: Pros, Cons, and Impact on Consumer Data Protection

    Introduction

    The American Privacy Rights Act (APRA) is a new bill introduced in Spring 2024 that seeks to implement a nationwide set of consumer privacy laws. Currently, no national framework exists. A previous bill with similar goals, the American Data Privacy and Protection Act, failed to pass in 2022.

    The APRA would, among other things:

    • Restrict the data that large organizations (such as corporations, nonprofits, and third-party data brokers) would be able to collect about consumers 
    • Give consumers the ability to view and control any data being collected
    • Create a “private right of action”, giving consumers (rather than only the government) the ability to sue organizations that violate the laws introduced by the act
    • Prohibit data-collecting organizations from discriminating against customers based on their privacy decisions (for instance, by providing slower service to someone who opts out of data sharing)
    • Restrict the use of “dark patterns”, or design methods used to manipulate consumer behavior, such as making the data sharing opt-out button very small

    Support for the APRA

    Supporters of the APRA argue that comprehensive privacy legislation is necessary, and that this act is well suited to fill that role. A study conducted by the Pew Research Center found that roughly 70% of Americans feel confused and concerned about the use of their data by private companies and the government. Another Pew study found that roughly 60% of Americans read privacy policies before agreeing to them, and one third of that group understands very little to none of the content. Additionally, many websites have nonexistent or difficult-to-access privacy policies. The APRA would address these issues by requiring organizations to make easy-to-read privacy policies readily available to users. Advocates of the bill emphasize its likelihood of passage, in contrast to previous failed attempts at nationwide privacy legislation, given that APRA has bipartisan support in both the House and Senate. Privacy advocates and technology industry leaders praise the bill for its balance of consumer and business interests. They hold that APRA increases protections while creating a uniform national standard that will be easier for companies to comply with. 

    Criticism of the APRA: Going Too Far

    Criticism of the APRA generally comes from two angles, one of which is that it encroaches upon private sector revenue and autonomy. Some argue that there are benefits to collecting consumer data, such as contributions to health and energy efficiency research. Moreover, critics highlight that companies rely on ad data for revenue on platforms that are otherwise free to consumers; restrictions on data sharing might make this business model less sustainable. Others criticize APRA’s use of the vague term “dark patterns”, which they argue can lead to regulatory overreach. Critics also claim that the APRA would be expensive to implement and maintain. Private organizations covered by the act would have to spend money on compliance, which could disproportionately impact smaller businesses. Finally, opponents object to the inclusion of the private right of action, arguing that it will empower private litigants over government regulators, who may be better able to balance the interests of consumers and businesses.

    Criticism of the APRA: Not Doing Enough

    The other group of critics argues that the APRA does not go far enough to protect consumer data, and may actually force the regression of protections in states with already robust privacy laws. Some states, such as California and Illinois, have passed extensive data privacy acts that include protections that the APRA does not. For instance, unlike the APRA, the California Consumer Privacy Act (CCPA) protects data about sexual orientation and immigration status. Under the APRA, CCPA’s protections would be overridden and replaced with a less stringent national baseline. The head of the agency that enforces the CCPA criticizes the APRA’s limitations on state laws, asserting that “Congress should set a floor, not a ceiling”. 

    Conclusion

    While most agree that a clearly defined national data privacy bill would benefit consumers and businesses alike, many disagree over whether the APRA is the right solution. Although bipartisan support makes the APRA’s passage a realistic possibility, any privacy legislation will have to face the challenge of balancing the interests of private enterprise with consumer protection.

  • Understanding PADFA: The Pros and Cons of the Federal Government’s Approach to American Data Privacy

    A new law addressing the national security risks of the data brokerage industry took effect on June 23, 2024, following bipartisan approval earlier that year. The bill, known as the Protecting Americans’ Data from Foreign Adversaries Act or PADFAA, seeks to limit the transfer of sensitive American data to firms owned or controlled by Russia, China, North Korea, and Iran.

    PADFAA is the first piece of federal legislation to take aim at the data brokerage ecosystem, an industry that buys, sells, and analyzes online data for third-party usage. Furthermore, the bill represents growing bipartisan efforts to address data privacy at large. While advocates celebrate the narrow and targeted nature of the bill to address specific challenges in the data brokerage industry, some claim that the bill either goes too far or doesn’t go far enough. 

    What is the data brokerage industry? 

    The data brokerage industry encompasses the wide network of buyers, sellers, and contractors who collect, license, and share data from public and private sources online. The data broker industry generates $200 billion in annual revenue in the United States alone. 

    Data brokers collect information from both public and private sources. Companies may sell the data they collect on their platforms to data brokers, including user details, purchase history, and cookie information. Alternatively, data brokers can scrape the internet for publicly available information found on public sites and social media platforms. 

    Data collected online is then analyzed and aggregated to create packages of information tied to certain groups, such as their purchase habits, interests, health history, ideology, and identity. Even if a data broker doesn’t specifically collect personal information such as names and phone numbers, holistic data can still be used to identify individual users. 
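
    To make the re-identification point concrete, the small sketch below uses invented records with no names or phone numbers and shows how a handful of quasi-identifiers, such as ZIP code, birth year, and gender, can still single out individuals; the data and field names are purely hypothetical.

    ```python
    # Hypothetical illustration: "anonymous" records can still identify individuals
    # once a few quasi-identifiers are combined.
    from collections import Counter

    records = [
        {"zip": "10001", "birth_year": 1985, "gender": "F", "purchases": ["prenatal vitamins"]},
        {"zip": "10001", "birth_year": 1985, "gender": "M", "purchases": ["running shoes"]},
        {"zip": "10002", "birth_year": 1990, "gender": "F", "purchases": ["textbooks"]},
        {"zip": "10001", "birth_year": 1972, "gender": "F", "purchases": ["gardening tools"]},
    ]

    # Count how many records share each (zip, birth_year, gender) combination.
    combo_counts = Counter((r["zip"], r["birth_year"], r["gender"]) for r in records)

    # Any combination held by exactly one record is effectively identifying.
    unique = sum(1 for count in combo_counts.values() if count == 1)
    print(f"{unique} of {len(records)} records are pinned down by ZIP + birth year + gender alone")
    ```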

    The demand for packaged data is wide-reaching across sectors. Packaged data can be used for advertising, fraud detection, risk assessment, and populating people-search sites: websites where users can search for an individual’s personal details given only their name. Potential buyers include banks, credit agencies, insurance firms, internet service providers, loan companies, advertisers, and law enforcement agencies.  A Duke University study shows that data buyers can acquire access with varying levels of vetting, suggesting that nefarious actors can often buy access to data for dangerous purposes.

    Arguments in Favor of PADFAA

    Proponents of PADFAA praise the bill for its comprehensive approach to data privacy. Six months before PADFAA came into effect, President Biden issued an executive order to address similar concerns about adversarial countries acquiring sensitive American data. PADFAA not only enshrines the executive order’s provisions into law, but expands its scope from government-affiliated Americans to all Americans. Additionally, the bill applies to all data transactions, both big and small, which supporters argue will better protect the average American citizen.

    Supporters also argue that PADFAA creates an even national standard for data privacy, replacing a patchwork web of state laws and more niche federal laws. Through its comprehensive definition of “sensitive data,” PADFAA also creates a legal precedent that can be used as the basis for future legislation in the area of data brokerage. Under the bill, sensitive data includes geolocation data, passport information, social security and driver’s license numbers, bank details, biometric and genetic information, private communication, personal identities such as age, gender, and race, and online activity. Proponents argue the breadth of this definition will make it difficult for data brokers to exploit loopholes in existing data privacy laws. 

    Arguments Against PADFAA

    Critics argue that the law’s focus on third-party data brokers, who collect and analyze data for sale, leaves much of the industry unregulated. PADFAA’s definition of a “data broker” does not include first-party data collectors, allowing apps, social media platforms, and healthcare services to sell American data directly to companies owned or controlled by Russia, China, North Korea, or Iran.

    Additionally, the law does not prohibit selling American data to the four listed countries if the seller does not reside in one of those countries. Data privacy advocates stress that under PADFAA, if a company licensed outside of Russia, China, North Korea, or Iran acquires American data, it is still permissible for that company to sell American data to any of the four countries. 

    Opponents also claim that PADFAA will overburden the Federal Trade Commission (FTC). The FTC has long specialized in consumer privacy and data protection. However, critics argue that the FTC does not have the capacity to enforce foreign policy. Mainly, the FTC lacks the security clearances necessary for obtaining critical intelligence information about adversarial attempts to acquire American data. Critics also argue that the FTC’s privacy division is underfunded and overstretched, ill-equipped for the task. 

    On the other hand, some argue that PADFAA is an unnecessary addition to an already-complex legal landscape concerning the data broker industry. Federal measures like the Fair Credit Reporting Act (FCRA) and state laws such as those in Vermont and  California take steps to protect consumer data from harmful use, and critics of PADFAA argue that those existing measures provide adequate protection. 

    Conclusion

    PADFAA marks the first step in regulating the data broker industry and protecting against its harmful effects. However, the law does not fully encompass the scale of the issues raised by privacy advocates, such as discretionary data collection, predatory and dangerous uses of individual information, and the lack of transparency in the industry. Nonetheless, the bipartisan support for a policy measure of this kind makes the path for future legislation less opaque.

  • Understanding the Debate on Neurorights and Personal Identity in Neurotechnology

    In recent years, significant advancements in neurotechnology promise to profoundly impact healthcare and human capabilities. As technologies such as Brain-Computer Interfaces (BCIs), Transcranial Direct-Current Stimulation (tDCS), and Deep Brain Stimulation (DBS) evolve, they can potentially blur the boundary between a person’s consciousness and external technological influences. 

    Examples of Neurotechnologies

    Conditions such as Parkinson’s, Alzheimer’s, and epilepsy involve disruptions in normal patterns of neuronal activity. Neurotechnology offers a potential solution by monitoring brain activity and selectively stimulating the affected brain regions. This technology can help restore lost functions by modulating activity in the disrupted circuits, improving patients’ quality of life. Non-invasive methods use external devices for stimulation, whereas invasive methods involve surgically implanted electrodes.

    BCIs are devices that enable control of computers through thought. In January 2024, Neuralink, founded by Elon Musk, implanted an invasive BCI into Noland Arbaugh, a quadriplegic participant in the PRIME study. The procedure aimed to restore his autonomy after a spinal cord injury. After surgery, he was able to control his laptop cursor for the first time since his injury, allowing him to reconnect with friends and family and regain independence. Neuralink aims to implant chips in 10 individuals by the end of 2024, pending FDA approval. 

    DBS involves implanting a small device under the skin near the collarbone, with wires reaching the brain to deliver mild electrical currents. This technology treats medical conditions like Parkinson’s disease and epilepsy. According to the Cleveland Clinic, as of 2019, experts estimated that about 160,000 people have had a DBS procedure since the 1980s and that 12,000 procedures happen each year.  

    tDCS, a non-invasive technique, applies low electric currents to the scalp. tDCS devices are being explored to treat depression, schizophrenia, aphasia, chronic pain, and other medical conditions. tDCS is used for non-medical applications as well, including accelerated learning, focus, relaxation, and meditation. tDCS devices are currently available online, ranging in cost from about $40 to $500.

    It is difficult to find a number that encompasses how many people use neuromodulation devices; however, the potential treatment population is vast, including the millions affected by epilepsy, migraine, Parkinson’s disease, urinary incontinence, and other medical conditions. The global neurotech market is growing at a compound annual rate of 12% and is expected to reach $21 billion by 2026.

    Neurotechnology Regulation

    The FDA regulates the implementation of neurological devices, classifying them by their degree of risk and forming the pathways necessary to bring the device to the market. In general, for high-risk devices, companies will get an Investigational Device Exemption to clinically test their devices by proving that the benefits justify the risks. The gathering of clinical data is a key step in supporting pre-market approval. BCIs are still in the clinical trial period of gaining FDA approval. DBS technologies have been gaining FDA approval to treat different medical conditions since 1997. tDCS is FDA-cleared, which is a lower standard to be met than FDA approval due to lower risk compared to other neurostimulation methods.

    Neuroright to Personal Identity

    Neurotechnologies have the potential to alter perception, behavior, emotion, cognition, and memory, reshaping identity and notions of “humanness”. Technologies like BCIs, tDCS, and DBS use electrical impulses to influence brain activity. This can lead to changes in emotional and behavioral responses. For instance, a study on DBS found that stimulating the subthalamic nucleus, a brain area involved in cognitive and motivational functions, led to increased positive mood and reduced anxiety in participants. Depending on how invasive the devices are and the part of the brain they target, effects on mental processes vary. While neurotechnology holds the potential for positive therapeutic benefits, the possibility of changes to human behavior has raised concerns. 

    At Columbia University, academic leaders united to discuss the ethical concerns of neurotechnology. This discussion led to a new human rights framework called “Neurorights”, reflecting the consensus that advances in neurotechnology outpace global, national, and corporate governance. The 5 Neurorights proposed by the Neurorights Foundation include the right to Personal Identity. The foundation writes, “Boundaries must be developed to prohibit technology from disrupting the sense of self. When neurotechnology connects individuals with digital networks, it could blur the line between a person’s consciousness and external technological inputs.”

    The Neurorights Foundation, founded by Dr. Rafael Yuste, worked with the Senate of the Republic of Chile in 2021 to pass a Neurorights law and to plan a constitutional amendment. This development made Chile the world’s first country to have legislation protecting personal identity, free will, and mental privacy with respect to emergent neurotechnology.

    The Main Debates

    Proponents of Neurorights argue that neurotechnology can infringe on personal identity by causing unexpected changes to personality, identity, and decision-making, as research has shown. In a 2016 study, a man using a brain stimulator to treat his depression for 7 years began to wonder whether the way he interacted with others was due to the device, stating “It blurs to the point where I’m not sure… frankly, who I am.” In another study, Dr. Yuste realized that by controlling specific brain circuits, scientists could manipulate a mouse’s experience, including its behaviors, emotions, awareness, perception, and memories. Yuste stated, “The brain works the same in the mouse and the human, and whatever we can do to the mouse today, we can do to the human tomorrow.”

    Opponents of Neurorights question the sophistication of neurotechnology and its ability to cause widespread human rights concerns, seeing altering personal identity as something far in the future. Additionally, opponents draw attention to the concern that depending on the definition of identity, the Neuroright to Personal Identity may imply prohibiting neurotechnologies in general. This would significantly slow down important scientific progress in the field of neurotechnology. 

    Opponents of Neurorights also argue that considering existing legislation, Neurorights are unnecessary, and that passing additional human rights could be harmful. Human rights are powerful tools transforming the lives of billions of people. One central worry in the debate is that the inflation of rights may result in their devaluation. Human rights could lose their distinction, significance, and effectiveness if the passing of legislation is not considered cautiously. Proponents of Neurorights take a reformist position arguing that Neurorights must go beyond current fundamental human rights to effectively protect the right to personal identity, seeing this as a new and unique issue requiring additional legislation. 

    Conflicting ideas of “personal identity” add more complexity to the argument. Some view the right to use neurotechnology as an expression of personal identity, while others see preventing brain manipulation as a way to preserve and promote the freedom of the human mind.

    Conclusion

    In summary, the rapid advancement of neurotechnology, including BCIs, tDCS, and DBS, presents ethical dilemmas regarding personal identity. As research progresses and investments increase, these technologies could impact various aspects of human life from medical treatments to personal enhancement. The Neurorights Foundation actively works to incorporate Neurorights into international human rights law, national legal and regulatory frameworks, and ethical guidelines. The foundation has made developments in Chile, Brazil, Mexico, the United Nations, and Spain, and strives to incorporate Neurorights in the United States. As of now, at least two US states are considering legislation to protect private thoughts, reflecting a growing awareness of this issue.

  • Understanding The Debate on Facial Recognition Technology in Policing: Pros, Cons, and Privacy Concerns

    Introduction

    A facial image is like a fingerprint: a unique piece of human data that can identify an individual or connect them to a crime. Law enforcement uses facial recognition to identify suspects, monitor large crowds, and ensure public safety.

    Facial recognition software is used by local, state, and federal law enforcement, but its adoption is uneven. Some cities, like San Francisco and Boston, have banned its use for law enforcement, while others have embraced it. Despite this, the technology has been instrumental in solving cold cases, tracking suspects, and finding missing persons, and is considered a game changer by some in law enforcement.

    Facial recognition software can be integrated with existing police databases, including mugshots and driver’s license records. Private companies like Clearview AI and Amazon’s Rekognition also provide law enforcement with databases containing information gathered from the internet. 

    Here’s how police use facial recognition technology:

    1. Law enforcement collects a snapshot of a suspect drawn from traditional investigative methods, such as surveillance footage or independent intelligence. 
    2. They then input the image into the facial recognition software database to search for potential matches. 
    3. The system returns a series of similar facial images ranked by the software’s algorithm, along with personal information such as name, address, phone number, and social media presence (a simplified sketch of this ranking step appears after the list). 
    4. Law enforcement analyzes the results and gathers information about a potential suspect, which is later confirmed through additional police work.
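
    The matching and ranking step can be sketched as follows. This is a simplified, hypothetical illustration: real systems compute face embeddings with trained recognition models and search databases of millions of images, and every name, vector, and threshold below is invented.

    ```python
    # Hypothetical sketch of a facial recognition search: compare a probe image's
    # face embedding against a database and return candidates ranked by similarity.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    # Invented embeddings; a real system would produce these with a face-recognition model.
    database = {
        "record_001": [0.12, 0.98, 0.33],
        "record_002": [0.11, 0.95, 0.40],
        "record_003": [0.90, 0.10, 0.05],
    }

    def search(probe_embedding, threshold=0.9, max_results=10):
        """Return candidate records scoring above the threshold, best match first."""
        scored = sorted(
            ((rec_id, cosine_similarity(probe_embedding, emb)) for rec_id, emb in database.items()),
            key=lambda pair: pair[1],
            reverse=True,
        )
        return [(rec_id, score) for rec_id, score in scored if score >= threshold][:max_results]

    print(search([0.10, 0.97, 0.35]))   # ranked candidate "leads" for investigators to vet
    ```

    Lowering the threshold returns more candidate leads at the cost of more false matches, a tradeoff that becomes important in the accuracy discussion below.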

    The exact number of law enforcement agencies using facial recognition software is difficult to know.

    Much like other investigative techniques, law enforcement tends to keep the practice of facial recognition identification out of the public eye to protect ongoing investigations. Furthermore, facial recognition technology is typically not presented as evidence in court proceedings, making it hard to track how frequently the technology is used in criminal prosecutions.

    However, studies conducted on facial recognition and law enforcement give a broad understanding of the scope and scale of this debate. A 2017 study conducted by Georgetown Law’s Center on Privacy & Technology estimates that 3,947 out of roughly 15,388 state and local law enforcement agencies in 2013, or one in four, “can run face recognition searches of their own databases, run those searches on another agency’s face recognition system, or have the option to access such a system.” Furthermore, a 2021 study from the Government Accountability Office (GAO) found that 42 federal agencies used facial recognition technology that they either owned or that was provided by another agency or company.

    Supporters of this technology celebrate the use of facial recognition to solve crimes and find suspects faster than ever before. 

    “The obvious effect of [this] technology is that ‘wow’ factor,” said Terrance Liu, vice president of research at Clearview AI, a leading service provider of facial recognition software to law enforcement. “You put any photo in there, as long as it’s not a very low-quality photo, and it will find matches ranked from most likely to ones that are similar in a short second.”

    Before facial recognition technology, identifying suspects caught on surveillance cameras was difficult, especially without substantial leads. Law enforcement argues that this technology can help investigators develop and pursue leads at faster rates. 

    How accurate are the results produced by this software? 

    Due to the advances in processing power and data availability in recent years, facial recognition technology is more accurate than it was ten years ago, according to a study conducted by The National Institute of Standards and Technology (NIST). 

    However, research conducted by Joy Buolamwini at the MIT Media Lab demonstrates that while some facial recognition software boasts more than 90% accuracy, this number can be misleading. When broken down into demographic categories, the technology is 11.8% – 19.2% less accurate when matching faces of color. Critics argue that this reliability gap endangers people of color, making them more likely to be misidentified by the technology. After the initial release of the study, the researchers noted that IBM and Microsoft were able to reduce the accuracy differentials across specific demographics, indicating that with more care and attention when crafting these technologies, adverse effects like these can be prevented.

    Image clarity plays a large role in determining the accuracy of a match. A 2024 study from NIST found that matching errors are “in large part attributable to long-run aging, facial injury, and poor image quality.” Furthermore, when the technology is tested against a real-world venue, such as a sports stadium, NIST found that the accuracy ranged between 36% and 87%, depending on the camera placement. However, as low-cost, high-resolution cameras become more widely available, researchers suggest the technology will improve. 

    Law enforcement emphasizes that because facial recognition cannot be used as probable cause, investigators must use traditional investigative measures before making an arrest, safeguarding against misuse of the technology. However, a study conducted by Georgetown Law’s Center on Privacy & Technology says that despite the assurance, there is evidence that facial recognition technology has been used as the primary basis for arrest. Publicized misidentification cases, such as that of Porcha Woodruff, a Black woman who was eight months pregnant when wrongfully arrested for carjacking and armed robbery based on a false match from the police’s facial recognition software, reaffirm the belief that police can use matches in a facial recognition return to make an arrest and that the technology is less reliable for faces of color.

    Opponents argue that law enforcement is not doing enough to ensure that their systems are accurate and reliable. This is in part due to how law enforcement uses facial recognition software, and for what purpose. Facial recognition software allows users to adjust the confidence scores, or reliability, of the image returns. Images with high confidence scores are more accurate but produce fewer returns. In some cases, law enforcement might use a lower confidence score to generate as many leads as possible. 

    For example, the ACLU of Northern California captured media attention with an investigative study that found that Amazon’s Rekognition software falsely matched 28 members of Congress with mugshots, 40% of whom were congresspeople of color. The study used Rekognition’s default confidence threshold of 80%, which is considered relatively low, to generate the matches. However, in response to the study, Amazon advised users that a 95% confidence threshold is recommended when matching human faces.
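
    To illustrate how such a threshold parameter works in practice, the sketch below uses the AWS SDK for Python (boto3) and Rekognition’s face-search API; the collection name, S3 bucket, and file name are placeholders, and the 80% and 95% values simply mirror the figures discussed above.

    ```python
    # Minimal sketch: the same face search run at two confidence thresholds.
    # Assumes boto3 is installed, AWS credentials are configured, and a face
    # collection named "mugshot-collection" already exists (placeholder names).
    import boto3

    rekognition = boto3.client("rekognition")

    def search_collection(bucket, photo, threshold):
        response = rekognition.search_faces_by_image(
            CollectionId="mugshot-collection",
            Image={"S3Object": {"Bucket": bucket, "Name": photo}},
            FaceMatchThreshold=threshold,   # minimum similarity required to count as a match
            MaxFaces=50,
        )
        return response["FaceMatches"]

    # A lower threshold yields more candidate "leads" but more false matches;
    # a higher threshold yields fewer, more reliable matches.
    loose_matches = search_collection("probe-images", "suspect.jpg", threshold=80)
    strict_matches = search_collection("probe-images", "suspect.jpg", threshold=95)
    print(len(loose_matches), "matches at 80% vs", len(strict_matches), "at 95%")
    ```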

    Both proponents and critics advocate for comprehensive training to help law enforcement navigate the software and protect against misuse. A GAO study of seven federal agencies using facial recognition technology found that all seven agencies used the software without prior training. After the study, DHS and DOJ concurred with the recommendations and took steps to rectify the issue.

    Civil liberties advocates and policy groups argue that without regulation and transparency in this area, it is hard to ensure that the systems are used correctly and in good faith. 

    Privacy Concerns and Freedom of Assembly 

    Social media companies have raised concerns about how the information hosted on their platforms is used in facial recognition software. Facial recognition software is powered by algorithms that scrape the internet for facial images and personal information, which are then used to generate returns for the software. Facebook, YouTube, and Twitter are among the largest companies to speak out against the practice, sending cease-and-desist letters. However, legal precedent established in a 2019 ruling, hiQ Labs v. LinkedIn, allows third parties to harvest information that is publicly available on the internet.

    Facial recognition technology can be used to monitor large crowds and events, which is commonplace in airports, sports venues, and casinos. Furthermore, law enforcement is known to have used facial recognition software to find individuals present at the January 6th Insurrection and nationwide Black Lives Matter protests. Law enforcement argues that surveilling large crowds with this technology can help protect the public from unlawful actors, and help catch suspects quickly. However, privacy and civil liberties activists worry about the impact of surveillance on the freedom of assembly.

    Regulatory Landscape and Conclusions 

    In 2023, Senator Ed Markey (D-MA) reintroduced a bill to place a moratorium on the use of facial recognition technology by local, state, and federal entities, including law enforcement, which has yet to progress through Congress. However, states like Maine and California have enacted laws that address some of the challenges presented by the technology, along with a patchwork of other local laws across the country.

    Critics continue to argue that a lack of transparency and accountability among law enforcement drives uncertainty in this area. The ACLU is currently suing the FBI, DEA, ICE, and Customs and Border Protection to turn over all records regarding facial recognition technology usage. However, proponents argue that the benefits outweigh the concerns and that the technology is a useful tool for law enforcement.

  • Pros and Cons of Cybersecurity Regulation

    Pros and Cons of Cybersecurity Regulation

    Cybersecurity is the practice of protecting online networks, systems, and information from cyber attacks. Cybersecurity regulation involves policies that mandate specific cybersecurity strategies in both the private and public sectors. As individuals and organizations rely increasingly on digital systems and networks, cyber attacks have become more common and more damaging. As a result, the federal government’s role in regulating cybersecurity has become a topic of discussion and debate.

    Advocates of heightened federal cybersecurity regulations support two main arguments:

    1. It is critical to protect national security. Cyber attacks increasingly target critical infrastructure such as pipelines and power grids, exposing national security vulnerabilities. Because so much of US critical infrastructure lies in the private sector, it is becoming increasingly important to protect private companies with federally mandated cybersecurity guidelines. Government regulation can help in several ways: lowering the barriers to cyber risk information sharing can promote a better understanding of the cyber threat landscape and lead to improved protections, while federally mandated liability provisions can incentivize businesses to better secure their systems against cyberattacks.
    1. Public-private partnerships in cybersecurity are effective and could benefit from being federally mandated. The federal government has a better grasp of cyber threats thanks to its intelligence capabilities, but private companies often have more advanced cybersecurity capabilities. Combining these strengths yields the most effective protections, as companies can benefit greatly from the federal government’s surveillance, forecasting, and notification of cyber threats. The EU pioneered this approach with its public-private partnership (PPP) on cybersecurity, established in 2016.

    Critics of federal cybersecurity regulations argue the following:

    1. The government should be limited in its access to private information. The privacy risks that arise when sharing cybersecurity information are not worth the tradeoff of better cybersecurity regulation. The American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF) have stated that sharing cybersecurity-related information with the government introduces serious privacy concerns, thereby infringing upon the privacy rights of citizens. These concerns mainly involve the sharing and dissemination of personally identifiable information (PII) throughout the government, which raises further questions about how that data will be used and who can access it. Additionally, some cybersecurity professionals and technology companies have argued that sharing private consumer information with the government violates individual privacy rights and that introducing these risks is not worth the limited benefit of information sharing.
    1. Mandating cybersecurity guidelines can inhibit companies. Threats of liability can stifle innovation: ensuring that software products adhere to federally mandated cybersecurity standards adds costly steps to product development. Opponents of mandatory regulations further argue that demonstrating compliance could reveal trade secrets and make products less competitive in the market. Some also argue that federal cybersecurity mandates may actually impede businesses’ existing cybersecurity measures by forcing them to adapt to government requirements.

    Currently, there is no comprehensive federal cybersecurity regulation, yet recent developments suggest that such regulation may be coming. In March 2022, President Biden signed the Cyber Incident Reporting for Critical Infrastructure Act into law, which requires certain critical infrastructure entities to report cyber incidents to the Cybersecurity and Infrastructure Security Agency (CISA). In March 2023, the Biden-Harris administration announced a new national cybersecurity strategy, with an emphasis on holding companies liable for failing to secure the systems and software they provide. While the specific policies that will follow remain unclear, the announcement represents a major step toward more comprehensive federal cybersecurity regulation.

  • Pros and Cons of the 2015 Cybersecurity Information Sharing Act

    Pros and Cons of the 2015 Cybersecurity Information Sharing Act

    At the end of 2015, the Cybersecurity Information Sharing Act (CISA) was signed into law by President Obama as part of a larger omnibus spending bill. In the years prior, the US had suffered several major cyberattacks, including the 2013 Target Corp data breach that leaked the private information of 110 million people and the 2014 cyberattack on the United States’ Office of Personnel Management that affected 22.1 million Americans. In 2015 alone, multiple major cyberattacks leaked the information of 300 million people and caused roughly $1 billion in damages. Recognizing the need for stronger cybersecurity protections, Congress passed CISA with bipartisan support, though the law remains controversial. Broadly, the act allows cybersecurity information sharing between private and public entities in the interest of national security; a key provision is that information sharing with the government is entirely voluntary.

    Advocates of CISA support two main arguments:

    1. It is critical to protect private data. Given the cyber environment leading up to the passage of CISA, it was clear that cyber criminals had begun using increasingly sophisticated tactics. In the early months of 2015, the Department of Defense had begun advancing and streamlining its cyber capabilities, and some cybersecurity proponents argued that the private sector should follow its lead. CISA thus represents an attempt to develop more capable defenses and responses to cyber incidents in order to protect private information in the United States.
    1. It is important to develop public-private cooperation in cybersecurity. Neither private companies nor the federal government alone possess the capabilities required to protect critical infrastructure and data from cyberattacks. Public-private cooperation provides a cost-effective and dynamic approach to cybersecurity, and advocates have argued that the US should take advantage of such a model. CISA allows the Department of Homeland Security (DHS) to receive cyber information (cyberattack indicators, malicious code samples, etc.) from private organizations, integrate that data, and provide comprehensive defense strategies for all to use. If one company discovers signs of an attack, it can send that information to DHS, and a warning can be distributed to other companies within minutes (see the illustrative sketch below).
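
    As a purely illustrative aside, the kind of machine-readable “cyberattack indicator” envisioned here is typically expressed in the open STIX format used by DHS’s Automated Indicator Sharing program. The sketch below builds one such indicator with the open-source stix2 Python library; the IP address, names, and timestamp are hypothetical placeholders, not real threat data.

```python
# Illustrative sketch only: a machine-readable threat indicator in STIX 2.1.
# Requires the open-source "stix2" library (pip install stix2). The IP address
# below comes from a documentation range and is a hypothetical placeholder.
from stix2 import Indicator

indicator = Indicator(
    name="Outbound traffic to suspected command-and-control server",
    description="Observed while investigating a phishing incident.",
    pattern="[ipv4-addr:value = '203.0.113.42']",
    pattern_type="stix",
    valid_from="2015-12-18T00:00:00Z",
)

# The serialized JSON is the kind of artifact a company could forward to DHS,
# which could then redistribute it to other participating organizations.
print(indicator.serialize(pretty=True))
```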

    Critics of CISA argue the following:

    1. CISA does not properly control how shared information can be used. Those against CISA argue that once data is shared with the federal government, there are no provisions ensuring that it is used only for cybersecurity-related purposes. Privacy advocates such as the Electronic Frontier Foundation say that CISA shifts control away from DHS and allows other government entities to access shared information, creating an environment conducive to over-sharing and weakening oversight of sensitive data. Other critics warn that such practices could lead to a surveillance state in which the government conducts unauthorized searches using data collected via CISA.
    1. The government is not capable of rapidly processing cyber information. Some opponents argue that the government is not equipped to deal with the fast-paced nature of cyberattacks: cyber criminals do not need consensus decisions to organize their attacks, while the government cannot move at such speed. They also argue that private companies already engage in extensive information sharing, and that adding the government to these frameworks only slows them down. Finally, they say the government already has more data than it can process, so additional information is of little use.

    In the years since the passage of the Cybersecurity Information Sharing Act, cyberattacks have remained an ever-present threat, as exemplified by the attacks on Colonial Pipeline in 2021 and Uber in 2022. Accordingly, CISA has undergone multiple revisions since 2015 in an attempt to improve its efficacy and address privacy concerns. The law has been effective in incentivizing public-private information sharing, yet adjustments are still needed to improve the quality of the data being shared.