Author: Olivia Scott

  • Pros and Cons of the Behavioral Health Information Technology Coordination Act


    Despite an unprecedented demand for mental health and substance use services in the U.S., psychiatric hospitals utilize electronic health records (EHRs) at significantly lower rates than general medical practices. EHRs are electronic versions of patient health profiles and include information such as relevant demographics, past diagnoses and treatments, lab report data, and vaccination history. EHRs are nearly ubiquitous in general medicine due to their ability to facilitate coordination between providers and reduce duplication in testing and treatment. The Behavioral Health Information Technology (BHIT) Coordination Act of 2023 aims to develop standards for mental health EHRs, promoting their adoption nationwide. If passed, the bill would allocate $20 million annually in grant funding for mental health providers over five years beginning in 2024.

    The Problem

    The U.S. is facing a growing mental health and substance use crisis, with 21% of adults experiencing mental illness and 15% affected by substance use disorders. The pandemic exacerbated this issue, increasing symptoms of anxiety and overdose deaths, yet the availability of mental health services has not fully rebounded. In response, experts emphasize the urgent need for technology infrastructure to support behavioral healthcare and address these escalating issues.

    Research indicates a significant gap in EHR adoption between general and psychiatric hospitals. An analysis of American Hospital Association survey data from 2019 and 2021 found that 86% of general acute care hospitals had adopted a 2015 Edition certified EHR, compared to only 67% of psychiatric hospitals. Outdated privacy laws that require providers to segregate mental health records from other health records, along with inconsistent state regulations, create challenges for mental health record sharing. According to the National Institutes of Health (NIH), privacy and disclosure laws for mental health records vary widely from state to state. Some states, like Massachusetts and Colorado, require strict consent procedures to share mental health records, while others, like Kansas and Mississippi, allow broader disclosures without patient consent.

    While the HITECH Act of 2009 allocated federal funds to incentivize the use of EHRs in healthcare systems, behavioral health systems were excluded from these incentive payments. Researchers believe this omission was likely due to the difficulty of reconciling national standards with the patchwork of differing state-level mental health privacy laws. 

    The Plan

    Behavioral health integration is becoming an increasingly high priority for both the healthcare industry and the federal government. Leaders at the Office of the National Coordinator for Health Information Technology (ONC) and the Substance Abuse and Mental Health Services Administration (SAMHSA) are looking to revamp EHR systems to be friendlier to behavioral health needs. Beginning in FY25, the BHIT Coordination Act aims to reach this goal by providing $20 million a year in grant funding. Specifically, under the Act, the ONC must award grants to behavioral health care providers, including physicians, psychologists, and social workers, to support integration and coordination of services. These grants can be used to purchase or upgrade technology to meet specified federal certification standards for health information technology.

    The Pros 

    Proponents argue that enhanced adoption of EHRs under the BHIT Coordination Act can significantly strengthen behavioral healthcare by improving coordination and expanding access to resources for patients. These systems promote better integration of mental health and addiction treatment, improving the quality of behavioral healthcare overall. Considering the increasing oversight of the behavioral health industry, including financial penalties for underperformance and underreporting, proponents say EHR technology can help create more accountable care models.

    Supporters argue that the integration of services through EHR systems can also bridge the gap between physical and behavioral health, enabling a “no wrong door” approach. This ensures that no matter how patients enter the healthcare system, they will have access to all available services. As Senator Cortez Masto, a strong supporter of the Act, stated, “Mental health is just as important as physical health, and it is essential that behavioral health care providers have the same access to the technology and electronic health records that other practices utilize daily.”

    Proponents also point to research that suggests implementing EHRs helps improve patient safety by reducing errors and streamlining care processes. They hold that coordinated care improves efficiency across the system, which can lead to better outcomes. A study of more than 90 clinical trials found that collaborative care improves access to mental health services and is more effective and cost-efficient than standard treatments for anxiety and depression. Proponents say the BHIT Coordination Act could benefit psychiatric hospitals and residential treatment centers especially, which are crucial parts of the mental healthcare system.

    Another potential benefit of the BHIT Coordination Act is that health IT data can be used to address clinical priorities, improve workflows, and provide technical information that helps better integrate services across behavioral health settings. Proponents emphasize that the data organization capabilities that come with EHRs can be valuable in identifying problems and improving healthcare delivery.

    The Cons

    The implementation of EHRs and IT tools in behavioral health faces significant privacy and security challenges. As noted by the NIH, while these tools offer great potential to enhance behavioral health, they also create risks, particularly with sensitive data like narrative records, which include personal histories and psychiatric diagnoses. This information, when exposed to breaches or misinterpretation, can have serious consequences for both patients and clinicians. If public health authorities disclose intimate information, individuals may suffer embarrassment, stigma, and discrimination in employment, insurance, and government programs. Although HIPAA privacy and security rules have been applied to protect psychotherapy notes and other behavioral health information, protecting this data remains a complex issue.

    Keeping records secure is a challenge that doctors, public health officials, and regulators are still working to fully address. Firewalls, antivirus software, and intrusion detection systems are needed to protect data. Employees must also follow strict rules to maintain privacy, such as not sharing their EHR login information, always logging off, and using their own IDs to access patient records. Critics of increasing EHR use in mental healthcare argue that the great lengths providers must go to in order to keep patient information confidential underscore the risk of sensitive data leaks.

    Distrust of EHRs and health IT also stands in the way of integrating these technologies into behavioral health settings. Concerns about unauthorized access are particularly prominent. According to one study, more than half of adults over 30 report being either “very” or “somewhat” concerned that an unauthorized person may access their records. Another study, published by the NIH, found that a majority of mental health clinicians expressed concern over privacy issues that arose after the adoption of EHRs, with 63% expressing low willingness to record confidential information in an EHR and 83% preferring that the EHR system be modified to limit access to their patients’ psychiatric records. This distrust can lead patients to avoid clinical tests and treatments, withdraw from research, or provide inaccurate or incomplete health information.

    The lack of clarity around health IT standards further complicates EHR implementation. There are still many unknowns regarding how to integrate EHRs in behavioral health settings. Although the BHIT Coordination Act plans to invest in developing EHRs and necessary equipment to enable the exchange of electronic health information, there is a lack of clear guidelines on how to fulfill these goals while ensuring the IT tools meet the needs of behavioral health practices.  

    Conclusion 

    The BHIT Coordination Act aims to address the significant gap in EHR adoption between behavioral health providers and general medical practices. By providing funding to enhance the development and interoperability of mental health EHRs, the Act strives to improve care coordination for patients with mental health and substance use disorders. However, privacy concerns remain a critical issue; protecting sensitive information is essential to gaining the trust of both providers and patients. Balancing the need for improved data sharing with strong privacy protections will be key to the Act’s success.

  • Understanding the Debate on Fair Access to Mental Augmentation in Neurotechnology


    Neurotechnology is an area of technology that specifically applies to the monitoring, regulation, or enhancement of brain activity. As neurotechnologies advance, the once far-fetched idea that humans might leverage technology to augment their nervous system has become closer to fact than fiction. Mental augmentation encompasses any means by which people enhance their mental functions beyond what is necessary to maintain health. Although potentially useful, the application of mental augmentation technologies today presents challenges and controversy.

    Applications of Mental Augmentation: Medical vs. Recreational

    Neurotechnology devices serve either medical or recreational purposes. Medically, these devices treat mental health disorders, learning disabilities, and neurological conditions by stimulating the brain. Recreationally, they enhance learning and cognition or improve efficiency.

    Transcranial magnetic stimulation (TMS) and deep brain stimulation (DBS) are both FDA-regulated medical treatments. Although their primary purpose is treatment, they have also been shown to improve brain functions. TMS, primarily used to treat depression, improves cognitive functions such as episodic and working memory, and motor learning, with treatment costs ranging from $6,000 to $12,000. DBS, used to treat Parkinson’s disease, enhances learning and long-term memory, with an average procedure cost of $39,152. Candidates for DBS must have severe symptoms that cannot be managed with medication.

    Transcranial direct current stimulation (tDCS) is currently unregulated by the FDA and is considered a non-medical device. tDCS devices can be sold for “wellness” purposes and recreational use. tDCS increases neuronal plasticity, encouraging the formation of connections that reinforce learning. Research suggests that tDCS enhances cognitive and behavioral performance, and potentially improves language acquisition, math ability, and memory. tDCS devices are available online and cost from as little as $40 to around $500.

    Transcranial Direct Current Stimulation

    Given its easy accessibility for mental enhancement and recreational use, tDCS plays a central role in the debate on fair access to neurotechnologies. It is difficult to determine how many people use tDCS for mental enhancement rather than medical treatment, since the line between medical and recreational use is blurred. Users like Phil Doughan seek mental improvement exclusively, while others, like Kathie Kane-Willis, use the device to fix issues like brain fog, a medical symptom that is often subjective and difficult to measure. 

    tDCS is not widely used; however, one study suggests that brain stimulation could one day become mainstream, similar to the way people use caffeine to increase alertness. A study done by Pew Research found that nearly half of Americans (47%) say they would be at least somewhat excited about mental augmentation techniques that allow people to process information more quickly and accurately. 

    tDCS is not FDA approved, which means companies currently have the freedom to bring tDCS devices to the market with claims of treating medical conditions and enhancing brain function. Advertisements can be misleading as tDCS studies have not yet conclusively shown that the technique provides real benefits. One neuroscientist notes that the public adoption of tDCS is happening at a faster pace than related research. International organizations suggest that this under-researched and unregulated use of neurotechnologies entails unprecedented risks for human rights. 

    The NeuroRight to Fair Access to Mental Augmentation

    Amidst ethical concerns about neurotechnology, Dr. Rafael Yuste founded The Neurorights Foundation to advocate for human rights directives and ethical guidance on neurotechnological innovation. Concerned with the possible exacerbation of inequality between people who can and cannot afford neurotechnologies, professors have proposed a framework entitled The NeuroRight to Fair Access to Mental Augmentation. The framework states, “There should be established guidelines at both international and national levels regulating the use of mental enhancement neurotechnologies. These guidelines should be based on the principle of justice and guarantee equality of access.”

    Brazil and Chile have both enacted legislation to ensure equitable access to neurotechnology. Brazil’s Article 13-E of bill No. 522/2022 states that “The State shall take measures to ensure equitable access to advances in neurotechnology”. Similarly, Chile passed the “Neuroprotection Bill”, which establishes that “The State will guarantee the promotion and equitable access to advances in neurotechnology and neuroscience”. Chile’s “Neuroprotection Bill” faces criticism for its vague scope, limitations, and obligations, highlighting the need for more nuanced discussion of the issue. 

    The Central Debate

    Proponents of Fair Access to Mental Augmentation are concerned that cognitive enhancements may primarily benefit the wealthy due to high pricing, which may widen social, cultural, and economic divides. They suggest that the enhanced mental abilities afforded to those with the purchasing power to buy mental augmentation devices will further exacerbate wealth gaps. Moreover, proponents argue that the social polarization caused by augmentation technology would have knock-on consequences for a range of human rights, raising questions about how far a “neurotech divide” could set back equality and inclusion. They warn that augmentation devices require especially careful consideration when used in classrooms, workplaces, and other competitive environments where wealth differences are amplified. And even in the context of medical treatment, the benefits of neurotechnologies will not be financially accessible to everyone. Proponents of the NeuroRight to Fair Access also point out that if augmentation becomes a widespread practice, enhanced abilities may become a standard. This raises the challenge of respecting people’s will to not use neurotechnologies – an issue touched on in the NeuroRights framework. 

    Critics of Fair Access to Mental Augmentation question the feasibility of implementing the policy framework. If equal access to neuroenhancement were established in a way that makes states responsible for guaranteeing access to all, it would impose a fiscal burden on already-underfunded health systems that cannot provide access to more basic human needs. Critics also argue that the NeuroRights framework must be adapted to various economic, cultural, and social contexts before its implementation, since augmentation technology is not equally perceived nor equally accessible across the globe. For example, mental augmentation may go against religious precepts and morals where the modification of human nature is not viewed favorably. Moreover, enshrining access to mental augmentation as an international human right risks penalizing developing nations with less access to such technologies, which may widen gaps between wealthy and historically exploited countries.

    Conclusion and Future Prospects

    Neurotechnologies have recently entered the market, increasing the availability of products for mental augmentation purposes. The global neurotech market is growing at a compound annual rate of 12% and is expected to reach $21 billion by 2026. This rapid pace of innovation suggests that we may be late to regulate neurotechnologies. To promote responsible and ethical use, it is crucial to engage in proactive and thoughtful discussions on how to regulate neurotechnologies effectively.

  • Pros and Cons of the Kids Online Safety Act


    The Kids Online Safety Act (KOSA) responds to the escalating youth mental health crisis and its ties to harmful online content and addictive social media algorithms. Given the rising rates of depression, anxiety, and loneliness among children in the U.S., KOSA aims to establish a legal framework that promotes a healthier online ecosystem for youth. 

    KOSA: An Introduction

    For clarity, it is essential to differentiate between two important technology policies: the Kids Online Safety Act (KOSA) and the Children’s Online Privacy Protection Act (COPPA), which are often considered together due to their concurrent passage in the Senate. This brief will focus specifically on KOSA. COPPA primarily addresses privacy through provisions like prohibiting the collection of personal data from children under 13 without parental consent, but KOSA takes a different approach to online safety, one that emphasizes the mental health impacts of social media use.

    In a time of increasing legislative partisanship, KOSA is a strongly bipartisan proposal with 72 co-sponsors and the support of President Biden. However, despite passing 91-3 in the Senate, KOSA now faces resistance from House GOP members over potential censorship issues, rendering it unlikely for the bill to advance in its current form. This opposition emphasizes the challenge of balancing online safety regulations and First Amendment rights. 

    KOSA: The Youth Mental Health Crisis

    Research suggests that youth today struggle with mental illness at higher rates than past generations. From 2007 to 2018, rates of suicide among youth ages 10-24 increased by nearly 60%. According to the CDC, three in five teen girls report feeling “persistently sad or hopeless,” and 30% have seriously contemplated suicide. One study shows that increased social media use is linked to higher levels of anxiety, depression, and self-doubt. Moreover, a study by the NIH found that the risk of depression rose by 13% for each hour of increase in social media use among adolescents.

    Modern-day social media algorithms are intentionally addictive, promoting compulsive use. Congress defines compulsive use as any response stimulated by external factors that causes an individual to engage in repetitive behavior reasonably likely to cause psychological distress, loss of control, anxiety, or depression. Furthermore, algorithms often recommend inappropriate or harmful content. For example, as teens click on advertisements for weight loss products, their feeds can become dominated by such content, negatively impacting body image. A study reported that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.

    KOSA’s Major Provisions

    KOSA emerged in response to the research linking youth mental health struggles to social media use. The bill aims to protect children’s mental and emotional safety online by:

    • Requiring platforms to modify their content moderation policies in order to reduce major harms such as mental illness onset, sexual exploitation, and the sale of illicit drugs to minors on their platforms.
    • Giving minors the tools to restrict the collection and visibility of private information. 
    • Allowing minors to opt out of addictive algorithmic recommendations. 
    • Requiring platforms to default to the strongest safety settings for child users. 
    • Holding online platforms accountable through annual independent audits. 

    Arguments in Favor of KOSA

    Proponents of the Kids Online Safety Act (KOSA) argue that the bill is crucial for protecting youth from mental health challenges. Pediatricians across the nation have spoken out about the urgent need to create a healthier digital environment for children and adolescents. Additionally, the U.S. Surgeon General released an advisory stating that social media carries a “profound risk of harm” to youth, and called on Congress to require warning labels on social media platforms, similar to those on cigarette packages, cautioning about their effects on young people’s lives. This emphasis on mental health is a central aspect of the arguments in favor of KOSA.

    Autonomy is also a key argument in the discussion surrounding KOSA. Addictive algorithms keep kids endlessly scrolling and seeking dopamine hits from follower engagement, and there is currently no mechanism for opting out of recommendation algorithms. KOSA aims to change this by requiring social media platforms to provide parental controls and options to disable features like algorithmic recommendations. Supporters argue that this provision would give parents and kids more autonomy over their online experiences, allowing them to gain greater control over their usage time.

    Supporters also argue that KOSA is an important step toward holding companies accountable for the harm they cause. They emphasize that Big Tech companies have designed social media to maximize profit by exploiting children’s developmental vulnerabilities for commercial gain. After years of operating with few regulations, tech firms are receiving increasing scrutiny for alleged harms to young users, exposed by whistleblower revelations. Frances Haugen, a former Facebook employee, disclosed tens of thousands of pages of internal Facebook documents to Congress and the Securities and Exchange Commission showing that Facebook was aware of its platforms’ negative impact on teens’ mental health. Microsoft, Snap, and X have all endorsed KOSA, likely in recognition that profits can still be made while taking reasonable steps to protect children.

    Arguments Against KOSA

    Critics of KOSA express concerns about potential censorship of online content. On one hand, many conservatives argue that empowering the Federal Trade Commission (FTC) with censorship authority could lead to the suppression of legitimate speech. For example, if a Republican leads the FTC, content discussing LGBTQ+ lives, reproductive health, and climate change might be deemed harmful to youth, while a Democratic leader could censor discussions on automatic weapons, shootings, and religious viewpoints on the LGBTQ+ community. Critics fear KOSA could be enforced for political purposes.

    On the other hand, many liberal legislators are concerned with KOSA’s Duty of Care provision, which requires companies to “prevent and mitigate” harms to children such as bullying, violence, and negative mental health impacts. They worry that this provision could lead to the censorship of LGBTQ+ content and reproductive health content as companies over-filter content to avoid legal repercussions. In September 2022, the Heritage Foundation seemingly endorsed KOSA in an editorial, praising the Duty of Care provision’s potential to limit what they claim is Big Tech’s influence on children’s gender identities. The Heritage Foundation later expressed intentions to use similar bills to limit transgender content online, fueling concerns about KOSA’s potential dangers for the LGBTQ+ community.  

    The Duty of Care provision has been revised since its initial introduction. Originally, KOSA specified various mental health conditions that companies needed to mitigate, but after revisions, the emphasis shifted to preventing the promotion of inherently dangerous behaviors. Despite these changes, critics maintain that the provision remains too broad and vague, potentially leading to censorship of crucial information. Although KOSA includes a limiting principle stating that nothing in the Duty of Care will prevent a minor from deliberately searching for a specific type of content, companies may still censor content to avoid compliance issues.

    Another significant concern is that KOSA could be counterproductive, potentially increasing the risks of online harm to children. Critics argue that by restricting access to lawful speech on topics such as addiction and bullying, KOSA may hinder minors from finding supportive online communities and feeling comfortable in their identities. 

    Conclusion

    In conclusion, KOSA aims to address the rising youth mental health crisis by holding platforms accountable for their negative mental health impacts and enhancing parents’ and young users’ autonomy over their online experiences. While supporters believe the bill will create a safer digital environment, concerns about potential censorship and the implications of KOSA’s Duty of Care provision underscore the complexities of balancing safety with free speech. This balance presents a continuing challenge as lawmakers debate the future of the bill.

  • Olivia Scott, University of Pennsylvania


    Olivia Scott is a pre-medical student at the University of Pennsylvania, majoring in neuroscience. Her focus is on the convergence of medicine and technology to enhance healthcare delivery, and policies governing technology implementation in healthcare systems. These interests were ignited during her experience as a Cadet in Emergency Medical Services. In EMS, she directly observed the transformative impact of technology on medical outcomes. She joined ACE to explore technology policy and related research while contributing to the mission of promoting informed, democratic engagement.

    Olivia is eager to delve into this field of research and prepare herself to contribute meaningfully to healthcare. She gained prior research experience through the AP Capstone Program, which involves writing research-based essays and conducting a year-long investigation. Beyond academics, Olivia enjoys playing sports, spending time with friends, traveling, and being outdoors.

  • Understanding the Debate on Neurorights and Personal Identity in Neurotechnology


    In recent years, significant advancements in neurotechnology promise to profoundly impact healthcare and human capabilities. As technologies such as Brain-Computer Interfaces (BCIs), Transcranial Direct-Current Stimulation (tDCS), and Deep Brain Stimulation (DBS) evolve, they can potentially blur the boundary between a person’s consciousness and external technological influences. 

    Examples of Neurotechnologies

    Conditions such as Parkinson’s, Alzheimer’s, and epilepsy are caused by disruptions in brain function stemming from abnormal neuronal activity. Neurotechnology offers a potential solution by monitoring brain activity and selectively stimulating faulty brain regions. This technology can help restore lost functions by modulating the disrupted neurons, improving the quality of life for patients. Non-invasive methods use external devices for stimulation, whereas invasive methods involve surgically implanted electrodes.

    BCIs are devices that enable control of computers through thought. In January 2024, Neuralink, founded by Elon Musk, implanted an invasive BCI into Noland Arbaugh, a quadriplegic participant in the PRIME study. The procedure aimed to restore his autonomy after a spinal cord injury. After surgery, he was able to control his laptop cursor for the first time since his injury, allowing him to reconnect with friends and family and regain independence. Neuralink aims to implant chips in 10 individuals by the end of 2024, pending FDA approval. 

    DBS involves implanting a small device under the skin near the collarbone, with wires reaching the brain to deliver mild electrical currents. This technology treats medical conditions like Parkinson’s disease and epilepsy. According to the Cleveland Clinic, as of 2019, experts estimated that about 160,000 people have undergone a DBS procedure since the 1980s and that roughly 12,000 procedures are performed each year.

    tDCS, a non-invasive technique, applies low electric currents to the scalp. tDCS devices are being explored to treat depression, schizophrenia, aphasia, chronic pain, and other medical conditions. tDCS is used for non-medical applications as well, including accelerated learning, focus, relaxation, and meditation. tDCS devices are currently available online, ranging in cost from about $40 to $500.

    It is difficult to find a number that encompasses how many people use neuromodulation devices; however, the potential treatment population is vast, including the millions affected by epilepsy, migraine, Parkinson’s disease, urinary incontinence, and other medical conditions. The global neurotech market is growing at a compound annual rate of 12% and is expected to reach $21 billion by 2026.
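    As a rough sanity check on the market projection above, compound annual growth follows the standard formula future = base × (1 + rate)^years. The base-year figure in the sketch below (a roughly $11.9 billion market in 2021) is an assumption chosen for illustration, not a number from this brief:

```python
def project_market_size(base_billions: float, cagr: float, years: int) -> float:
    """Project a market's size under a constant compound annual growth rate."""
    return base_billions * (1 + cagr) ** years

# Hypothetical base: a ~$11.9B market in 2021 growing at the 12% CAGR
# cited above reaches roughly $21B after five years (by 2026).
print(f"${project_market_size(11.9, 0.12, 5):.1f}B")  # → $21.0B
```

    At 12% annual growth, a market roughly doubles about every six years (ln 2 / ln 1.12 ≈ 6.1), consistent with the rapid-growth framing used in discussions of neurotechnology regulation.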

    Neurotechnology Regulation

    The FDA regulates the implementation of neurological devices, classifying them by their degree of risk and defining the pathways necessary to bring a device to market. In general, for high-risk devices, companies obtain an Investigational Device Exemption to clinically test their devices by proving that the benefits justify the risks. The gathering of clinical data is a key step in supporting pre-market approval. BCIs are still in the clinical trial period of gaining FDA approval. DBS technologies have been gaining FDA approval to treat different medical conditions since 1997. tDCS devices are FDA-cleared, a lower standard than FDA approval, reflecting their lower risk compared to other neurostimulation methods.

    Neuroright to Personal Identity

    Neurotechnologies have the potential to alter perception, behavior, emotion, cognition, and memory, reshaping identity and notions of “humanness”. Technologies like BCIs, tDCS, and DBS use electrical impulses to influence brain activity. This can lead to changes in emotional and behavioral responses. For instance, a study on DBS found that stimulating the subthalamic nucleus, a brain area involved in cognitive and motivational functions, led to increased positive mood and reduced anxiety in participants. Depending on how invasive the devices are and the part of the brain they target, effects on mental processes vary. While neurotechnology holds the potential for positive therapeutic benefits, the possibility of changes to human behavior has raised concerns. 

    At Columbia University, academic leaders united to discuss the ethical concerns of neurotechnology. This discussion led to a new human rights framework called “Neurorights”, reflecting the consensus that advances in neurotechnology outpace global, national, and corporate governance. The 5 Neurorights proposed by the Neurorights Foundation include the right to Personal Identity. The foundation writes, “Boundaries must be developed to prohibit technology from disrupting the sense of self. When neurotechnology connects individuals with digital networks, it could blur the line between a person’s consciousness and external technological inputs.”

    The Neurorights Foundation, founded by Dr. Rafael Yuste, worked with the Senate of the Republic of Chile in 2021 to pass a Neurorights law and lay plans for a constitutional amendment. This development made Chile the world’s first country to have legislation protecting personal identity, free will, and mental privacy with respect to emergent neurotechnology.

    The Main Debates

    Proponents of Neurorights argue that neurotechnology infringes on personal identity by causing unexpected changes to personality, identity, and decision-making, pointing to evidence from research. In a 2016 study, a man who had used a brain stimulator to treat his depression for 7 years began to wonder whether the way he interacted with others was due to the device, stating, “It blurs to the point where I’m not sure… frankly, who I am.” In another study, Dr. Yuste found that by controlling specific brain circuits, scientists could manipulate a mouse’s experience, including its behaviors, emotions, awareness, perception, and memories. Yuste stated, “The brain works the same in the mouse and the human, and whatever we can do to the mouse today, we can do to the human tomorrow.”

    Opponents of Neurorights question the sophistication of neurotechnology and its ability to cause widespread human rights concerns, viewing the alteration of personal identity as a distant prospect. Additionally, opponents note that, depending on the definition of identity, the Neuroright to Personal Identity may imply prohibiting neurotechnologies in general, which would significantly slow important scientific progress in the field.

    Opponents of Neurorights also argue that, given existing legislation, Neurorights are unnecessary and that passing additional human rights could be harmful. Human rights are powerful tools that have transformed the lives of billions of people, and one central worry in the debate is that the inflation of rights may result in their devaluation: human rights could lose their distinction, significance, and effectiveness if new legislation is not considered cautiously. Proponents of Neurorights take a reformist position, arguing that Neurorights must go beyond current fundamental human rights to effectively protect the right to personal identity, seeing this as a new and unique issue requiring additional legislation.

    Conflicting ideas of “personal identity” add more complexity to the argument. Some view the right to use neurotechnology as an expression of personal identity, while others see preventing brain manipulation as a way to preserve and promote the freedom of the human mind.

    Conclusion

    In summary, the rapid advancement of neurotechnology, including BCIs, tDCS, and DBS, presents ethical dilemmas regarding personal identity. As research progresses and investments increase, these technologies could impact various aspects of human life from medical treatments to personal enhancement. The Neurorights Foundation actively works to incorporate Neurorights into international human rights law, national legal and regulatory frameworks, and ethical guidelines. The foundation has made developments in Chile, Brazil, Mexico, the United Nations, and Spain, and strives to incorporate Neurorights in the United States. As of now, at least two US states are considering legislation to protect private thoughts, reflecting a growing awareness of this issue.