The protection of neural rights in the age of neurotechnologies and AI: the ethical challenge for law and neuroscience
https://doi.org/10.21202/2782-2923.2025.1.202-233
Abstract
Objective: to summarize neuroscientific knowledge and experience about neurotechnologies and the neuropsychological, legal, ethical and social consequences of their use; to indicate possible prerequisites for a critical discussion of the legal regulation issues.
Methods: general scientific, abstract-logical, dialectical, phenomenological methods, observation, description, comparative analysis.
Results: a comparative analysis shows that the use of new neurotechnologies lacks clarity and transparency. Moreover, they are studied only superficially and are used without clear documentation for the end user. This is evident, for example, from the recent ruling of the Constitutional Court of Chile. At the same time, excessive and unreasonable efforts are sometimes made to introduce new regulations to create “new rights”. This is often the result of the legislator’s insufficient knowledge, as well as of excessive regulatory activity. It is worth noting that modern society is enthusiastic about the prospects offered by neurotechnology. Success stories, actively broadcast for commercial purposes, create inflated expectations among the population, giving rise to so-called neuroenchantment and contributing to the spread of “neuromyths”. This trend is compounded by a lack of knowledge about the failures and limitations associated with the development of neurotechnology, which creates a distorted view of the real situation. Overcoming these phenomena requires active educational efforts in conjunction with legal regulation mechanisms, in particular consumer protection legislation, product safety standards, and antimonopoly legislation.
Scientific novelty: studies of the legal regulation of neurotechnology, as well as studies of neural rights from the perspective of law, ethics and sociology are extremely rare. The article has scientific value as a debatable foundation for future research.
Practical significance: based on the correct definition and application of neurotechnologies and the latest neuroscientific approaches, as well as on the analysis of recent debates about the need to regulate and introduce “new rights”, we conclude that neural rights are already clearly defined. However, their practical application requires the development and strict observance of reliable protection measures in the field of new technologies.
For citations:
Di Salvo M. The protection of neural rights in the age of neurotechnologies and AI: the ethical challenge for law and neuroscience. Russian Journal of Economics and Law. 2025;19(1):202-233. https://doi.org/10.21202/2782-2923.2025.1.202-233
Introduction
Neuroscience owes its centrality in the debate on artificial intelligence to an extraordinary cocktail whose ingredients are various heterogeneous factors: the incredible revolutionary discoveries of the last fifty years, the association between the brain model and the computational model, and the privileged role of the brain as the seat of the mind and personality, to mention only the most macroscopic ones.
While the most obvious theoretical implications of neuroscientific discoveries refer to the basic model to be imitated in the development of computer networks (called neural networks), the most immediate applications of neuroscientific discoveries take the form of what has been termed neuro-technologies (NTs).
The theoretical model gave impetus to artificial intelligences, which at present would be more appropriately described as ‘hyper-computational capacity’; the neuro-technological model, at the same time, works inextricably with AI mechanisms and systems.
It therefore seems central to the debate on the topic of ‘AI Ethics’ to address the criticalities of the use of NTs, which consequently address the criticalities of AI in the neuroscientific field.
This article aims to address these topics, mainly by referring to the most accredited literature and developing the analysis of NTs (and related AI and data managing) in the fields of education, health, work, and entertainment.
This article does not pretend to be exhaustive, but rather to indicate the most thought-provoking and most immediately sensitive points deserving profound attention in terms of definition, before turning to normative issues.
The article’s objective is to summarize neuroscientific knowledge and experience about neurotechnologies and the neuropsychological (as well as related aspects of artificial intelligence and data management), legal, ethical and social consequences of their use; and to provide possible prerequisites for a critical discussion of the legal regulation issues.
Today, this reflection is urgently needed because the technology has developed ahead of the law that must govern it.
Interdisciplinary engagement with NTs combines different methodological approaches and builds on the strengths of each perspective. This review is based on the most widely shared literature from each discipline to enable an assessment of the effectiveness, threats and potential of different types of NTs in relation to individuals, specific socio-demographic groups and for specific social spheres.
Moreover, the fact that the subject is as relevant and urgent as ever is evident from the intense debate and discussion, including commentary on regulatory proposals, which in all the specialised journals involves numerous and heterogeneous researchers (for example, the following reference works: P. S. Gulyaeva, D. V. Bakhteev, & A. A. Shutova; A. A. Shutova & I. R. Begishev; E. A. Alferova, all published in 2023).
Research results
The neuroscientific approach
After the cognitive revolution (Gardner, 1987), the neuroscientific revolution is probably the field in which the integration of cross-cutting specialisations has been most evident.
The term neuroscience historically refers to the set of disciplines that study the various morpho-functional aspects of the nervous system through the contributions of numerous branches of biomedical research: from neurophysiology to pharmacology, from biochemistry to molecular biology, from cell biology to neuroradiology techniques.
Historically, the neurosciences arose with the identification of the neuron as an autonomous and functionally independent cellular unit of the nervous system. The studies carried out to define the neuron’s properties benefited from advances in various disciplines, in particular from methodologies measuring ionic and molecular displacements at the sub-cellular level and, thanks to original approaches in psychopharmacology, from advances in our knowledge of the integrated systems underlying behavioural variations in the individual.
One of the main topics of discussion for philosophers and scientists in the 20th century was whether ‘mental’ activities such as thought, emotions, self-awareness and will are different functions from ‘cerebral’ activities such as the movement of a limb, the perception of a colour, etc., or whether they too represent functional expressions of the neurons that make up the brain. The distinction between mental and cerebral activities, in the light of current knowledge, appears artificial to those practising one of the many disciplines that constitute neuroscience.
Mental and brain activities are in fact simply the unique and indivisible expression of the activities of the neuronal and glial elements that make up the brain. Although the expression is different in quality and in the ways in which it manifests itself, both activities are due to a single mechanism by which neurons communicate with each other and with the rest of the organism.
According to this conception, so-called mental activities must be considered as emergent properties, the result of such a complex sum of simpler neuronal activities that they constitute a qualitative leap that is essentially still undecipherable.
With multi-disciplinary and interdisciplinary approaches, we envisage the possibility of elucidating the mechanisms by which neurons, organised in three-dimensional structures of varying nature and magnitude, process incoming information, store it, if necessary, and emit a behavioural response.
These studies are also beginning to provide fundamentally important information on the nature of mental processes such as consciousness, will, and memory – enormously complex issues that form the core of the third level of brain functions.
It is not surprising, in this context, that more and more neuroscientists and cross-disciplinary research groups engaged in neuroscience have become interested in the great issues of the human mind: from consciousness to free will, from the redefinition of apex concepts, such as life, death, intelligence, the unconscious, memory and not least autism, to the search for neuronal correlates and the neurophysiological basis of psychological theories.
Neurotechnology (NT) and neural rights
Advances in neuro-technological development have led to an increase in the use and accessibility of neurotechnologies (NTs), which allow brain activity to be recorded, analysed and manipulated using neurotechnological devices.
Neurotechnologies are essential for the recovery and preservation of physiological and mental health and thus the quality of life of clinical patients. However, technological advances and research results have led to the application of these technologies outside the clinical setting. For example, cognitive enhancement is used in the work, education and entertainment environments, where consumer-level devices can be freely purchased on the market and used without supervision.
Since devices can be hacked and data are mostly stored in corporate-owned cloud services, mostly located outside the EU, the question of data security arises. Hacking attacks can cause psychological and physiological damage and threaten the mental identity of users. Their use in work and educational environments requires explicit consent and strong regulation, as there is a danger of burnout, increased stress levels and misuse by authorities and private companies. Furthermore, the high reputation of neuroscience, coupled with the immense seductive power of neurotechnological devices, makes them inseparable from ‘neuromyths’ and ‘neuroenchantments’. This also makes the users of neurotechnology very prone to manipulation.
The first ‘organic’ debate on neurorights culminated in 2017 with an initial definition of a core set of five neurorights: ‘right to mental privacy’, ‘right to personal identity’, ‘right to free will’, ‘right to equal access to mental augmentation’, and ‘right to protection from algorithmic bias’. These rights indicate which interventions and restrictions on an individual are considered unfair. This necessarily applies not only to isolated individuals, but to all individuals within a socio-political structure, which emphasises aspects such as solidarity, co-determination, and equality.
The NT ethical challenge
Consumer-level devices are rather easy to use and follow the ‘plug-and-play’ principle. Consequently, it may simply be necessary to establish new rules, mainly in the areas of privacy, manipulation, and access to functions and data. Self-improvement is taking primacy over other organisational and institutional goals at the expense of other values and social relations. In this context, it is certainly necessary to rethink how a society’s notion of disability is co-constructed through these technologies. However, not all NTs act superficially. Some have to be surgically implanted into the brain. The differences in invasiveness, risks, complications, side effects and degree of commitment to the product required by invasive and non-invasive NTs are not clear to the public.
The implications and consequences of both NT methods are manifold and differ drastically. From the ethical point of view, particular attention must be paid to NT, as its real field of action is the brain. Interventions and manipulations on the brain hold immense disruptive potential insofar as they influence the autonomous actions and self-perception of the individual.
Furthermore, in the case of devices used to improve performance beyond medical applications, it is important to consider what changes they can trigger in society, what dependencies they create and to what extent and at what price these devices offer an improvement in people’s lives in real terms.
Neurotechnology
1. Invasive neurotechnology
The use of invasive NT requires neurosurgical procedures in which electrodes or stimulation devices are placed directly on or inside the brain. There are varying degrees of invasiveness among NTs. Electrode grids placed in the extradural space are less invasive because they do not perforate brain tissue and can be removed more easily. Sensors placed in the subdural space and those that perforate brain tissue (e.g. Elon Musk’s Neuralink system) are considerably more invasive (Yadav et al., 2020).
One of the most frequently used invasive methods is deep brain stimulation, which works with an implantable pulse generator (known as IPG) that sends pulses to modulate brain circuits and measure pathological brain activity. It is one of the most important devices in clinical neuroscience developed in the last two decades (Lozano et al., 2019). Its use is indicated, for example, for Parkinson’s disease, ‘major depression’ and obsessive-compulsive disorder (Cagnan et al., 2019).
Use has increased over the years and an estimated 244,000 devices have been implanted globally (Wong et al., 2022).
However, it should be realised that it is still not completely clear how this technology works and exactly what neural effects it causes (Zarzycki & Domitrz, 2020).
Brain-Computer Interface (BCI) implantable systems are another frequently used method: they record and interpret brain activation via electrodes placed on the brain, giving patients with, for example, amyotrophic lateral sclerosis or quadriplegia the opportunity to communicate and move through brain activation alone, without muscle activity (Abdulkader et al., 2015).
Patients can learn to control devices with controlled brain activation patterns via individually adapted systems: trained BCI systems are specialised to the respective user (Abdulkader et al., 2015). Not only can computers and neuroprostheses be controlled, but researchers have also made it possible for patients with implanted BCI devices to feel touch and sensation with fully robotic arms (Ganzer et al., 2020).
In invasive NT, relevant neuronal groups can be directly stimulated and recorded with a high level of precision and specificity. With such invasive methods, it is possible to reach deeper brain areas that cannot be recorded from the surface of the scalp (Manahan-Vaughan, 2018). Deep brain stimulation is often described as a somewhat reversible technology, in which switching off the pulse generator results in the return of the original cognitive and motor symptoms (Alomar et al., 2017).
Surgical procedures required for the implantation of invasive NT devices are not without potential risks. There may be perioperative (e.g., convulsions, bleeding), postoperative (e.g., bruising, behavioural changes), technical (e.g., electrode failure, pulse generator malfunction), and stimulation-induced (e.g., dysarthria, confusion) side effects.
In addition to physiological side effects, patients who benefit from invasive NT methods are also at risk of various psychological consequences. Since these invasive devices affect fundamental aspects of the individual self, they can cause significant levels of stress and fear, as well as distortions of self-representation and feelings of agency. Some patients report personality changes and self-estrangement when they experience the changes associated with brain stimulation (Baylis, 2013).
Risky surgeries and reactions resulting from a patient’s surroundings can influence personality factors (Gilbert et al., 2017). This makes adequate psychological preparation of users indispensable. There are attempts at pre-surgical training with the support of virtual reality (VR) (Iamsakul et al., 2017). However, indirect positive effects of deep brain stimulation on learning and memory have been reported in patients with implants (Suthana & Fried, 2014), while others report impairments in executive functions (Martínez-Martínez et al., 2017) and verbal fluency (Ehlen et al., 2014).
All the possible risks and complications of such surgeries are also the reason why invasive NTs so far have a very limited scope of use, restricted to cases of extreme medical necessity rather than the entertainment or enhancement of healthy participants. For healthy populations, non-invasive methods are the primary choice.
In this sense, there has been strong pressure and attempts by Elon Musk’s ‘Neuralink’ company to change the American medical-clinical regulations and policies that limit invasive devices to a clinical population, so as to implant Neuralink chips in healthy human brains for enhancement purposes.
Although the US Food and Drug Administration initially objected due to safety risks, in May 2023 it approved Neuralink chips for testing in a clinical population (PRIME study; Neuralink clinical trial) after preliminary animal studies (Drew, 2024). On its website, Neuralink even proposes this ‘clinical step’ as a means of obtaining approval for implants in a healthy population.
2. Non-invasive neurotechnology
Unlike invasive NTs, the electrodes or optodes of non-invasive NT devices are attached only superficially to the scalp, e.g., via electrode caps or headbands, to measure fluctuations of the electric current on the scalp (Angrisani et al., 2017) or changes in the oxygenation of cortical brain regions (Mihara & Miyai, 2016). Furthermore, the brain can be electromagnetically stimulated non-invasively via transcranial magnetic stimulation (TMS) and transcranial electric current stimulation (tES). Neural activity can be modulated by applying small magnetic pulses, direct current, alternating current or electrical stimulation with random noise (Cinel et al., 2019).
A popular example of non-invasive methods using these NTs is Neurofeedback (NF), a specific form of biofeedback, in which users learn to modulate their brain activation by receiving visual, auditory or tactile feedback.
It has been used in interventions to reduce symptoms of attention deficit hyperactivity disorder (ADHD) and to strengthen and train cognitive functions, e.g., in patients with stroke or multiple sclerosis, among many other applications (Marzbani et al., 2016).
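For illustration only, the closed loop underlying such NF training can be sketched in a few lines. The sampling rate, the alpha band, the reward threshold and the simulated signal below are assumptions made for the sketch, not parameters of any marketed device or cited study:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Spectral power of `signal` within [lo, hi] Hz, via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def neurofeedback_step(eeg_window, fs, threshold):
    """One feedback cycle: reward when alpha (8-12 Hz) power exceeds threshold."""
    alpha = band_power(eeg_window, fs, 8.0, 12.0)
    return "reward" if alpha > threshold else "no reward"

# Simulated one-second EEG window: strong 10 Hz alpha rhythm plus noise.
fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
window = 20.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, fs)
print(neurofeedback_step(window, fs, threshold=1000.0))
```

In a real system, the feedback decision would drive a visual, auditory or tactile cue, and the threshold would be calibrated per user.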
There are also non-invasive BCI applications where users learn to control external devices, such as prosthetics or spelling devices for speech production (Guy et al., 2018).
There are also ‘consumer’ neurostimulation devices available, with which manufacturers promise to reduce depression or increase attention and concentration.
Another method based on non-invasive NT is the brain-brain interface in humans (BBI or B2BI) (Jiang et al., 2019). Here, information is extracted from a ‘sender brain’ and delivered to a ‘receiver brain’ by combining neuroimaging and stimulation. In this way, brains can communicate directly with each other. These range from very basic forms, in which participants ‘receive’ brain-to-brain information from another person and give answers manually via a keyboard rather than through a device controlled by brain activation, to highly sophisticated forms in which communication is based solely on NT (Rao et al., 2014).
An example of such a BBI system is the BrainNet device. Most previous configurations were unidirectional, with only a few allowing two-way communication between brains. However, this should not be understood as thought transmission between two people, but as a kind of classification task.
This field of NT devices and intercerebral communication is accompanied by visions of rehabilitation support in a therapist-patient context. The question of privacy and mental safety also arises here. For ethical and safety reasons, it has only been attempted with non-invasive devices in healthy populations (Jiang et al., 2019).
In addition to clinical and experimental research applications, non-invasive methods are available for the general population. Commercially available consumer-type devices can be purchased and used by a consumer without professional supervision. They are advertised for different purposes, such as relaxation and cognitive enhancement.
Cognitive enhancement refers to the improvement of psychological, primarily cognitive abilities such as intelligence, attention or creativity in healthy individuals (Nagl-Docekal & Zacharasiewicz, 2022).
The aim is thus to exceed the natural limits of humans (Almeida & Diogo, 2019). This may involve enhancing/training cognitive functions and neural efficiency, such as executive functions, memory, language or visuospatial processing (Antal et al., 2022), but also support for meditation/relaxation or any other form of self-awareness or healing practice. They are also used in rehabilitation clinics for clinical populations that have suffered stroke, multiple sclerosis or ADHD, among others (Marzbani et al., 2016). However, the use of non-invasive devices also carries some, often overlooked, risks.
Since non-invasive methods are only applied superficially to the scalp, only cortex regions can be recorded and stimulated, but not deeper brain regions. This also gives rise to different fields of application.
External recording makes the scalp EEG signal very prone to body and head movements and to artefacts related to eye movements (Wexler & Thibault, 2019).
Furthermore, when the cables are pulled and mechanical pressure is applied to the electrodes, the signal can be influenced externally. Therefore, EEG measurements obtained under more naturalistic conditions, in which users are allowed to move their eyes, head and even whole body naturally, are expected to be heavily contaminated by artefacts not easily distinguishable from brain activity.
Near-infrared spectroscopy (NIRS) is more robust in this respect than scalp EEG. It is a non-invasive NT that applies near-infrared light to the scalp via optodes and measures the light that is not absorbed. Several parameters can then tell whether specific areas of the brain contain oxygenated or deoxygenated blood, from which the activity levels of brain areas can be deduced. NIRS is not much affected by movement; however, it is very sensitive to ambient light.
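The underlying computation can be illustrated with the modified Beer-Lambert law, which relates optical-density changes at two wavelengths to concentration changes of oxygenated (HbO) and deoxygenated (HbR) haemoglobin. The extinction coefficients, pathlength values and measurements below are rough illustrative numbers, not calibrated constants of any real device:

```python
import numpy as np

# Modified Beer-Lambert law:
#   delta_OD(lambda) = (eps_HbO(lambda)*dHbO + eps_HbR(lambda)*dHbR) * d * DPF
# Rows: wavelengths (760 nm, 850 nm); columns: HbO, HbR.
# Coefficients in 1/(mM*cm) are rough illustrative values.
eps = np.array([[1.5, 3.8],   # 760 nm: deoxygenated blood absorbs more
                [2.5, 1.8]])  # 850 nm: oxygenated blood absorbs more
d = 3.0    # assumed source-detector separation, cm
dpf = 6.0  # assumed differential pathlength factor

def hb_changes(delta_od):
    """Solve the 2x2 linear system for concentration changes (dHbO, dHbR), in mM."""
    return np.linalg.solve(eps * d * dpf, np.asarray(delta_od))

# Optical-density changes measured at the two wavelengths (illustrative).
dHbO, dHbR = hb_changes([0.010, 0.020])
print(dHbO > 0, dHbR < 0)  # activation pattern: HbO rises while HbR falls
```

Measuring at two wavelengths on opposite sides of the haemoglobin isosbestic point is what makes the two concentration changes separable.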
One of the main problems in the field of commercial NTs is that little is known about the possible negative side effects. This flaw has been recognised by the scientific community (Ros et al., 2020; Thibault & Raz, 2017) but has not yet influenced the way these interventions are publicised and implemented.
Scientific publications mainly report anecdotes on whether or not study participants reported effects, but rarely are several different aspects of cognitive function tested and reported (Kober et al., 2015; Thibault & Raz, 2017).
Although rare, there is evidence that, for example, NF training can cause negative effects for users. This was found in NF training whose influence on memory performance was tested: a feedback group showed a decrease in short- and long-term memory performance after training (Kober et al., 2015). This could be due to the reallocation of cognitive resources during the training process, meaning that an increase in one domain goes hand in hand with a decrease in another (Sturm et al., 1997). A study of transcranial electrical stimulation (tES) has shown that cognitive enhancement by stimulation can disturb cognitive functions, depending on the function to be enhanced and the brain region stimulated (Iuculano & Cohen Kadosh, 2013).
Moreover, as longitudinal and follow-up studies are scarce, almost no long-term effects are known so far in relation to the possible negative effects of non-invasive NT applications.
Another problem is the unreliability of commercial EEG systems with poor data quality. The operation and feedback of NT data are only as good as the technical specifications of the devices allow. When cables are poorly insulated and ambient noise, such as the hum of the power line, is not properly accounted for, the classification algorithm includes all this noise in its calculation and adequate feedback cannot be provided. Devices advertised only for enhancement and augmentation are superficially regulated, so there is no strong requirement to measure brain signals with a minimum level of quality. Several commercially available devices tend mainly to record artefacts. Headbands such as Emotiv or Muse seem to record more facial artefacts than brain activation because the electrodes have to be placed directly on the facial muscles (Whitham et al., 2007). In such devices, dry electrodes are preferred over wet electrodes, although they have higher noise levels and show higher impedances (Mathewson et al., 2017). It therefore remains an open question whether this may also lead to negative effects. Thibault and Wexler concluded in their review of direct-to-consumer devices that there is only little evidence that they actually record brain activity or reflect the brain states and activities they claim to measure (Wexler & Thibault, 2019).
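One simple diagnostic for such contamination is the fraction of spectral power concentrated at the mains frequency. The sketch below shows the idea on simulated data; the 50 Hz mains frequency, amplitudes and thresholds are assumptions for the illustration:

```python
import numpy as np

def line_noise_fraction(signal, fs, line_freq=50.0, bw=1.0):
    """Fraction of spectral power within +/- bw Hz of the mains frequency.
    A large fraction suggests poor cable insulation or high electrode impedance."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = np.abs(freqs - line_freq) <= bw
    return power[mask].sum() / power.sum()

fs = 500
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, t.size)              # broadband signal, little hum
noisy = clean + 5.0 * np.sin(2 * np.pi * 50 * t)  # strong 50 Hz mains hum

print(line_noise_fraction(clean, fs))   # small: hum-free recording
print(line_noise_fraction(noisy, fs))   # large: hum dominates the recording
```

A device that feeds such hum-dominated data into its classification algorithm cannot, by construction, deliver meaningful feedback.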
The combination of (i) a high propensity to produce false-positive values combined with (ii) a low stability of scores is particularly problematic, as it indicates the initial presence of an abnormality in brain waves, which requires NF training, and its spontaneous remission at a second measurement, which is typically performed after several NF training sessions (Wood et al., 2024).
Computer security and privacy
One of the main ethical and legal concerns related to the use of NTs is, as with any other modern digital technology, the risk of cyber attacks and being hacked. This concerns not only invasive systems but also, or especially, non-invasive ones.
NT handles a multitude of sensitive user data, i.e. data on the brain, personal information on possible health problems, concentration, etc. (Li et al., 2015). This could lead to PIN mining (Martinovic et al., 2012) and to data and identity theft (Li et al., 2015). In 2016, a research group termed this threat ‘brainjacking’, describing the ‘ability of attackers to exert malicious control over brain implants’ (Pycroft et al., 2016). They describe two different categories of attacks – blind and targeted – and indicate an increase in potential attack methods along with the increasing complexity of invasive NT therapies.
Protocol adaptation can be done wirelessly, and wireless data transfer can pose a security risk. A research group led by Sundararajan tested the security of ‘Emotiv Insight’, a commercially available portable EEG system that can be used with a smartphone app (Bernal et al., 2022).
The device worked with Bluetooth Low Energy, and the research team was able to perform a man-in-the-middle attack, allowing it to force unwanted activity on the BCI and to intercept and modify information. The intercepted data could also be modified and sent back to the system, or the transfer of Emotiv data to the system could be blocked so that the device could not connect to the smartphone.
Such attacks are possible even when the data is encrypted. It is possible to save and obtain private data. When it comes to smartphone-based BCI applications, risks already stem from smartphone security issues. Private data can be accessed, transferred and analysed, so that hackers can attack users of BCI devices (Li et al., 2015). Although the effects are neither lethal nor assumed to be long-lasting, they are unpleasant and carry a high psychological risk (Pycroft et al., 2016).
Various countermeasures have already been proposed, including periodic firmware updates, standardisation of NT device manufacturing processes, registration, and bug reports. In addition, the firmware of most IPGs is designed in such a way as to block problematic and dangerous pacing parameters. Whether attackers can circumvent these rules remains to be seen.
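The firmware-side blocking of dangerous pacing parameters mentioned above amounts to bounds checking before any setting reaches the pulse generator. The sketch below illustrates the principle; the parameter names and limits are invented for illustration and are not taken from any real IPG specification:

```python
# Illustrative, invented safety limits; a real device would use clinically
# vetted, patient-specific bounds stored in tamper-resistant firmware.
SAFE_LIMITS = {
    "amplitude_ma": (0.0, 5.0),      # stimulation amplitude, mA
    "frequency_hz": (2.0, 250.0),    # pulse frequency, Hz
    "pulse_width_us": (30.0, 450.0), # pulse width, microseconds
}

def validate_settings(settings):
    """Return a list of out-of-bounds or missing parameters; empty means OK."""
    violations = []
    for name, (lo, hi) in SAFE_LIMITS.items():
        value = settings.get(name)
        if value is None or not (lo <= value <= hi):
            violations.append(name)
    return violations

# A hijacked session requesting an out-of-range amplitude is rejected.
request = {"amplitude_ma": 12.0, "frequency_hz": 130.0, "pulse_width_us": 60.0}
print(validate_settings(request))  # ['amplitude_ma']
```

Whether attackers can circumvent such checks, for instance by rewriting the firmware itself, is precisely the open question raised above.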
Most of these problems are more acute and relevant for invasive NT devices. For these, we distinguish between recording and stimulation devices. Recording devices mainly involve the risks of data theft and security breaches, as seen above, or of manipulation of the device so that it no longer functions properly. However, there are also situations in which technologies designed to function as recording devices can be manipulated to function as stimulation devices, and vice versa. Exploring these scenarios is decisive for mitigating the effects of so-called neurocrime (Ienca et al., 2022). However, so far, no cases of brainjacking have been noted outside the research context. What has previously been recorded is the damaging control of implantable insulin pumps and cardiac defibrillators (Markosian et al., 2020).
Neuroenchantment and manipulation
Post-industrial societies hold neuroscience in high regard due to the culturally attributed properties of the brain, seen as the material locus of the self and the individual mind.
Consequently, measurable activity patterns in an individual’s brain and the opportunity to interact directly with the brain are considered privileged elements (Littlefield, 2018).
The high reputation of neuroscience and its seductive neuro-devices (Giattino et al., 2019) does not prevent the perpetuation of so-called ‘neuromyths’, i.e. beliefs about the brain that enjoy great popularity despite being scientifically wrong (van Elk, 2019). In the literature, this is referred to as ‘neuromysticism’, a term defined by Amir Raz and his colleagues (Ali et al., 2014).
Apart from erroneous and resilient beliefs, some forms of reasoning seem impervious to scientific facts. This phenomenon has been called ‘intuitive metaphysics’ and describes how an intuitive commitment to specific beliefs, such as free will, can override scientific evidence during decision-making. People’s intuitive ideas about an indeterministic free will are imported into their representation of neuroscientific scenarios (Rose et al., 2017). In one experiment, participants were confronted with a hypothetical scenario in which scientists perfectly and deterministically predicted a person’s behaviour. In spite of this information, the participants were convinced that, even in a perfect prediction scenario, those people might have decided differently, according to their free will.
This phenomenon poses a significant problem that is mostly overlooked. ‘Neuro-enchantment’ describes the phenomenon whereby people are more likely to believe in products that advertise the ability to measure, stimulate or otherwise interact with the brain (Ali et al., 2014).
This is evidenced by the multitude of products, even ones unrelated to the specific field of the prefix neuro-, such as NeuroRoundbrush, NeuroGum, NeuroWater or NeuroSocks, which can be purchased in shops and online. This persuasion works even when the participants are university students trained in neuro-methods, who therefore know that such products cannot work scientifically. Even though engineering students knew that mind reading was impossible, they were no more suspicious of a supposed mind-reading machine (Ali et al., 2014). Furthermore, Olson and colleagues were able to show that, when participants were made to believe that a machine could read their attitudes towards a topic, they preferred to believe the machine’s assessment of their individual attitudes rather than their own (Olson et al., 2023).
Studies such as this show impressively how easily the human mind can be manipulated through NTs, despite the fact that this was only done in a simple experimental context.
This raises the question of how marketing campaigns and consumer-level NT systems can influence our decisions, our attitudes, the feeling of our own agency and the perception of free will.
Experiments such as these show the threat NT devices pose to their users’ ability to evaluate them critically. One can only speculate how such manipulations would work outside the experimental context on lay people without any prior knowledge of neurological topics.
This blind trust in NT devices can have fundamental consequences. In India, Brain Electrical Oscillation Signature Profiling (BEOS) has been used in court to interrogate alleged criminals in a manner similar to a polygraph test, although its reliability has never been established (Conitzer et al., 2019).
The strong desirability of the promises NTs make regarding privileged interaction with one’s own brain clouds critical thinking. This is crucial for decision-making, which should instead be based on the specific properties of these technologies.
A distinction between invasive and non-invasive technologies can usefully be made, as their properties and capabilities differ considerably. In public communication about NTs, these differences are often blurred, fostering a specific mechanism that we here call the transfer of reputation between invasive and non-invasive NTs. Comparing the typical properties of invasive and non-invasive NTs reveals a complementary pattern of desirable and undesirable properties in both types.
Reputation transfer, as the name suggests, describes a conflation of the properties of different groups of technologies, usually producing a mixture containing only positive features while the concomitant negative features are neglected. The focus on this combination of positive features is reinforced by the optimistic tone often found in public communication about NTs.
In a positively framed context, the desirable properties of NTs receive substantial attention, while the undesirable properties remain largely unexamined. Moreover, the failures of these technologies are easily forgotten or reframed as successes7. Due to the inherent persuasive power of NTs, users may be led to believe in technologies that do not work as promised and thus run the risk of being manipulated.
Promoting NT literacy is an important step in educating users, manufacturers and professionals to enable safe use.
Social implications of neurotechnology
So far, the predominant field of social application of NTs is the healthcare sector. Overall, NTs are expected to support the treatment of a range of conditions, such as tremor in Parkinson’s disease, stroke rehabilitation, Alzheimer’s disease, obsessive-compulsive disorder, and addiction. However, NTs are not only regarded as (problematic) tools for the medical profession; visions of their potential application extend beyond health. As such, they have socio-political and socio-cultural implications for the way we live as a society. To examine these implications, three areas deserve priority: education, work, and entertainment.
Education: the education sector is susceptible to many technological transformations (Jarke & Breiter, 2019). The possible development and use of NTs in educational contexts raises fundamental questions about the purpose and meaning of education, how learning is understood and what skills future students should be equipped with (Macgilchrist et al., 2024; Rahm, 2023a).
Work: the potential negative and positive implications of the development of NTs for workers and companies require careful consideration. The centrality of work in many people’s lives, the particular and unique risks that arise in the workplace (Moradi & Levy, 2020), and the ever-increasing blurring of boundaries between the workplace and the private sphere all call for a thorough evaluation of these implications.
Entertainment: although risks and opportunities may be underestimated in the entertainment sector, this area also has implications for how technologies are ‘normalised’ and introduced into people’s lives.
The power of narratives shaping technology and policy development
A sociological engagement with NTs allows us to assess the discourses, narratives and imaginaries surrounding NTs and their implementation in different contexts. This ‘talking artificial intelligence (AI) into being’ (Bareis & Katzenbach, 2022) is based on the assumption that the design of digital technologies is not simply a technical development process but is also embedded in broader socio-political, economic and cultural practices.
For instance, a study of political discourses on AI and how they shape resource allocation, infrastructure and organisational projects demonstrates the key role of technological development, policy and discourse in contributing to what the authors call the ‘AI hype’.
This may facilitate the deprioritisation of sober assessments of risk and potential, but it also risks positioning the rise of artificial intelligence as an inevitable path in technological development.
A recent commentary by Lucy Suchman criticises the unquestioned positioning of artificial intelligence as ubiquitous and prevalent. Policy and academic literature have paid much attention to controversies around artificial intelligence instead of problematising artificial intelligence itself (Suchman, 2023).
These studies examine how hype influences and shapes our social understanding of what a technology is and what its capabilities are. Similarly, neuropsychologists have coined the term ‘neuro-hype’ to describe an empirically grounded phenomenon in which over-promising NT capabilities leads people to firmly believe in their potential, even when this belief is scientifically unfounded (Ali et al., 2014).
In summary, these studies show that narratives, discourses and imaginaries about the potential effects of technologies co-construct ‘regimes of anticipation’ (Adams et al., 2009) in which policy makers feel challenged to respond through policy development and regulation.
The problems and challenges arising from the development of NTs resemble those that accompanied the rise of artificial intelligence, not least because NTs often apply artificial intelligence systems to analyse data, but also because of their alleged ability to infer individuals’ traits, emotional states and behaviour.
In the last three decades, we have witnessed the study of ‘imaginaries’, especially in technological development. According to Rahm, these allow us to analyse how problems and their solutions are understood and the implications that arise from them (Rahm, 2023b). Jasanoff and Kim define them as ‘collective, institutionally stabilised and publicly represented visions of desirable futures [...] achievable through and in support of advances in science and technology’ (Jasanoff & Kim, 2015). The focus here lies in collective views and how they are institutionally supported.
Taking the rise of artificial intelligence as an impetus to study the use of NTs in different contexts, we must consider the ambivalent social implications of data-driven technologies. The harms and potentials may resonate in different domains, but the way in which users or those affected are imagined varies from country to country.
The (potential) social implications of NTs must therefore be considered with respect to different socio-technical imaginaries in different social domains. The social implications of NTs reflect existing concerns about the use of data-driven technologies in various social domains. We therefore reconsider the domains of education, work, and entertainment.
Education. The push towards the introduction of artificial intelligence and other data-driven systems in educational contexts often comes with the promise of being able to meet the learners’ diverse needs. The development of these technologies, however, is criticised for producing a limited view of what learning and education are (Selwyn, 2022). Through the introduction of these technologies, responsibility for learning and achievement is delegated to the individual learner instead of supporting social relationships as part of the learning process (Macgilchrist et al., 2024).
Labour. The use of data-driven systems poses many problems; such systems increase the capacity for worker surveillance and produce new possibilities for how workers can be measured and judged (Ajunwa et al., 2017; Manokha, 2020). Artificial intelligence-based technologies are often introduced to support worker well-being. The literature on this approach criticises the fact that well-being is framed around productivity (Hull & Pasquale, 2018), which narrows the scope of how well-being can be discussed in the workplace (Tirabeni, 2023).
Entertainment. The push towards datafication, i.e. the ubiquitous collection and use of data, also extends to entertainment. As the Internet of Things (IoT) and other sensing devices are used in a growing range of entertainment functions, user data, including behavioural data, are collected (Hallur et al., 2021).
It is therefore important to explore and analyse the underlying social assumptions that the introduction of such NTs both requires and renders necessary.
What is a social problem? How do technologies co-construct social problems?
For a social phenomenon to be understood as a social problem, it must be collectively defined as such and be congruent with a normative understanding held by a group of people (Spector & Kitsuse, 2001). The construction of a social problem thus depends on the various actors involved and their positions in society.
Technological development plays a role in the co-construction of social problems as it often relies on problematisation to justify its approach.
The social model of disability challenges the common view of disability as a medical problem or as a problem within the individual (Beckett & Campbell, 2015; Goering, 2015). Instead, it holds society responsible for creating disabling conditions. Recent advances in the development of artificial intelligence are criticised because they are based on a legacy of ableist technology that frames disability as a problem in need of a technological solution. The concept of ‘technoableism’ was introduced to describe a ‘rhetoric about disability that simultaneously talks about empowering disabled people through technologies while reinforcing ableist clichés about which body-minds are good to have and who counts as valuable’ (Shew, 2020). As NTs focus on improving or enhancing human capabilities or supporting medical conditions, it is crucial to consider how they contribute to a conception and problematisation of humans as imperfect.
Considering the political economy of technology production, the concept of ‘sphere transgression’ reveals how large players in the technology sector position themselves as experts in multiple domains (Taylor et al., 2023). This then shows that technology produced for a specific purpose is commercialised and adopted in other areas (Sharon & Gellert, 2023). A case in point could be that software created for business purposes is distributed in school environments. According to this argument, a strong democratic society relies on the separation of distinct social spheres. If a company or social actor is able to hold power in several spheres, this damages social cohesion and democracy in general.
School education and training
In the field of education, the ability to measure pupils’ concentration levels is seen as a potential benefit of NTs. This measurement could be conducted using headphones and is intended to contribute to the improvement of individual children’s learning.
The educational literature addresses a number of concerns regarding the availability and use of neural data. On the one hand, it frames these concerns as a computer security problem, questioning the robustness of the systems involved. On the other hand, it warns against the possibility of data being used for unintended purposes, e.g. for commercial ends. The literature also raises the issue of consent and decision-making, which would be delegated to the learners’ parents or legal guardians.
Apart from data issues, researchers warn against the possibility of manipulation. As children’s brains are still developing, it is unclear how they respond to NTs: side effects, damage and other undesirable consequences, especially long-term ones, have not been thoroughly studied. This leads to uncertainty about the impact of NTs on typical brain maturation.
Researchers also warn against NTs being introduced into educational settings with false promises. There is no clarity on how these technologies improve educational outcomes. This relates to NT outputs: the data generated by these technologies may be unreliable due to changes in the system or changes in students’ needs. Furthermore, teachers, parents and students may not have the skills to interpret the data correctly or as intended, which could lead to problems such as stigmatisation. What is considered normal brain functioning may change with the advent of NTs. This may also lead to increased pressure to use NTs.
Similarly, the literature addresses issues related to teachers’ skills: they may not know enough about technology or students’ brains to effectively use NTs to facilitate learning. Teachers will need to be trained. This relates to the need to adapt pedagogical materials to support NTs. Another aspect mentioned is the importance of teachers’ attitudes towards NTs as this would have an impact on the success of adoption in classrooms.
Work
A number of possible use cases in the working environment are mentioned. Recruitment procedures could, in the future, use the analysis of brain data to determine a candidate’s fit for an organisation. This is seen as highly problematic, particularly with regard to possible misinterpretations and racial and gender bias. There is also the issue of NTs potentially being used to monitor workers, in addition to other digitised practices of worker surveillance. Workers’ inability to give free consent, resulting from power disparities, needs to be considered in work contexts. The ability to turn words into text through the use of NTs is a promise of some products already on the market, such as Facebook Reality Labs’8 EMG bracelet and Emotiv headsets that measure brain data.
The literature on labour issues raises a number of further problems, where synergies with other areas can be identified.
The unknown long-term effects on workers may relate to brain damage, bodily integrity, and changes in users’ personal integrity. This would undermine the intended purpose of introducing NTs, namely to increase health and safety in the workplace.
The massive use of data, as part of broader monitoring of workers, is addressed as a key workplace issue. This relates to the potential misuse of data and the risk of it being used in other contexts, such as access to housing, loans, insurance and more. The use of NTs and their data can thus lead to many abuses. These concerns parallel similar ones regarding privacy, accuracy and lack of explainability. There is also the possibility of data being monetised by organisations selling it to third parties.
Several studies also mention uncertainty about the performance and capabilities of NTs as an important workplace issue. NTs may not be able to adapt to the different situations in which they are used, causing inaccuracies. Tools whose effectiveness cannot be clearly evaluated are positioned as unreliable.
Due to the power relations that exist in the workplace, workers may also find it difficult to freely consent to the use of NTs in an informed manner. Furthermore, the literature cites increased social pressures and performance standards as a problem when NTs are introduced. Since some people may be more prone to experience stress (arousal), workers may lose opportunities due to their neural structure. This adds another level of discrimination and is exacerbated by other factors related to brain data and resulting parameters. Due to socialisation in educational settings, workers from wealthier backgrounds may have an advantage over others as they may have learnt to use these technologies from a young age.
Entertainment
The discussion on NTs in games, art and other forms of entertainment frames users’ emotions around parameters of game success such as confusion, boredom, or satisfaction. There is enthusiasm for the ability to respond to users’ cognitive states in real time and adjust the difficulty levels of games, for example, or to use brain data to create art. The ability to control games through brain input may also open up games to a wider group of players, who may have physical disabilities that prevent them from participating in certain games. The enhancement of human creativity is present in the discussion on NT and entertainment.
In the field of entertainment, researchers also raise concerns about optimistic advertising that could mislead what technologies are actually capable of. Similarly, issues of security, IT security, accuracy and data collection are also mentioned. In this sense, researchers warn against the application of medical devices in non-medical contexts.
Promises and objectives of neuro-empowerment
The differences between the goals of NTs in education (improved educational outcomes), at work (increased productivity) and in entertainment (countering boredom) are remarkable. While NTs in education are based on the political objective of improving education, e.g. by supporting customised learning, the use of these devices in the workplace is mainly aimed at reducing accidents in safety-critical roles. In entertainment, the aim is to provide more user-centred gaming experiences. For students and workers, the focus lies on measuring and inferring mental states such as attention and fatigue levels.
This potentially reconfigures what we socially value in education (here reduced to improved learning) and well-being at work (here reduced to cognitive processes).
Transformation of social relations
When brain data become measurable and accessible, this has implications for how we relate to one another. The possible transformation of the relationships between teachers and learners, and between workers and employers, is associated with the new parameters made available by NTs.
The literature does not necessarily clarify who should have access to the data and with what benefits. It is unclear how possible organisational embeddedness includes and excludes certain groups of actors. Often, however, workers, educators or students are not central parts of the discussion.
This leads us to reflect on the potential skills required to introduce NTs and raises a normative question about what constitutes a ‘successful’, ‘good’ or ‘ethical’ use of these technologies. Teachers’ lack of competence with regard to these technologies is seen as a factor contributing to ‘unsuccessful’ introduction. As with any technology, there is debate about what and who constitutes the ‘intended purpose’ of NTs and how NTs can be re-appropriated by social actors for (different) purposes or in an attempt to resist their use altogether. Furthermore, evidence suggests that NTs can be easily manipulated and their data falsified if sensors are not used correctly.
Emotions and skills are also negotiated differently in the intended uses of NTs in the three domains we are analysing. Creativity, for instance, is often undervalued in discussions within educational and work contexts, but has a prominent place in discourses on entertainment, indicating a gap in the understanding of its broader implications. Furthermore, there is a tendency in some literature to oversimplify work environments, focusing exclusively on productivity parameters and neglecting the nuanced interplay of emotions.
NTs make judgements about different emotional and mental states. Furthermore, the interpretation of brain data poses challenges, as emotions such as arousal (stress) can have a dual effect, both supportive and potentially harmful, depending on the context.
The literature rightly raises cybersecurity issues when it comes to the governance, storage and transfer of brain data.
It is imperative to assess who has an interest in brain data and the potential effects on social relations9.
Considering the various potential actors involved in the use of NTs, it is difficult to assess their potential and risks. This leaves room for manipulation or ‘neuro-enchantment’.
Through the introduction of NTs, new social norms are established: optimisation of the self takes primacy over other organisational and institutional goals.
One wonders how a society’s notion of disability is co-constructed through these technologies. The social model of disability sheds light on the emergence of new norms of cognitive abilities and how these create a conception of human beings that requires ‘adjustment’.
Moreover, the promised assessment of people’s ‘correct’ mental states and characteristics is based on the assumption that identities and personalities are ‘stable’ and ‘fixed’ (and measurable within a pre-determined range). It does not take into account the complexity of human dynamics and emotions. At the same time, NT development produces an understanding of certain emotions and mental states as ‘desirable’, ‘undesirable’ or ‘unacceptable’ in different social settings. This shifts the responsibility for managing these mental states onto the individual: responsibility for performance becomes an individual pursuit.
Policy makers should also be encouraged to question the potential of Big Tech companies to accumulate ever-increasing power across a range of social spheres, and the implications this has for social cohesion and democracy more generally.
NTs raise the issue of defining how we as a society want to relate to each other, what our idea of good education and good work is, and who can access these technologies for the benefit of people.
Results from a sociological point of view
Neurotechnologies represent the human brain as a given instead of humans as socially integrated and evolving beings.
NTs operate according to a reductive approach, in which emotions and mental states are isolated from broader social contexts.
This is in stark contrast to sociological research on emotions as socially integrated, i.e. as produced and responsive to social contexts. In socio-technical imaginaries around the NT, the brain becomes a ‘fact’ before anything else, including personal accounts and interpretations within a social context. In light of these insights, care must be taken to problematise conventional notions, such as ‘levels of attention’, and consider a fuller range of emotional states, including confusion, boredom and satisfaction, in various domains.
Overall, the inference of people’s personality traits is based on the assumption that identities are stable and fixed instead of evolving and changing as part of the continuous transformation and negotiation of social relations. Similarly, the assessment of people’s mental states functions in a reductive manner. Complex social contexts and situations are reduced to cognitive states and the brain becomes a source of seemingly factual and objective knowledge. To add another level, only certain emotions are made measurable and thus articulable. This depends on the social domains in which NTs are expected to be used: while NTs in the entertainment sector are intended to measure boredom and satisfaction, those in the work and education sectors measure attention levels and fatigue. Since these technologies tend to provide a simplified understanding of social contexts, the predominant discourse on NTs undermines the notion that people are embedded and constituted in socio-technical contexts. For example, the social disability model shows how human beings can be constituted as disabled by technologies.
The acquisition, processing and subsequent use of brain data and associated parameters by different social actors reconfigure the way we relate to ourselves and others. NTs can generate knowledge about bodies and brains that is privileged over the embodied experience of one’s emotions, feelings and well-being.
As with other socio-technical innovations, there is a danger that new socially expected and accepted ‘cognitive performance’ norms will develop that lead to the medicalisation and pathologisation of neurodiverse individuals and groups.
Meanwhile, NTs redefine what is conceived of as a social problem (requiring a technical solution). For example, socio-technical imaginaries of NT adoption in education define attention and concentration levels as a key problem to be solved. Teachers, in turn, are problematised as lacking competence regarding how the brain works.
Moreover, human beings are embedded in the existing (and often unequal) power structures in various social spheres. For example, employees have less power than employers in how working conditions are configured. While laws in many countries regulate and protect the interests of employees, this is not the case everywhere and for every sphere of work (e.g. gig workers). Although it is not yet clear who will be able to access brain data and potentially manipulate cognitive processes, research on comparable data-driven technologies has shown how technologies exacerbate and contribute to the unequal distribution of power (Eubanks, 2018).
Socio-technical imaginaries do not remain at the level of discourse; the accompanying anticipations determine how resources are allocated, what and who receives funding, what kinds of ‘techno-solutions’ are pursued, and what regulatory solutions for NTs are sought.
Although this article refers to academic publications, it is clear that the discourses are mainly driven by market players who then make claims in various social spheres10.
The concept of sphere transgression shows how this leads to and reinforces power imbalances at the socio-political level as it allows the dominant and resource-rich companies to determine a different understanding of the various social spheres and social roles within them (e.g. the ‘new learner’ or the ‘new worker’). Subsequently, these dominant actors are able to set the agenda on what society perceives as social problems in need of neurotechnical ‘correction’.
The literature review demonstrates a clear empirical gap on the organisational and day-to-day integration of NTs. It is crucial to assess how different actors negotiate the use of these technologies. Further research should focus on experts in the field and the lived experiences of people involved in NT development.
The basics of neurorights
From an ethical point of view, assessments of NTs must consider the opportunities, risks and unintended consequences for society and individuals. On a normative level, the philosophical and ethical foundations on which such assessments are scientifically based take a materialistic view of the human self, whose origin is located in the brain.
Although this approach finds broad academic consensus, there is a danger of ‘objectivisation’ and reduction of the human being to brain data. On the pure basis of such materialism, the special status of human dignity, which is a value in Europe and the West, would be difficult to sustain and could unintentionally lead to a weakening of the foundations of fundamental rights.
This would be of particular concern for highly vulnerable people (e.g. coma patients, people with cognitive disabilities).
As far as technology design is concerned, concrete technologies offer a broad spectrum of possibilities, but these require a form of impact and implication assessment.
With regard to the experiments conducted by the Neuralink company, for example, it is necessary to consider the risks for individuals, the effects that the mere existence of such technologies could have on society as a whole, and the unintended consequences (e.g. pressure to meet new performance levels achievable only through invasive NTs). However, the opportunities offered by NTs should also be given due consideration.
In order to assess the proposals to be made in this direction, the philosophical-ethical basis of the normative concerns conceptualised as ‘neuro-rights’ must be taken into account. The orientation point for these considerations are the foundations in the history of ideas, as well as human rights and fundamental European values.
This step is necessary in order to assess the legal level at which neurorights can be meaningfully implemented. This becomes particularly clear in light of the claim that neurorights lie at the junction of moral and legal rights (Ienca, 2021a. P. 44). This suggests that the formulated neurorights are not only moral in nature but also carry an inherent claim to implementation within concrete secular legislation.
There is, however, no moral obligation to positivise, i.e. to transpose moral rights into legal rights, as morality in itself is sufficient in its claim to validity (Kirchschläger, 2019. P. 28).
From an ethical point of view, the suggestion of the simultaneity of moral and legal rights seems to arise primarily from the way in which the proposed neurorights are presented and imagined. However, if one recognises that not all moral rights per se entail a claim to positivisation, and if one takes the moral nature of neurorights seriously, one must ask within which moral system neurorights are proposed, what they aim at, and under what conditions and to what extent they are to be implemented as legal rights within that moral system.
The foundations chosen to propose new neurorights are based on the moral foundations of what has been established in European intellectual history as ‘human rights’. The debate on the extent to which human rights themselves are moral or legal rights touches on the transition from natural law to positive law, in which human rights are recognised in legal frameworks both nationally and internationally (Nickel, 2019). This has to be considered in the context of the proposed neurorights.
However, if neurorights within this system are understood as moral obligations with the importance of establishing new rights, the question arises whether legal implementation should be placed at the highest level of the moral system or whether a subsidiary level is more appropriate.
Those who support neurorights repeatedly refer to:
1) freedom of thought and conscience;
2) mental integrity;
3) privacy
and conceptually relate neurorights to these (Ienca, 2021a. P. 44). The concept of personal identity is also taken up (Ienca, 2021a. Pp. 53–54).
Fundamentally, focusing on freedom of thought as the antecedent for other freedoms, along with a materialistic worldview, entails the danger of reducing people and their freedoms to biological functions, instead of focusing on human dignity as a general and inviolable foundation. Although it must be recognised that having freedom is not the same as exercising it, the latter should not become the criterion for the former, as it would not do justice to the complexity and dignity of human existence. At this point, it should be noted that a freedom does not in itself constitute a legal right, nor does the loss of such a freedom automatically represent a loss of rights. However, the intertwining of freedom and rights must be particularly emphasised, especially in the context of human rights, as it was precisely the protection of freedom that was conceptually relevant to the pioneers of human rights (Bielefeldt, 2023; Funke, 2023; Willoweit, 2023). If neurorights were incorporated at a subsidiary legal level, the danger of functionalisation and objectivisation of a human being could be mitigated by the primacy of the dignity concept.
As an unconditional aspect of being human, dignity is a stable and inviolable foundation that transcends ability, origin, gender, wealth, etc., and constitutes rights that protect the most basic ways of being human. Therefore, it seems favourable to base freedom and the right to specific freedoms on human dignity rather than ability/capacity.
At least two neurorights, the right to free will and the right to personal identity, are based directly or indirectly on freedom of thought and conscience. The right to free will constitutively presupposes a form of freedom of thought, since any restriction of thought itself would fundamentally limit the formation of a will.
With regard to the right to personal identity, proponents of neurorights refer to the tradition of John Locke, according to which a person is an intelligent being who possesses both reason and the ability to reflect, recognising oneself as a self, as the same thinking being at different levels, times and places.
NTs could cause such problems of continuity and coherence through brain stimulation, in addition to drugs, hypnosis and other external influences (Ienca & Andorno, 2017. Pp. 21–22). Interventions of this kind contradict the right to freedom of thought and conscience, as they manipulate individual thought from outside the person's control and bring about changes in the person.
In principle, the human mind is protected from manipulation by psychoactive substances and other forms of interference under Article 18 of the ICCPR. This treaty has been signed by all EU Member States, and its authority appears to be unquestioned even within the EU institutions.
The protection of freedom of thought, conscience and religion is an absolute right that cannot be weighed against other rights (Ligthart et al., 2022. Pp. 2–3; Shaheed, 2021. P. 25).
Personal identity is also connected to mental integrity, which was claimed as a separate right in one of the first formulations of neurorights. In terms of content, mental integrity is naturally closely related to freedom of thought. In debates, the distinction is sometimes drawn that freedom of thought is intended to protect against intrusion into (and extraction from) the human cognitive sphere, while mental integrity is intended to protect against harm (Ienca, 2021a. P. 50).
Schauer (2020) addresses this aspect of protection against harm in his considerations on freedom of thought, but in this context he sees harm prevention as something that should go hand in hand with freedom of thought.
Other scholars regard mental integrity as the counterpart of bodily integrity and argue on this basis that the right to mental integrity can be conceived as analogous to the right to bodily integrity, and indeed as even more fundamental than the latter (Craig, 2016).
Malicious interventions via NT, whether intentional by malicious persons or due to technical malfunctions, can be considered a radical violation of mental integrity.
Lavazza and Giorgi (2023) point out in this context that a special feature of NTs is that the (malicious) manipulation of individuals could take place without their knowledge, which distinguishes the dangers of NTs from those of psychoactive substances. Recently, however, Tesink et al. (2024) argued that with the help of a potential extension of the mind through NTs, extended protection of mental integrity might also be possible. It should be noted that they assume that ‘[...] the mental states that make up the human mind – including beliefs, desires, and memories – are not only realised by our brain but can also be realised by physical processes and artefacts located outside the brain and even beyond the body’ (Tesink et al., 2024. P. 3).
Based on the concept of privacy, we refer here to the right to mental privacy. Privacy is recognised as an area of concern, just like identity (Goering et al., 2021). Reflections on privacy play a key role in the context of neurorights: whereas freedom of thought and mental integrity focus on the integrity of the inner sphere, the central question of mental privacy is whether the data read out by means of NTs are sufficiently protected and whether the individual is adequately shielded from such read-outs. A particularly problematic aspect, Yuste argues, is that '[...] neurodata (i.e. recording of nervous system activity) can be generated unconsciously and often involuntarily' (Yuste, 2023).
Furthermore, brain activity has already been successfully decoded in several areas using non-invasive NTs, including images, emotions and, in combination with artificial intelligence, speech heard by the subject (Yuste, 2023).
Most companies distributing consumer products do not treat neural data with the special care required due to their sensitivity (Genser et al., 2024). With regard to consumer privacy and transparency, the findings are cause for concern and need further investigation into how the right to privacy could be enforced and what it includes with regard to NTs.
Looking at the history of the meaning of privacy, it is evident that since the first mention in the essay ‘The Right to Privacy’ by Warren and Brandeis (1890), who saw this right as a ‘right to be left alone’ in the face of the burgeoning tabloid press, it needed constant updating and expansion. From the beginning, the concept of privacy had an inherent protective function against new technologies and their negative effects on human life. What Warren and Brandeis saw in the spread of cameras expanded to include the possibility of mobile audio recording, digital technologies such as the Internet and, finally, the emerging NTs.
The academic (and legal) discourse followed technological development and elaborated a wide variety of theories on what exactly comprised the right to privacy and, its intellectual child, the right to data protection.
Although, as described above, a materialistic worldview prevails among the arguments in favour of neurorights, there is already a distinction that privacy is primarily aimed at brain data, whereas the aforementioned mental integrity is intended to protect against interference in the cognitive sphere.
If one takes the materialist perspective to its logical conclusion, it could be argued that mental integrity is reducible to privacy, since the protection of brain data should also cover their harmful alteration. The distinction between privacy and mental integrity is thus a distinction not of merit but of purpose. This distinction may be useful for understanding; in principle, however, the reduction remains possible, so the separation of privacy and mental integrity would require further argumentative support.
It must be emphasised that treating privacy as merely a matter of protecting data, be it brain, neural or any other data, does not reach the broad dimension that the already established right to privacy should encompass. As Warren and Brandeis already recognised, it is not only a matter of the taking and dissemination of data (in their case, images), but also of unjust intrusion into the personal sphere of a human being.
As far as NTs are concerned, it is of utmost importance to keep this dimension of privacy in mind rather than reducing privacy issues to questions of data collection, processing, management, storage, etc. Data must, of course, be addressed, but always with the human being as a whole in mind.
Analysing the foundations of freedom of thought, mental integrity and privacy in the history of ideas, we see particularly clearly that these concepts still form the argumentative basis for the protection of human life in the face of various technical challenges. Based on what was conceptually useful in the past for the formation of specific rights, it is still possible to derive up-to-date considerations today. In this sense, these concepts provide an adequate and stable basis for the demand for new neurorights. One can also acknowledge that the interests articulated as neurorights are worthy of protection, even though the neurorights themselves are by no means necessary, but merely possible, derivations from the aforementioned rights.
The protection paradigms provided by these foundational concepts are certainly applicable to new technologies such as NTs, even if they were not originally conceived with these in mind. Moreover, the foundations in the history of ideas are not only a conceptual basis, but have themselves been translated into rights.
There are already several rights and treaties on fundamental concepts, of which Article 18 of the International Covenant on Civil and Political Rights (ICCPR) is a founding element, safeguarding individuals from manipulation through psychoactive substances and other forms of mental interference.
The unanimous endorsement of the ICCPR by the EU Member States and the bloc’s support for the integration of the ICCPR with the International Covenant on Economic, Social and Cultural Rights (ICESCR) into a single document underline the EU’s commitment to the protection of human rights, including in the NT sphere.
Furthermore, freedom of thought is guaranteed by Article 9 of the European Convention on Human Rights (ECHR).
Considering the area of NTs, it is useful to also consider the freedom of thought aspects of mental integrity and privacy.
As Ienca states: ‘If freedom of thought protects the human brain and mind from undue external interference and the right to privacy protects personal information (including mental information) from external intrusion, other normative principles protect the human brain and mind from harm’ (Ienca, 2021a. P. 50).
The right to mental integrity finds legal support in a number of instruments, such as Article 3 of the Charter of Fundamental Rights of the European Union (CFR) and Article 8 of the European Convention on Human Rights (ECHR), both of which affirm the need to respect physical and mental integrity. The CFR, in particular, explicitly recognises the right to mental integrity, reflecting a broader understanding of human dignity that embraces both the physical and the psychological dimension. This perspective is reinforced by a similar provision in the Convention on the Rights of Persons with Disabilities (Article 17), which recognises the fundamental importance of protecting both physical and psychological integrity.
The legal framework provided by Article 17 of the ICCPR and Article 8 of the ECHR constitutes the fundamental elements of the right to privacy. Furthermore, the interpretation of Article 17 has been updated through General Comment No. 16 on the ICCPR to explicitly include information obtained or processed by digital means (Office of the High Commissioner for Human Rights, 1988).
The case of Chile on the protection of neural rights11
On 8 August 2023, the Chilean Constitutional Court adopted a ruling, the first of its kind in the world, destined to remain a milestone at the boundary between technology and the protection of personal integrity.
In essence, it held that even devices intended to track people's brain activity for 'private use' must be authorised by the health authorities; moreover, if users' data are subsequently processed for scientific purposes, the user's consent must be informed, express, specific as to the research and its purposes, and dynamic, i.e. renewed whenever the purpose of the research changes over time.
Lawyer Maite Sanz de Galdeano broke the news at the Global Summit of the Legal Hackers community held on September 8 and 9 in Madrid.
“The constitutionalisation of neurorights in Chile has opened up the possibility of a ‘constitutional protection action’ against the commercialisation of a device that puts them at risk. The ruling shows that the lack of specific regulation exposes users to uncontrolled risks, which justifies a particular rigour in the application of the current law and, on the other hand, a reflection on the necessary changes: if these technologies escape the controls of medical devices, in Chile (and in the rest of the states) at least the regulations on consumer protection, product safety and privacy, which currently do not guarantee the safety of users because they do not take into account these new risks, should be reviewed. In terms of neurodata, in addition to the risks to privacy, a ‘new’ vulnerability of the human being, hitherto unexplored, is revealed: the knowledge and consequent control of brain activity, for purposes that are not exclusively medical. The answer can only be an explicit regulation of neurodata, as a category of sensitive personal data, enabling the defence and development of neurorights. In Europe, the GDPR must be amended in this sense”.
The market for hi-tech wearables
The research agency International Data Corporation (IDC) predicted a shipment of around 442.7 million wearable devices in 2024, equivalent to a 6.3 % year-on-year growth.
These comprise 325 million earbuds and headsets, 162.2 million connected watches, 33.8 million smart bands (health-monitoring devices) and 2.2 million products in other so-called 'wearable' categories.
In terms of market share, the numbers translate into 62.1 % earbuds, 31 % smartwatches, 6.5 % smartbands and 0.4 % other devices.
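The quoted market shares can be reproduced directly from the per-category shipment counts. A minimal check (variable names are our own; figures are those quoted above, and note that the four categories sum to roughly 523 million units, so the percentages follow from the category total rather than the headline figure):

```python
# Per-category wearable shipment forecasts (millions of units), as quoted above.
shipments = {
    "earbuds/headsets": 325.0,
    "smartwatches": 162.2,
    "smartbands": 33.8,
    "other": 2.2,
}

total = sum(shipments.values())  # ~523.2 million units across the four categories

# Market share of each category as a percentage, rounded to one decimal place.
shares = {name: round(units / total * 100, 1) for name, units in shipments.items()}
print(shares)
# {'earbuds/headsets': 62.1, 'smartwatches': 31.0, 'smartbands': 6.5, 'other': 0.4}
```

The computed shares match the percentages reported in the text.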
We do not know the number of devices intended to measure our brain and cognitive activity; but we do know that companies such as Elon Musk's Neuralink are already particularly active in this direction.
The Chilean case: wearables and brain activity
Chile is the first country in the world to have legislated on neurotechnology and included the ‘rights of the brain’ in its Constitutional Charter.
In 2021, an amendment to Article 19 was passed to ‘protect the mental integrity and immunity of the brain from the advances and capabilities developed by neurotechnology’12.
Though it may appear a premature choice, the Chilean parliament preferred to take the development of neurotechnology in hand: its ability to act on the human brain is still limited, but its applications are spreading beyond the medical field. The case before the Chilean Constitutional Court can be described as a 'pilot case', both in terms of the plaintiff and of its genesis.
The plaintiff, Guido Girardi Lavin, is a Member of Parliament and a promoter of the protection of human rights, including through The Neurorights Foundation; however, he brought the case before the judges, first on the merits and then at the constitutional level, as a buyer and user of Insight, a wireless device that, through a headband of sensors, collects information on the electrical activity of the brain as well as data on gestures, preferences, reaction times and cognitive activity. Insight is produced by Emotiv, a US company.
The case submitted to the Constitutional Court
Girardi Lavin purchased the device via the web and followed the instructions to activate it: he created an account on the company’s cloud, agreeing to the terms and conditions; he downloaded software onto his PC, again agreeing to the terms and conditions of service. But he decided to use the free and not the pro licence; a choice that – he reported to the courts – did not allow him to export or import his brain data records.
The plaintiff also pointed out that everything was recorded and saved in Emotiv’s cloud. In short, he complained about the potential risk of hacking, surveillance, unauthorised capture, and commercialisation of his neuro-data.
The appeal therefore concerned the violation of the Chilean Data Protection Act (No. 19.628), both in the part establishing the liability of the data controller and in the part granting the data subject the right to obtain the deletion or blocking of their data upon closure of their account.
Emotiv stores user data for scientific and historical purposes.
Girardi therefore asked the court to compel Emotiv to amend its privacy policy and to prohibit the marketing of the device in Chile until it had done so, while also ordering the company to delete its database immediately.
The company argued that Insight is not a medical device but a self-quantification device; that it has no invasive purpose; that the terms and conditions are detailed; that there is a requirement of express consent for both the processing of personal and brain data; and that no proof of actual harm suffered by the plaintiff had been provided.
In addition, Emotiv argued, there had been no violation of either the Chilean Privacy Act or the more restrictive GDPR (General Data Protection Regulation), the European regulation on the processing of personal data, which requires pseudonymisation, i.e. processing that prevents the data collected from being attributed to a specific or identifiable person without the use of additional information.
With regard to the violation of Article 13 of the Chilean Privacy Act, the company had noted that data are saved as long as the user’s account is active and that with regard to ‘brain data’, the user can at any time revoke consent to the processing, as the privacy policy prescribes.
As for further processing for scientific or historical purposes, the company noted that the data are anonymised, encrypted, stored securely and kept separate from other information. They thereby acquire the nature of statistical data and, as such, fall outside the scope of privacy protection.
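The distinction on which this argument rests, between pseudonymisation (reversible only with a separately held key, in the sense of GDPR Article 4(5)) and anonymisation (irreversible removal of identifiers), can be sketched in a few lines. This is an illustrative toy, not Emotiv's actual pipeline; all names, keys and fields are hypothetical:

```python
import hashlib
import hmac

# Hypothetical key; under the GDPR it must be kept separate from the data it unlocks.
SECRET_KEY = b"held-separately-by-the-controller"

def pseudonymise(user_id: str) -> str:
    """Replace an identifier with a keyed hash (pseudonymisation).
    The same user always maps to the same token, so records remain linkable,
    but re-identification requires access to the separately held key."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def anonymise(record: dict) -> dict:
    """Strip direct identifiers entirely (anonymisation).
    Unlike pseudonymisation, no key can link the result back to a person."""
    return {k: v for k, v in record.items() if k not in {"user_id", "email"}}

record = {"user_id": "girardi", "email": "x@example.com", "session_minutes": 42}

# Pseudonymised: identifiers replaced by a token that only the key-holder can resolve.
pseudo = {**anonymise(record), "subject_token": pseudonymise(record["user_id"])}

# Anonymised: only non-identifying fields survive.
print(anonymise(record))  # {'session_minutes': 42}
```

The legal weight of the difference is that pseudonymised data remain personal data under the GDPR, whereas truly anonymised data, as Emotiv argued for its 'statistical' records, fall outside its scope.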
The Constitutional Court’s decision
The Constitutional Court overturned the judgments on the merits, deeming it relevant that the marketing of Insight had not been subject to medical (or customs) authorisation, and directed the competent authorities to analyse the device in the light of the regulations in force with a view to its future marketing in Chile.
The judges ruled that ‘prior to the development of new technologies involving more and more aspects of the human person, aspects that were unthinkable a few years ago that they could meet, the State must pay particular attention and care in the control, in order to prevent and anticipate the possible effects, in addition to directly protecting human integrity in its entirety, issues that include privacy and confidentiality and the rights of mental integrity and the subject of scientific experimentation. In this way, before the arrival of a new technology such as the one at issue in these proceedings, which treats the electrical activity of the brain in a dimension that was once absolutely private and personal, and outside strictly medical contexts, it is absolutely necessary that before allowing it to be marketed and used in the country, technologies and devices be analysed by the competent authority, on the understanding that they raise issues that have not been previously investigated by it’13.
Neurorights in international law
The Chilean Constitutional Court has had to recognise that, despite the direct protection mandate contained in the amendment to the Constitution, there is currently no ordinary law that unravels all the knots imposed by the advancement of applied neuroscience, the requirements, conditions, permissible risks and use by individuals.
There is therefore a new fork in the road.
And yet, thanks to the international normative dimension, it is possible to draw a protective boundary along the lines set out below and referred to in the judgment.
The International Covenant on Economic, Social and Cultural Rights prescribes that people should be able to 'enjoy' scientific progress. The Unesco Declaration on Science and the Use of Scientific Knowledge prescribes that science should respect human rights and the dignity of the person in the sense already indicated by the Universal Declaration of Human Rights. The Universal Declaration on the Human Genome and Human Rights specifies that some scientific applications can be harmful and that scientists and other agents bear a 'special' responsibility of an ethical nature, which must be incorporated into the debate through public discussion. The Unesco Universal Declaration on Bioethics and Human Rights has already enshrined the general principles of human vulnerability and the integrity of the person, together with the rights to privacy and confidentiality.
The Court also referred to the Chilean legislation (No. 20.120) on scientific investigations on persons and the genome that banned human cloning, recalling the provisions that deal very specifically with the consent of persons involved in medical research. Under this legislation, the consent of those involved in research not only follows certain precautions when it is collected, but must be renewed every time the scientific investigation undergoes major changes.
With respect to the case at hand, the Constitutional Court found that the company producing the Insight device had failed to request this specific consent, which certainly cannot be considered implicit in the other consents, which are of ‘commercial’ nature.
The final decision
The Court therefore overturned the Court of Appeal’s decision, emphasising the need for new technologies, particularly those that deal with human activities that have hitherto been strictly private, such as brain activity, to be submitted to the competent authorities for scrutiny before neuro-capable devices are marketed and used in the country.
It therefore considered that the constitutional guarantees of Article 19 on mental and physical integrity had been violated since Insight was marketed without authorisation and without an assessment by the health authorities.
The Chilean court thus upheld the appeal, prohibiting the marketing of Insight until it obtains authorisation under the aforementioned regulations. The Institute of Public Health and the Customs Authority will have to assess whether the management of data collected with Insight strictly complies with the applicable regulations outlined in the ruling. It also warned Emotiv to delete all information that was stored in the cloud or on the portal.
Neurotechnology: a socio-ethical point of view
With regard to an ethical-normative reflection on these technologies, it seems obvious to draw on ethical frameworks and principles from the fields of medical ethics, bioethics and technology ethics. For example, the traditional principles of Beauchamp and Childress (2019) (autonomy, non-maleficence, beneficence, justice) can serve as important reference points for an ethical discussion.
In view of the massive intertwining of man and machine (and thus also with artificial intelligence) in NTs, it seems sensible to also consider the principles discussed in the field of technology ethics.
Floridi (2023), for example, takes the four bioethical principles, expands them by one principle and reflects on them against the background of AI.
A socio-ethical perspective is essential because, in the light of current discussions in the field of medicine and bioethics, it can introduce new approaches that we consider relevant in the context of NTs and thus open up a more holistic view of the need for ethical reflection on NTs.
It is obvious that not all ethically relevant aspects can be addressed, let alone adequately discussed, within the scope of this article, so limitations are inevitable.
Although the euphoria around NTs has influenced research, the market and society alike, there are important differences between these areas: NT research is usually conducted under strict ethical conditions and is accompanied and monitored by qualified third parties, which is why, under the existing regulation, individuals should generally not expect deception or negligence in this context.
NTs are fundamentally very ambivalent. On the one hand, they could help a large number of people suffering from illnesses (e.g., Parkinson’s, epilepsy, etc.) by effectively alleviating the symptoms of the disease. Other very positive application contexts arise, for example, with regard to prostheses (Raspopovic, 2020) and new possibilities in relation to rehabilitation.
In general, these technologies could be used to respond very specifically to individual clinical conditions and the needs of those affected. NTs could thus make a considerable contribution to the individual and common good.
On the other hand, however, there are also challenges and even risks and dangers. These begin with ‘collateral damage’ during treatment. For example, the current pulses continuously emitted by deep brain stimulation do not only affect the target area, but also affect other areas of the brain and can thus lead to known and/or unknown, possibly irreparable damage and further consequences (such as addiction symptoms or personality changes) (Gilbert et al., 2019).
BCI applications appear particularly risky and worthy of discussion, although they may also open up new opportunities for people with physical disabilities or be used to treat mental illnesses.
Adopting a socio-ethical perspective means, in particular, addressing issues of justice, solidarity, equality, rule of law and participation (Koska & Filipovic, 2017). This perspective also seems to be particularly relevant in the context of NTs and against the background of neurorights under discussion. It is not without reason that the above-discussed foundations of the history of ideas form the starting point for discourses on the creation of new rights or their adoption and actualization.
Freedom of thought and conscience, mental integrity and privacy are closely linked to the aspect of justice towards the individual, since they indicate which interventions and restrictions on the individual are at least considered unjust. The basis thus created necessarily applies not only to isolated individuals but to all individuals within a social structure, which in turn brings out aspects such as solidarity, participation and equality.
Furthermore, as part of the shift in perspective from the individual to the community, it is important to consider which aspects of justice require more attention accordingly. In short, just because something is only right for the individual does not mean that it is generally right for the community and vice versa.
In addition, a further set of relevant ethical issues arises, particularly with regard to the social implications of NTs: for example, questions of social justice and, in particular, issues of equal opportunities and participation seem central. While evidence-based, safe and effective disease-fighting NTs raise the question of equitable access to these technologies, and thus the aspect of participatory justice, the question also arises of what NTs mean for individual groups in this context. What do NTs mean, especially when they are available as commodities for the 'self-optimisation' of particularly vulnerable groups such as children, the poor and the sick? Moreover, in light of the socio-ethical principle of sustainability (Vogt, 2009), it is particularly important to consider future generations and aspects of intergenerational justice. These issues already illustrate the broad need for ethical reflection that accompanies NTs and that goes beyond questions of research ethics and, given the obviously technology-related issues, beyond the fields of medicine and bioethics.
Democratic societies are characterised by a strained relationship between the individual and the common good. The concept of human dignity, human rights and the Charter of Fundamental Rights emphasise the centrality of the individual. It is therefore the individual, the person endowed with dignity and thus with freedom and autonomy – every individual – who is the central and ultimate yardstick for all matters relating to the organisation of our society, including technological issues (Heimbach-Steins, 2022). However, we know that people do not live in a vacuum, but in concrete social contexts, that they are social beings. This social nature of human beings is often described as one of their central characteristics (Vester, 2009).
Living as a human being therefore necessarily means being part of a society and interacting, communicating and working together with other people on an almost daily basis. Together we build social structures and institutions, pursue work or hobbies and try to lead a good life. Without interaction and cooperation between individuals, our societies today would be inconceivable and would not function.
This raises the central issue of the responsibility to shape society as a whole, which affects all individuals as part of that society. It results in a tension between individual and societal interests. These different positions require appropriate reflection and balancing processes that take into account individual freedom as well as issues of social justice. NTs could and will affect both spheres, the individual and the social. Individuals are the potential bearers of NTs, but changes in the individual give rise to new social challenges, which is why the relationship between NTs and society also requires special attention.
Human actions are embedded in specific social contexts, and autonomy and freedom must also be considered in this light. It is precisely here that the influence of NTs must be examined. Although reference is often made to the positive aspects of various NTs, invasive and non-invasive alike, and significant sections of the population believe in them, the consequences extend beyond the physical and psychological dimension and are often ignored. From an ethical perspective, interventions of this kind also raise the question of what they mean for human action.
Action is closely linked to intentionality: ‘[...] [a] being has the capacity to exercise agency only in the case where it has the capacity to act intentionally, and the exercise of agency consists in the performance of intentional actions and, in many cases, in the performance of unintentional actions (which result from the performance of intentional actions)’ (Schlosser, 2024).
This last reference to unintentional actions reflects the fact that intentional actions lead to events involving unintentional ones. In this context, the concept of a sense of agency becomes particularly important, as it encompasses direct knowledge of our actions, which is also related to the judgement of our actions. The sense of agency describes the perception that we actually do something in the course of our actions and control them (Legaspi et al., 2024); it describes the sense of having ownership of our actions. This, in turn, has particular significance for an individual's self-perception and self-image.
Interference with the sense of agency has a particularly high disruptive potential, as it could lead individuals to doubt themselves. In this respect especially, NTs should be regarded with particular caution.
From an ethical point of view, in addition to the question of whether NTs are non-invasive or invasive, it is also relevant whether they are used in the context of research or are consumer goods. Particular attention must be paid to consumer products, as neurotechnology in the context of research and health interventions is in any case subject to very strict regulations and is studied in a closed context.
As consumer goods, they affect a large number of people and the central question is whether people are able to adequately assess the potential impact of the use of such technology on themselves and society.
There is also the risk that social pressure (both direct and indirect), in the face of factors such as the pressure to perform and the promises or hopes placed in the technology, may encourage increasingly reckless use of non-invasive or even invasive NTs. This also feeds into the debate on their valorisation as consumer goods.
For many years, people have been trying to improve and optimise themselves and, from an ethical point of view, there are good reasons both for (e.g., increased social performance, increased possibilities for individual happiness in life) and against (e.g., possible pressure to conform towards the use of improvement, open questions in relation to equity of access) the possibility of improvement (Schöne-Seifert, 2007).
However, digital central nervous system enhancement represents a relatively new quality in the enhancement debate. The generic term ‘neuro-enhancement’ refers to several areas of medical-technical intervention in the central nervous system. A distinction is usually made here between emotional, cognitive, moral, sensory and motor enhancement.
However, more in-depth ethical analyses, both individual and societal, are needed to correctly classify this complex issue (Fenner, 2019).
Digitalisation is not a sudden and unexpected natural phenomenon that has happened and is sweeping us away, but a man-made transformation. Technological innovations – which today take place mainly in large multinational companies or young innovative start-ups – are affecting the lives of many people with unprecedented intensity.
Many modern technologies are already accessible to the masses and this is a circumstance with enormous potential for transformation with regard to the human-machine relationship in the most diverse areas of life (Kirchschläger, 2022). NTs are (and presumably will become even more intensively so in the future) part of this digital transformation. There is no doubt that these new technologies present many opportunities, but also significant challenges. Technologies are never without value and are associated with power and the exercise of power – perhaps not even visible to many at first glance. Behind technological innovations are the interests and values of developers and producers. These interests and values are implicitly and/or explicitly part of the respective technologies.
The question immediately arises as to how these interests and values, which are consciously or unconsciously part of individual technological systems, could influence users. This raises, for instance, questions concerning freedom of action, but also people’s vulnerability in general. In view of the increasing attacks on digital infrastructures in the sense of cyber-warfare, the recurring hacker attacks against individuals and companies, and thus the overall vulnerability of digital infrastructures in the 21st century, we must also address the misuse of such technologies. A closer connection between man and machine would presumably transfer this vulnerability to humans and lead to new dangers. These aspects are particularly important when considering high-risk technologies and their potential impact on individuals and society, as such complex technologies always involve a certain amount of power. The various stakeholders (politicians, businesses, consumers, etc.) must not forget that power always entails dependencies and pressures, but also responsibilities.
Considering the subversive potential of new technologies, in-depth research efforts are needed, particularly with regard to these power relations and liability issues.
Finally, another central perspective should be emphasised that is intensively discussed in the debate on transhumanism but should be discussed even more intensively in the broader debate on digitisation: the anthropological perspective. Technology ethicist Armin Grunwald points out that behind the ethical question of digital transformation lies the question of who the human being is; who does a person want to be in the face of a highly technologised world and how can a person experience freedom, responsibility and creativity in this context (Grunwald, 2019a, 2019b, 2021). Grunwald therefore emphasises the question of the image of man in increasingly technologised societies because the image of man is influenced by the increasing degree of digitisation. Technological developments in the field of NT are particularly disruptive in this context. The image of a ‘man in need of optimisation’ is often outlined, who without the synthesis of man and machine – especially in light of developments in the field of artificial intelligence – risks falling further and further behind.
With the spread of NTs, it becomes possible to influence the central nervous system directly; the potential individual and societal consequences, both medium- and long-term, are unpredictable. In view of these implications, this topic should not be left to an innovation process driven by corporate interests; it requires a broad scientific (especially humanities-based), social and political discourse and debate.
Conclusions
Neurorights as human rights in the age of AI
An attempt has been made to summarize neuroscientific knowledge and experience in the context of neurotechnologies and the legal, ethical and social consequences of their use. Notably, reality will prompt us to take further steps to improve ethical and legal norms. Based on the previously identified five neurorights, the author assesses the necessity and expediency of their integration at the level of human rights and fundamental rights. These are: ‘the right to mental privacy’, ‘the right to personal identity’, ‘the right to free will’, ‘the right to equal access to mental augmentation’, and ‘the right to protection from algorithmic bias’.
It is clear that the existing framework of human and fundamental rights provides a well-established and effective protective shield.
The vulnus (weak point), if anything, lies in interpretation and in the political choice to apply the existing regulatory framework rigorously and strongly, where necessary with maximum extension and a strict precautionary criterion.
For example, the ‘right to mental integrity’ is already explicitly protected by Article 3 CFR as the ‘right to physical and mental integrity’. Changing this to a separate ‘right to mental integrity’ would require differentiation, assuming that the legislature did not introduce this right frivolously; it must mean something different from the right already enshrined in Article 3. However, this raises new questions: is the understanding developed under Article 3 transferable to the new right? Where are the limits? Would other aspects of integrity also need explicit regulation? What is new?
The EU body of regulation, with strong consumer protection, fair competition, high product safety standards and comprehensive digital integration, is well prepared to address the recognised problems associated with NTs. For example, medical devices and some NT devices are already heavily regulated by the Medical Device Regulation (MDR). Other NT devices not covered by the MDR must meet the generally high level of protection. However, adaptations could be considered, such as the explicit inclusion of neurological data in Article 9 GDPR, or an NT act comparable to the AI Act. In both cases, the focus would be exclusively on the regulation of high-impact technologies.
The next logical step is to intensify law enforcement efforts.
To this end, the EU should actively participate in telling the story of NTs, which implies the encouragement of science and education, as well as strict control of their commercial communication. This comprehensive approach ensures that the public receives accurate and balanced information, which is crucial for the effective regulation and responsible development of NTs.
One of the main problems identified with NTs is our insufficient knowledge, a gap predominantly filled by success stories used commercially for advertising, leading to ‘neuro-enchantment’ and ‘neuromyths’.
Consumer law, competition law and product safety law are well suited to counter false or exaggerated claims; they just have to be used. There is a social need to tell people the real story of NTs, clarifying what we know and do not know, where the opportunities, dangers and risks lie.
This means actively promoting NT research, publishing failures as well as successes, communicating results in a comprehensible form not only through science but also through public administration, integrating it into education and strictly controlling commercial representation.
APPENDIX/ ПРИЛОЖЕНИЕ
Regulation sources
Directive (EU) 2019/1024 of the European Parliament and of the Council of 20 June 2019 on open data and the re-use of public sector information.
Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules.
Directive (EU) 2019/771 of the European Parliament and of the Council 20 May 2019 on certain aspects concerning contracts for the sale of goods, amending Regulation (EU) 2017/2394 and Directive 2009/22/EC, and repealing Directive 1999/44/EC.
Directive (EU) 2024/825 of the European Parliament and of the Council of 28 February 2024 amending Directives 2005/29/EC and 2011/83/EU as regards empowering consumers for the green transition through better protection against unfair practices and through better information.
Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council ('Unfair Commercial Practices Directive').
Directive 2006/114/EC of the European Parliament and of the Council of 12 December 2006 concerning misleading and comparative advertising.
Directive 2011/24/EU of the European Parliament and of the Council of 9 March 2011 on the application of patients' rights in cross-border healthcare.
Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council and repealing Council Directive 85/577/EEC and Directive 97/7/EC of the European Parliament and of the Council Text with EEA relevance.
European Convention on Human Rights.
European Parliament. (2022). European Parliament resolution of 3 May 2022 on artificial intelligence in a digital age (2020/2266(INI)). https://www.europarl.europa.eu/doceo/document/TA-9-2022-0140_EN.html
OECD. (2019). Responsible innovation in neurotechnology enterprises. https://www.oecd-ilibrary.org/science-and-technology/responsible-innovation-in-neurotechnology-enterprises_9685e4fd-en
Office of the High Commissioner for Human Rights (Ed.). CCPR General Comment No. 16: Article 17 (Right to Privacy) The Right to Respect of Privacy, Family, Home and Correspondence, and Protection of Honour and Reputation: Adopted at the Thirty-second Session of the Human Rights Committee, on 8 April 1988.
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. (2024, March 18). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CONSIL:ST_7536_2024_INIT
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. P9_TA(2024)0138, https://www.europarl.europa.eu/RegData/seance_pleniere/textes_adoptes/definitif/2024/03-13/0138/P9_TA(2024)0138_EN.pdf
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC.
Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011.
Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) (Text with EEA relevance).
Regulation (EU) 2021/2282 of the European Parliament and of the Council of 15 December 2021 on health technology assessment and amending Directive 2011/24/EU.
Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety, amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council and Directive (EU) 2020/1828 of the European Parliament and the Council, and repealing Directive 2001/95/EC of the European Parliament and of the Council and Council Directive 87/357/EEC.
Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council Text with EEA relevance.
Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC.
Treaty of Lisbon amending the Treaty on European Union and the Treaty establishing the European Community.
Treaty on European Union.
Treaty on the Functioning of the European Union.
UNESCO. (2023). The Risks and Challenges of Neurotechnologies for Human Rights. https://unesdoc.unesco.org/ark:/48223/pf0000384185.locale=en
Constitution of the United Nations Educational, Scientific and Cultural Organization.
1. e.g., https://www.flowneuroscience.com/
2. e.g., https://www.getliftid.com/
3. The ‘sender’ focuses on one of two LED lights flashing at 17 or 15 Hz. Focusing on one of them leads to a slightly different brain activation; this pattern is recognised by the system and triggers a stimulation of the other person’s brain via an external device. The receiver perceives this and can act accordingly. Furthermore, in 2019, the first attempt was made to connect more than two brains, with multiple senders and one receiver (Nam et al., 2021).
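For illustration only, the frequency-recognition step described in this footnote can be sketched as a comparison of spectral power at the two flicker frequencies. The sampling rate, window length, helper names and the 0.5 Hz tolerance below are assumptions for the sketch, not details reported in the cited study:

```python
import numpy as np

FS = 250                     # assumed EEG sampling rate in Hz
FLICKER_HZ = (15.0, 17.0)    # the two LED flicker frequencies from the footnote

def band_power(signal, fs, freq, half_width=0.5):
    """Spectral power of `signal` within +/- half_width Hz of `freq`."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.abs(freqs - freq) <= half_width].sum()

def classify_target(eeg_window, fs=FS, candidates=FLICKER_HZ):
    """Pick the flicker frequency whose power dominates the EEG window."""
    powers = [band_power(eeg_window, fs, f) for f in candidates]
    return candidates[int(np.argmax(powers))]
```

In a real system this decision would drive the stimulation device on the receiver’s side; here it merely shows why attending to one flicker frequency is machine-detectable at all.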
4. e.g., https://choosemuse.com/
5. Compared to eye movements and muscle artefacts, the EEG signals related to brain activity are very weak, in the μV range, as they must pass through the dura mater, skull, scalp and hair; the electrical signals from the muscles, by contrast, are very strong and easily overlap the delicate brain signals. Consequently, even barely perceptible frowning, blinking or laughing disturbs the signal irreparably. This limits users’ possibilities of interaction and movement and should be considered during application. Unfortunately, these problems are barely addressed in practice.
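The amplitude gap this footnote describes underlies a common, simple countermeasure: discarding recording epochs whose peak-to-peak amplitude exceeds a threshold, since blinks and muscle activity dwarf cortical signals. The 100 µV threshold and the function name below are illustrative assumptions, not values from the article:

```python
import numpy as np

# Assumed threshold: blink/EMG artefacts typically exceed ~100 µV peak-to-peak,
# while cortical EEG mostly stays within a few tens of µV.
ARTIFACT_UV = 100.0

def reject_artifacts(epochs_uv):
    """Keep only epochs whose peak-to-peak amplitude stays below the threshold.

    epochs_uv: array of shape (n_epochs, n_samples), values in microvolts.
    """
    ptp = epochs_uv.max(axis=1) - epochs_uv.min(axis=1)
    return epochs_uv[ptp < ARTIFACT_UV]
```

The practical cost noted in the footnote follows directly: the stricter the threshold, the more of the user’s ordinary facial movement renders data unusable.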
6. The studies are mainly conducted over the course of several weeks, and it is unclear whether prolonged and regular use over several months to years will have any adverse effects. This is particularly problematic since some of these applications, such as NF, are already used regularly in neurotherapeutic contexts and are advertised for daily use in healthy populations.
7. One example is the exoskeleton worn by a paraplegic patient who was tasked with performing the ceremonial first kick of the 2014 FIFA World Cup. Although the exoskeleton used was not very different from many other models available at the time, and although several aspects of the prototype demonstration did not work as planned, the public memory of the event is that of a huge breakthrough and a complete success. https://www.livescience.com/46317-world-cup-paralyzed-man-exoskeleton.html
8. The social network belongs to Meta, which is recognized as an extremist organization, its functioning is prohibited in the territory of the Russian Federation.
9. For example, classroom dynamics could change dramatically if the brain data of all students were available on a screen for all to view.
10. This applies – it should be made explicitly clear – only where scientific and academic research is not itself ‘prepared’, financed and implemented directly by the private sector.
11. On the broader and more comprehensive topic of the relationship between AI and justice and its implications, see: Di Salvo, M. (2024). Artificial Intelligence and the cyber utopianism of justice. Why AI is not intelligence and man’s struggle to survive himself. Russian Journal of Economics and Law, 18(1), 264–279. https://doi.org/10.21202/2782-2923.2024.1.264-279
12. Capítulo III: De Los Derechos Y Deberes Constitucionales - Senado – República de Chile. https://tramitacion.senado.cl/capitulo-iii-de-los-derechos-y-deberes-constitucionales
13. Sentence Emotiv/Girardi, Supreme Court of Justice of Chile, 9 August 2023, No. 217225-2023. https://img.lpderecho.pe/wp-content/uploads/2023/08/sentencia-217225-2023-LPDerecho.pdf
References
1. Abdulkader, S. N., Atia, A., & Mostafa, M.-S. M. (2015). Brain computer interfacing: Applications and challenges. Egyptian Informatics Journal, 16(2), 213–230. https://doi.org/10.1016/j.eij.2015.06.002
2. Adams, V., Murphy, M., & Clarke, A. E. (2009). Anticipation: Technoscience, life, affect, temporality. Subjectivity, 28(1), 246–265. https://doi.org/10.1057/sub.2009.18
3. Ajunwa, I., Crawford, K., & Schultz, J. (2017). Limitless Worker Surveillance. California Law Review, 105(3), 735–776. https://doi.org/10.15779/Z38BR8MF94
4. Ali, S. S., Lifshitz, M., & Raz, A. (2014). Empirical neuroenchantment: From reading minds to thinking critically. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00357
5. Almeida, M., & Diogo, R. (2019). Human enhancement: Genetic engineering and evolution. Evolution, Medicine, and Public Health, 2019(1), 183–189. https://doi.org/10.1093/emph/eoz026
6. Alomar, S., King, N. K. K., Tam, J., Bari, A. A., Hamani, C., & Lozano, A. M. (2017). Speech and language adverse effects after thalamotomy and deep brain stimulation in patients with movement disorders: A meta-analysis. Movement Disorders: Official Journal of the Movement Disorder Society, 32(1), 53–63. https://doi.org/10.1002/mds.26924
7. Angrisani, L., Arpaia, P., & Casinelli, D. (2017). Instrumentation and measurements for non-invasive EEG-based brain-computer interface. In 2017 IEEE International Workshop on Measurement and Networking (M&N) (pp. 1–5). IEEE. https://doi.org/10.1109/IWMN.2017.8078383
8. Antal, A., Luber, B., Brem, A.-K., Bikson, M., Brunoni, A. R., Cohen Kadosh, R., Dubljević, V., Fecteau, S., Ferreri, F., Flöel, A., Hallett, M., Hamilton, R. H., Herrmann, C. S., Lavidor, M., Loo, C., Lustenberger, C., Machado, S., Miniussi, C., Moliadze, V., … Paulus, W. (2022). Non-invasive brain stimulation and neuroenhancement. Clinical Neurophysiology Practice, 7, 146–165. https://doi.org/10.1016/j.cnp.2022.05.002
9. Bareis, J., & Katzenbach, C. (2022). Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007
10. Baylis, F. (2013). “I Am Who I Am”: On the Perceived Threats to Personal Identity from Deep Brain Stimulation. Neuroethics, 6(3), 513–526. https://doi.org/10.1007/s12152-011-9137-1
11. Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics (8th ed.). Oxford University Press.
12. Beckett, A. E., & Campbell, T. (2015). The social model of disability as an oppositional device. Disability & Society, 30(2), 270–283. https://doi.org/10.1080/09687599.2014.999912
13. Bernal, S. L., Celdrán, A. H., Pérez, G. M., Barros, M. T., & Balasubramaniam, S. (2022). Security in Brain-Computer Interfaces: State-of-the-art, opportunities, and future challenges. ACM Computing Surveys, 54(1), 1–35. https://doi.org/10.1145/3427376
14. Bielefeldt, H. (2023). Freiheit als Anspruch: Eine menschenrechtliche Perspektive. In N. J. Saam & H. Bielefeldt (Eds.), Sozialtheorie. Die Idee der Freiheit und ihre Semantiken: Zum Spannungsverhältnis von Freiheit und Sicherheit (pp. 187–196). (In German). https://doi.org/10.1515/9783839461884-017
15. Cagnan, H., Denison, T., McIntyre, C., & Brown, P. (2019). Emerging technologies for improved deep brain stimulation. Nature Biotechnology, 37(9), 1024–1033. https://doi.org/10.1038/s41587-019-0244-6
16. Cinel, C., Valeriani, D., & Poli, R. (2019). Neurotechnologies for Human Cognitive Augmentation: Current State of the Art and Future Prospects. Frontiers in Human Neuroscience, 13, 13. https://doi.org/10.3389/fnhum.2019.00013
17. Conitzer, V., Hadfield, G., & Vallor, S. (Eds.) (2019). Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. ACM.
18. Craig, J. N. (2016). Incarceration, Direct Brain Intervention, and the Right to Mental Integrity – a Reply to Thomas Douglas. Neuroethics, 9(2), 107–118. https://doi.org/10.1007/s12152-016-9255-x
19. Drew, L. (2024). Elon Musk's Neuralink brain chip: What scientists think of first human trial. Nature. https://doi.org/10.1038/d41586-024-00304-4
20. Ehlen, F., Schoenecker, T., Kühn, A. A., & Klostermann, F. (2014). Differential effects of deep brain stimulation on verbal fluency. Brain and Language, 134, 23–33. https://doi.org/10.1016/j.bandl.2014.04.002
21. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor (1st ed.). St. Martin's Press.
22. Fenner, D. (2019). Selbstoptimierung und Enhancement: Ein ethischer Grundriss. UTB Philosophie: Vol. 5127. Narr Francke Attempto Verlag.
23. Floridi, L. (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities. Oxford University Press.
24. Funke, A. (2023). Freiheit als konstitutives Prinzip der Rechtsordnung. In N. J. Saam, & H. Bielefeldt (Eds.), Sozialtheorie. Die Idee der Freiheit und ihre Semantiken: Zum Spannungsverhältnis von Freiheit und Sicherheit (pp. 169–176). transcript. https://doi.org/10.1515/9783839461884-015
25. Ganzer, P. D., Colachis, S. C., Schwemmer, M. A., Friedenberg, D. A., Dunlap, C. F., Swiftney, C. E., Jacobowitz, A. F., Weber, D. J., Bockbrader, M. A., & Sharma, G. (2020). Restoring the Sense of Touch Using a Sensorimotor Demultiplexing Neural Interface. Cell, 181(4), 763–773.e12. https://doi.org/10.1016/j.cell.2020.03.054
26. Gardner, H. (1987). The mind’s new science. Basic Books.
27. Genser, J., Damianos, S., & Yuste, R. (2024). Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies. https://www.perseus-strategies.com/wp-content/uploads/2024/04/FINAL_Consumer_Neurotechnology_Report_Neurorights_Foundation_April-1.pdf
28. Giattino, C. M., Kwong, L., Rafetto, C., & Farahany, N. A. (2019). The Seductive Allure of Artificial Intelligence-Powered Neurotechnology. In V. Conitzer, G. Hadfield, & S. Vallor (Eds.), Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 397–402). ACM. https://doi.org/10.1145/3306618.3314269
29. Gilbert, F., Cook, M., O'Brien, T., & Illes, J. (2019). Embodiment and Estrangement: Results from a First-in- Human "Intelligent BCI" Trial. Science and Engineering Ethics, 25(1), 83–96. https://doi.org/10.1007/s11948-017-0001-5
30. Gilbert, F., Goddard, E., Viaña, J. N. M., Carter, A., & Horne, M. (2017). I Miss Being Me: Phenomenological Effects of Deep Brain Stimulation. AJOB Neuroscience, 8(2), 96–109. https://doi.org/10.1080/21507740.2017.1320319
31. Goering, S. (2015). Rethinking disability: The social model of disability and chronic disease. Current Reviews in Musculoskeletal Medicine, 8(2), 134–138. https://doi.org/10.1007/s12178-015-9273-z
32. Goering, S., Klein, E., Specker Sullivan, L., Wexler, A., Agüera Y Arcas, B., Bi, G., Carmena, J. M., Fins, J. J., Friesen, P., Gallant, J., Huggins, J. E., Kellmeyer, P., Marblestone, A., Mitchell, C., Parens, E., Pham, M., Rubel, A., Sadato, N., Teicher, M., … Yuste, R. (2021). Recommendations for Responsible Development and Application of Neurotechnologies. Neuroethics, 14(3), 365–386. https://doi.org/10.1007/s12152-021-09468-6
33. Grunwald, A. (2019a). Digitalisierung als Prozess. Ethische Herausforderungen inmitten allmählicher Verschiebungen zwischen Mensch, Technik und Gesellschaft. Zeitschrift Für Wirtschafts- Und Unternehmensethik, 20(2), 121–145. (In German). https://doi.org/10.5771/1439-880X-2019-2-121
34. Grunwald, A. (2019b). Der unterlegene Mensch: Die Zukunft der Menschheit im Angesicht von Algorithmen, künstlicher Intelligenz und Robotern (Originalausgabe, 1. Auflage). riva Premium. (In German).
35. Grunwald, A. (Ed.). (2021). Wer bist du, Mensch? Transformationen menschlicher Selbstverständnisse im wissenschaftlich-technischen Fortschritt. Herder. (In German).
36. Guy, V., Soriani, M.-H., Bruno, M., Papadopoulo, T., Desnuelle, C., & Clerc, M. (2018). Brain computer interface with the P300 speller: Usability for disabled people with amyotrophic lateral sclerosis. Annals of Physical and Rehabilitation Medicine, 61(1), 5–11. https://doi.org/10.1016/j.rehab.2017.09.004
37. Hallur, G. G., Prabhu, S., & Aslekar, A. (2021). Entertainment in Era of AI, Big Data & IoT. In S. Das & S. Gochhait (Eds.), Digital Entertainment: The Next Evolution in Service Sector (pp. 87–109). Springer Nature. https://doi.org/10.1007/978-981-15-9724-4_5
38. Heimbach-Steins, M. (2022). Sozialprinzipien. In M. Heimbach-Steins, M. Becka, J. J. Frühbauer, & G. Kruip (Eds.), Christliche Sozialethik: Grundlagen, Kontexte, Themen: ein Lehr- und Studienbuch (pp. 170–186). Verlag Friedrich Pustet.
39. Hull, G., & Pasquale, F. (2018). Toward a critical theory of corporate wellness. BioSocieties, 13(1), 190–212. https://doi.org/10.1057/s41292-017-0064-1
40. Iamsakul, K., Pavlovcik, A. V., Calderon, J. I., & Sanderson, L. M. (2017). Project HEAVEN: Preoperative Training in Virtual Reality. Surgical Neurology International, 8, 59. https://doi.org/10.4103/sni.sni_371_16
41. Ienca, M. (2021). Common Human Rights Challenges raised by different Applications of Neurotechnologies in the Biomedical Field. Committee on Bioethics of the Council of Europe. https://rm.coe.int/report-final-en/1680a429f3
42. Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), 5. https://doi.org/10.1186/s40504-017-0050-1
43. Ienca, M., Fins, J. J., Jox, R. J., Jotterand, F., Voeneky, S., Andorno, R., Ball, T., Castelluccia, C., Chavarriaga, R., Chneiweiss, H., Ferretti, A., Friedrich, O., Hurst, S., Merkel, G., Molnár-Gábor, F., Rickli, J.-M., Scheibner, J., Vayena, E., Yuste, R., & Kellmeyer, P. (2022). Towards a Governance Framework for Brain Data. Neuroethics, 15(2). https://doi.org/10.1007/s12152-022-09498-8
44. Iuculano, T., & Kadosh, R. C. (2013). The mental cost of cognitive enhancement. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 33(10), 4482–4486. https://doi.org/10.1523/JNEUROSCI.4927-12.2013
45. Jarke, J., & Breiter, A. (2019). Editorial: the datafication of education. Learning, Media and Technology, 44(1), 1–6. https://doi.org/10.1080/17439884.2019.1573833
46. Jasanoff, S., & Kim, S. (Eds.). (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. University of Chicago Press. http://www.degruyter.com/isbn/9780226276663
47. Jiang, L., Stocco, A., Losey, D. M., Abernethy, J. A., Prat, C. S., & Rao, R. P. N. (2019). BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains. Scientific Reports, 9(1), 6115. https://doi.org/10.1038/s41598-019-41895-7
48. Kirchschläger, P. G. (2019). Menschenrechte, Demokratie und Religionen. LIMINA – Grazer Theologische Perspektiven, 2(1), 17–39. https://doi.org/10.25364/17.2:2019.1.2
49. Kirchschläger, P. G. (2022). Ethische KI? Datenbasierte Systeme (DS) mit Ethik. HMD Praxis Der Wirtschaftsinformatik, 59(2), 482–494. https://doi.org/10.1365/s40702-022-00843-2
50. Kober, S. E., Schweiger, D., Witte, M., Reichert, J. L., Grieshofer, P., Neuper, C., & Wood, G. (2015). Specific effects of EEG based neurofeedback training on memory functions in post-stroke victims. Journal of NeuroEngineering and Rehabilitation, 12, 107. https://doi.org/10.1186/s12984-015-0105-6
51. Koska, C., & Filipović, A. (2017). Gestaltungsfragen der Digitalität: Zu den sozialethischen Herausforderungen von künstlicher Intelligenz, Big Data und Virtualität. In R. Bergold, J. Sautermeister, & A. Schröder (Eds.), Dem Wandel eine menschliche Gestalt geben: Sozialethische Perspektiven für die Gesellschaft von morgen: Festschrift zur Neueröffnung und zum 70-jährigen Bestehen des Katholisch-Sozialen Instituts (pp. 173–191). Verlag Herder. (In German).
52. Lavazza, A., & Giorgi, R. (2023). Philosophical foundation of the right to mental integrity in the age of neurotechnologies. Neuroethics, 16(1), 10. https://doi.org/10.1007/s12152-023-09517-2
53. Legaspi, R., Xu, W., Konishi, T., Wada, S., Kobayashi, N., Naruse, Y., & Ishikawa, Y. (2024). The sense of agency in human – AI interactions. Knowledge-Based Systems, 286, 111298. https://doi.org/10.1016/j.knosys.2023.111298
54. Li, Q., Ding, D., & Conti, M. (2015). Brain-Computer Interface applications: Security and privacy challenges. In 2015 IEEE Conference on Communications and Network Security (CNS) (pp. 663–666). IEEE. https://doi.org/10.1109/CNS.2015.7346884
55. Ligthart, S., Bublitz, C., Douglas, T., Forsberg, L., & Meynen, G. (2022). Rethinking the Right to Freedom of Thought: A Multidisciplinary Analysis. Human Rights Law Review, 22(4), Article ngac028, 1–14. https://doi.org/10.1093/hrlr/ngac028
56. Littlefield, M. M. (2018). Instrumental Intimacy: EEG Wearables and Neuroscientific Control. Johns Hopkins University Press.
57. Lozano, A. M., Lipsman, N., Bergman, H., Brown, P., Chabardes, S., Chang, J. W., Matthews, K., McIntyre, C. C., Schlaepfer, T. E., Schulder, M., Temel, Y., Volkmann, J., & Krauss, J. K. (2019). Deep brain stimulation: Current challenges and future directions. Nature Reviews. Neurology, 15(3), 148–160. https://doi.org/10.1038/s41582-018-0128-2
58. Macgilchrist, F., Allert, H., Cerratto Pargman, T., & Jarke, J. (2024). Designing Postdigital Futures: Which Designs? Whose Futures? Postdigital Science and Education, 6(1), 13–24. https://doi.org/10.1007/s42438-022-00389-y
59. Manahan-Vaughan, D. (Ed.). (2018). Handbook of Behavioral Neuroscience: Volume 28. Handbook of In Vivo Neural Plasticity Techniques: A Systems Neuroscience Approach to the Neural Basis of Memory and Cognition. Elsevier.
60. Manokha, I. (2020). Covid-19: teleworking, surveillance and 24/7 work. Some reflexions on the expected growth of remote work after the pandemic. Political Anthropological Research on International Social Sciences (PARISS), 1(2), 273–287.
61. Markosian, C., Taruvai, V. S., & Mammis, A. (2020). Neuromodulatory hacking: A review of the technology and security risks of spinal cord stimulation. Acta Neurochirurgica, 162(12), 3213–3219. https://doi.org/10.1007/s00701-020-04592-3
62. Martínez-Martínez, A. M., Aguilar, O. M., & Acevedo-Triana, C. A. (2017). Meta-Analysis of the Relationship between Deep Brain Stimulation in Patients with Parkinson's Disease and Performance in Evaluation Tests for Executive Brain Functions. Parkinson's Disease, 2017(1), 9641392. https://doi.org/10.1155/2017/9641392
63. Martinovic, I., Davies, D., Frank, M., Perito, D., Ros, T., & Song, D. (2012). On the feasibility of side-channel attacks with brain-computer interfaces. In 21st USENIX Security Symposium (USENIX Security 12) (pp. 143–158). USENIX Association.
64. Marzbani, H., Marateb, H. R., & Mansourian, M. (2016). Neurofeedback: A Comprehensive Review on System Design, Methodology and Clinical Applications. Basic and Clinical Neuroscience, 7(2), 143–158. https://doi.org/10.15412/J.BCN.03070208
65. Mathewson, K. E., Harrison, T. J. L., & Kizuk, S. A. D. (2017). High and dry? Comparing active dry EEG electrodes to active and passive wet electrodes. Psychophysiology, 54(1), 74–82. https://doi.org/10.1111/psyp.12536
66. Mihara, M., & Miyai, I. (2016). Review of functional near-infrared spectroscopy in neurorehabilitation. Neurophotonics, 3(3), 31414. https://doi.org/10.1117/1.NPh.3.3.031414
67. Moradi, P., & Levy, K. (2020). The Future of Work in the Age of AI. In M. D. Dubber, F. Pasquale, S. Das, P. Moradi, & K. Levy (Eds.), The Oxford Handbook of Ethics of AI (pp. 269–288). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.17
68. Nagl-Docekal, H., & Zacharasiewicz, W. (Eds.). (2022). Artificial Intelligence and Human Enhancement. De Gruyter. https://doi.org/10.1515/9783110770216
69. Nam, C. S., Traylor, Z., Chen, M., Jiang, X., Feng, W., & Chhatbar, P. Y. (2021). Direct Communication Between Brains: A Systematic PRISMA Review of Brain-To-Brain Interface. Frontiers in Neurorobotics, 15, 656943. https://doi.org/10.3389/fnbot.2021.656943
70. Nickel, J. (2019). Human Rights. In The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/rights-human/
71. Olson, J. A., Cyr, M., Artenie, D. Z., Strandberg, T., Hall, L., Tompkins, M. L., Raz, A., & Johansson, P. (2023). Emulating future neurotechnology using magic. Consciousness and Cognition, 107, 103450. https://doi.org/10.1016/j.concog.2022.103450
72. Pycroft, L., Boccard, S. G., Owen, S. L. F., Stein, J. F., Fitzgerald, J. J., Green, A. L., & Aziz, T. Z. (2016). Brainjacking: Implant Security Issues in Invasive Neuromodulation. World Neurosurgery, 92, 454–462. https://doi.org/10.1016/j.wneu.2016.05.010
73. Rahm, L. (2023a). Education, automation and AI: a genealogy of alternative futures. Learning, Media and Technology, 48(1), 6–24. https://doi.org/10.1080/17439884.2021.1977948
74. Rahm, L. (2023b). Educational imaginaries: governance at the intersection of technology and education. Journal of Education Policy, 38(1), 46–68. https://doi.org/10.1080/02680939.2021.1970233
75. Rao, R. P. N., Stocco, A., Bryan, M., Sarma, D., Youngquist, T. M., Wu, J., & Prat, C. S. (2014). A direct brain-to-brain interface in humans. PloS One, 9(11), e111332. https://doi.org/10.1371/journal.pone.0111332
76. Raspopovic, S. (2020). Advancing limb neural prostheses. Science, 370(6514), 290–291. https://doi.org/10.1126/science.abb1073
77. Ros, T., Enriquez-Geppert, S., Zotev, V., Young, K. D., Wood, G., Whitfield-Gabrieli, S., Wan, F., Vuilleumier, P., Vialatte, F., van de Ville, D., Todder, D., Surmeli, T., Sulzer, J. S., Strehl, U., Sterman, M. B., Steiner, N. J., Sorger, B., Soekadar, S. R., Sitaram, R., … Thibault, R. T. (2020). Consensus on the reporting and experimental design of clinical and cognitive-behavioural neurofeedback studies (CRED-nf checklist). Brain: A Journal of Neurology, 143(6), 1674–1685. https://doi.org/10.1093/brain/awaa009
78. Rose, D., Buckwalter, W., & Nichols, S. (2017). Neuroscientific Prediction and the Intrusion of Intuitive Metaphysics. Cognitive Science, 41(2), 482–502. https://doi.org/10.1111/cogs.12310
79. Schauer, F. (2020). Freedom of Thought? Social Philosophy and Policy, 37(2), 72–89. https://doi.org/10.1017/S0265052521000054
80. Schlosser, M. (2024). Agency. In E. N. Zalta, U. Nodelman, C. Allen, H. Kim, & P. Oppenheimer (Eds.), The Stanford Encyclopedia of Philosophy (Winter 2019). https://plato.stanford.edu/archives/win2019/entries/agency/
81. Schöne-Seifert, B. (2007). Grundlagen der Medizinethik. Alfred Kröner Verlag. http://gbv.eblib.com/patron/FullRecord.aspx?p=4341681 (In German).
82. Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620–631. https://doi.org/10.1111/ejed.12532
83. Shaheed, A. (2021, October 5). Freedom of thought: Interim report of the Special Rapporteur on freedom of religion or belief (A/76/380). https://documents.un.org/doc/undoc/gen/n21/274/90/pdf/n2127490.pdf
84. Sharon, T., & Gellert, R. (2023). Regulating Big Tech expansionism? Sphere transgressions and the limits of Europe's digital regulatory strategy. Information, Communication & Society, 1–18. https://doi.org/10.1080/1369118X.2023.2246526
85. Shew, A. (2020). Ableism, Technoableism, and Future AI. IEEE Technology and Society Magazine, 39(1), 40–85. https://doi.org/10.1109/MTS.2020.2967492
86. Spector, M., & Kitsuse, J. I. (2001). Constructing Social Problems. New Brunswick: Transaction Publishers.
87. Sturm, W., Willmes, K., Orgass, B., & Hartje, W. (1997). Do Specific Attention Deficits Need Specific Training? Neuropsychological Rehabilitation, 7(2), 81–103. https://doi.org/10.1080/713755526
88. Suchman, L. (2023). The uncontroversial 'thingness' of AI. Big Data & Society, 10(2), 1–5. https://doi.org/10.1177/20539517231206794
89. Suthana, N., & Fried, I. (2014). Deep brain stimulation for enhancement of learning and memory. NeuroImage, 85(3), 996–1002. https://doi.org/10.1016/j.neuroimage.2013.07.066
90. Taylor, L., Martin, A., Souza, S. P. de, & Lopez-Solano, J. (2023). Why are sector transgressions so hard to govern? Reflections from Europe's pandemic experience. Information, Communication & Society, 27(15), 2721–2725. https://doi.org/10.1080/1369118X.2023.2264919
91. Tesink, V., Douglas, T., Forsberg, L., Ligthart, S., & Meynen, G. (2024). Right to mental integrity and neurotechnologies: Implications of the extended mind thesis. Journal of Medical Ethics, 50(10), 656–663. https://doi.org/10.1136/jme-2023-109645
92. Thibault, R. T., & Raz, A. (2017). The psychology of neurofeedback: Clinical intervention even if applied placebo. The American Psychologist, 72(7), 679–688. https://doi.org/10.1037/amp0000118
93. Tirabeni, L. (2023). Bounded Well-Being: Designing Technologies for Workers' Well-Being in Corporate Programmes. Work, Employment and Society, 38(6), 1506–1527. https://doi.org/10.1177/09500170231203113
94. van Elk, M. (2019). Socio-cognitive biases are associated to belief in neuromyths and cognitive enhancement: A pre-registered study. Personality and Individual Differences, 147, 28–32. https://doi.org/10.1016/j.paid.2019.04.014
95. Vester, H.-G. (2009). Kompendium der Soziologie I: Grundbegriffe und II: Die Klassiker. Wiesbaden: VS-Verlag. (In German).
96. Vogt, M. (2009). Prinzip Nachhaltigkeit: Ein Entwurf aus theologisch-ethischer Perspektive. Zugl.: Luzern, Univ., Habil.-Schr. Hochschulschriften zur Nachhaltigkeit (Vol. 39). München: Oekom-Verl., Ges. für Ökologische Kommunikation. (In German).
97. Warren, S., & Brandeis, L. (1890). The Right to Privacy. Harvard Law Review, 4(5), 193–220. https://doi.org/10.2307/1321160
98. Wexler, A., & Thibault, R. (2019). Mind-Reading or Misleading? Assessing Direct-to-Consumer Electroencephalography (EEG) Devices Marketed for Wellness and Their Ethical and Regulatory Implications. Journal of Cognitive Enhancement, 3(1), 131–137. https://doi.org/10.1007/s41465-018-0091-2
99. Whitham, E. M., Pope, K. J., Fitzgibbon, S. P., Lewis, T., Clark, C. R., Loveless, S., Broberg, M., Wallace, A., DeLosAngeles, D., Lillie, P., Hardy, A., Fronsko, R., Pulbrook, A., & Willoughby, J. O. (2007). Scalp electrical recording during paralysis: Quantitative evidence that EEG frequencies above 20 Hz are contaminated by EMG. Clinical Neurophysiology, 118(8), 1877–1888. https://doi.org/10.1016/j.clinph.2007.04.027
100. Willoweit, D. (2023). Die vielen Freiheiten, die eine Freiheit und das Recht. In N. J. Saam & H. Bielefeldt (Eds.), Sozialtheorie. Die Idee der Freiheit und ihre Semantiken: Zum Spannungsverhältnis von Freiheit und Sicherheit (pp. 161–167). https://doi.org/10.1515/9783839461884-014 (In German).
101. Wong, J. K., Mayberg, H. S., Wang, D. D., Richardson, R. M., Halpern, C. H., Krinke, L., Arlotti, M., Rossi, L., Priori, A., Marceglia, S., Gilron, R., Cavanagh, J. F., Judy, J. W., Miocinovic, S., Devergnas, A. D., Sillitoe, R. V., Cernera, S., Oehrn, C. R., Gunduz, A., … Okun, M. S. (2022). Proceedings of the 10th annual deep brain stimulation think tank: Advances in cutting edge technologies, artificial intelligence, neuromodulation, neuroethics, interventional psychiatry, and women in neuromodulation. Frontiers in Human Neuroscience, 16, 1084782. https://doi.org/10.3389/fnhum.2022.1084782
102. Wood, G., Willmes, K., Koten, J. W., & Kober, S. E. (2024). Fat tails and the need to disclose distribution parameters of qEEG databases. PloS One, 19(1), e0295411. https://doi.org/10.1371/journal.pone.0295411
103. Yadav, D., Yadav, S., & Veer, K. (2020). A comprehensive assessment of Brain Computer Interfaces: Recent trends and challenges. Journal of Neuroscience Methods, 346, 108918. https://doi.org/10.1016/j.jneumeth.2020.108918
104. Yuste, R. (2023). Advocating for neurodata privacy and neurotechnology regulation. Nature Protocols, 18(10), 2869–2875. https://doi.org/10.1038/s41596-023-00873-0
105. Zarzycki, M. Z., & Domitrz, I. (2020). Stimulation-induced side effects after deep brain stimulation – a systematic review. Acta Neuropsychiatrica, 32(2), 57–64. https://doi.org/10.1017/neu.2019.35
About the Author
Michele Di Salvo, Doctor in Law, CrossMediaLabs; member of the Society for Neuroscience, the Federation of European Neuroscience Societies, the International Neuropsychoanalysis Society, and the Cognitive Neuroscience Society. Naples, Italy.
For citation:
Di Salvo, M. (2025). The protection of neural rights in the age of neurotechnologies and AI. The ethical challenge for law and neuroscience. Russian Journal of Economics and Law, 19(1), 202–233. https://doi.org/10.21202/2782-2923.2025.1.202-233