

November 25 - 27, 2019

Closing the regulatory gap for consumer neurotechnology

Schedule:

Tentative Program

Monday, November 25, 2019
Day 1: Neurotechnology, consumer applications and conceptual foundations

10:00 - 12:00 – Informal welcome and registration of arriving participants at the Brocher Foundation

12:00 - 12:15 – Introduction and presentation of the workshop by the organizers

Session 1 - Chair: Ralf Jox

12:15 - 12:35 – Tonio Ball (Uni Freiburg): State of the Art Neurotechnology

12:35 - 12:55 – Fabrice Jotterand (Medical College of Wisconsin / Uni Basel): Consumer Neurotechnology and Our Incumbent Anthropological Identity Crisis

12:55 - 13:15 – Orsolya Friedrich (FernUni Hagen): Implications of Consumer Neurotechnology for Autonomy and Agency

13:15 - 14:00 – Lunch

Session 2 - Chair: Fabrice Jotterand

14:00 - 14:30 – Rafael Yuste (Columbia University) [via Videocall]: Neurorights and the Technocratic Oath

14:30 – 15:30 – Group Discussion (with Rafael Yuste via telepresence) and presentation of break-out groups

15:30 – 15:50 – Coffee break

15:50 - 16:10 – Philipp Kellmeyer (Uni Freiburg): Ethics of Machine Learning and Brain Data Analytics

16:10 – 18:00 – Break-out groups (16:10-17:30) and plenary group presentations (17:30-18:00)

19:00 – 21:00 – Dinner

Tuesday, November 26, 2019
Day 2: Brain Data from Consumer Neurotechnology and the Need for Legal and Technical Standards

Session 1 - Chair: Silja Vöneky

9:00 - 9:20 – Fruzsina Molnár-Gábor (Heidelberg Academy of Sciences): Brain Data and Genetic Data: A Comparative Legal Analysis

9:20 - 9:40 – Marcello Ienca (ETH Zurich): Brain Data, Mental Information and Neurorights: the Need for Adaptive and Multi-level Governance

9:40 - 10:00 – Silja Vöneky (Uni Freiburg): The Legal Boundaries of Neurotechnology

10:00 - 10:20 – Coffee break

10:20 - 10:40 – Joseph J. Fins (Weill Cornell Medical College): Regulating Neurotechnology: A US Perspective

10:40 - 11:00 – Group discussion on presentations

11:00 – 12:30 – Collective writing of draft guideline document (Part I): Recording the results of the group discussions, logistics, and delineation of first steps

12:30 - 13:30 – Lunch

Session 2 - Chair: Marcello Ienca

13:30 - 13:50 – Ralf J. Jox (Uni Lausanne): What is Specific About Brain Data that Warrants Special Protection?

13:50 - 14:10 – Ricardo Chavarriaga (EPFL): Developing Standards for Neurotechnology and Algorithms

14:10 - 14:30 – James Scheibner (ETH Zurich): Balancing Data Protection and Technical Solutions: Insights from the DPPH Project

14:30 - 14:50 – Claude Castelluccia (INRIA): Cognitive Security

14:50 - 15:10 – Coffee break

15:10 - 16:30 – Group Discussion & Working Groups: Discussing the afternoon talks and the results of the first round of drafting

16:30 - 17:00 – Hank Greely (Stanford) [via Videocall]: Neuroethics & International Brain Data Governance

17:00 - 18:00 – Collective writing of draft guideline document (Part II): Format, structure and identification of priorities


19:00 – Dinner

Wednesday, November 27, 2019
Day 3: Closing the Regulatory Gap

Session 1 - Chair: Philipp Kellmeyer

9:00 - 9:20 – Roberto Andorno (Uni Zurich): Brain Data and Human Rights

9:20 - 9:40 – Hervé Chneiweiss (INSERM): Neurotechnologies and Identity: From Games to Bias? Ethical Issues Between Preserving Autonomy and Preventing Vulnerability

9:40 - 10:00 – Jean-Marc Rickli (GCSP): Security Issues of AI and Neurotechnology

10:40 - 11:00 – Coffee break

11:00 - 13:00 – Break-out groups and collective writing of draft guideline document (Part III): Content, timeline and implementation

13:00 - 14:00 – Lunch

14:00 - 15:30 – Plenary session on guidelines development & definition of next steps

15:30 – Farewell

Place:

Brocher Foundation

Organizers:

The combination of big data and advanced machine learning, often referred to as a form of artificial intelligence (AI) or “intelligent systems”, may increase the efficiency and accuracy of automated systems and economic processes, and is therefore of great interest to industries across all sectors, especially health care and medical technology (Mittelstadt et al., 2016; Echeverría and Tabarés, 2017).

In the area of medical technology, concomitant progress in microsystems engineering has turbocharged the field of intelligent neurotechnology, i.e. devices for decoding brain data for clinical, consumer or military applications. Large information technology and software companies, as well as many “neurotech” startups, are now actively developing neurotechnological systems aimed directly at consumers, often for “paramedical” applications such as neurofeedback for relieving stress or anxiety, or brain stimulation (Ienca et al., 2018; Kellmeyer, 2018; Wexler, 2017; Piwek et al., 2016). As these devices and applications typically fall outside medical device regulation regimes, a growing (grey) market of direct-to-consumer (DTC) neurotechnology is emerging that creates a number of ethical, legal, social and political challenges (Kellmeyer, 2018; Ienca et al., 2018; Yuste et al., 2017).

With regard to intelligent neurotechnologies, scholars have recently raised ethical concerns, identifying privacy, agency, identity, data security, human enhancement, algorithmic bias and discrimination as the main ethical, legal and sociopolitical challenges in this domain (Kellmeyer, 2018; Yuste et al., 2017; Jotterand and Ienca, 2017). In particular, intelligent neurotechnological devices raise concerns about control and responsibility, e.g. gaps in moral and legal accountability in cases where decision-making capacity is delegated from human users to a device (or software-based decision-support system), for example in an intelligent brain-computer interface (Kellmeyer et al., 2016; Grübler, 2011). In terms of conceptual philosophical foundations, neurotechnological devices that interact closely with individuals, such as brain-computer interfaces for medical or entertainment purposes, also challenge concepts of agency, autonomy and identity (Friedrich et al., 2018; Kellmeyer et al., 2016; Jotterand, 2016; Gilbert, 2015). From the perspective of legal studies and regulatory guidance, the legitimacy of using opaque algorithms in safety-critical applications has been questioned (Voeneky and Neuman, 2018). Furthermore, algorithm-based evidence might be inconclusive, inscrutable, or misguided, and may therefore pose significant barriers to establishing trust between human users and the intelligent device (Kellmeyer et al., 2018; Gaudiello et al., 2016; Battaglia et al., 2014).

The collection of large amounts of brain data by private companies raises particular concerns about securing these data against unwarranted access and misuse. The recent case of data misuse by Facebook has raised awareness of the general risks associated with acquiring and storing large quantities of personal data, and it is unclear whether existing legal frameworks for data protection and governance suffice to protect consumers from such risks (Ienca et al., 2018; Kellmeyer, 2018). Beyond individual privacy, the ability of advanced machine learning algorithms to learn from aggregated data collected from many individuals raises questions about group privacy (Taylor et al., 2017), as well as about the privacy of first-person subjective experience (“mental privacy”). Among other sequelae, these concerns have spawned a debate on the moral and legal status of mental states, e.g. the question of whether the right to mental privacy should be framed in the context of human rights (Ienca and Andorno, 2017).

This project aims to address these concerns by engaging in multidisciplinary reflection on the philosophical, ethical, legal and social challenges arising from intelligent neurotechnologies, specifically in the context of consumer applications.

Literature

Battaglia F, Mukerji N, Nida-Rümelin J. Rethinking Responsibility in Science and Technology. Pisa University Press; 2014.

Bechmann G, Decker M, Fiedeler U, Krings B-J. Technology assessment in a complex world. International Journal of Foresight and Innovation Policy 2006 [cited 2018 Jul 4]. Available from: https://www.inderscienceonline.com/doi/abs/10.1504/IJFIP.2007.011419

Bowman DM, Garden H, Stroud C, Winickoff DE. The neurotechnology and society interface: responsible innovation in an international context. Journal of Responsible Innovation 2018; 5: 1–12.

Davis J, Nathan LP. Value Sensitive Design: Applications, Adaptations, and Critiques. In: van den Hoven J, Vermaas PE, van de Poel I, editor(s). Handbook of Ethics, Values, and Technological Design. Dordrecht: Springer Netherlands; 2015. p. 11–40.

Decker M, Grunwald A. Rational Technology Assessment as Interdisciplinary Research. In: Interdisciplinarity in Technology Assessment. Springer, Berlin, Heidelberg; 2001. p. 33–60.

Echeverría J, Tabarés R. Artificial Intelligence, Cybercities and Technosocieties. Minds & Machines 2017; 27: 473–493.

Fisher E, Rip A. Responsible Innovation: Multi-Level Dynamics and Soft Intervention Practices. In: Owen R, Bessant J, Heintz M, editor(s). Responsible Innovation. Chichester, UK: John Wiley & Sons, Ltd; 2013. p. 165–183.

Friedrich O, Racine E, Steinert S, Pömsl J, Jox RJ. An Analysis of the Impact of Brain-Computer Interfaces on Autonomy. Neuroethics 2018: 1–13.

Gaudiello I, Zibetti E, Lefort S, Chetouani M, Ivaldi S. Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Computers in Human Behavior 2016; 61: 633–655.

Gilbert F. A Threat to Autonomy? The Intrusion of Predictive Brain Implants. AJOB Neurosci 2015; 6: 4–11.

Grübler G. Beyond the responsibility gap. Discussion note on responsibility and liability in the use of brain-computer interfaces. AI & SOCIETY 2011; 26: 377–382.

Ienca M, Andorno R. Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy 2017; 13: 5.

Ienca M, Haselager P, Emanuel EJ. Brain leaks and consumer neurotechnology. Nature Biotechnology 2018; 36: 805–810.

Jotterand F, Ienca M. The Biopolitics of Neuroethics. In: Racine E, Aspler J, editor(s). The Debate about Neuroethics: Perspectives on the Field’s Development, Focus, and Future. Dordrecht: Springer; 2017.

Jotterand F. Moral Enhancement, Neuroessentialism, and Moral Content. In: Jotterand F, Dubljevic V, editor(s). Cognitive Enhancement: Ethical and Policy Implications in International Perspectives. Oxford: Oxford University Press; 2016. p. 42–56.

Kellmeyer P. Big Brain Data: On the Responsible Use of Brain Data from Clinical and Consumer-Directed Neurotechnological Devices. Neuroethics 2018. Available from: http://link.springer.com/10.1007/s12152-018-9371-x

Kellmeyer P, Cochrane T, Müller O, Mitchell C, Ball T, Fins JJ, et al. The Effects of Closed-Loop Medical Devices on the Autonomy and Accountability of Persons and Systems. Camb Q Healthc Ethics 2016; 25: 623–633.

Kellmeyer P, Mueller O, Feingold-Polak R, Levy-Tzedek S. Social robots in rehabilitation: A question of trust. Science Robotics 2018; 3: eaat1587.

Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L. The ethics of algorithms: Mapping the debate. Big Data & Society 2016; 3: 2053951716679679.

Piwek L, Ellis DA, Andrews S, Joinson A. The Rise of Consumer Health Wearables: Promises and Barriers. PLOS Medicine 2016; 13: e1001953.

Taylor L, Floridi L, van der Sloot B, editor(s). Group Privacy: New Challenges of Data Technologies. Cham: Springer; 2017.

Voeneky S, Neuman GL, editor(s). Human Rights, Democracy, and Legitimacy in a World of Disorder. Cambridge: Cambridge University Press; 2018.

Wexler A. Who Uses Direct-to-Consumer Brain Stimulation Products, and Why? A Study of Home Users of tDCS Devices. J Cogn Enhanc 2017: 1–21.

Yuste R, Goering S, Agüera y Arcas B, Bi G, Carmena JM, Carter A, et al. Four ethical priorities for neurotechnologies and AI. Nature 2017; 551: 159–163.